AI Platform Engineer at Cellebrite
Petah Tikva, Center District, Israel
Full Time


Start Date

Immediate

Expiry Date

25 Apr, 26

Salary

0.0

Posted On

25 Jan, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Python, Node.js, Backend Architecture, APIs, Docker, LLMs, RAG Pipelines, Embedding Models, Vector Databases, Linux, Container Runtimes, Kubernetes, AI Integration, DevOps, Open-Source, Analytical Engines

Industry

Public Safety

Description
Company Overview: Cellebrite’s (Nasdaq: CLBT) mission is to enable its global customers to protect and save lives by enhancing digital investigations and intelligence gathering to accelerate justice in communities around the world. Cellebrite’s AI-powered Digital Investigation Platform enables customers to lawfully access, collect, analyze and share digital evidence in legally sanctioned investigations while preserving data privacy. Thousands of public safety organizations, intelligence agencies and businesses rely on Cellebrite’s digital forensic and investigative solutions—available via cloud, on-premises and hybrid deployments—to close cases faster and safeguard communities. To learn more, visit us at www.cellebrite.com, https://investors.cellebrite.com/investors and find us on social media @Cellebrite. Position Overview: We're assembling a small-scale team of innovators committed to a transformative mission: advancing generative AI from conceptual breakthrough to tangible product reality. As an AI Platfrom Engineer, you will be a critical architect of the technological infrastructure that brings our most ambitious GenAI concepts to life, transforming our digital intelligence solutions through cutting-edge AI innovation. Build advanced AI platform that operates both as a cloud SaaS and as a fully self-contained on-prem / edge deployment, designed for privacy-sensitive and security-critical environments, at the intersection of backend development, AI integration, DevOps, and open-source systems engineering. You will be part of the core team responsible for adapting, hardening, and operating our SaaS architecture in on-prem and single-node environments (on Prem servers, laptops). Working closely with architecture and product management and play a key role in making complex AI systems deployable, reliable, and operable outside the cloud. 
Key Responsibilities

Platform & Application Engineering
- Adapt cloud-native AI services to on-prem and edge deployments (single node, no managed cloud services).
- Build and maintain full-stack components: backend APIs (Python / Node.js), and lightweight UIs or internal tools when needed.
- Ensure services are stateless, configurable, and portable across environments.

AI & Open-Source Integration
- Integrate and operate open-source LLMs for RAG pipelines, agentic workflows, and tool calling and orchestration.
- Work with embedding models, vector databases (local and embedded modes), and analytical engines (e.g., embedded SQL / columnar systems).
- Optimize inference for CPU / single-GPU environments (quantization, batching, caching).

DevOps & Runtime Engineering (Strong Focus)
- Package services into portable Docker containers usable on on-prem servers (Kubernetes) and laptop / edge devices.
- Implement in-process scaling strategies (worker pools, task queues, batching).
- Build simple, reliable deployment and startup flows (no heavy orchestration).
- Manage configuration, secrets, logging, and observability in constrained environments.

Systems & Reliability
- Design for offline operation, limited resources, and predictable performance.
- Implement graceful degradation between SaaS mode, on-prem server mode, and single-node / laptop device mode.
- Debug complex interactions across AI models, storage, and runtime systems.

Requirements

Core Engineering
- 6+ years of progressive full-stack development experience.
- Strong experience with Python and/or Node.js in production systems.
- Solid understanding of backend architecture, APIs, and service boundaries.
- Experience building containerized applications with Docker.

AI / Data Systems
- Hands-on experience integrating LLMs (open-source preferred), RAG pipelines, and embedding models and vector search.
- Understanding of AI performance constraints (latency, memory, batching).
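To illustrate the "worker pools, task queues, batching" responsibility mentioned above, here is a minimal, hypothetical sketch of in-process micro-batching: callers submit single requests, and a background worker drains the queue and serves them in batches. The `fake_embed` function and all names are illustrative stand-ins, not part of any Cellebrite API.

```python
import queue
import threading

MAX_BATCH = 8  # assumed cap on batch size for a constrained CPU environment

def fake_embed(texts):
    # Stand-in for a batched model call (e.g. an embedding model on CPU).
    # Batching amortizes per-call overhead across requests.
    return [[float(len(t))] for t in texts]

class BatchingWorker:
    def __init__(self):
        self.tasks = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def submit(self, text):
        # Each caller gets its own Event to wait on for its result.
        slot = {"text": text, "done": threading.Event(), "result": None}
        self.tasks.put(slot)
        slot["done"].wait()
        return slot["result"]

    def _run(self):
        while True:
            batch = [self.tasks.get()]           # block for the first item
            while len(batch) < MAX_BATCH:
                try:                             # then drain without blocking
                    batch.append(self.tasks.get_nowait())
                except queue.Empty:
                    break
            results = fake_embed([s["text"] for s in batch])
            for slot, res in zip(batch, results):
                slot["result"] = res
                slot["done"].set()

worker = BatchingWorker()
print(worker.submit("hello"))  # one call from the caller's view, batched inside
```

The same pattern generalizes to LLM inference: the batch boundary is the only place the model is invoked, so quantized or cached backends can be swapped in behind `fake_embed` without touching callers.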
DevOps / Platform Skills
- Practical experience with Linux environments, container runtimes, and local and on-prem deployments.
- Comfortable operating systems without managed cloud services.
- Ability to reason about CPU/GPU utilization, memory limits, and scaling trade-offs.

Open-Source Mindset
- Strong familiarity with open-source ecosystems.
- Ability to read, debug, and extend third-party code.
- Preference for pragmatic solutions over heavy frameworks.

Nice to Have
- Experience with on-prem, air-gapped, or regulated environments.
- Embedded or edge deployments.
- Analytical engines (DuckDB, ClickHouse, Trino, etc.).
- Vector DBs (Qdrant, Milvus, pgvector).
- Exposure to Kubernetes (not required for edge devices).
- Experience in security-sensitive domains.
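The graceful degradation between SaaS, on-prem, and single-node modes described in the responsibilities can be sketched as a simple environment-driven mode selection. The `CLOUD_API_URL` variable and mode names are assumptions for illustration (`KUBERNETES_SERVICE_HOST` is the standard variable Kubernetes injects into pods); a real deployment would layer health checks on top.

```python
def select_mode(env):
    # Pick the richest runtime mode the environment supports, falling back
    # from managed cloud to cluster to fully self-contained single node.
    if env.get("CLOUD_API_URL"):              # managed cloud endpoint configured
        return "saas"
    if env.get("KUBERNETES_SERVICE_HOST"):    # running inside a k8s cluster
        return "on-prem"
    return "single-node"                      # laptop / edge: offline-capable

print(select_mode({"CLOUD_API_URL": "https://api.example.com"}))  # saas
print(select_mode({"KUBERNETES_SERVICE_HOST": "10.0.0.1"}))       # on-prem
print(select_mode({}))                                            # single-node
```

Keeping the decision in one pure function makes the fallback order testable offline, which matters in air-gapped environments where the degraded path is the common path.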
Responsibilities
The AI Platform Engineer will adapt cloud-native AI services for on-prem and edge deployments, ensuring services are stateless and portable. They will also integrate open-source LLMs and manage deployment strategies in constrained environments.