Data Platform Engineer - Applied AI Engineering Group at FHH Experienced Professionals
Tel-Aviv, Tel-Aviv District, Israel
Full Time


Start Date

Immediate

Expiry Date

04 Apr, 26

Salary

Not specified

Posted On

04 Jan, 26

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Software Engineering, Data Engineering, Platform Engineering, Distributed Systems, Streaming, Ingestion, Open Data Lake, Lakehouse, OLAP, OLTP, Databases, Kubernetes, AWS, CI/CD, Python, Java, Scala

Industry

Computer and Network Security

Description
At Dream, we redefine cyber defense by combining AI and human expertise to create products that protect nations and critical infrastructure. This is more than a job; it’s a Dream job. Dream is where we tackle real-world challenges, redefine AI and security, and make the digital world safer. Let’s build something extraordinary together. Dream's AI cybersecurity platform applies a novel, multi-layered approach that covers the evolving security challenges across the entire infrastructure of the most critical and sensitive networks. Central to Dream's proprietary Cyber Language Models are innovative technologies that provide contextual intelligence for the future of cybersecurity. At Dream, our talented team, driven by passion, expertise, and innovative minds, inspires us daily. We are not just dreamers; we are dream-makers.

The Dream Job

It starts with you - an engineer who cares about building modern, real-time data platforms that help teams move faster with trust. You value great service, performance, and doing things right. You’ll work alongside experienced engineers to build a top-of-the-line open streaming data lake/lakehouse and data stack, turning massive volumes of threat signals into intuitive, self-serve data and fast retrieval for humans and AI agents - powering a unified foundation for AI-driven, mission-critical workflows. If you want to grow your skills on problems that matter and help build best-in-class data systems that move the world forward, this role is for you - join Dream's mission.

The Dream-Maker Responsibilities

- Develop and maintain platform surfaces (APIs, CLI/UI) for streaming and batch pipelines with correctness and safe replay/backfills (a minimal replay-safety sketch follows this list).
- Contribute to the open data lake/lakehouse across cloud and on-prem; work with schema evolution, partitioning, and compaction trade-offs.
- Build and operate serving layers across OLAP, OLTP, document engines, and vector databases.
- Support the data layer for AI - datasets for training and inference, feature and embedding storage, RAG-ready retrieval paths, and foundational building blocks that accelerate AI development.
- Contribute to AI-native capabilities - agentic pipelines, self-tuning processes, and secure sandboxing for model experimentation.
- Help maintain catalog, lineage, observability, and governance - contributing to freshness SLAs and access controls.
- Profile and improve performance; assist with capacity planning and cost optimization.
- Ship tooling - libraries, templates, CI/CD, and runbooks - while collaborating across AI, ML, Data Science, Engineering, Product, and DevOps.
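To make the replay/backfill point concrete, here is a minimal sketch of a replay-safe consumer, assuming the kafka-python client, a hypothetical threat-signals topic, and an in-memory dict standing in for a keyed sink; a real pipeline would batch commits and write to a durable store:

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical topic/group names, for illustration only.
consumer = KafkaConsumer(
    "threat-signals",
    bootstrap_servers="localhost:9092",
    group_id="signal-ingest-v1",
    enable_auto_commit=False,       # offsets are committed only after a durable write
    auto_offset_reset="earliest",   # a fresh consumer group replays from the beginning
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

sink = {}  # stand-in for a keyed store (e.g. an OLTP upsert keyed by event id)

for msg in consumer:
    event = msg.value
    # Idempotent write: keyed by a stable event id, so replaying the same
    # offsets (e.g. during a backfill) overwrites rather than duplicates.
    sink[event["event_id"]] = event
    # Commit only after the write succeeds. A crash before this line means
    # the record is re-read and re-upserted on restart - never silently lost.
    consumer.commit()
```

The pattern generalizes: as long as writes are keyed and offset commits happen after the write, replays and backfills are safe by construction.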
Platform & infra - Kubernetes basics, AWS services, CI/CD pipelines, Infrastructure-as-Code concepts with tools like Terraform. Performance & cost - Awareness of performance profiling, query optimization concepts, and cost considerations in data systems. Engineering craft - Proficiency in Python, Java, or Scala; testing practices; comfort with code review; experience with AI coding tools like Cursor, Claude Code, or Copilot is a plus. Never Stop Dreaming... If you think this role doesn't fully match your skills but are eager to grow and break glass ceilings, we’d love to hear from you! Requirements null
Never Stop Dreaming... If you think this role doesn't fully match your skills but are eager to grow and break glass ceilings, we’d love to hear from you!

Responsibilities
Develop and maintain platform surfaces for streaming and batch pipelines, and contribute to the open data lake/lakehouse. Build and operate serving layers and support the data layer for AI, while maintaining catalog, lineage, observability, and governance. A minimal sketch of the AI-facing retrieval path follows below.
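As a taste of the RAG-ready retrieval path, a self-contained sketch using qdrant-client's local mode; the collection name, vectors, and payloads are all toy values, and a production path would embed real threat signals and run against a Qdrant server:

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

# Local in-memory mode: runs without a Qdrant server, handy for tests.
client = QdrantClient(":memory:")

client.create_collection(
    collection_name="threat-signals",  # hypothetical collection name
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)

# Toy embeddings; a real pipeline would store model-generated vectors
# alongside payload metadata used for filtering and attribution.
client.upsert(
    collection_name="threat-signals",
    points=[
        PointStruct(id=1, vector=[0.9, 0.1, 0.0, 0.0], payload={"kind": "phishing"}),
        PointStruct(id=2, vector=[0.0, 0.1, 0.9, 0.1], payload={"kind": "lateral-movement"}),
    ],
)

# Nearest-neighbor search: the retrieval step an AI agent or RAG chain
# would call to pull relevant context for a query embedding.
hits = client.search(
    collection_name="threat-signals",
    query_vector=[0.85, 0.15, 0.05, 0.0],
    limit=1,
)
for hit in hits:
    print(hit.id, round(hit.score, 3), hit.payload)
```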