Data Engineer - Applied AI Engineering Group at Dream
Tel-Aviv, Tel-Aviv District, Israel - Full Time


Start Date

Immediate

Expiry Date

21 Mar, 26

Salary

Not specified

Posted On

21 Dec, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Engineering, Python, Distributed Systems, Machine Learning, Cloud Computing, Kubernetes, Docker, AWS, Event-Based Pipelines, Data Pipelines, Observability, Data Validation, Cybersecurity, Data Architecture, Data Governance, Self-Serve Data Systems

Industry

Computer and Network Security

Description
At Dream, we redefine cyber defense by combining AI and human expertise to create products that protect nations and critical infrastructure. This is more than a job; it's a Dream job. Dream is where we tackle real-world challenges, redefine AI and security, and make the digital world safer. Let's build something extraordinary together.

Dream's AI cybersecurity platform applies a new, out-of-the-ordinary, multi-layered approach, covering endless and evolving security challenges across the entire infrastructure of the most critical and sensitive networks. Central to Dream's proprietary Cyber Language Models are innovative technologies that provide contextual intelligence for the future of cybersecurity. At Dream, our talented team, driven by passion, expertise, and innovative minds, inspires us daily. We are not just dreamers; we are dream-makers.

The Dream Job

We are on an expedition to find you, someone who is passionate about creating intuitive, out-of-this-world data platforms. You'll architect and ship our streaming lakehouse and data platform, turning billions of raw threat signals into high-impact, self-serve insights that protect countries in real time, all while building on top-of-the-line technologies such as Iceberg, Flink, Paimon, Fluss, LanceDB, ClickHouse, and more.

The Dream-Maker Responsibilities

- Design and maintain agentic data pipelines that adapt dynamically to new sources, schemas, and AI-driven tasks
- Build self-serve data systems that allow teams to explore, transform, and analyze data with minimal engineering effort
- Develop modular, event-based pipelines across AWS environments, combining cloud flexibility with custom open frameworks
- Automate ingestion, enrichment, and fusion of cybersecurity data, including logs, configs, and CTI streams
- Collaborate closely with AI engineers and researchers to operationalize LLM and agent pipelines within the CLM ecosystem
- Implement observability, lineage, and data validation to ensure reliability and traceability
- Scale systems to handle complex, high-volume data while maintaining adaptability and performance
- Own the data layer end-to-end, including architecture, documentation, and governance

The Dream Skill Set

- 5+ years of experience building large-scale distributed systems or platforms, preferably in ML or data-intensive environments
- Proficiency in Python with strong software engineering practices, and familiarity with data structures and design patterns
- Deep understanding of orchestration systems (e.g., Kubernetes, Argo) and distributed computing frameworks (e.g., Ray, Spark)
- Experience with GPU compute infrastructure, containerization (Docker), and cloud-native architectures
- Proven track record of delivering production-grade infrastructure or developer platforms
- Solid grasp of ML workflows, including model training, evaluation, and inference pipelines

Never Stop Dreaming...

If you think this role doesn't fully match your skills but are eager to grow and break glass ceilings, we'd love to hear from you!
Responsibilities
The Data Engineer will design and maintain dynamic data pipelines and build self-serve data systems for teams to analyze data. They will also collaborate with AI engineers to operationalize pipelines and ensure data reliability and traceability.