Expert Data Engineer at CTERA
Petah Tikva, Center District, Israel - Full Time


Start Date

Immediate

Expiry Date

30 Mar, 26

Salary

0.0

Posted On

30 Dec, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Engineering, Flink, Spark, Kafka, S3, APIs, RDBMS, OpenSearch, Elasticsearch, AWS, Kubernetes, Terraform, GitOps, Argo Workflows, ArgoCD, Big Data, AI Tools

Industry

Software Development

Description
CTERA is looking for a Senior Data Engineer to build and operate a multi-tenant analytics platform on AWS + Kubernetes (EKS), delivering streaming and batch pipelines via GitOps as a Platform-as-a-Service (PaaS).

Responsibilities:

Ingestion pipelines: Build and operate Flink / Spark streaming and batch jobs ingesting from Kafka, S3, APIs, and RDBMS into OpenSearch and other data stores.

Platform delivery: Provide reusable, multi-tenant pipelines as a self-service PaaS.

Workflow orchestration: Manage pipeline runs using Argo Workflows.

GitOps delivery: Deploy and operate pipelines via ArgoCD across environments.

IaC & AWS: Provision infrastructure with Terraform and secure access using IAM / IRSA.

Reliability: Own monitoring, stability, and troubleshooting of production pipelines.

Collaboration: Work with product, analytics, and infra teams on schemas and data contracts.

Requirements:

Software skills: Senior-level, hands-on data engineering experience building and operating production systems, with ownership of reliability and scale.

Processing: Strong experience with Flink and Spark (streaming + batch).

Data sources & sinks: Experience integrating with Kafka, S3, REST APIs, and RDBMS, and publishing to OpenSearch / Elasticsearch, data warehouses, or NoSQL databases.

Big Data: Familiarity with big-data systems; Iceberg / PyIceberg a plus.

Cloud & DevOps: Hands-on experience with EKS, RBAC, ArgoCD, and Terraform for infrastructure and delivery workflows.

Datastores: Hands-on experience with OpenSearch / Elasticsearch, including indexing strategies, templates/mappings, and operational troubleshooting.

AI tools: Experience with AI-assisted development tools (e.g., CursorAI, GitHub Copilot, or similar).
Responsibilities
The Senior Data Engineer will build and operate ingestion pipelines and provide reusable, multi-tenant pipelines as a self-service PaaS. They will also manage workflow orchestration and ensure the reliability of production pipelines.