Lead Data Engineer at Ciklum
Yerevan, Armenia
Full Time


Start Date

Immediate

Expiry Date

02 Jun, 26

Salary

Not specified

Posted On

04 Mar, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

ClickHouse, Data Engineering, Distributed Systems, Schema Design, High-Throughput Ingestion, Kubernetes, Terraform, Helm, OpenTelemetry, Kafka, Azure Event Hub, Azure Data Explorer, SQL Performance Tuning, Cloud Provider (Azure, GCP, or AWS), Analytical Rigor, Technical Documentation

Industry

IT Services and IT Consulting

Description
Ciklum is looking for a Lead Data Engineer to join our team in Argentina. We are a custom product engineering company that supports both multinational organizations and scaling startups in solving their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live.

About the role:

As a Lead Data Engineer, you'll become part of a cross-functional development team engineering the experience of tomorrow.

Responsibilities:

* Lead the migration of large-scale logs and distributed traces from existing analytics databases to ClickHouse
* Design and tune distributed ClickHouse clusters, including sharding, replication, partitioning, and storage layout
* Architect high-throughput ingestion pipelines and scalable schemas optimized for real-time telemetry workloads
* Establish monitoring, alerting, and operational best practices, including Kubernetes deployment and TTL policies
* Partner with platform and SRE teams to ensure production readiness, reliability, and security of the platform
* Document architecture decisions, performance-tuning approaches, and operational runbooks for the team

Requirements:

We know that sometimes you can't tick every box. We would still love to hear from you if you think you're a good fit!
Systems Design:

* 12+ years of experience in backend, infrastructure, or data platform engineering
* Strong understanding of distributed systems and high-ingestion telemetry architectures
* Experience designing schemas for billion-row-scale analytical or telemetry datasets

Database & Infrastructure:

* Hands-on production experience with ClickHouse (MergeTree engines, indexing, and compression)
* Expertise in SQL performance tuning and migrating data between large-scale analytical systems
* Experience deploying and operating stateful systems on Kubernetes using Terraform and Helm

Observability & Data:

* Practical experience with OpenTelemetry logs and traces
* Familiarity with ingestion pipelines such as Kafka, Azure Event Hub, or Azure Data Explorer (Kusto)
* Proven ability to optimize query latency and cost for high-cardinality datasets

Cloud & Operations:

* Proficiency in at least one major cloud provider: Azure, GCP, or AWS
* Experience implementing backup/restore strategies and resolving complex performance bottlenecks

Personal skills:

* Analytical Rigor: Strong ability to deconstruct complex telemetry problems and identify core technical requirements
* Commitment: Dedicated to the timely delivery of high-quality, reliable, and production-ready data platforms
* Collaboration: A proactive team player who thrives in partnering with SRE and Platform teams in dynamic environments
* Documentation Flair: Natural talent for creating clear, organized, and useful technical documentation and runbooks
* Growth Mindset: Eagerness to learn and adapt to the evolving ClickHouse and observability ecosystem
* English Proficiency: Ability to communicate clearly in global environments and understand complex technical documentation

What's in it for you?

* Care: your mental and physical health is our priority. We ensure comprehensive company-paid medical insurance and mental health programs, plus 5 undocumented sick-leave days per year
* Tailored education path: boost your skills and knowledge with our regular internal events (meetups, conferences, workshops), Udemy license, language courses and company-paid certifications
* Growth environment: share your experience and level up your expertise with a community of skilled professionals, locally and globally
* Long-term employment with 20 working days of paid vacation and local bank holidays
* Flexibility: 100% remote work mode
* Opportunities: we value our specialists and always find the best options for them. Our Internal Mobility Program helps you change projects if needed so you can grow, excel professionally and fulfill your potential
* Global impact: work on large-scale projects that redefine industries with international and fast-growing clients
* Welcoming environment: feel empowered with a friendly team, an open-door policy, an informal atmosphere within the company and regular team-building events

About us:

At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you'll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress. As we expand into Latin America, every Ciklumer is helping to shape our story. Collaborate with seasoned experts and make a global impact backed by two decades of industry leadership.

Want to learn more about us? Follow us on Instagram [https://www.instagram.com/ciklum/], Facebook [https://www.facebook.com/Ciklum/], LinkedIn [https://www.linkedin.com/company/ciklum/].

Explore, empower, engineer with Ciklum! Interested already? We would love to get to know you! Submit your application. We can't wait to see you at Ciklum.
Responsibilities
The Lead Data Engineer will spearhead the migration of large-scale logs and distributed traces to ClickHouse, while also designing and tuning distributed ClickHouse clusters, including sharding and replication. Responsibilities also involve architecting high-throughput ingestion pipelines, establishing operational best practices like Kubernetes deployment, and partnering with SRE teams for production readiness.
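To give a concrete flavour of the schema-design work these responsibilities describe, here is a minimal, hypothetical sketch of a telemetry logs table in ClickHouse. All table and column names are invented for illustration and are not taken from the posting; it simply combines a MergeTree engine, time-based partitioning, and a TTL retention policy of the kind mentioned above:

```sql
-- Hypothetical sketch: an OpenTelemetry-style logs table.
-- Table and column names are illustrative only.
CREATE TABLE otel_logs
(
    ts       DateTime64(3),           -- event timestamp, millisecond precision
    service  LowCardinality(String),  -- emitting service name
    severity LowCardinality(String),  -- log level
    trace_id String,                  -- correlates logs with distributed traces
    body     String CODEC(ZSTD(3))    -- compressed log payload
)
ENGINE = MergeTree
PARTITION BY toDate(ts)               -- daily partitions simplify retention and drops
ORDER BY (service, ts)                -- sort key drives the primary index and data locality
TTL toDateTime(ts) + INTERVAL 30 DAY; -- engine-managed retention policy
```

In a sharded, replicated deployment this table would typically use a ReplicatedMergeTree engine behind a Distributed table, which is where the sharding and replication design work described above comes in.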