Lead Data Engineer at Ciklum
Canada -
Full Time


Start Date

Immediate

Expiry Date

16 Jun, 26

Salary

Not specified

Posted On

18 Mar, 26

Experience

10 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

ClickHouse, Data Migration, Distributed Systems, Schema Design, High-Throughput Ingestion, Kubernetes, Terraform, Helm, OpenTelemetry, Kafka, Azure Event Hub, Azure Data Explorer, SQL Performance Tuning, Cloud Provider (Azure, GCP, AWS), Systems Design, Telemetry

Industry

IT Services and IT Consulting

Description
Ciklum is looking for a Lead Data Engineer to join our team in Canada. We are a custom product engineering company that supports both multinational organizations and scaling startups in solving their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live.

About the role:

As a Lead Data Engineer, you'll become part of a cross-functional development team engineering the experience of tomorrow.

Responsibilities:

* Lead the migration of large-scale logs and distributed traces from existing analytics databases to ClickHouse
* Design and tune distributed ClickHouse clusters, including sharding, replication, partitioning, and storage layout
* Architect high-throughput ingestion pipelines and scalable schemas optimized for real-time telemetry workloads
* Establish monitoring, alerting, and operational best practices, including Kubernetes deployment and TTL policies
* Partner with platform and SRE teams to ensure production readiness, reliability, and security of the platform
* Document architecture decisions, performance tuning approaches, and operational runbooks for the team

Requirements:

We know that sometimes you can't tick every box. We would still love to hear from you if you think you're a good fit!
Systems Design:

* 12+ years of experience in backend, infrastructure, or data platform engineering
* Strong understanding of distributed systems and high-ingestion telemetry architectures
* Experience designing schemas for billion-row-scale analytical or telemetry datasets

Database & Infrastructure:

* Hands-on production experience with ClickHouse (MergeTree engines, indexing, and compression)
* Expertise in SQL performance tuning and in migrating data between large-scale analytical systems
* Experience deploying and operating stateful systems on Kubernetes using Terraform and Helm

Observability & Data:

* Practical experience with OpenTelemetry logs and traces
* Familiarity with ingestion pipelines such as Kafka, Azure Event Hub, or Azure Data Explorer (Kusto)
* Proven ability to optimize query latency and cost for high-cardinality datasets

Cloud & Operations:

* Proficiency in at least one major cloud provider: Azure, GCP, or AWS
* Experience implementing backup/restore strategies and resolving complex performance bottlenecks

Personal skills:

* Analytical Rigor: Strong ability to deconstruct complex telemetry problems and identify core technical requirements
* Commitment: Dedicated to the timely delivery of high-quality, reliable, and production-ready data platforms
* Collaboration: A proactive team player who thrives in partnering with SRE and platform teams in dynamic environments
* Documentation Flair: A natural talent for creating clear, organized, and useful technical documentation and runbooks
* Growth Mindset: Eagerness to learn and adapt to the evolving ClickHouse and observability ecosystem
* English Proficiency: Ability to communicate clearly in global environments and understand complex technical documentation

What's in it for you?
* Strong community: Work alongside top professionals in a friendly, open-door environment
* Growth focus: Take on large-scale projects with a global impact and expand your expertise
* Tailored learning: Boost your skills with internal events (meetups, conferences, workshops), Udemy access, language courses, and company-paid certifications
* Endless opportunities: Explore diverse domains through internal mobility, finding the best fit to gain hands-on experience with cutting-edge technologies
* Care: Healthcare, Basic Life Insurance, and Short and Long-term Disability Insurance according to the Company's Benefit Plans

About us:

At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you'll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress. Now expanding across Canada, we're looking for talented professionals to strengthen our North American footprint. Join us to innovate at scale and deliver world-class solutions to global clients.

Want to learn more about us? Follow us on Instagram [https://www.instagram.com/ciklum/], Facebook [https://www.facebook.com/Ciklum/], and LinkedIn [https://www.linkedin.com/company/ciklum/]. Explore, empower, engineer with Ciklum!

Interested already? We would love to get to know you! Submit your application. We can't wait to see you at Ciklum.
Responsibilities
The Lead Data Engineer will spearhead the migration of large-scale logs and distributed traces to ClickHouse, and will design and tune distributed ClickHouse clusters, including sharding and replication. Responsibilities also include architecting high-throughput ingestion pipelines, establishing operational best practices such as Kubernetes deployment, and documenting all architecture decisions and runbooks.