Senior Data Engineer at RZR Global Inc.
San Francisco, California, United States - Full Time


Start Date

Immediate

Expiry Date

16 Jun, 26

Salary

0.0

Posted On

18 Mar, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Pipelines, Kafka, Spark, ClickHouse, HDFS, Fluentd, Distributed Systems, Data Modeling, Streaming Processing, Batch Processing, Performance Tuning, Capacity Planning, Data Quality, Monitoring, Alerting, RTB

Industry

Advertising Services

Description
Who are we?

RZR Global is an AI-driven company specializing in mobile advertising solutions designed to fuel revenue growth. We leverage AI to discover audiences in a privacy-first environment through trillions of contextual bidding signals and proprietary behavioral models. Our audience engagement platform includes creative strategy and execution. We handle 5 million mobile ad requests per second from over 10 billion devices, driving performance for both publishers and brands. We are headquartered in San Francisco, CA, with a global presence across the United States, EMEA, and APAC.

Role Overview

RZR Global is seeking a talented Data Engineer to join our growing engineering team. This role is ideal for an engineer with a strong background in building and operating large-scale, high-performance data pipelines. As a Data Engineer, you will design, develop, and maintain the data infrastructure that powers our programmatic Demand-Side Platform (DSP), enabling real-time and batch processing of massive volumes of event, log, and campaign data. You will work with technologies such as ClickHouse, Kafka, Spark, HDFS, and Fluentd to ensure data is reliable, scalable, and accessible for analytics, reporting, and machine learning. You will collaborate closely with backend engineers, data scientists, and product teams to deliver high-quality data solutions that support real-time bidding (RTB), optimization, and business insights.

Key Responsibilities

- Architect, design, and own highly scalable, fault-tolerant data pipelines for real-time and batch processing of large-scale event and campaign data.
- Lead the development of data processing systems using Kafka, Spark, ClickHouse, HDFS, and Fluentd, with a strong focus on performance, reliability, and data correctness.
- Partner closely with backend engineers, data scientists, and product teams to define data models, SLAs, and end-to-end data flows that support real-time bidding (RTB), analytics, and machine learning use cases.
- Drive performance optimization, capacity planning, and cost efficiency across streaming and batch data platforms.
- Establish and enforce best practices around data quality, monitoring, alerting, testing, and operational readiness.
- Conduct design and code reviews, mentor junior engineers, and provide technical leadership across data engineering initiatives.
- Evaluate and introduce new data technologies, frameworks, and architectural improvements to evolve the data platform at scale.

Required Skills / Experience

- 6+ years of experience in data engineering, backend engineering, or distributed systems development.
- Strong proficiency in building and operating large-scale data pipelines using technologies such as Kafka, Spark, ClickHouse, HDFS, and Fluentd.
- Solid understanding of distributed systems concepts, including data partitioning, fault tolerance, consistency, and scalability.
- Experience designing efficient data models and schemas for analytical and real-time workloads.
- Strong experience with streaming and batch processing architectures.
- Experience with performance tuning, capacity planning, and troubleshooting production data systems.
- Familiarity with data quality, monitoring, alerting, and operational best practices.
- Knowledge of real-time systems, ad tech, programmatic advertising, RTB, or large-scale analytics platforms is a plus.
- Excellent problem-solving skills, strong ownership mindset, and ability to operate effectively in a fast-paced, high-scale environment.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Responsibilities
The Data Engineer will architect, design, and own highly scalable, fault-tolerant data pipelines for real-time and batch processing of large-scale event and campaign data, leading development with technologies such as Kafka, Spark, and ClickHouse. They will partner with backend, data science, and product teams to define data models and flows supporting real-time bidding, analytics, and machine learning use cases.