Big Data Engineer at Globaldev Group
Poland
Full Time


Start Date

Immediate

Expiry Date

25 Jan, 26

Salary

Not specified

Posted On

27 Oct, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Big Data, Data Engineering, Real-Time Data Pipelines, Distributed Systems, Performance Optimization, Java, Scala, Go, Flink, Spark, Kafka, Kubernetes, CI/CD, Cloud Infrastructure, Observability, Monitoring

Industry

IT Services and IT Consulting

Description
Our client develops a mobile marketing and audience platform that empowers the mobile app ecosystem. With direct integration into over 500,000 monthly active mobile apps, the platform processes enormous volumes of first-party data to drive intelligent, real-time decisions and fuel growth for its partners.

We're looking for a Big Data Engineer to join our Data Platform team - someone who thrives on solving complex problems at scale, enjoys working across technologies, and is passionate about real-time data pipelines, distributed systems, and performance optimization.

Responsibilities:

- Develop and own real-time streaming and batch data pipelines that process billions of records per day
- Own services and infrastructure for request enrichment, data exports, campaign analytics, fraud detection, and more - some handling hundreds of thousands of requests per second with strict low-latency SLAs
- Build end-to-end solutions, from design through deployment and monitoring
- Work with technologies such as Flink, Spark, Kafka, Protobuf, Kubernetes, Argo Workflows, Jenkins, and GitHub, writing in Go, Java/Scala, and Python
- Develop monitoring and alerting for critical Big Data pipelines
- Use modern AI-assisted development tools (e.g., GitHub Copilot, Cursor) to enhance coding, testing, and debugging
- Participate in a shared on-call rotation to support production systems and ensure high availability

Requirements:

- 4+ years of experience in backend or data engineering
- Strong experience with Java or Scala, plus Go
- Hands-on experience with streaming platforms (e.g., Flink, Spark, Kafka)
- Production experience with Kubernetes and containerized services
- Solid knowledge of cloud infrastructure (AWS / GCP / OCI)
- Familiarity with CI/CD, observability (Prometheus/Grafana), and distributed systems
- Versatile and able to context-switch effectively across projects and technologies
- Self-learner, responsible, independent, and a strong team player
- Excellent communication skills

Will be a plus:

- Experience with Scala

Why join us?

- You'll work on mission-critical systems that handle billions of events per day
- You'll join a high-caliber team solving large-scale, low-latency, business-critical challenges
- You'll have real ownership, autonomy, and room to grow technically and professionally
- You'll be part of a great team that values collaboration, knowledge sharing, and supporting each other while having fun along the way
- We embrace modern AI tools (like GitHub Copilot and Cursor) to boost development speed, uncover edge cases, and support creative problem-solving

What we offer:

- Polish public holidays
- 20 working days of Non-Operational Allowance per year, to be used for personal recreation and compensated in full; these days must be used within the calendar year, with no rollover
- Health insurance
- Gym subscription (Multisport)

How To Apply:

If you would like to apply to this job directly from the source, please click here.
