Senior Data Engineer at The Rec Hub
Bucharest, Romania
Full Time


Start Date

Immediate

Expiry Date

28 Apr, 2026

Salary

0.0

Posted On

28 Jan, 2026

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Python, PySpark, AWS, SQL, Terraform, CI/CD, DataDog, S3, ECS, Lambda, Glue, Terragrunt, CircleCI, Pipeline Optimization, Infrastructure as Code, Mentoring

Industry

Staffing and Recruiting

Description
We are looking for a Senior Data Engineer specializing in Python, PySpark, and AWS to join our client's dynamic team. You will play a key role in designing, optimizing, and improving data pipelines for the ingestion, enrichment, and exposure of classified and transactional data on AWS. You will work under the supervision of a Data Engineering Team Lead, who will organize tasks and ensure smooth delivery of the project.

Tasks

- Pipeline Optimization: Analyze and improve existing data pipelines to optimize performance, cost efficiency, and scalability.
- Pipeline Transformation: Transition batch/snapshot pipelines to delta-based data processing pipelines.
- Infrastructure as Code: Develop and maintain Terraform modules for efficient infrastructure management on AWS.
- Pipeline Migration: Migrate data pipelines from Google Cloud Platform to AWS while minimizing downtime and ensuring high reliability.
- Monitoring and Alerts: Design and implement DataDog dashboards and alerting systems to enable proactive monitoring of data pipeline performance.
- Technology Watch & Innovation: Stay up to date with emerging technologies and actively promote relevant innovations and improvements within the team.
- Mentoring & Team Support: Support and guide junior team members, share expertise and best practices, and contribute to the overall growth of the team.

Requirements

Required Technical Skills:

- AWS: Strong hands-on experience with AWS services such as S3 and ECS, with particular emphasis on Lambda and Glue.
- Python: Advanced Python skills for data manipulation, scripting, and pipeline development.
- PySpark: Solid experience building scalable, distributed data pipelines with PySpark.
- SQL: Strong command of SQL for querying and transforming large datasets.
- Terraform: Experience designing and managing infrastructure with Terraform. Knowledge of Terragrunt is a plus.
- CI/CD (CircleCI): Experience configuring and maintaining pipelines to support automation and deployment workflows.
- DataDog: Knowledge of DataDog for monitoring, alerting, and dashboard creation.

Desired Qualities:

- Autonomous in project management, with strong ownership of production infrastructure and data access.
- Open-minded and driven by innovation and continuous improvement.
- Pragmatic, solution-oriented, and quick to adapt.
- Strong focus on code quality and adherence to best practices.
- Ensures high-quality testing and maintains a customer-focused approach.

Join a fast-scaling tech platform revolutionizing marketplaces at scale. Work with elite engineering teams on mission-critical AI, data, and optimization challenges. Extremely competitive compensation plus equity. Bucharest or Belgrade offices. Apply to build the future of intelligent platforms.
Responsibilities
The Senior Data Engineer will be responsible for designing, optimizing, and improving data pipelines for ingesting, enriching, and exposing classified and transactional data on AWS. Key tasks include transitioning pipelines to delta-based processing, developing Terraform modules, and migrating pipelines from GCP to AWS.