Junior Data Engineer at Entain
Hyderabad, Telangana, India
Full Time


Start Date

Immediate

Expiry Date

25 Jun, 26

Salary

0.0

Posted On

27 Mar, 26

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

SQL, Python, Snowflake, DBT, ETL, APIs, Git, CI/CD, Unit Tests, Data Quality, Troubleshooting, Data Pipelines, Cloud Data Warehouse, Airflow, Docker, MLOps

Industry

Entertainment Providers

Description
Company Description

We’re Entain. Our vision is to be the world leader in sports betting and gaming entertainment by creating the most exciting and trusted experience for our customers, revolutionizing the gambling space as we go. We're home to a global family of more than 25 well-known brands, and with a focus on sustainability and growth, we will transform our sector for our players, for ourselves and for the industry.

Job Description

Data intelligence and AI play a transformational role in how we create value today and how we shape our future. We have a bold ambition to be a world-class data and AI driven business for our customers across all our regions and business functions.

This is a hands-on engineering role within our ASE Data & AI function. You will support cross-functional teams by building and maintaining data infrastructure that enables efficient analysis, insight generation, and generative AI and ML applications. The purpose of this role is to develop and maintain robust data pipelines and systems that serve our diverse data consumers across the business, spanning our brands in the Americas and Southern Europe. By leveraging your data engineering skills, you'll enable data-driven decision making throughout the organization, supporting business growth and operational excellence.

Key Responsibilities

- Assist in developing and maintaining data pipelines and ETL processes to support reporting, analytics, engineering and AI use cases.
- Work with internal and external data sources to ingest, process, and store datasets using existing connectors and APIs.
- Support the team in managing data within cloud DWH platforms such as Snowflake (or equivalent).
- Help build and maintain data transformation workflows using DBT or similar tools.
- Assist in deploying data pipeline artifacts to production environments and support the team in managing releases.
- Contribute to maintaining and improving CI/CD pipelines for data workflows, ensuring reliable and automated deployments.
- Write and maintain automated unit tests and data quality tests to support reliable deployments and ensure the correctness of data pipelines and transformations.
- Perform data validation checks and basic monitoring to help maintain data quality and reliability.
- Collaborate with senior data engineers and analysts to troubleshoot data pipeline issues and resolve data inconsistencies.

Qualifications

Essential:

- Bachelor’s degree in Computer Science, Software Engineering, Data Science, or a related field, or equivalent practical experience
- Working knowledge of SQL for querying and transforming data
- Basic to intermediate experience with Python for data processing and scripting
- Familiarity with cloud data warehouse platforms such as Snowflake, BigQuery, or similar
- Exposure to data transformation or orchestration tools such as DBT, Airflow, or similar
- Basic understanding of data pipeline concepts (ETL/ELT)
- Familiarity with Git or other version control systems
- Strong problem-solving skills and attention to detail
- Good communication skills and ability to work in a collaborative team environment

Desired:

- Basic familiarity with data visualization tools such as PowerBI or Tableau
- Awareness of data quality validation or monitoring tools
- Basic understanding of data governance and data management principles
- Exposure to event-driven or streaming data systems (Kafka, Kinesis, etc.)
- Familiarity with containerization tools such as Docker
- Exposure to CI/CD workflows (GitHub Actions, GitLab CI, Jenkins, etc.)
- Interest in modern data engineering practices and AI/ML data workflows
- Basic familiarity with DevOps concepts, including CI/CD pipelines, deployment workflows, and automated testing practices
- Understanding of MLOps concepts, including how data pipelines support machine learning model training, versioning, and deployment workflows
- Familiarity with AI/LLM-based systems, such as RAG, embeddings, or vector databases

Additional Information

We’re looking for someone who:

- Approaches complex technical challenges with creativity and persistence
- Communicates technical concepts clearly to both technical and non-technical stakeholders
- Demonstrates adaptability when working with emerging technologies and changing requirements
- Takes initiative to improve system performance and identify innovative solutions
- Balances technical excellence with pragmatic delivery to meet business needs
- Collaborates effectively across multidisciplinary teams, especially with data engineers and AI SMEs
- Takes ownership of assigned projects from concept through implementation and maintenance
- Shows curiosity about AI advancements and applies relevant learnings to solve business problems
- Sets, achieves and is motivated by goals
- Able to collaborate and work well with others

Equal Opportunities

If you need any reasonable adjustments at any stage of the recruitment process, please contact us and we'll support you. We're committed to creating a diverse, equitable and inclusive workplace where everyone feels valued, respected and able to be themselves. We're an equal opportunities employer. We welcome applications from everyone and we do not discriminate based on race, colour, nationality, ethnic or national origin, religion or belief, sex, gender identity or expression, sexual orientation, age, disability, marital or civil partnership status, pregnancy or maternity, or any other status protected by law.
We comply with all applicable recruitment regulations and employment laws in the jurisdictions where we operate, ensuring ethical and compliant hiring practices globally.

Advertising Department: Data & Analytics
Responsibilities
This role involves building and maintaining robust data pipelines and ETL processes to support reporting, analytics, engineering, and AI use cases across various business functions. Key tasks include ingesting and processing data from diverse sources, managing data in cloud data warehouses like Snowflake, and building data transformation workflows using tools such as DBT.
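To give a concrete flavour of the pipeline-plus-validation work this summary describes, here is a rough sketch in plain Python. Everything in it is hypothetical and illustrative, not Entain's actual stack: the record shape, the field names (`bet_id`, `stake`, `currency`), and the `run_pipeline` helper are invented for the example, and a real pipeline would target a warehouse such as Snowflake via DBT or an orchestrator such as Airflow.

```python
# Illustrative only: a toy extract-transform-validate flow using just the
# standard library, to show the shape of the "ingest, process, quality-check"
# work described above. All names and data are made up.

from typing import Iterable

RawRow = dict  # one record as ingested from a source system or API


def extract() -> list[RawRow]:
    # Stand-in for pulling records via an existing connector or API.
    return [
        {"bet_id": "1", "stake": "10.50", "currency": "EUR"},
        {"bet_id": "2", "stake": "3.00", "currency": "BRL"},
        {"bet_id": "3", "stake": "-1.00", "currency": "EUR"},  # bad record
    ]


def transform(rows: Iterable[RawRow]) -> list[dict]:
    # Cast stakes to floats; downstream models expect numeric values.
    return [{**row, "stake": float(row["stake"])} for row in rows]


def quality_check(rows: list[dict]) -> tuple[list[dict], list[dict]]:
    # Basic data quality rule: stakes must be non-negative. Failing rows
    # are quarantined instead of loaded; in practice they would also be
    # surfaced through monitoring/alerting.
    good = [r for r in rows if r["stake"] >= 0]
    bad = [r for r in rows if r["stake"] < 0]
    return good, bad


def run_pipeline() -> tuple[list[dict], list[dict]]:
    # Orchestrate the stages; returns (rows to load, quarantined rows).
    return quality_check(transform(extract()))
```

In a DBT-based workflow the same non-negativity rule would typically live in a generic test on the model rather than in pipeline code, but the separation of transform and validation stages carries over directly.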