EY - GDS Consulting - AI and DATA - Azure DBX - Senior at EY
Bidhannagar, West Bengal, India
Full Time


Start Date

Immediate

Expiry Date

19 Dec, 25

Salary

0.0

Posted On

20 Sep, 25

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Engineering, Databricks, Python, ETL, Data Pipelines, Data Quality, SQL, NoSQL, Cloud Platforms, Big Data Technologies, Problem-Solving, Attention to Detail, Collaboration, Technical Documentation, Data Architecture, Root Cause Analysis

Industry

Professional Services

Description
At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

EY GDS – Data and Analytics (D&A) – Azure Data Engineer – Senior

Job Summary:

We are seeking a skilled Data Engineer with expertise in Databricks and Python scripting to enhance our ETL (Extract, Transform, Load) processes. The ideal candidate will have a proven track record of developing and optimizing data pipelines, implementing data solutions, and contributing to the overall data architecture.

Key Responsibilities:

- Design, build, and maintain scalable and efficient data pipelines using Databricks and Python.
- Develop ETL processes that ingest and transform data from various sources into a structured, usable format.
- Collaborate with cross-functional teams to gather requirements and deliver data engineering solutions that support business objectives.
- Write and optimize Python scripts for data extraction, transformation, and loading tasks.
- Ensure data quality and integrity by implementing best practices and standards for data engineering.
- Monitor and troubleshoot ETL processes, performing root cause analysis and implementing fixes to improve performance and reliability.
- Document data engineering processes, creating clear and concise technical documentation for data pipelines and architectures.
- Stay current with industry trends and advancements in data engineering technologies and methodologies.

Qualifications:

- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 3-5 years of experience in data engineering, with a focus on Databricks and Python scripting for ETL implementation.
- Strong understanding of data warehousing concepts and experience with SQL and NoSQL databases.
- Proficiency in Python and familiarity with data engineering libraries and frameworks.
- Experience with cloud platforms (e.g., AWS, Azure) and big data technologies is a plus.
- Excellent problem-solving skills and attention to detail.
- Ability to work independently and as part of a collaborative team.

Working Conditions:

- Innovative and dynamic work environment with a strong emphasis on delivering high-quality data solutions.
- Opportunity to work with a diverse team of data professionals and contribute to impactful projects.

EY | Building a better working world

EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
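To give candidates a concrete sense of the extract-transform-load work the responsibilities describe, here is a minimal illustrative sketch in plain Python. The function names, sample fields, and data-quality rule are hypothetical assumptions for illustration only; the actual role builds pipelines on Databricks.

```python
# Hypothetical ETL sketch: extract raw records, transform them into a
# structured format with basic data-quality checks, and load the result
# into a target structure. Field names and rules are illustrative.

def extract():
    # Extract: raw records as they might arrive from a source system.
    return [
        {"id": "1", "amount": " 10.5 ", "region": "east"},
        {"id": "2", "amount": "7", "region": "WEST"},
        {"id": "3", "amount": "", "region": "east"},  # fails quality check
    ]

def transform(rows):
    # Transform: enforce types, normalize values, and drop records
    # that fail a basic data-quality rule.
    clean = []
    for row in rows:
        amount = row["amount"].strip()
        if not amount:
            continue  # data-quality rule: skip rows with no amount
        clean.append({
            "id": int(row["id"]),
            "amount": float(amount),
            "region": row["region"].strip().lower(),
        })
    return clean

def load(rows):
    # Load: aggregate into a target structure (here, totals per region).
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

if __name__ == "__main__":
    print(load(transform(extract())))  # {'east': 10.5, 'west': 7.0}
```

In a Databricks pipeline the same three stages would typically read from cloud storage, apply transformations with Spark DataFrames, and write to a managed table, but the extract/transform/load structure is the same.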

How To Apply:

In case you would like to apply to this job directly from the source, please click here

Responsibilities
The Data Engineer will design, build, and maintain scalable data pipelines using Databricks and Python, while developing ETL processes to transform data from various sources. They will also ensure data quality and integrity, monitor ETL processes, and document engineering processes.