Software Engineer II - Python, Databricks, AWS, Spark, IDMC at JPMC
Bengaluru, Karnataka, India - Full Time


Start Date

Immediate

Expiry Date

18 Jun, 26

Salary

Not disclosed

Posted On

20 Mar, 26

Experience

2+ years

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Python, Databricks, AWS, Spark, IDMC, Data Engineering, ETL, Delta Lake, S3, Lambda, EKS, Data Quality, CI/CD, PySpark, Data Modeling, Aurora DB

Industry

Financial Services

Description
We have an exciting opportunity for you to advance your data engineering career and make a meaningful impact by joining our innovative team.

Job summary

As a Data Engineer II at JPMorgan Chase within the Corporate Data and Analytics Service team, you design and deliver trusted, scalable data solutions using modern technologies. You collaborate with us to drive critical technology initiatives that support business objectives and foster a culture of growth and inclusion.

Job responsibilities

* Design, develop, and maintain scalable data pipelines using Python and Spark
* Build and optimize ETL workflows in Databricks, leveraging Delta Lake features (a minimal illustrative sketch follows the qualifications below)
* Integrate and manage data across AWS services such as S3, Lambda, and EKS
* Collaborate with data analysts and business stakeholders to deliver solutions
* Ensure data quality, integrity, and security across engineering processes
* Monitor, troubleshoot, and optimize pipeline performance and resource usage
* Document data flows, architecture, and processes for internal knowledge sharing

Required qualifications, capabilities, and skills

* Formal training or certification in software engineering concepts and 2+ years of applied experience
* Proficiency in Python for data processing and automation
* Strong experience with Apache Spark (PySpark) for distributed data processing
* Hands-on experience with the Databricks platform and Delta Lake
* Solid understanding of AWS cloud services, including S3, Lambda, EKS, and Aurora DB
* Experience with ETL design, data modeling, and data warehousing concepts
* Familiarity with CI/CD tools and practices for data engineering

Preferred qualifications, capabilities, and skills

* Familiarity with modern front-end technologies
* Exposure to cloud technologies
* Experience with orchestration tools such as Airflow
* Experience with REST APIs and data integration
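For illustration only, here is a minimal sketch of the kind of PySpark and Delta Lake pipeline these responsibilities describe. The S3 path, column names, and table name are hypothetical assumptions, and the sketch presumes a Databricks runtime (or any Spark session with Delta Lake configured); it is not JPMC's actual code.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("example-etl").getOrCreate()

    # Extract: read raw JSON events from a hypothetical S3 landing zone
    raw = spark.read.json("s3://example-bucket/landing/events/")

    # Transform: apply a basic data-quality filter and derive a partition column
    clean = (
        raw.filter(F.col("event_id").isNotNull())
           .withColumn("event_date", F.to_date("event_timestamp"))
    )

    # Load: append to a Delta table, partitioned by date, for downstream consumers
    (
        clean.write.format("delta")
             .mode("append")
             .partitionBy("event_date")
             .saveAsTable("analytics.events_clean")
    )

Writing through saveAsTable registers the Delta table in the metastore, so analysts and business stakeholders can query the result directly with SQL.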
Responsibilities
The role involves designing, developing, and maintaining scalable data pipelines using Python and Spark, as well as building and optimizing ETL workflows on the Databricks platform with Delta Lake features. Responsibilities also include integrating data across AWS services and ensuring data quality and security throughout the engineering process.
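To make the data-quality responsibility concrete, here is a minimal, hypothetical PySpark quality gate. The table name (reused from the sketch above) and the two rules are illustrative assumptions, not the team's actual checks.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("example-dq-gate").getOrCreate()

    # Hypothetical table name, reused from the pipeline sketch above
    df = spark.table("analytics.events_clean")

    # Rule 1: the primary key must never be null
    null_keys = df.filter(F.col("event_id").isNull()).count()

    # Rule 2: the primary key must be unique
    duplicates = df.count() - df.select("event_id").distinct().count()

    # Fail loudly before downstream consumers see bad data
    if null_keys or duplicates:
        raise ValueError(
            f"Data-quality gate failed: {null_keys} null keys, "
            f"{duplicates} duplicate keys"
        )

A check like this typically runs as a pipeline step after each load, so a violation stops the job rather than propagating bad records downstream.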