Data Engineer (PySpark) at GSSTech Group
Bengaluru, Karnataka, India
Full Time


Start Date

Immediate

Expiry Date

29 Jun, 26

Salary

0.0

Posted On

31 Mar, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

PySpark, Big Data Processing, Informatica BDM Development, ETL/ELT, Data Warehousing, Data Mapping, Data Profiling, Impact Analysis, SQL, Data Modeling, Performance Tuning, Agile, Unit Testing, Deployment, Production Support, AI Tools

Industry

IT Services and IT Consulting

Description
We are looking for a highly skilled Data Engineer to join the Data Engineering Chapter supporting the Group Operations Team. The ideal candidate will work closely with business and technical stakeholders to understand data requirements, perform impact analysis, and build scalable data pipelines using modern technologies such as PySpark.

Key Responsibilities

- Collaborate with the Group Operations Team to gather and analyze data requirements
- Perform impact assessment, technical data mapping, and data profiling
- Design and develop data extraction, transformation, and loading (ETL) pipelines
- Build and optimize data pipelines using PySpark as part of the bank's modern tech stack
- Develop data solutions aligned with AECB application data models
- Ensure data quality, integrity, and consistency across systems
- Participate in unit testing, deployment, and production support
- Leverage modern AI tools (e.g., Claude) to improve development efficiency and reduce operational errors
- Work in an agile environment and contribute to continuous improvement initiatives

Required Skills & Qualifications

- Strong hands-on experience with PySpark and big data processing
- Expertise in Informatica BDM development
- Solid understanding of ETL/ELT concepts and data warehousing
- Experience in data mapping, profiling, and impact analysis
- Knowledge of SQL, data modeling, and performance tuning
- Familiarity with banking/financial data systems is a plus
- Exposure to AI-assisted development tools is an added advantage
- Strong problem-solving and analytical skills

Preferred Qualifications

- Experience in the banking or financial services domain
- Familiarity with AECB reporting/data standards
- Experience with cloud platforms (AWS/Azure/GCP) is a plus
Responsibilities
The Data Engineer will collaborate with stakeholders to gather data requirements, perform impact analysis, and design and develop scalable ETL pipelines, primarily using PySpark within the bank's modern technology stack. Key duties include building and optimizing data solutions aligned with AECB application data models while ensuring data quality and consistency.