Data Engineer (PySpark/Informatica BDM) at GSSTech Group
Bengaluru, Karnataka, India
Full Time


Start Date

Immediate

Expiry Date

29 Jun, 26

Salary

0.0

Posted On

31 Mar, 26

Experience

5 years or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

PySpark, Informatica BDM, ETL, Data Profiling, Data Mapping, Impact Analysis, IFRS9, Data Modeling, SQL, Data Quality, Data Governance, Big Data Management, Distributed Data Processing, Unit Testing, Performance Optimization, AI-Assisted Tools

Industry

IT Services and IT Consulting

Description
We are seeking a skilled Data Engineer to join the Group Risk team, responsible for building and managing robust data pipelines to support IFRS9 reporting. The role involves close collaboration with business stakeholders to understand data requirements, perform impact analysis, and deliver high-quality data solutions using modern data engineering technologies such as PySpark and Informatica BDM.

Key Responsibilities

- Collaborate with the Group Risk team to gather and understand business and data requirements
- Perform impact assessment and technical data mapping for new and existing data sources
- Conduct data profiling to ensure data quality, consistency, and completeness
- Design, develop, and maintain ETL pipelines using PySpark and Informatica BDM
- Build scalable data transformation workflows aligned with IFRS9 data models
- Ensure accurate data extraction, transformation, and loading (ETL) into reporting systems
- Participate in unit testing, validation, and deployment of data pipelines
- Optimize data processing performance and troubleshoot production issues
- Adopt modern tools (e.g., AI-assisted tools like Claude) to improve productivity, reduce errors, and enhance development workflows
- Maintain proper documentation for data flows, mappings, and processes

Required Skills & Qualifications

- Strong experience in PySpark for large-scale data processing
- Hands-on experience with Informatica BDM (Big Data Management)
- Solid understanding of ETL concepts, data warehousing, and data modeling
- Experience with data profiling, data mapping, and impact analysis
- Knowledge of IFRS9 or the Risk/Banking domain is highly preferred
- Familiarity with distributed data processing frameworks and big data ecosystems
- Strong SQL skills and experience working with relational databases
- Good understanding of data quality and governance principles

Preferred Skills

- Exposure to cloud platforms (AWS / Azure / GCP)
- Experience with AI-assisted development tools (e.g., Claude, GitHub Copilot)
- Knowledge of CI/CD pipelines in data engineering workflows

Soft Skills

- Strong analytical and problem-solving skills
- Excellent communication and stakeholder management abilities
- Ability to work in a fast-paced, collaborative environment
Responsibilities
The Data Engineer will be responsible for building and managing robust data pipelines to support IFRS9 reporting for the Group Risk team. Key tasks include collaborating with stakeholders, performing impact analysis, and designing, developing, and maintaining ETL pipelines using PySpark and Informatica BDM.