Sr. Data Engineer (Big Data & Analytics Engineering) at Mastercard
Pune, Maharashtra, India
Full Time


Start Date

Immediate

Expiry Date

29 Jul, 26

Salary

0.0

Posted On

30 Apr, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Engineering, PySpark, Python, SQL, Hadoop, ETL/ELT, Data Modeling, Apache Airflow, Cloud Computing, Data Governance, CI/CD, Distributed Computing, Data Quality, Performance Tuning, GenAI, Big Data

Industry

IT Services and IT Consulting

Description
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payment choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
Sr. Data Engineer (Big Data & Analytics Engineering)
________________________________________
About Mastercard
Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere, by making transactions safe, simple, smart, and accessible. Through secure data, trusted networks, partnerships, and innovation, we enable individuals, financial institutions, governments, and businesses to realise their greatest potential. Our culture is defined by our Decency Quotient (DQ), guiding how we work, collaborate, and create impact, inside and outside our company. With a presence across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all.
________________________________________
About the Role
The Sr. Data Engineer will design, build, and operate scalable data pipelines and curated datasets that power analytics products, reporting, and advanced modeling. Working closely with the Lead and cross-functional partners (Product, Data Science, and Platform teams), this role focuses on reliability, performance, data quality, and governance across batch and (where applicable) streaming workloads.
Key Responsibilities
• Build and maintain robust ETL/ELT pipelines for ingestion, transformation, and aggregation of large-scale datasets on Hadoop and enterprise data platforms.
• Develop high-performance data processing jobs using PySpark/Spark, Python, and SQL (including engines such as Impala where applicable).
• Partner with Product and Analytics stakeholders to translate requirements into reusable, governed data models (facts/dimensions, curated layers, and semantic-ready datasets).
• Implement and automate data quality checks, reconciliation, lineage documentation, and monitoring to ensure trust in downstream analytics and AI use cases.
• Optimize pipeline performance and cost through partitioning strategies, columnar file formats (Parquet, ORC, Delta), compute tuning, caching, and efficient query patterns.
• Contribute to CI/CD for data workflows (testing, code reviews, deployment automation), promoting engineering best practices and maintainable codebases.
• Support data governance, privacy, and security requirements (PII handling, access controls, auditability) in collaboration with platform and risk partners.
• Collaborate with data scientists to publish analysis-ready and ML-ready datasets, including feature generation and repeatable data preparation processes.
• Troubleshoot production issues, participate in on-call/operational rotations, and drive root-cause fixes to improve reliability.
• Communicate data platform capabilities, limitations, and trade-offs clearly to technical and non-technical stakeholders.
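The data quality and reconciliation work described above can be illustrated with a minimal sketch in plain Python. This is not Mastercard's tooling; the function and field names (e.g. `txn_id`) are illustrative assumptions, and in practice such checks would typically run inside a PySpark job or an orchestrated validation step.

```python
# Minimal sketch of an automated data-quality check: reconcile a source
# extract against a loaded target by row count and key coverage before
# publishing the dataset downstream. All names here are illustrative.

def reconcile(source_rows, target_rows, key="txn_id"):
    """Return simple quality metrics comparing a source and target load."""
    src_keys = {row[key] for row in source_rows}
    tgt_keys = {row[key] for row in target_rows}
    return {
        "source_count": len(source_rows),
        "target_count": len(target_rows),
        "missing_in_target": sorted(src_keys - tgt_keys),
        "unexpected_in_target": sorted(tgt_keys - src_keys),
        "passed": src_keys == tgt_keys,
    }
```

A pipeline would gate publication on `passed` and alert (or quarantine the load) when keys go missing, giving downstream consumers a documented trust signal.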
All About You

Technical Skills & Experience
• Strong hands-on experience in data engineering, building production-grade pipelines on big data platforms, including the Hadoop ecosystem (HDFS, Hive, Impala, YARN, Oozie) and/or cloud data platforms.
• Proficiency in PySpark and Python, and strong SQL skills across distributed and relational data stores.
• Experience with orchestration/integration tools such as Apache Airflow, Apache NiFi, Azure Data Factory, Pentaho, or Talend.
• Solid understanding of data modeling, incremental processing patterns (CDC, SCD Type 1/2), and building curated datasets for analytics and reporting.
• Experience with cloud services (Azure/AWS/GCP) for data lakes, compute, and storage is preferred.
• Proficiency in columnar and open table formats: Parquet, ORC, Delta Lake, Apache Iceberg, or Apache Hudi.
• Strong knowledge of distributed computing patterns: partitioning, bucketing, broadcast joins, shuffle optimization.
• Working knowledge of DevOps/CI-CD practices: version control (Git), automated testing, release pipelines, and observability.
• Strong problem-solving skills with the ability to debug complex distributed data issues independently, and clear written and verbal communication with both technical and non-technical stakeholders.
• Bachelor's degree in Computer Science, Engineering, or equivalent practical experience.
• 5+ years of relevant experience in data engineering or big data analytics engineering (flexible based on depth of expertise).
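One of the incremental processing patterns named above, SCD Type 2, can be sketched in a few lines of plain Python: when a tracked attribute changes, the current dimension row is closed out and a new versioned row is appended instead of overwriting history. The field names (`customer_id`, `segment`, `valid_from`) and the fixed `as_of` date are assumptions for illustration; a real implementation would usually be a MERGE in Spark SQL or Delta Lake.

```python
from datetime import date

# Illustrative SCD Type 2 sketch: preserve attribute history by closing
# the current row and appending a new version. Names are assumptions.

def apply_scd2(dim_rows, incoming, key="customer_id", attr="segment",
               as_of=date(2026, 1, 1)):
    """Return a new dimension table with Type 2 history applied."""
    out = [dict(row) for row in dim_rows]          # don't mutate input
    current = {row[key]: row for row in out if row["is_current"]}
    for rec in incoming:
        cur = current.get(rec[key])
        if cur is not None and cur[attr] == rec[attr]:
            continue                               # unchanged: no new version
        if cur is not None:
            cur["valid_to"] = as_of                # close the old version
            cur["is_current"] = False
        out.append({key: rec[key], attr: rec[attr],
                    "valid_from": as_of, "valid_to": None,
                    "is_current": True})
    return out
```

The key design choice is that history is append-only: point-in-time queries filter on `valid_from`/`valid_to`, while current-state queries filter on `is_current`.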
GenAI / LLM Data Enablement (Preferred)
• Experience preparing curated, governed datasets (including semi-structured/unstructured data) for AI/GenAI consumption, with attention to privacy, quality, and reproducibility.
________________________________________
Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
• Abide by Mastercard's security policies and practices;
• Ensure the confidentiality and integrity of the information being accessed;
• Report any suspected information security violation or breach; and
• Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.

Everyone wants easier ways to pay; we invent them. Checkout lines are slow; we speed them along. Merchants want more sales; we give them data and insights. People need financial access; we connect them. Corporate purchasing is complicated; we make it simple. Commuters are busy; we speed them on their way. Governments need greater efficiencies; we help create them. Small businesses are virtual; we give them access to a world of buyers. Retailers want to fight fraud; we provide the tools.
Responsibilities
The Sr. Data Engineer will design, build, and maintain scalable data pipelines and curated datasets to support analytics, reporting, and advanced modeling. They will also collaborate with cross-functional teams to ensure data quality, governance, and performance optimization across distributed platforms.