Azure Data QA Tester (Strong Databricks/Pyspark) at Ccube
Indore, Madhya Pradesh, India – Full Time


Start Date

Immediate

Expiry Date

28 Jun 2026

Salary

200,000

Posted On

30 Mar 2026

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Azure, Data Testing, Databricks, Pyspark, Azure Data Factory, ETL, SQL, Python, Data Pipeline, Data Warehousing, Data Lake, SparkSql, T-SQL, Delta Lake, Agile, Data Quality

Industry

IT Services and IT Consulting

Description
Hello Azure Data Tester professional! Hope you are doing well! We are looking for an Azure Data QA Tester in Indore, MP. The JD is below for your review.

Job Title: Sr. Azure Data QA Tester (Databricks & PySpark)
Location: Indore, MP (Full Time)
Experience: 5–10 years

Must have:
- ETL/data testing with Azure Databricks and Azure Data Factory: 5+ years (required)
- Strong SQL plus Python/PySpark: 5+ years (required)
- Experienced in end-to-end QA testing of ETL data pipelines using various Azure cloud technologies, involving ingestion from varied data sources
- Familiar with data-warehousing targets and intermediate structures, ideally in environments using data lakes
- Well versed in Databricks, SQL Database, Azure Data Factory, and Azure Data Lake
- Hands-on experience with PySpark, SparkSQL, and T-SQL
- Able to review requirements and design specs and create appropriate test scenarios across the various layers/stages of data pipelines
- Able to write reusable PySpark and T-SQL scripts to validate data across layers
- Experience with Databricks Delta Lake is highly desirable
- Ideally has worked in a team using Agile methodology
- Appreciation of data, its quality, and its use for business benefit
- Very good verbal and written communication skills

Nice to have:
- Experience in the banking/finance domain

📩 Interested candidates can apply or share their profiles with us at rahul@ccube.com
Responsibilities
The role involves end-to-end QA testing of ETL data pipelines in the Azure cloud, covering ingestion from varied data sources through to data-warehousing targets. Responsibilities include creating test scenarios from requirements and design specs, and writing reusable PySpark and T-SQL scripts to validate data across pipeline layers.
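For illustration only (not part of the posting): a minimal sketch of the kind of reusable validation helper the role describes, assuming Spark-style DataFrames that expose a `.count()` method. Every name here is hypothetical.

```python
def validate_row_counts(source_df, target_df, layer):
    """Compare row counts between two pipeline layers (e.g. raw vs. curated).

    Duck-typed: works with any object exposing .count(), such as a
    Spark DataFrame. Names and signature are illustrative only.
    """
    src, tgt = source_df.count(), target_df.count()
    if src != tgt:
        print(f"[{layer}] row-count mismatch: source={src}, target={tgt}")
    return src == tgt
```

In a real Databricks test suite, the same helper would be called once per pipeline layer (ingestion, staging, curated), with mismatches logged or failed as assertions.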