Data engineer (Azure) at Flintex Consulting Pte Ltd
Singapore – Full Time


Start Date

Immediate

Expiry Date

26 Mar, 26

Salary

0.0

Posted On

26 Dec, 25

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Azure Data Engineering, Python, Pyspark, Big Data Development, Azure Synapse Analytics, Power BI, SQL, Data Warehouse, Data Marts, Data Ingestion, ETL Processing, Azure DevOps, DAX Queries, Data Integration, Metadata Management, Data Governance

Industry

Staffing and Recruiting

Description
Data Engineer (Azure) – Synapse, PySpark, Python, Data Warehouse, Power BI, Azure DevOps

Skills & Experience
• Bachelor's Degree in Computer Science or Engineering with 3-5 years of experience in Azure data engineering, Python, PySpark, or big data development
• Sound knowledge of Azure Synapse Analytics for pipeline setup and orchestration
• 1-2 years of experience in visualization design and development with Power BI, including knowledge of row-level security and access control
• Sound experience in SQL, data warehouses, data marts, and data ingestion with PySpark and Python
• Expertise in developing and maintaining ETL processing pipelines on cloud platforms such as AWS and Azure (Azure Synapse or Data Factory preferred)
• Team player with good interpersonal, communication, and problem-solving skills

Job Scope
• Design, review, and develop PySpark scripts; test and troubleshoot data pipelines and orchestration (a minimal illustrative sketch follows this description)
• Design and develop reports and dashboards in Power BI, set up access control with row-level security, and write DAX queries
• Establish connections to source data systems, including company internal systems (e.g. SAP, historians, data lake) as well as external systems such as Web APIs
• Manage the collected data in appropriate storage/database solutions (e.g. file systems, SQL servers, big data platforms such as Hadoop or HANA) as required by the specific project
• Design and develop data marts and the relevant data pipelines using PySpark, with data copy activities for batch ingestion
• Deploy pipeline artifacts from one environment to another using Azure DevOps
• Perform data integration, e.g. using database table joins or other mechanisms, at the level required by the project's analysis requirements

Good to have
• Experience building a data catalog with Purview, enabling effective metadata management, lineage tracking, and data discovery
• Ability to leverage Purview to ensure data governance, compliance, and efficient data exploration within Azure environments

Others
• Able to work independently on assignments according to the agreed schedule without much supervision
• Own assignments, take the initiative to resolve issues that hinder their completion, and proactively reach out for help/guidance whenever required
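Illustrative example (not part of the original posting): the following is a minimal sketch of the kind of PySpark batch-ingestion and data-integration work described in the job scope. The storage paths, table names, and column names are hypothetical assumptions chosen for illustration only. It reads raw extracts from a data lake, integrates them with a table join, and writes a curated data mart table for downstream Power BI reporting.

    # Minimal sketch of a batch-ingestion PySpark job for a data mart.
    # Paths, table names, and columns below are illustrative assumptions.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("sales_mart_batch_ingest").getOrCreate()

    # Read raw source extracts landed in the data lake (e.g. from SAP or a Web API).
    orders = spark.read.parquet("abfss://raw@datalake.dfs.core.windows.net/sap/orders/")
    customers = spark.read.parquet("abfss://raw@datalake.dfs.core.windows.net/sap/customers/")

    # Data integration via a table join, then aggregation to the mart's grain.
    sales_mart = (
        orders.join(customers, on="customer_id", how="inner")
              .groupBy("region", "order_month")
              .agg(F.sum("net_amount").alias("total_sales"),
                   F.countDistinct("order_id").alias("order_count"))
    )

    # Write the curated data mart table for downstream Power BI reporting.
    (sales_mart.write
        .mode("overwrite")
        .partitionBy("order_month")
        .parquet("abfss://curated@datalake.dfs.core.windows.net/marts/sales/"))

    spark.stop()

In practice a job like this would typically be scheduled and orchestrated from an Azure Synapse or Data Factory pipeline, with the script promoted between environments through Azure DevOps as described above.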
Responsibilities
The role involves designing, reviewing, and developing PySpark scripts, as well as testing and troubleshooting data pipelines. Additionally, the engineer will design and develop reports and dashboards in Power BI and manage data connections to various source systems.