Data Engineer
at Procom
Edmonton, AB, Canada
Start Date | Expiry Date | Salary | Posted On | Experience | Skills | Telecommute | Sponsor Visa |
---|---|---|---|---|---|---|---|
Immediate | 05 Jul, 2024 | Not Specified | 05 Apr, 2024 | N/A | Good communication skills | No | No |
Required Visa Status:
- Citizen
- US Citizen
- GC (Green Card)
- Student Visa
- H1B
- CPT
- OPT
- H4 (Spouse of H1B)
Employment Type:
- Full Time
- Part Time
- Permanent
- Independent - 1099
- Contract – W2
- C2H Independent
- C2H W2
- Contract – Corp 2 Corp
- Contract to Hire – Corp 2 Corp
Description:
On behalf of our client, Procom is seeking a Data Engineer for a 12-month contract in Edmonton, Alberta.
Data Engineer Responsibilities
- Designing and implementing data ingestion pipelines to transfer Finance, Student, Staff, and Research data from PeopleSoft into a RAW zone within the data lake in Azure. This involves understanding the data structure of source data and using ADF for efficient data extraction.
- Developing processes aligned with our Medallion data lakehouse layers (bronze, silver, gold): transforming raw data into cleansed, conformed data, and providing aggregated, enriched data suitable for analytics and BI reporting.
- Creating and maintaining data products and datasets that are ready for consumption in data analysis and reporting for specific Business needs using ADF or Azure Synapse pipelines.
- Populating semantic data models that provide meaningful business context, making the data understandable to end-users working in BI tools like Tableau.
- Ensuring efficient data processing, especially considering the costs of a scalable and flexible cloud environment such as Azure.
- Working closely with data modelers, data analysts, and data scientists to understand their data needs and requirements. Ensuring that the data pipeline aligns with these needs and the overall data strategy of the organization.
- Adherence to the University’s data pipeline architecture and standards, ensuring alignment with overall data governance and compliance.
- Streamlining data operations into an efficient and accessible data environment. Providing reports and insights on pipeline performance.
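As context for the Medallion responsibilities above, here is a minimal sketch of the bronze → silver → gold flow in plain Python. The field names (`dept`, `amount`) and sample records are illustrative assumptions, not taken from an actual PeopleSoft extract; in practice this logic would run as ADF or Azure Synapse pipeline activities rather than local code.

```python
from collections import defaultdict

# Bronze layer: raw records as ingested, possibly dirty or duplicated.
bronze = [
    {"id": 1, "dept": " Finance ", "amount": "100.50"},
    {"id": 2, "dept": "Research", "amount": "200.00"},
    {"id": 2, "dept": "Research", "amount": "200.00"},  # duplicate row
    {"id": 3, "dept": "Finance", "amount": "50.25"},
]

def to_silver(rows):
    """Cleanse and conform: trim strings, cast types, drop duplicate ids."""
    seen, silver = set(), []
    for r in rows:
        if r["id"] in seen:
            continue
        seen.add(r["id"])
        silver.append({
            "id": r["id"],
            "dept": r["dept"].strip(),
            "amount": float(r["amount"]),
        })
    return silver

def to_gold(rows):
    """Aggregate and enrich for BI reporting: total amount per department."""
    totals = defaultdict(float)
    for r in rows:
        totals[r["dept"]] += r["amount"]
    return dict(totals)

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'Finance': 150.75, 'Research': 200.0}
```

The same layering applies at scale: bronze preserves the source data verbatim for auditability, silver enforces schema and quality rules, and gold serves aggregated, business-ready datasets to semantic models and BI tools.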
Data Engineer Mandatory Skills
- Demonstrated proficiency with Azure data services, including Azure Synapse Analytics, Azure Data Lake, Azure Dedicated SQL Pool, ADF, data flows, and Azure Synapse pipelines.
- Strong foundation and specialized knowledge in data integration and the development of data pipelines.
- Proven experience with Azure Data Factory (ADF) and Azure Synapse Analytics for data ingestion, transformation, and processing.
- Experience with Azure data storage solutions, including Azure Data Lake, SQL-based databases, and Lakehouse.
- Experience in developing data transformation processes in line with the Medallion architecture (bronze, silver, gold layers).
- Knowledge of optimizing data processing for cost-effectiveness in Azure’s scalable and flexible cloud environment.
- Strong problem-solving skills and the ability to work on complex data platforms.
- Excellent communication skills, capable of engaging with both technical and non-technical stakeholders.
- Certifications in Azure Data Engineering or Azure Architecture are highly beneficial.
- Attention to detail and a focus on accuracy.
- Self-motivated with the ability to prioritize and manage workloads.
- Team player with a collaborative approach.
Data Engineer Assignment Length
12 months, hybrid, with potential for extension
REQUIREMENT SUMMARY
- Experience: Min N/A, Max 5.0 year(s)
- Information Technology/IT
- IT Software - DBA / Datawarehousing
- Software Engineering
- Graduate
- Proficient
- 1
- Edmonton, AB, Canada