Data Engineer

at Kinross Gold Corporation

Toronto, ON, Canada

Start Date: Immediate
Expiry Date: 09 Aug, 2024
Salary: Not Specified
Posted On: 09 May, 2024
Experience: 4 year(s) or above
Skills: Jenkins, Virtual Machines, Active Directory, Computer Science, Resource Management, Languages, Information Management, Data Warehousing, Design, Data Transformation, Data Engineering, Training, Data Processing, Technology, Scala, Statistics, Python, Git, Subnetting
Telecommute: No
Sponsor Visa: No

Description:

Start Date: ASAP
Hybrid Work Environment: 3 days in office, 2 days remote with flexible hours
Dress Code: Business Casual
Location: Downtown Toronto, outside of Union Station (TTC & GO accessible)

WHO WE ARE

Kinross is a Canadian-based global senior gold mining company with operations and projects in the United States, Brazil, Mauritania, Chile and Canada. Our focus on delivering value is based on our four core values of Putting People First, Outstanding Corporate Citizenship, High Performance Culture, and Rigorous Financial Discipline. Kinross maintains listings on the Toronto Stock Exchange (symbol: K) and the New York Stock Exchange (symbol: KGC).
Mining responsibly is a priority for Kinross, and we foster a culture that makes responsible mining and operational success inseparable. In 2021, Kinross committed to a greenhouse gas reduction action plan as part of its Climate Change strategy, reached approximately 1 million beneficiaries through its community programs, and recycled 80% of the water used at our sites. We also achieved record high levels of local employment, with 99% of total workforce from within host countries, and advanced inclusion and diversity targets, including instituting a Global Inclusion and Diversity Leadership Council.
Eager to know more about us? Visit the Kinross Gold Corporation website.

MINIMUM QUALIFICATIONS AND EXPERIENCE

  • A bachelor’s degree in computer science, statistics, information management, or a related field; or an equivalent combination of training and experience.
  • At least 4 years of post-degree experience with the design, implementation, and operationalization of large-scale data and analytics solutions.
  • A strong background in technology with experience in data engineering, data warehousing and data product development.
  • Strong understanding of reporting and analytics tools & techniques.
  • Strong knowledge of data lifecycle management concepts.
  • Excellent documentation skills, including workflow documentation.
  • Ability to adapt to a fast-paced, dynamic work environment.

REQUIRED TECHNICAL KNOWLEDGE

  • Expertise in designing and implementing ETL/ELT processes using Azure Data Factory and Databricks, including data extraction from various sources, data transformation, and loading data into target systems such as data lakes or warehouses (illustrated by the PySpark sketch after this list).
  • Solid understanding of the Databricks platform, including its core components, such as Databricks Runtime, Databricks Workspace, Catalog, and Databricks CLI. Comfortable navigating the Databricks environment and performing common tasks such as creating clusters, notebooks, and jobs.
  • Experience with Spark DataFrame API, Spark SQL, and Spark MLlib for data processing, querying, and machine learning tasks. Able to write efficient Spark code to process large volumes of data in distributed environments.
  • Proficient in developing and executing notebooks using languages like Python, Scala, or SQL. Experience with notebook features such as interactive visualization, Markdown cells, and magic commands.
  • Familiarity with the integration between Databricks and other Azure services, such as Azure Blob Storage for data storage, Azure Data Lake Storage (ADLS) for data lakes, Azure SQL Database or Azure Synapse Analytics for data warehousing, Azure Key Vault for secrets management, and Azure Event Hubs for event streaming (see the Key Vault sketch after this list).
  • Experience with source code management tooling such as Git or Azure DevOps, as well as a strong understanding of deployment pipelines using services such as GitHub Actions, Azure DevOps, or Jenkins.
  • Understanding of the basics of Azure services, including Azure Virtual Machines, Azure Storage (Blob Storage, Data Lake Storage), Azure Networking (Virtual Network, Subnetting), Azure Identity and Access Management (Azure Active Directory, Role-Based Access Control), and Azure Resource Management.
  • Solid foundation in SQL, with the ability to write and optimize queries to manipulate and analyze data efficiently (see the Spark SQL sketch after this list).
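
As a rough illustration of the ETL/ELT and DataFrame API expectations above, here is a minimal PySpark sketch of the kind of notebook code this role would involve. The storage path, column names, and table name are hypothetical placeholders rather than details from this posting, and the spark session object is assumed to be the one a Databricks notebook provides.

    from pyspark.sql import functions as F

    # Extract: read raw CSV files landed in a (hypothetical) ADLS container.
    raw = (spark.read
           .option("header", "true")
           .option("inferSchema", "true")
           .csv("abfss://raw@examplestorageacct.dfs.core.windows.net/sensors/"))

    # Transform: fix types, drop incomplete rows, and aggregate to a daily grain.
    daily = (raw
             .withColumn("reading_ts", F.to_timestamp("reading_ts"))
             .dropna(subset=["site_id", "reading_ts", "value"])
             .groupBy("site_id", F.to_date("reading_ts").alias("reading_date"))
             .agg(F.avg("value").alias("avg_value"),
                  F.count("*").alias("reading_count")))

    # Load: write the curated result as a Delta table for downstream reporting.
    (daily.write
          .format("delta")
          .mode("overwrite")
          .saveAsTable("analytics.daily_sensor_readings"))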
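
For the Azure integration point, a common Databricks pattern is to pull a storage credential from a Key Vault-backed secret scope with dbutils.secrets and use it to configure access to ADLS Gen2. The scope, key, and account names below are assumptions for the sketch; a real project might instead use a service principal or Unity Catalog-managed credentials.

    # Retrieve a storage account key from a Key Vault-backed secret scope
    # (scope and key names are placeholders).
    storage_key = dbutils.secrets.get(scope="kv-data-platform", key="adls-account-key")

    # Point Spark at the (hypothetical) ADLS Gen2 account using that key.
    spark.conf.set(
        "fs.azure.account.key.examplestorageacct.dfs.core.windows.net",
        storage_key,
    )

    # Any abfss:// path on that account can now be read directly.
    invoices = spark.read.parquet(
        "abfss://curated@examplestorageacct.dfs.core.windows.net/finance/invoices/"
    )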
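
For the SQL expectation, the curated DataFrame from the first sketch can be exposed as a temporary view and queried with Spark SQL from a notebook cell; the view and column names are again placeholders.

    # Register the curated DataFrame as a temporary view.
    daily.createOrReplaceTempView("daily_sensor_readings_v")

    # Query it with Spark SQL; filtering and aggregating in the query keeps
    # only the needed rows and columns moving through the cluster.
    top_sites = spark.sql("""
        SELECT site_id,
               AVG(avg_value) AS period_avg
        FROM daily_sensor_readings_v
        WHERE reading_date >= '2024-01-01'
        GROUP BY site_id
        ORDER BY period_avg DESC
        LIMIT 10
    """)

    top_sites.show()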

Responsibilities:

PURPOSE OF ROLE

Reporting to the Director of Development, Integration and Analytics, the incumbent will be a key member of the IT team, focusing primarily on data engineering, but also assisting with data architecture and management processes.
This person plays a critical role in enabling the organization to leverage data effectively for decision-making and strategic initiatives by ensuring the availability, reliability, and quality of data. In-depth knowledge of data processing, data modelling, and data products for integration and visualization is required.


REQUIREMENT SUMMARY

Experience: Min 4.0 year(s), Max 9.0 year(s)

Industry: Information Technology/IT

Category: IT Software - DBA / Datawarehousing

Role: Software Engineering

Education: Graduate

Specialization: Computer science, statistics, information management, or a related field; or an equivalent combination of training and experience

Skill Level: Proficient

Openings: 1

Location: Toronto, ON, Canada