Data Engineering Consultant at Accenture
Montréal, QC, Canada
Full Time


Start Date

Immediate

Expiry Date

02 Dec, 25

Salary

Not specified

Posted On

03 Sep, 25

Experience

3 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Processing, Azure, Python, Data Engineering, Kafka, Communication Skills, SQL, Performance Tuning, Spark, Architecture, Apache Spark

Industry

Information Technology/IT

Description

WE ARE:

Accenture’s Data & AI practice, the people who love using data to tell a story. We’re also the world’s largest team of data scientists, data engineers, and experts in machine learning and AI. A great day for us? Solving big problems using the latest tech, serious brain power, and deep knowledge of just about every industry. We believe a mix of data, analytics, automation, and responsible AI can do almost anything—spark digital metamorphoses, widen the range of what humans can do, and breathe life into smart products and services. Want to join our crew of sharp analytical minds?

PREFERRED QUALIFICATIONS:

  • Familiarity with orchestration tools.
  • Understanding of data warehousing concepts and architecture.
  • Strong communication skills and the ability to work cross-functionally.

How To Apply:

In case you would like to apply to this job directly from the source, please click here.

Responsibilities

ABOUT THE ROLE:

We are looking for an experienced and motivated Data Engineer to help design, build, and optimize scalable data systems. This role focuses on developing robust data pipelines, enabling real-time data ingestion and processing, and supporting data-driven decision-making. You’ll work with cutting-edge technologies including Apache Spark, Databricks, Kafka, and cloud platforms like GCP and Azure.
As a subject matter expert, you will lead by example: guiding technical decisions, mentoring team members, and collaborating with cross-functional teams to deliver high-impact solutions. Your contributions will help shape data architecture and ensure the availability, reliability, and performance of data infrastructure.

KEY RESPONSIBILITIES:

  • Design and develop end-to-end data pipelines, including real-time streaming and batch processing.
  • Build scalable and efficient solutions using Apache Spark, Databricks, and Kafka.
  • Implement ETL/ELT processes to collect, transform, and load data across diverse systems.
  • Ensure data quality, consistency, and integrity through validation frameworks and monitoring tools.
  • Optimize pipeline performance and scalability across cloud platforms (GCP and Azure).
  • Collaborate with engineering, analytics, and business teams to support data needs.
  • Lead technical discussions, contribute to architectural decisions, and mentor junior engineers.
  • Stay current with emerging tools, frameworks, and best practices in the data engineering space.