Senior Data Engineer - Databricks, PySpark

at Cognizant

Toronto, ON, Canada

Start Date: Immediate
Expiry Date: 02 Dec, 2024
Salary: Not Specified
Posted On: 07 Sep, 2024
Experience: 6 year(s) or above
Skills: Python, SQL
Telecommute: No
Sponsor Visa: No
Required Visa Status:
Citizen, GC (Green Card), US Citizen, Student Visa, H1B, CPT, OPT, H4 Spouse of H1B
Employment Type:
Full Time, Part Time, Permanent, Independent - 1099, Contract – W2, C2H Independent, C2H W2, Contract – Corp 2 Corp, Contract to Hire – Corp 2 Corp

Description:

At Cognizant, our global community sets us apart—an energetic, collaborative and inclusive workplace where everyone can thrive. And with projects at the forefront of innovation, you can build a varied, rewarding career and draw inspiration from dedicated colleagues and leaders. We are seeking someone who thrives in this setting and is inspired to craft meaningful solutions through true collaboration. Cognizant is right where you belong!

WE ARE COGNIZANT ARTIFICIAL INTELLIGENCE:

Digital technologies, including analytics and AI, give companies a once-in-a-generation opportunity to perform orders of magnitude better than ever before. To get there, however, clients need new business models built on a deep understanding of their customers and operations, analyzed from every angle. By applying artificial intelligence and data science to business decisions through enterprise data management solutions, we help leading companies prototype, refine, validate, and scale their most desirable products and delivery models to enterprise level within weeks.
Job Summary: We are seeking a highly skilled Senior Data Engineer with 6 to 8 years of experience in Python, PySpark, Databricks, SQL, and Databricks Workflows. This role involves developing and optimizing data workflows, ensuring data integrity, and contributing to the overall success of our data-driven projects.

EXPERIENCE: 7 TO 12 YEARS

Technical Skills Required: Python, PySpark, Databricks, SQL

Responsibilities:

WHAT YOU BRING TO THE ROLE:

  • Possess a minimum of 6 years of experience in Python, PySpark, Databricks, Databricks Workflows and SQL.
  • Exhibit proficiency in developing and optimizing data workflows.
  • Show capability in writing complex SQL queries for data extraction and manipulation.
  • Display experience in troubleshooting and resolving data processing issues.
  • Have a track record of ensuring data integrity and accuracy.
  • Exhibit strong documentation skills for technical specifications.

WHAT YOU’LL DO:

  • Develop and optimize data workflows using Databricks Workflows to ensure efficient data processing.
  • Develop data processing workflows and ETL processes using Python and PySpark (see the illustrative sketch after this list).
  • Implement and maintain data pipelines using PySpark to handle large-scale data sets.
  • Utilize Python to develop robust and scalable data solutions.
  • Write complex SQL queries in Databricks SQL to extract and manipulate data as needed.
  • Ensure data integrity and accuracy by implementing best practices in data validation and quality checks.
  • Conduct performance tuning and optimization of data workflows to enhance system efficiency.
  • Troubleshoot and resolve issues related to data processing and workflows promptly.
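For context, the kind of pipeline work described above might look roughly like the following minimal PySpark sketch. It is illustrative only: the table names (orders_raw, orders_daily_summary) and columns are hypothetical, and an actual pipeline on Databricks would typically read and write Delta tables and be scheduled through Databricks Workflows.

```python
# Minimal illustrative sketch of a PySpark ETL step with a data-quality check.
# Table and column names here are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

# Extract: read a raw source table.
raw = spark.table("orders_raw")

# Transform: basic cleansing and enrichment with PySpark expressions.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Data-quality check: fail fast if required columns contain nulls.
bad_rows = clean.filter(F.col("order_id").isNull() | F.col("amount").isNull()).count()
if bad_rows > 0:
    raise ValueError(f"Data quality check failed: {bad_rows} rows with null keys or amounts")

# Complex aggregations can also be expressed in SQL against a temp view.
clean.createOrReplaceTempView("orders_clean")
daily = spark.sql("""
    SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM orders_clean
    GROUP BY order_date
""")

# Load: write the result back as a managed table.
daily.write.mode("overwrite").saveAsTable("orders_daily_summary")
```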


REQUIREMENT SUMMARY

Experience: Min 6.0 - Max 12.0 year(s)

Industry: Information Technology/IT
Category: IT Software - DBA / Datawarehousing
Role: Software Engineering
Education: Graduate
Proficiency: Proficient
Openings: 1
Location: Toronto, ON, Canada