Principal Engineer (Senior Data Engineer)
at TD Bank
Toronto, ON, Canada
| Start Date | Expiry Date | Salary | Posted On | Experience | Skills | Telecommute | Sponsor Visa |
|---|---|---|---|---|---|---|---|
| Immediate | 08 Feb, 2025 | USD 108,800 Annual | 11 Nov, 2024 | N/A | Computer Science, Pandas, Teams, NumPy, Orchestration, Nexus, Git, Big Data, Stash, ETL, Jenkins, Communication Skills, Pipeline Development, Python, Working Experience, Confluence, Azure | No | No |
JOB DESCRIPTION:
About This Role: We are looking for a senior technical lead to provide technical leadership and expertise within data management. In this role, you’ll architect and optimize data solutions that drive critical business decisions and provide thought leadership and insights on emerging data management solutions.
Key Responsibilities:
- Provide end-to-end leadership on key data initiatives to ensure their success within TD’s scaled Agile delivery model
- Lead design and code reviews across PODs to ensure that all activities support a secure and stable solution
- Lead a team of data engineers, providing technical guidance and mentoring to enhance their skills
- When needed, develop code to implement complex solution frameworks or to address issues and challenges with an implementation
- Define detailed system design for data integration processes or data marts/warehouse builds or extensions
- Work across PODs, roles and stakeholders to remove blockers and enable PODs to deliver optimally
- Lead the design and implementation of scalable ETL pipelines on Azure, ensuring data quality, reliability and performance (see the sketch after this list)
- Develop and manage data infrastructure in Azure, including Azure Data Factory, Databricks, Data Lake and SQL Data Warehouse
- Automate ETL processes and optimize data workflows to reduce processing time and operational costs
- Implement data governance best practices, including data lineage, metadata management and data quality controls to ensure compliance with industry regulations
- Manage and coordinate delivery efforts across PODs; support data warehouse and Big Data environments by guiding stakeholders on requirements and implementation approaches
- Review and approve business and system specification requirements, author system design specifications, create or review estimates, contribute to and review system deployment plans
- Work with architecture experts to create solutions that meet data management standards
- Lead triage activities and drive immediate and long-term resolutions
- Actively contribute to our knowledge management system
- Stay updated with the latest advancements in Azure services and data engineering best practices, and introduce new technologies and methodologies to the team
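For illustration, the sketch below shows the shape of one ETL step of the kind described above: reading raw files from a Data Lake landing zone, applying basic quality controls, and writing to a curated zone. It assumes PySpark on Azure Databricks; the storage account, container, and column names are hypothetical placeholders, not actual TD systems.

```python
# A minimal curated-zone ETL sketch, assuming PySpark on Azure Databricks.
# All paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate_transactions").getOrCreate()

# Read raw CSV files from a (hypothetical) Data Lake landing zone.
raw = spark.read.option("header", True).csv(
    "abfss://landing@examplelake.dfs.core.windows.net/transactions/"
)

# Basic quality controls before promotion: drop duplicate keys,
# reject rows missing the key, and normalize column types.
curated = (
    raw.dropDuplicates(["transaction_id"])
       .filter(F.col("transaction_id").isNotNull())
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .withColumn("posted_date", F.to_date("posted_date", "yyyy-MM-dd"))
)

# Write partitioned output to the curated zone. On Databricks the target
# would more typically be a Delta table; plain Parquet keeps the sketch
# dependency-free.
curated.write.mode("overwrite").partitionBy("posted_date").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/transactions/"
)
```

In practice, a step like this would be parameterized and scheduled from Azure Data Factory or Databricks Workflows rather than run ad hoc.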
QUALIFICATIONS:
- Degree, Postgraduate Degree, or Technical Certificate in Data Management or related discipline (e.g. Computer Science, Engineering), or equivalent practical experience
- A graduate degree is nice to have
- 7-10 years of relevant experience in data engineering with a focus on ETL pipeline development and at least 2 years working with Azure Cloud
- Extensive experience in data warehousing concepts and implementation
- Excellent executive-level communication skills with the ability to build positive relationships across teams
- Expertise in ETL and data movement, including relational and big data
- Experience working on large-scale enterprise Data Lake ingestion and curation projects
- Strong analytical and problem-solving skills; experience with production support, investigating issues and driving them to resolution
- Ability to analyze, organize and prioritize work while meeting multiple deadlines
- Ability to work cooperatively and independently as part of a Scaled Agile pod
- Act as a brand champion for our business area / function and TD, both internally and externally
- Strong expertise in one or more of the following (a short illustration follows this list):
- PySpark/Spark
- Python (pandas, NumPy, scikit-learn)
- Scala/Java
- Azure, including Azure Databricks, Azure Synapse/SQL Server, Azure Data Factory (ADF) and orchestration
- Full working experience with source-code versioning and promotion tools, e.g. Git, Jenkins, Nexus
- Experience with Agile concepts and the Atlassian stack (JIRA, Confluence, Stash, Git) and Jenkins/Nexus
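As a small illustration of the Python (pandas/NumPy) skills above, the snippet below runs a few row-level data-quality checks of the kind used to validate pipeline output. The frame, column names, and checks are invented for the example.

```python
# Toy data-quality checks in pandas/NumPy; the data and rules
# are invented for illustration.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "transaction_id": [1, 2, 2, 4],
    "amount": [100.0, np.nan, 250.5, -20.0],
})

# Count rule violations rather than failing fast, so a report can
# summarize everything wrong with a batch at once.
checks = {
    "duplicate_ids": int(df["transaction_id"].duplicated().sum()),
    "null_amounts": int(df["amount"].isna().sum()),
    "negative_amounts": int((df["amount"] < 0).sum()),
}
print(checks)  # {'duplicate_ids': 1, 'null_amounts': 1, 'negative_amounts': 1}
```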
WHO WE ARE:
TD is one of the world’s leading global financial institutions and is the fifth largest bank in North America by branches/stores. Every day, we deliver legendary customer experiences to over 27 million households and businesses in Canada, the United States and around the world. More than 95,000 TD colleagues bring their skills, talent, and creativity to the Bank, those we serve, and the economies we support. We are guided by our vision to Be the Better Bank and our purpose to enrich the lives of our customers, communities and colleagues.
TD is deeply committed to being a leader in customer experience; that is why we believe that all colleagues, no matter where they work, are customer facing. As we build our business and deliver on our strategy, we are innovating to enhance the customer experience and build capabilities to shape the future of banking. Whether you’ve got years of banking experience or are just starting your career in financial services, we can help you realize your potential. From regular leadership and development conversations to mentorship and training programs, we’re here to support you towards your goals. As an organization, we keep growing – and so will you.
REQUIREMENT SUMMARY
Experience: Min: N/A | Max: 5.0 year(s)
Industry: Information Technology/IT
Category: IT Software - Other
Functional area: Software Engineering
Qualification: Trade Certificate
Specialization: Data management or related discipline (e.g. Computer Science, Engineering)
Proficiency: Proficient
Vacancies: 1
Location: Toronto, ON, Canada