Data Engineering Manager at Citco
Toronto, ON, Canada
Full Time


Start Date

Immediate

Expiry Date

03 Dec, 2025

Salary

Not specified

Posted On

03 Sep, 2025

Experience

5+ years

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Good communication skills

Industry

Information Technology/IT

Description

ABOUT CITCO

Citco is a global leader in fund services, corporate governance and related asset services with staff across 50 office locations worldwide. With more than $1.8 trillion in assets under administration, we deliver end-to-end solutions and exceptional service to meet our clients’ needs.
For more information about Citco, please visit www.citco.com

Responsibilities

As the Data Engineering Manager, you will be responsible for architecting, implementing, and optimizing end-to-end data solutions on Databricks while integrating with core AWS services. You will lead a technical team of data engineers, ensuring best practices in performance, security, and scalability. This role requires a deep, hands-on understanding of Databricks internals and a track record of delivering large-scale data platforms in a cloud environment.

  • Lead a team of data engineers in the architecture and maintenance of the Databricks Lakehouse platform, ensuring optimal platform performance and efficient data versioning using Delta Lake
  • Manage and optimize Databricks infrastructure including cluster lifecycle, cost optimization, and integration with AWS services (S3, Glue, Lambda)
  • Design and implement scalable ETL/ELT frameworks and data pipelines using Spark (Python/Scala), incorporating streaming capabilities where needed (a minimal sketch follows this list)
  • Drive technical excellence through advanced performance tuning of Spark jobs, cluster configurations, and I/O optimization for large-scale data processing
  • Implement robust security and governance frameworks using Unity Catalog, ensuring compliance with industry standards and internal policies
  • Lead and mentor data engineering teams, conduct code reviews, and champion Agile development practices while serving as technical liaison across departments
  • Establish and maintain comprehensive monitoring solutions for data pipeline reliability, including SLAs, KPIs, and alerting mechanisms
  • Configure and manage end-to-end CI/CD workflows using source control, automated testing, and automated deployment
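
To make the pipeline work above concrete, here is a minimal PySpark + Delta Lake sketch of the extract-transform-merge pattern a role like this involves. Every bucket, path, table, and column name (example-bucket, trades, trade_id, and so on) is a hypothetical placeholder for illustration, not anything specific to Citco.

```python
# Minimal PySpark + Delta Lake ETL sketch. All bucket names, paths, and
# column names are illustrative assumptions, not Citco specifics.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("trades-etl").getOrCreate()

# Extract: read raw files landed in S3 (hypothetical bucket/prefix).
raw = spark.read.option("header", "true").csv("s3://example-bucket/raw/trades/")

# Transform: enforce types, derive a partition date, deduplicate.
trades = (
    raw.withColumn("trade_amount", F.col("trade_amount").cast("double"))
       .withColumn("trade_date", F.to_date("trade_ts"))
       .dropDuplicates(["trade_id"])
)

# Load: MERGE into a Delta table so re-runs stay idempotent; Delta's
# transaction log provides the data versioning the first bullet refers to.
target = "s3://example-bucket/delta/trades"
if DeltaTable.isDeltaTable(spark, target):
    (DeltaTable.forPath(spark, target).alias("t")
        .merge(trades.alias("s"), "t.trade_id = s.trade_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())
else:
    trades.write.format("delta").partitionBy("trade_date").save(target)
```

The MERGE-based upsert is what keeps re-runs safe, which is one common meaning of "production-grade" in the requirements that follow.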
Requirements

  • Bachelor’s Degree in Engineering, Computer Science, or equivalent
  • 5+ years of hands-on experience with Databricks and Apache Spark, demonstrating expertise in building and maintaining production-grade data pipelines
  • Proven experience leading and mentoring data engineering teams in complex, fast-paced environments
  • Extensive experience with AWS cloud services (S3, EC2, Glue, EMR, Lambda, Step Functions)
  • Strong programming proficiency in Python (PySpark) or Scala, and advanced SQL skills for analytics and data modeling
  • Demonstrated expertise in infrastructure as code using Terraform or AWS CloudFormation for cloud resource management
  • Strong background in data warehousing concepts, dimensional modeling, and experience with relational database systems (e.g., Postgres, Redshift)
  • Proficiency with version control systems (Git) and CI/CD pipelines, including automated testing and deployment workflows (see the test sketch after this list)
  • Excellent communication and stakeholder management skills, with demonstrated ability to translate complex technical concepts into business terms
  • Demonstrated use of AI in the development lifecycle
  • Some travel to the US may be required
  • Knowledge of the financial industry is preferred
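
For the CI/CD and automated-testing requirement, a typical building block is a pytest unit test that exercises a transformation on a local SparkSession, so a CI runner needs no cluster. The function and data below are hypothetical stand-ins, sketched under the assumption that pipeline logic is factored into plain, testable functions.

```python
# Hypothetical pytest unit test for a pipeline transformation, runnable on a
# local SparkSession -- the shape of test a CI job would execute per commit.
import pytest
from pyspark.sql import SparkSession, functions as F

def dedupe_and_type(df):
    """Transformation under test: cast amounts, derive date, drop dup trades."""
    return (df.withColumn("trade_amount", F.col("trade_amount").cast("double"))
              .withColumn("trade_date", F.to_date("trade_ts"))
              .dropDuplicates(["trade_id"]))

@pytest.fixture(scope="module")
def spark():
    # Local two-core session; no Databricks cluster required in CI.
    return SparkSession.builder.master("local[2]").appName("tests").getOrCreate()

def test_duplicates_are_dropped(spark):
    rows = [("T1", "10.5", "2025-09-03"), ("T1", "10.5", "2025-09-03")]
    df = spark.createDataFrame(rows, ["trade_id", "trade_amount", "trade_ts"])
    out = dedupe_and_type(df)
    assert out.count() == 1
    assert dict(out.dtypes)["trade_amount"] == "double"
```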