Senior Data Engineer at TextLayer
Ottawa, ON, Canada - Full Time


Start Date

Immediate

Expiry Date

09 Dec, 25

Salary

200,000

Posted On

10 Sep, 25

Experience

3+ years

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Databases, Data Modeling, Kubernetes, Tuning, Docker

Industry

Information Technology/IT

Description

ABOUT TEXTLAYER

TextLayer helps enterprises and funded startups build, deploy, and scale advanced AI systems without rewriting their infrastructure.
We provide engineering teams with a modular, stable foundation, so they can adopt AI without betting on the wrong tech. Our flagship stack, TextLayer Core, is maintainable, tailored to the environment, and deployed with Terraform and standardized APIs.
We’re a team on a mission to close the implementation gap that over 85% of enterprises experience when adding AI to their operations and products. We’re looking for sharp, curious people who want to meaningfully shape how we build, operate, and deliver.
If you’re excited to work on foundational AI infrastructure, ship production-grade systems quickly, and help define what agentic software looks like in practice, we’d love to meet you.
The Role
The Senior Data Engineer plays a critical role on our team, working across the frontend, the backend architecture, and the orchestration layer of our data systems. You’ll build production-grade data pipelines, develop sophisticated data processing workflows, and create robust integrations that power our customer-facing applications with reliable, scalable data infrastructure.

REQUIRED QUALIFICATIONS

  • 3+ years of experience as a full-stack engineer with strong Python expertise
  • Hands-on experience building data pipelines and processing architectures in production
  • Proficiency with data orchestration frameworks and ETL/ELT tools (a minimal sketch follows this list)
  • Experience with databases, data modeling, and search implementations
  • Strong knowledge of data processing optimization and performance tuning
  • Experience with cloud platforms (AWS/GCP/Azure) for data workload deployment
  • Proficiency with Docker and Kubernetes for containerizing and orchestrating applications
  • Comfortable with modern data tooling and monitoring systems
  • Track record of building end-to-end data systems at scale
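
For a concrete flavor of the pipeline work described above, here is a minimal, illustrative ETL step in Python. The file paths and record schema are hypothetical, not part of TextLayer's stack; a production version would run under an orchestrator and read from managed storage rather than local files.

```python
import json
from pathlib import Path

# Hypothetical paths; a real pipeline would pull from managed storage
# (e.g. S3) and be scheduled by an orchestrator such as Airflow or Dagster.
SOURCE = Path("events.jsonl")
TARGET = Path("events_clean.jsonl")

def extract(path: Path):
    """Yield raw records from a JSON-lines file, skipping blank lines."""
    with path.open() as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

def transform(record: dict) -> dict | None:
    """Normalize one record; return None to drop invalid rows."""
    if "user_id" not in record or "event" not in record:
        return None
    return {
        "user_id": str(record["user_id"]),
        "event": record["event"].strip().lower(),
        "ts": record.get("ts"),  # pass timestamps through unchanged
    }

def load(records, path: Path) -> int:
    """Write cleaned records out as JSON lines; return the row count."""
    count = 0
    with path.open("w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")
            count += 1
    return count

if __name__ == "__main__":
    cleaned = (t for r in extract(SOURCE) if (t := transform(r)) is not None)
    print(f"loaded {load(cleaned, TARGET)} records")
```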


Responsibilities
  • Architect and maintain Python-based services using Flask and modern data frameworks for pipeline and workflow implementations
  • Build and scale secure, well-structured API endpoints that interface with data stores, processing engines, and downstream applications
  • Implement advanced data orchestration logic, ETL/ELT strategies, and tool chaining for complex data workflows
  • Design and optimize data pipelines, including data loaders, transformation strategies, and integration with search systems like OpenSearch (see the sketch after this list)
  • Develop and maintain ML data processing pipelines for ingesting, transforming, and serving data across various storage systems
  • Containerize data services using Docker and implement scalable deployment strategies with Kubernetes
  • Collaborate with engineering teams to productionize data models and processing workflows
  • Optimize data processing techniques for improved performance, reliability, and cost efficiency
  • Set up robust test coverage, monitoring, and CI/CD pipelines for data-powered backend services
  • Stay current with emerging trends in data engineering, pipeline architectures, agent architectures, and data systems
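
As an illustration of the API and search bullets above, here is a minimal sketch of a Flask endpoint backed by OpenSearch. The host, index name, and document fields are assumptions made for the example, not TextLayer's actual service.

```python
from flask import Flask, jsonify, request
from opensearchpy import OpenSearch  # pip install opensearch-py

app = Flask(__name__)

# Hypothetical local cluster and index name; real settings would come from config.
client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])
INDEX = "documents"

@app.get("/search")
def search():
    """Return the top matches for ?q=... from the search index."""
    query = request.args.get("q", "").strip()
    if not query:
        return jsonify(error="missing query parameter 'q'"), 400
    resp = client.search(
        index=INDEX,
        body={"query": {"match": {"content": query}}, "size": 10},
    )
    hits = [
        {"id": h["_id"], "score": h["_score"], "source": h["_source"]}
        for h in resp["hits"]["hits"]
    ]
    return jsonify(results=hits)

if __name__ == "__main__":
    app.run(port=8000)
```

Run locally against an OpenSearch cluster, a request like curl 'http://localhost:8000/search?q=terraform' would return the top ten matching documents as JSON.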
What You Will Bring
To succeed in this role, you’ll need deep full-stack development expertise, hands-on experience with data pipeline implementations, and a strong understanding of modern data processing patterns. You should be passionate about building scalable data infrastructure and optimizing data workflows.