Sr. Machine Learning Engineer at Enable
Toronto, ON, Canada
Full Time


Start Date

Immediate

Expiry Date

26 Jul, 25

Salary

0.0

Posted On

27 Apr, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

ML, Docker, Fine-Tuning, Machine Learning, Forecasting, Training, Optimization Models, Python, AWS, Research, Snowflake, Data Science, Communication Skills, Distillation, Information Retrieval, Computer Science, Containerization, Azure

Industry

Information Technology/IT

Description

Do you want to help design new ways of processing enterprise-scale data at speed, learn leading-edge technologies, work on complex big-data algorithms, and shape processes in a growing engineering organisation, all while helping to scale a Series D rocket ship to the next level?
Then welcome to Enable 🚀

REQUIRED QUALIFICATIONS

  • 5+ years of experience in machine learning engineering, applied AI, or related fields.
  • Bachelor’s or Master’s degree in Computer Science, Machine Learning, Engineering, or a related technical discipline.
  • Strong foundation in machine learning and data science fundamentals—including supervised/unsupervised learning, evaluation metrics, data preprocessing, and feature engineering.
  • Proven experience building and deploying RAG systems and/or LLM-powered applications in production environments.
  • Proficiency in Python and ML libraries such as PyTorch, Hugging Face Transformers, or TensorFlow.
  • Experience with vector search tools (e.g., FAISS, Pinecone, Weaviate) and retrieval frameworks (e.g., LangChain, LlamaIndex).
  • Hands-on experience with fine-tuning and distillation of large language models.
  • Comfortable with cloud platforms (Azure preferred), CI/CD tools, and containerization (Docker, Kubernetes).
  • Experience with monitoring and maintaining ML systems in production, using tools like MLflow, Weights & Biases, or similar.
  • Strong communication skills and ability to work across disciplines with ML scientists, engineers, and stakeholders.

PREFERRED QUALIFICATIONS

  • PhD in Computer Science, Machine Learning, Engineering, or a related technical discipline.
  • Experience with multi-agent RAG systems or AI agents coordinating workflows for advanced information retrieval.
  • Familiarity with prompt engineering and building evaluation pipelines for generative models.
  • Exposure to Snowflake or similar cloud data platforms.
  • Broader data science experience, including forecasting, recommendation systems, or optimization models.
  • Experience with streaming data pipelines, real-time inference, and distributed ML infrastructure.
  • Contributions to open-source ML projects or research in applied AI/LLMs.
  • Certifications in Azure, AWS, or GCP related to ML or data engineering.
Enable Global Inc provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, gender, sexual orientation, gender identity, national origin, age, disability, genetic information, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state and local laws. Enable complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities. This policy applies to all terms and conditions of employment, including hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.
Enable expressly prohibits any form of unlawful employee harassment based on race, color, religion, gender, sexual orientation, national origin, age, genetic information, disability or veteran status. Improper interference with the ability of Enable employees to perform their expected job duties is absolutely not tolerated.


Responsibilities
  • Design, build, and deploy RAG systems, including multi-agent and AI agent-based architectures for production use cases.
  • Contribute to model development processes including fine-tuning, parameter-efficient training (e.g., LoRA, PEFT), and distillation.
  • Build evaluation pipelines to benchmark LLM performance and continuously monitor production accuracy and relevance.
  • Work across the ML stack—from data preparation and model training to serving and observability—either independently or in collaboration with other specialists.
  • Optimize model pipelines for latency, scalability, and cost-efficiency, and support real-time and batch inference needs.
  • Collaborate with MLOps, DevOps, and data engineering teams to ensure reliable model deployment and system integration.
  • Stay informed on current research and emerging tools in LLMs, generative AI, and autonomous agents, and evaluate their practical applicability.
  • Participate in roadmap planning, design reviews, and documentation to ensure robust and maintainable systems.