Machine Learning Operations (MLOps) Engineer at EMOTIV
San Francisco, California, USA
Full Time


Start Date

Immediate

Expiry Date

25 Jul, 25

Salary

0.0

Posted On

26 Apr, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Docker, Architecture, Python, Wearables, Communication Skills, AWS, Neuroscience, Bash, Microservices, Data Processing

Industry

Information Technology/IT

Description

REQUIRED QUALIFICATIONS:

  • Proven experience designing and implementing MLOps pipelines on cloud platforms (preferably GCP & AWS).

  • Hands-on expertise with MLOps frameworks (e.g., Kubeflow, MLflow, Metaflow, Ray) and containerization tools (Docker, Kubernetes).

  • Strong programming skills in Python, Bash, or similar, paired with deep knowledge of Linux environments.

  • Experience with monitoring tools like Prometheus, Grafana, or custom logging frameworks for tracking system and model performance.

  • Knowledge of distributed computing frameworks (e.g., Spark, Ray) for handling large-scale data processing or model training.
  • Understanding of RESTful APIs and microservices architecture, with experience integrating ML models into application ecosystems.

  • Excellent English communication skills, with a collaborative, team-focused approach.

PREFERRED QUALIFICATIONS:

  • Experience with real-time data processing or edge computing.
  • Background in AI/ML applications tied to neuroscience, wearables, or human-computer interaction (aligned with EMOTIV’s mission).

Please send your CV to Ms. Huyen at huyennguyen@emotiv.com.

Responsibilities
  • Design, build, and troubleshoot production-grade AI systems and applications on GCP & AWS.

  • Develop and maintain CI/CD pipelines using tools like Jenkins, GitHub Actions, or similar.

  • Optimize, refactor, containerize, deploy, and monitor data science models, ensuring robust versioning and quality control.

  • Automate testing, validation, and performance evaluation of machine learning models.

  • Partner with data scientists, engineers, and architects to deliver scalable solutions, documenting processes clearly and comprehensively.

  • Manage and optimize infrastructure as code (IaC) using tools like Terraform or CloudFormation to ensure scalable and reproducible environments.

  • Implement and monitor model performance metrics in production, proactively addressing drift, bias, or degradation.
  • Ensure security and compliance of AI systems, including data privacy standards (e.g., GDPR, HIPAA) and secure deployment practices.