Sr Machine Learning Engineer at Ericsson
Bengaluru, Karnataka, India - Full Time


Start Date

Immediate

Expiry Date

10 Feb, 26

Salary

0.0

Posted On

12 Nov, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Machine Learning Engineering, Software Engineering, DevOps, Python, TensorFlow, PyTorch, Scikit-learn, AWS, Microservices, Terraform, Docker, Kubernetes, CI/CD, Cloud Computing, MLOps, APIs, SDKs

Industry

Telecommunications

Description
Join our Team

About this opportunity:

Ericsson Enterprise Wireless Solutions (BEWS) is responsible for driving Ericsson's Enterprise Networking and Security business. Our expanding product portfolio covers wide area networks, local area networks, and enterprise security. We are the #1 global market leader in Wireless-WAN enterprise connectivity and are rapidly growing in enterprise Private 5G networks and Secure Access Service Edge (SASE) solutions.

What will you do:

- Design, build, and maintain end-to-end ML pipelines for data ingestion, model training, evaluation, deployment, and monitoring.
- Deploy ML models as microservices with scalable APIs, ensuring low latency, high availability, and maintainability.
- Automate infrastructure provisioning and configuration using Terraform, AWS CloudFormation, or similar tools.
- Build and manage CI/CD pipelines for model deployment, versioning, and rollback using tools such as GitHub Actions, Jenkins, or GitLab CI.
- Integrate ML models into cloud-native architectures using AWS services (e.g., SageMaker, EKS, Lambda, S3, ECS, CloudWatch).
- Develop reusable microservices, APIs, and SDKs to enable faster integration of ML models into production systems.
- Stay current with emerging technologies in MLOps, cloud computing, and distributed model serving.

What will you bring:

- 5-9 years of experience in ML Engineering, Software Engineering, or DevOps with a focus on machine learning systems.
- Strong programming skills in Python and familiarity with ML frameworks such as TensorFlow, PyTorch, or Scikit-learn.
- Deep experience with AWS cloud services (SageMaker, Lambda, ECS/EKS, CloudWatch, S3, IAM, etc.).
- Proven experience deploying ML models into production in microservices-based architectures (FastAPI, Flask, or gRPC).
- Hands-on expertise in Terraform (Infrastructure as Code) for cloud provisioning and environment setup, and in containerization and orchestration using Docker and Kubernetes.
- Strong understanding of CI/CD pipelines, model registries, and artifact management.

Preferred Qualifications:

- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Experience with AWS CDK, Serverless Framework, or Argo Workflows.
- Experience with A/B testing, canary deployments, and shadow deployments for ML models.
- Experience in AIOps, telecom, or enterprise network domains is a plus.
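The role above calls for model versioning, rollback, and model-registry experience. As a minimal sketch of what those concepts involve (all class and model names here are illustrative assumptions, not Ericsson or AWS APIs), a registry might track artifact versions and support promoting and rolling back the production version:

```python
# Illustrative sketch of a model registry with versioning and rollback.
# Names and structure are assumptions for explanation, not a real API.
class ModelRegistry:
    def __init__(self):
        self._versions = {}    # model name -> {version: artifact}
        self._production = {}  # model name -> version currently in production
        self._history = {}     # model name -> previously promoted versions

    def register(self, name, version, artifact):
        """Store a new model artifact under a version tag."""
        self._versions.setdefault(name, {})[version] = artifact

    def promote(self, name, version):
        """Make a registered version the production version."""
        if version not in self._versions.get(name, {}):
            raise KeyError(f"unknown version {version!r} for model {name!r}")
        if name in self._production:
            # Remember the outgoing version so rollback can restore it.
            self._history.setdefault(name, []).append(self._production[name])
        self._production[name] = version

    def rollback(self, name):
        """Revert production to the most recently replaced version."""
        previous = self._history.get(name, [])
        if not previous:
            raise RuntimeError(f"no earlier version for model {name!r}")
        self._production[name] = previous.pop()

    def production_artifact(self, name):
        """Return the artifact currently serving in production."""
        return self._versions[name][self._production[name]]
```

In a real deployment this state would live in a managed service (e.g., the SageMaker Model Registry or MLflow) rather than in memory, but the promote/rollback lifecycle is the same idea.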
Responsibilities
- Design, build, and maintain end-to-end ML pipelines for data ingestion, model training, evaluation, deployment, and monitoring.
- Deploy ML models as microservices with scalable APIs, ensuring low latency, high availability, and maintainability.
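The responsibilities above pair model serving with monitoring and a low-latency requirement. One common way to check that in practice is a p95-latency SLO over recent request timings; a minimal sketch (threshold and function names are illustrative assumptions, not from the posting):

```python
# Illustrative sketch: check a p95 latency SLO for a model endpoint.
import math


def p95_latency_ms(latencies_ms):
    """Return the 95th-percentile latency (nearest-rank method)."""
    if not latencies_ms:
        raise ValueError("no latency samples")
    ordered = sorted(latencies_ms)
    rank = math.ceil(0.95 * len(ordered))  # nearest-rank percentile
    return ordered[rank - 1]


def slo_breached(latencies_ms, threshold_ms=100.0):
    """True if the p95 latency exceeds the (assumed) SLO threshold."""
    return p95_latency_ms(latencies_ms) > threshold_ms
```

In production the samples would typically come from a metrics backend such as CloudWatch rather than an in-process list, with the breach check wired to an alarm.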