Start Date: Immediate
Expiry Date: 04 Dec, 25
Salary: $132,762.98 per year
Posted On: 04 Sep, 25
Experience: 0 year(s) or above
Remote Job: Yes
Telecommute: Yes
Sponsor Visa: No
Skills: Python, SQL, Validation, Learning Techniques, Java, Docker, Hive, NumPy, Machine Learning, Kubernetes, Computer Science, Pandas, Data Structures, Testing, Git, Computer Architecture, Data Science, NLTK, Languages, Spark, Keras, GitHub, Scala
Industry: Information Technology/IT
JOB OVERVIEW
We are seeking a skilled Engineer to join our dynamic team. The ideal candidate will be responsible for designing, developing, and maintaining software applications that meet the needs of our clients. This role requires a strong foundation in various programming languages and technologies, along with a passion for problem-solving and innovation.
QUALIFICATIONS:
· BS, MS, or PhD degree in Computer Science or a related field, or equivalent practical experience.
· 3 to 5 years of experience, with hands-on proficiency in languages such as Scala, Java, and Python.
· Strong computer science fundamentals: data structures, algorithms, performance complexity, and implications of computer architecture on software performance (e.g., I/O and memory tuning).
· Solid software engineering fundamentals: experience with version control systems (Git, GitHub) and workflows, and the ability to write production-ready code.
· Knowledge of Machine Learning or Data Science languages, tools, and frameworks, including SQL, Scikit-learn, NLTK, NumPy, Pandas, TensorFlow, and Keras.
· Understanding of machine learning techniques (e.g., classification, regression, clustering) and principles (e.g., training, validation, and testing).
· Experience with data-processing tools and distributed computing systems such as Spark, Hive, and Flink.
· Familiarity with cloud technologies, including AWS SageMaker tools and AWS Bedrock.
· Understanding of DevOps concepts, including CI/CD.
· Experience with software container technology, such as Docker and Kubernetes.
· In-depth knowledge of MLOps principles and tools for model lifecycle management, including experiment tracking, model registry, and serving infrastructure.
· Experience with workflow orchestration tools (e.g., Apache Airflow, Kubeflow Pipelines).
· Familiarity with model explainability (XAI) and fairness techniques.
· Proficiency in optimizing machine learning models for performance, efficiency, and resource utilization.
· Experience with A/B testing frameworks and statistical analysis for model evaluation.
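The qualifications above call for familiarity with scikit-learn and with the training, validation, and testing workflow. A minimal sketch of that split-and-evaluate pattern is shown below; the synthetic dataset and logistic-regression model are purely illustrative choices, not part of the role's actual stack.

```python
# Illustrative train/validation/test workflow using scikit-learn.
# The dataset is synthetic; in practice it would come from real features.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic binary-classification data stands in for real features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Hold out a test set, then carve a validation set from the remainder.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.25, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Validation accuracy guides model selection; the test score is
# reported only once, at the end, to avoid overfitting to the test set.
val_acc = accuracy_score(y_val, model.predict(X_val))
test_acc = accuracy_score(y_test, model.predict(X_test))
print(f"val={val_acc:.3f} test={test_acc:.3f}")
```

The three-way split keeps model selection (validation) separate from final reporting (test), which is the "principles (training, validation, and testing)" distinction the qualifications reference.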
Job Type: Contract
Pay: $110,240.69 - $132,762.98 per year
Work Location: Hybrid remote in Mountain View, HI 9677
How To Apply:
In case you would like to apply to this job directly from the source, please click here
RESPONSIBILITIES
As a Machine Learning Engineer, you will have a strong AI/ML background and be an integral part of a vibrant team of Data Scientists and Machine Learning engineers. You will be expected to help architect, code, optimize, and deploy Machine Learning models at scale using the latest industry tools and techniques. You will also contribute to automating, delivering, monitoring, and improving machine learning solutions. Important skills for this role include software development, systems engineering, data wrangling, feature engineering, architecture, MLOps, and testing.
· Design and build scalable, usable, and high-performance machine learning systems.
· Collaborate cross-functionally with product managers, data scientists, and engineers to understand, implement, refine, and design machine learning and other algorithms.
· Effectively communicate results to peers and leaders.
· Explore state-of-the-art technologies and apply them to deliver customer benefits.
· Discover, access, import, clean, and prepare data for machine learning.
· Work with AI scientists to create and refine features from underlying data and build pipelines to train and deploy models.
· Run regular A/B tests, gather data, perform statistical analysis, and draw conclusions on the impact of your models.
· Explore new technology shifts to determine their potential connection with desired customer benefits.
· Interact with various data sources, collaborating with peers and partners to refine features from the underlying data and build end-to-end pipelines.
· Model Productionalization: Partner with data scientists to productionalize prototype models for customer use at scale, which may involve increasing training data, automating training and prediction, and orchestrating data for continuous prediction. You will also provide metrics (such as precision and recall) for model comparison.
· Model Enhancement: Work on existing codebases to enhance model prediction performance or reduce training time, understanding the specifics of algorithm implementation. This can be exploratory or directed work based on data science team proposals.
· Machine Learning Tools: Develop tools to address pain points in the data science process, such as speeding up training, simplifying data processing, or improving data management tooling.
· Model Context Protocol (MCP) Server:
o Experience building and maintaining services that manage and serve the contextual information required by large models.
o Familiarity with creating and managing data pipelines for real-time and batch-based context enrichment.
o Skills in designing scalable, low-latency APIs for models to access and retrieve necessary context during inference.
· Implement robust monitoring and alerting for deployed models to ensure continuous performance and detect anomalies.
· Manage model versions, dependencies, and deployment workflows using MLOps best practices.
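The responsibilities above mention providing precision and recall for model comparison. A small sketch of computing those metrics with scikit-learn follows; the label arrays are made-up examples, not real model output.

```python
# Illustrative precision/recall computation for model comparison,
# using scikit-learn. Labels below are made-up example data.
from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # hypothetical model predictions

precision = precision_score(y_true, y_pred)  # TP / (TP + FP)
recall = recall_score(y_true, y_pred)        # TP / (TP + FN)
print(f"precision={precision:.2f} recall={recall:.2f}")
```

Here there are 3 true positives, 1 false positive, and 1 false negative, so both precision and recall come out to 0.75; comparing these two numbers across candidate models is the kind of reporting the productionalization bullet describes.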