Senior Data Engineer at Qode
Ramagundam, Telangana, India
Full Time


Start Date

Immediate

Expiry Date

27 May, 26

Salary

0.0

Posted On

26 Feb, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

AWS, ML Pipeline, GenAI, Lambda, S3, Glue, SageMaker, Python, Spark, Terraform, CloudFormation, CI/CD, GitHub, LLM, Data Transformation Frameworks

Industry

Software Development

Description
Senior Data Engineer
Location: Hyderabad
Work Mode: On-site
Experience: 5-10 years
Email resumes to: naveenkb@10xscale.ai

Skills
Primary Skills: AWS, ML pipelines, GenAI, Lambda, S3, Glue, SageMaker, Python, Spark, Terraform, CloudFormation, CI/CD tools, GitHub
Secondary Skills: LLMs, data transformation frameworks (any domain)

About the Role
An MNC is looking for a highly skilled Senior Data Engineer with strong expertise in AWS, ML pipeline automation, and GenAI solutions. The ideal candidate will design and build production-grade data and machine learning systems with a strong focus on scalability, reliability, and security. This role requires hands-on experience with AWS services, ML lifecycle management, infrastructure automation, and cloud-native architectures.

Key Responsibilities
- Design, build, and maintain production-grade applications and services on AWS.
- Develop, deploy, and automate end-to-end ML pipelines and data transformation pipelines.
- Implement LLM-based and GenAI solutions, including prompt engineering, evaluation frameworks, workflow integration, and model lifecycle optimization.
- Build scalable, high-performance data processing systems using Spark, AWS Glue, or equivalent.
- Apply software engineering best practices: version control, CI/CD automation, Infrastructure as Code (Terraform/CloudFormation), automated testing, and observability & monitoring.
- Ensure strict adherence to security best practices, data governance policies, and compliance standards (financial-services alignment preferred).
- Collaborate using GitHub for pull requests, code reviews, and workflow automation.

Required Qualifications
- 5+ years of experience in data engineering or a related field.
- Proven experience building and automating data pipelines.
- Hands-on experience developing and deploying ML pipelines.
- Strong expertise in AWS services: Lambda, S3, Glue, and SageMaker (training, endpoints, inference).
- Experience with Spark or other large-scale data processing frameworks.
- Strong understanding of CI/CD and DevOps practices.
- Experience with Infrastructure as Code (Terraform or CloudFormation).
- Strong problem-solving and debugging skills.

Preferred Qualifications
- Experience working with LLMs and GenAI frameworks.
- Knowledge of financial-services security and compliance standards.
- Experience building evaluation frameworks for ML models.
- Understanding of observability and monitoring tools.
- Experience with production-grade ML deployments.

Technical Skills
- AWS (Lambda, S3, Glue, SageMaker)
- Python (preferred)
- Spark
- Terraform / CloudFormation
- CI/CD tools
- GitHub
- ML pipeline orchestration tools
- Data transformation frameworks

Why Join Us?
- Opportunity to work on cutting-edge GenAI and ML systems.
- Exposure to production-grade architecture.
- High-impact role in scalable cloud systems.
- Work with advanced AWS cloud-native technologies.
Responsibilities
The role involves designing, building, and maintaining production-grade data and machine learning systems on AWS, with a focus on scalability and reliability. Key tasks include developing and automating end-to-end ML and data transformation pipelines, and implementing LLM-based and GenAI solutions.