DE&A - AIML - Auto ML at Zensar Technologies UK Ltd
Karnataka, India
Full Time


Start Date

Immediate

Expiry Date

18 May, 26

Salary

0.0

Posted On

17 Feb, 26

Experience

10 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Genai Architecture, Data Engineering, RAG, LLMOps, MLOps, Vector Databases, Python, SQL, Azure, AWS, GCP, Containerization, CI/CD, Responsible AI, Prompt Engineering, Observability

Industry

IT Services and IT Consulting

Description
Job Title: Senior GenAI Data Engineering Developer (AI/GenAI Architect, Hands-on)

Experience: 8–12 years (Data/AI Engineering), with 2+ years in AI/GenAI architecture and solution design

Role Summary

We're seeking a hands-on GenAI data engineering leader who can architect and build production-grade GenAI solutions, from data pipelines and vectorization to RAG, LLM orchestration, governance, and cost-aware operations. You will translate business problems into secure, scalable, and compliant AI systems using LLMs, embeddings, and modern data stacks across Azure/AWS/GCP.

Key Responsibilities

Architecture & Solutioning

- Lead end-to-end GenAI solution architecture (Assess → Design → Build → Operate) for use cases such as RAG/Q&A, copilots, summarization, classification, agents, and autonomous workflows.
- Define LLMOps/MLOps blueprints: environments, CI/CD, model registry, observability, evaluation, A/B testing, canary rollouts, and guardrails.
- Design RAG architectures: document loaders, chunking strategies, embedding selection, vector schema design, re-ranking, caching, and fallbacks.
- Establish data governance for AI: PII handling, safety, red-teaming, content filters, model risk, usage policies, and auditability.

Data & Platform Engineering

- Build robust ingestion and transformation pipelines (batch/streaming) to prepare high-quality corpora for LLMs.
- Operationalize chunking/embedding/vector indexing, metadata enrichment, synonyms/ontologies, and semantic retrieval performance tuning.
- Implement feature/knowledge stores, vector DBs, and document stores (e.g., Azure AI Search, Elasticsearch, Pinecone, Weaviate, Milvus, pgvector).
- Integrate orchestration frameworks (Airflow/Prefect/AKS/Databricks Jobs) and API gateways.

Application & Orchestration

- Develop prompt pipelines (system/hybrid prompts, tool use), retrieval chains, agents, function calling, and tool integrations (SQL, search, APIs).
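The chunking, embedding, indexing, and semantic-retrieval steps that the role covers can be sketched minimally. This is a toy illustration, not any specific library's API: a bag-of-words `Counter` stands in for a real embedding model, and an in-memory list stands in for a vector database; the `chunk` and `retrieve` helpers are hypothetical names.

```python
import math
import re
from collections import Counter

def chunk(text: str, size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into overlapping character chunks (a simple chunking strategy)."""
    step = max(size - overlap, 1)
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real pipeline would call an embedding model."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, index: list[tuple[str, Counter]], k: int = 2) -> list[str]:
    """Rank indexed chunks by similarity to the query: the retrieval step of RAG."""
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

corpus = ("Vector databases store embeddings. Embeddings enable semantic "
          "retrieval. Airflow schedules pipelines.")
# Ingest: chunk the corpus and 'index' each chunk with its embedding.
index = [(c, embed(c)) for c in chunk(corpus, size=50, overlap=0)]
# Query: the top-ranked chunk is the one about semantic retrieval.
print(retrieve("semantic retrieval with embeddings", index, k=1))
```

A production pipeline replaces each piece with a managed component (an embedding model, a vector store with metadata filtering, a re-ranker), but the data flow is the same.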
- Build and harden LLM applications (e.g., FastAPI/Flask/Functions) with authentication/authorization, rate limiting, telemetry, and cost controls.
- Introduce guardrails (PII scrubbing, jailbreak mitigation, toxicity and hallucination checks) and evaluation harnesses (BLEU/ROUGE/METEOR, custom rubric scoring, human-in-the-loop review).

Ops, Observability & Cost

- Set up model and application observability (latency, token usage, failure modes, retrieval quality, drift detection).
- Implement cost monitoring (per call, per user, per use case), prompt/embedding caching, and routing to optimize spend and performance.
- Drive SLA/SLO definitions, incident runbooks, and reliability engineering practices.

Stakeholder Leadership

- Partner with Product, Security, Compliance, and Enterprise Architecture to align business outcomes with responsible AI.
- Lead technical design reviews, mentor developers, and contribute to standards and patterns across teams.

Must-Have Skills

GenAI Architecture & Delivery

- Proven design and delivery of RAG and LLM applications in production.
- Expertise in prompt engineering, prompt templating, evaluation, and guardrails.
- Experience with model selection (proprietary vs. open source), routing, and fallback strategies.

Data Engineering Excellence

- Strong Python skills (data processing, APIs, ETL/ELT, testing).
- Proficiency in SQL (analytical queries, performance tuning, stored procedures as needed).
- Experience building scalable pipelines (Databricks/Spark, Airflow/Prefect, Kafka/Event Hubs).

Vector & Retrieval Systems

- Hands-on experience with vector databases and embedding pipelines.
- Mastery of chunking, retrieval optimization, re-ranking, and metadata strategies.

Cloud & Platform

- One or more of: Azure (OpenAI, AI Search, Databricks, ADF/ADF v2/Synapse, AKS/Functions), AWS (Bedrock, OpenSearch, SageMaker, Lambda/EKS), GCP (Vertex AI, BigQuery, GKE).
- Containerization and CI/CD: Docker, Kubernetes, GitHub Actions/Azure DevOps/Jenkins.

Security, Governance & Compliance

- Experience implementing Responsible AI, data privacy/PII controls, RBAC/ABAC, secret management, network isolation, and policy-as-code.

Communication & Leadership

- Ability to translate business problems into GenAI architectures and guide teams through delivery.

Nice-to-Have Skills

- LLM frameworks and tools: LangChain, LlamaIndex, Semantic Kernel, DSPy
- Observability/evaluation: MLflow, Promptfoo, TruLens, Arize, Evidently AI, OpenTelemetry, Kibana/Grafana
- Search/retrieval: Elasticsearch/OpenSearch, Redis Stack, Vespa
- NLP/ML: Transformers, fine-tuning/LoRA, vector quantization, distillation
- Data quality: Great Expectations/Deequ, Monte Carlo
- Edge/hybrid: on-prem GPUs, NVIDIA NIM, Triton Inference Server
- Compliance: SOC 2, HIPAA, and GDPR familiarity in AI contexts

Qualifications

- Bachelor's/Master's in Computer Science, Data Engineering, AI/ML, or a related field
- 8–12 years in data/AI engineering; 2+ years in GenAI/LLM architecture
- Track record of delivering secure, reliable, cost-efficient GenAI solutions at enterprise scale

At Zensar, we're "experience-led everything". We are committed to conceptualizing, designing, engineering, marketing, and managing digital solutions and experiences for over 130 leading enterprises. We are a company driven by a bold purpose: Together, we shape experiences for better futures. Whether for our clients, our people, or the world around us, this belief powers everything we do.

At the heart of our culture is ONE with Client, a set of four core values that reflect who we are and how we work: One Zensar, Nurturing, Empowering, and Client Focus. Part of the $4.8 billion RPG Group, we're a community of 10,000+ innovators across 30+ global locations, including Milpitas, Seattle, Princeton, Cape Town, London, Zurich, Singapore, and Mexico City.

Explore Life at Zensar and join us to Grow. Own. Achieve. Learn. and be the best version of yourself. We believe the best work happens when individuality is celebrated, growth is encouraged, and well-being is prioritized.
We are an equal employment opportunity (EEO) and affirmative action employer, committed to creating an inclusive workplace. All qualified applicants will be considered without regard to race, creed, color, ancestry, religion, sex, national origin, citizenship, age, sexual orientation, gender identity, disability, marital status, family medical leave status, or protected veteran status.

How To Apply:

If you would like to apply to this job directly from the source, please click here

Responsibilities
The role involves leading end-to-end GenAI solution architecture, covering assessment, design, build, and operation for use cases like RAG, copilots, and agents, while defining LLMOps/MLOps blueprints for production environments. Key tasks include building robust data pipelines for LLM corpora preparation, operationalizing vector indexing, and developing prompt pipelines, agents, and hardened LLM applications with necessary guardrails.
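One of the guardrails mentioned above, PII scrubbing, can be sketched minimally. The regex patterns below are ad-hoc assumptions for illustration only; a production guardrail would use a vetted PII-detection library and a reviewed policy, not hand-rolled patterns.

```python
import re

# Illustrative patterns only: real deployments need locale-aware,
# audited PII detection, not two regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def scrub_pii(text: str) -> str:
    """Replace detected PII spans with typed placeholders before the text
    reaches an LLM prompt, a retrieval index, or a log line."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub_pii("Contact jane.doe@example.com or +44 20 7946 0958."))
```

Running the scrubber at ingestion time (before chunks are embedded) and again at prompt-assembly time covers both the stored corpus and ad-hoc user input.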