Member of Technical Staff, Model Evaluation at INCEPTION ARTIFICIAL INTELLIGENCE L.L.C - O.P.C
San Francisco, California, United States
Full Time


Start Date

Immediate

Expiry Date

08 Jun, 26

Salary

0.0

Posted On

10 Mar, 26

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

LLM Evaluation, Evaluation Metrics, Frameworks, Benchmarks, Model Quality, Safety, Reliability, Regression Detection, Automated Pipelines, Statistical Analysis, Python, PyTorch, Experimental Design, Git, Docker, Communication

Industry

Technology; Information and Internet

Description
The Role

We seek experienced engineers and scientists to develop the evaluation metrics and systems that drive frontier LLM performance. You'll design the frameworks that tell us whether our models are improving and ensure they perform reliably at scale in production.

Key Responsibilities

* Design, develop, and maintain robust evaluation frameworks and benchmarks for measuring LLM performance across diverse tasks and domains (an illustrative sketch of such a benchmark scorer follows this section).
* Define and implement quantitative metrics that capture model quality, safety, reliability, and regression detection.
* Build scalable, automated evaluation pipelines that integrate into model training and deployment workflows.
* Conduct rigorous statistical analysis of model outputs to identify failure modes, biases, and performance gaps.
* Partner with product and customer-facing teams to translate real-world use cases into meaningful evaluation criteria.

Qualifications

* BS/MS/PhD in Computer Science, Machine Learning, Statistics, or a related field (or equivalent experience).
* At least 2 years of experience in ML evaluation, applied ML research, or a related engineering role.
* Strong understanding of LLM fundamentals (autoregressive generation, instruction tuning, RLHF, in-context learning, decoding strategies).
* Proficiency in Python and ML frameworks such as PyTorch.
* Experience designing and implementing evaluation metrics and benchmarks for generative models.
* Solid foundation in statistics, experimental design, and hypothesis testing.
* Experience with version control (Git) and containerization (Docker).
* Excellent communication skills, with the ability to distill complex evaluation results into actionable insights.

Preferred Skills

* Experience with human-in-the-loop evaluation systems (Likert-scale annotation, pairwise preference ranking, red-teaming).
* Familiarity with LLM safety and alignment evaluation (toxicity, hallucination detection, factual grounding).
* Knowledge of existing benchmark suites (MMLU, HumanEval, HELM, BIG-Bench) and their limitations.
* Experience building evaluation infrastructure at scale using cloud platforms (AWS, GCP, Azure).
* Familiarity with MLOps practices and CI/CD pipelines for model validation.
* Experience with data engineering, large-scale data labeling, or synthetic data generation for evaluation purposes.
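Purely as an illustration of the kind of work the role describes (benchmark scoring with statistical rigor), the sketch below computes exact-match accuracy over a toy eval set and attaches a bootstrap confidence interval. All names here (EvalExample, run_eval, the stub generate function) are hypothetical and are not drawn from the posting or from any Inception AI codebase.

```python
"""Illustrative sketch only: a minimal benchmark scorer with a bootstrap CI.
All identifiers are hypothetical examples, not a real evaluation framework."""

import random
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class EvalExample:
    prompt: str
    reference: str


def exact_match(prediction: str, reference: str) -> float:
    """Score 1.0 if the normalized prediction equals the reference, else 0.0."""
    return float(prediction.strip().lower() == reference.strip().lower())


def run_eval(
    examples: List[EvalExample],
    generate: Callable[[str], str],  # model under test (assumed interface)
    n_bootstrap: int = 1000,
    seed: int = 0,
) -> Tuple[float, Tuple[float, float]]:
    """Return mean exact-match accuracy and a 95% bootstrap confidence interval."""
    scores = [exact_match(generate(ex.prompt), ex.reference) for ex in examples]
    mean = sum(scores) / len(scores)

    rng = random.Random(seed)
    resampled = []
    for _ in range(n_bootstrap):
        sample = [rng.choice(scores) for _ in scores]
        resampled.append(sum(sample) / len(sample))
    resampled.sort()
    lo = resampled[int(0.025 * n_bootstrap)]
    hi = resampled[int(0.975 * n_bootstrap)]
    return mean, (lo, hi)


if __name__ == "__main__":
    # Toy data and a stub "model" purely to make the sketch runnable.
    data = [EvalExample("2+2=", "4"), EvalExample("Capital of France?", "Paris")]
    mean, (lo, hi) = run_eval(data, generate=lambda p: "4" if "2+2" in p else "Paris")
    print(f"accuracy={mean:.2f}  95% CI=({lo:.2f}, {hi:.2f})")
```

Bootstrapping the per-example scores gives a simple, distribution-free sense of how much a reported benchmark number could move under resampling, which helps distinguish a real improvement from noise.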
Responsibilities
The role involves designing, developing, and maintaining robust evaluation frameworks and benchmarks to measure Large Language Model (LLM) performance across various tasks and domains. This includes defining quantitative metrics for quality and safety, building scalable automated pipelines, and conducting statistical analysis to find failure modes.
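As a hedged illustration of regression detection through hypothesis testing (one of the responsibilities above), the sketch below runs a paired sign-flip permutation test on per-example scores from a baseline and a candidate model. The function name, the toy data, and the 0.05 gate mentioned in the final comment are assumptions made for illustration, not a method stated in the posting.

```python
"""Illustrative sketch only: a paired permutation test for regression detection
between two model versions scored on the same eval set. Names are hypothetical."""

import random
from typing import List, Tuple


def regression_test(
    baseline_scores: List[float],
    candidate_scores: List[float],
    n_permutations: int = 10_000,
    seed: int = 0,
) -> Tuple[float, float]:
    """Return (mean score delta, one-sided p-value that the candidate is worse)."""
    assert len(baseline_scores) == len(candidate_scores)
    deltas = [c - b for b, c in zip(baseline_scores, candidate_scores)]
    observed = sum(deltas) / len(deltas)

    rng = random.Random(seed)
    as_extreme = 0
    for _ in range(n_permutations):
        # Under the null hypothesis of no difference, each per-example delta
        # is equally likely to have either sign, so flip signs at random.
        permuted = sum(d if rng.random() < 0.5 else -d for d in deltas) / len(deltas)
        if permuted <= observed:
            as_extreme += 1
    p_value = as_extreme / n_permutations
    return observed, p_value


if __name__ == "__main__":
    base = [1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0]
    cand = [1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0]
    delta, p = regression_test(base, cand)
    print(f"delta={delta:+.2f}  p(candidate worse)={p:.3f}")
    # A hypothetical CI gate might block deployment when delta < 0 and p < 0.05.
```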