Senior Research Engineer, LLM Evaluation and Behavioral Analysis at Together AI
San Francisco, California, United States
Full Time


Start Date

Immediate

Expiry Date

11 Mar, 2026

Salary

$270,000

Posted On

11 Dec, 2025

Experience

5+ years

Remote Job

Yes

Visa Sponsorship

No

Skills

Python, Evaluation Tooling, Distributed Workflows, LLMs, Model Evaluation, Testing, Red-Teaming, Experiment Design, Dataset Building, Behavioral Signals, Function Calling, GPU Environments, Multi-Turn Reasoning, Inference Systems, Post-Training Workflows, Behavior Analysis

Industry

Software Development

Description
About the Role

Together AI is building the fastest, most capable open-source-aligned LLMs and inference stack in the world. As part of the Turbo organization, you will be a critical bridge between cutting-edge model research and real-world behavioral reliability. This role focuses on deeply understanding model behavior (probing reasoning, tool use, function calling, multi-step interactions, and subtle failure modes) and on building the evaluation systems that ensure models behave intelligently and consistently in production. You will develop robust evaluation pipelines, design high-quality behavioral test suites, and work closely with training, post-training, inference, and product teams to identify regressions, shape datasets, and influence model improvements. Your work will directly define how Together measures model quality and reliability across releases.

Responsibilities

- Build and iterate on evaluation frameworks that measure model performance across instruction following, function calling, long-context reasoning, multi-turn dialog, safety, and agentic behaviors.
- Develop specialized evaluation suites for:
  - Function calling: argument correctness, schema adherence, tool selection, multi-function planning, and error recovery (a minimal schema-adherence sketch follows the requirements list below).
  - Agentic workflows: task decomposition, multi-step planning, self-correction, and autonomous tool-use sequences.
  - Tool-augmented interactions: search, retrieval, code execution, and API-driven actions.
- Create automated CI/CD pipelines for A/B comparisons, regression detection, behavioral drift monitoring, and adversarial probing (a toy regression-check sketch also follows the requirements list).
- Design and curate high-quality evaluation datasets, especially nuanced or challenging cases across domains.
- Collaborate with researchers and engineers to diagnose failures, triage regressions, and guide data selection, shaping strategies, objective design, and system improvements.
- Work with engineering teams to build dashboards, reports, and internal tools that help visualize behavior changes across releases.
- Operate in a fast-paced, high-impact environment with deep technical ownership and close partnership with world-class model researchers and infrastructure engineers.

Requirements

- Strong engineering skills with Python, evaluation tooling, and distributed workflows.
- Experience working with LLMs or transformer-based models, particularly in model evaluation, testing, or red-teaming.
- Ability to reason clearly about qualitative behavior, edge cases, and model failure patterns.
- Experience designing experiments, building datasets, and interpreting noisy behavioral signals.
- Understanding of function calling and structured output formats.
- Familiarity with GPU or distributed compute environments.
- Hands-on experience evaluating function-calling models, agentic systems, or tool-augmented LLM pipelines.
- Experience with multi-turn or multi-step reasoning tasks.
- Familiarity with inference systems, distributed infrastructure, or post-training workflows.
- Passion for discovering subtle behaviors, surprising model gaps, or edge-case failures.
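To make the schema-adherence item above concrete, here is a minimal sketch, assuming a toy tool registry, of the kind of check a function-calling evaluation suite might run: it parses a model-emitted call and validates its arguments against the named tool's JSON Schema using the third-party jsonschema package. The registry TOOL_SCHEMAS, the get_weather tool, and the helper score_function_call are hypothetical names for illustration, not Together AI's internal tooling.

```python
# Hypothetical sketch only; names and schemas are illustrative, not Together AI tooling.
# Requires the third-party `jsonschema` package (pip install jsonschema).
import json

from jsonschema import Draft7Validator

# Assumed toy tool registry: tool name -> JSON Schema for its arguments.
TOOL_SCHEMAS = {
    "get_weather": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
        "additionalProperties": False,
    }
}


def score_function_call(raw_call: str) -> dict:
    """Score one model-emitted function call on parseability, tool selection,
    and argument schema adherence."""
    result = {"parses": False, "known_tool": False, "schema_ok": False}
    try:
        call = json.loads(raw_call)
        result["parses"] = True
    except json.JSONDecodeError:
        return result
    if not isinstance(call, dict):
        return result
    schema = TOOL_SCHEMAS.get(call.get("name"))
    if schema is None:
        return result  # model selected a tool that does not exist
    result["known_tool"] = True
    errors = list(Draft7Validator(schema).iter_errors(call.get("arguments", {})))
    result["schema_ok"] = not errors
    return result


if __name__ == "__main__":
    good = '{"name": "get_weather", "arguments": {"city": "Tokyo", "unit": "celsius"}}'
    bad = '{"name": "get_weather", "arguments": {"unit": "kelvin"}}'
    print(score_function_call(good))  # all three checks pass
    print(score_function_call(bad))   # parses and known_tool, but schema_ok is False
```

A fuller suite would aggregate these per-check booleans across a dataset and report rates per tool and per failure mode.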
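Likewise, the regression-detection step mentioned under Responsibilities can start as a simple gate over per-example pass/fail results from a baseline and a candidate model. The sketch below is an assumed toy gate; regression_check, its thresholds, and the example counts are illustrative only.

```python
# Hypothetical sketch only; the gate, thresholds, and numbers are illustrative.
from math import sqrt


def regression_check(baseline: list[bool], candidate: list[bool],
                     max_drop: float = 0.02) -> dict:
    """Flag a regression when the candidate's pass rate drops more than `max_drop`
    below the baseline and the drop clears a two-proportion z-test at z > 1.96."""
    n_b, n_c = len(baseline), len(candidate)
    p_b, p_c = sum(baseline) / n_b, sum(candidate) / n_c
    pooled = (sum(baseline) + sum(candidate)) / (n_b + n_c)
    se = sqrt(pooled * (1 - pooled) * (1 / n_b + 1 / n_c)) or 1e-9  # avoid divide-by-zero
    z = (p_b - p_c) / se
    return {
        "baseline_pass_rate": round(p_b, 4),
        "candidate_pass_rate": round(p_c, 4),
        "z": round(z, 2),
        "regression": (p_b - p_c) > max_drop and z > 1.96,
    }


if __name__ == "__main__":
    baseline = [True] * 180 + [False] * 20   # 90% pass on a 200-example suite
    candidate = [True] * 160 + [False] * 40  # 80% pass on the same suite
    print(regression_check(baseline, candidate))  # flags a regression
```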
About Together AI

Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society. Our mission is to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed to leading open-source research, models, and datasets, including FlashAttention, Hyena, FlexGen, ATLAS, and RedPajama. We invite you to join a passionate group of researchers and engineers in building the next generation of AI infrastructure.

Compensation

We offer competitive compensation, startup equity, health insurance, and other benefits. The US base salary range for this full-time position is $220,000 – $270,000 + equity + benefits. Compensation varies by location, level, and experience.

Equal Opportunity

Together AI is an Equal Opportunity Employer and is proud to offer equal opportunity to all individuals regardless of race, color, ancestry, religion, sex, sexual orientation, national origin, age, citizenship, marital status, disability, gender identity, veteran status, or other protected characteristics. Please see our privacy policy at https://www.together.ai/privacy
Responsibilities
The role involves building evaluation frameworks to measure model performance and developing specialized evaluation suites for function calling, agentic workflows, and tool-augmented interactions. It also includes creating automated pipelines for regression detection and collaborating with research and engineering teams to improve model quality.