Research Scientist - Seed Multimodal Interaction and World Model - Reinforcement Learning at ByteDance
Seattle, Washington, USA
Full Time


Start Date

Immediate

Expiry Date

05 Nov, 25

Salary

$416,100

Posted On

06 Aug, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Computer Science, Software Development, Reinforcement Learning, Computer Engineering

Industry

Information Technology/IT

Description

QUALIFICATIONS

Minimum Qualifications:
- Master's or PhD in Software Development, Computer Science, Computer Engineering, or a related technical discipline
- Publications in top-tier venues, such as CVPR, ECCV, ICCV, NeurIPS, ICLR, ICML, or other leading conferences in AI and ML
- Strong research background in at least one of the following: reinforcement learning, multimodal learning, video understanding, or vision-language modeling

Preferred Qualifications:
- Experience with reinforcement learning in multimodal or interactive environments
- Familiarity with video generation or diffusion-based generative models
- Experience with large-scale model training (e.g., distributed training, curriculum learning, or memory-augmented transformers)
- Solid programming and engineering skills, with experience building training or evaluation pipelines for ML models

Responsibilities

About Seed Team

Established in 2023, the ByteDance Seed team is dedicated to discovering new approaches to general intelligence and pursuing the edge of intelligence. Our research spans large language models, speech, vision, world models, AI infrastructure, next-generation interfaces, and more.

With a long-term vision and determination in AI, the ByteDance Seed team remains committed to foundational research. We aim to become a world-class AI research team that drives real technological progress and delivers societal benefits.

With labs across China, Singapore, and the U.S., our team has already released industry-leading general-purpose large models and advanced multimodal capabilities, powering over 50 real-world applications, including Doubao, Coze, and Jimeng.

- Design and implement reinforcement learning (RL) training systems for large-scale multimodal foundation models
- Develop unified modeling frameworks that integrate video, audio, and language, with a focus on visual latent reasoning
- Explore reinforcement learning-based approaches to bridge understanding and generation for multimodal visual reasoning
- Collaborate with researchers to evaluate models on tasks involving world modeling, reasoning, and instruction-conditioned generation
