Student Researcher - Doubao (Seed) - Machine Learning System - 2025 Start (PhD)

at ByteDance

Seattle, Washington, USA

Start Date: Immediate
Expiry Date: 10 Sep, 2024
Salary: Not Specified
Posted On: 11 Jun, 2024
Experience: N/A
Skills: Parallel Computing, Data Processing, Creativity, Tuning, Triton, Cloud Computing, MPI, Machine Learning, Large Scale Systems
Telecommute: No
Sponsor Visa: No

Description:

Established in 2023, the ByteDance Doubao (Seed) Team is dedicated to building industry-leading AI foundation models. We aim to produce world-leading research and foster both technological and social progress.
With a long-term vision and a strong commitment to the AI field, the Team conducts research in a range of areas including natural language processing (NLP), computer vision (CV), and speech recognition and generation. It has labs and researcher roles in China, Singapore, and the US.
Leveraging substantial data and computing resources and through continued investment in these domains, our team has built a proprietary general-purpose model with multimodal capabilities. In the Chinese market, Doubao models power over 50 ByteDance apps and business lines, including Doubao, Coze, and Dreamina, and have been made available to external enterprise clients through Volcano Engine. The Doubao app is the most used AIGC app in China.
Why Join Us
Creation is the core of ByteDance’s purpose. Our products are built to help imaginations thrive. This is doubly true of the teams that make our innovations possible.
Together, we inspire creativity and enrich life - a mission we work toward every day.
To us, every challenge, no matter how ambiguous, is an opportunity; to learn, to innovate, and to grow as one team. Status quo? Never. Courage? Always.
At ByteDance, we create together and grow together. That’s how we drive impact - for ourselves, our company, and the users we serve.
Join us.
Team Introduction
The AML Machine Learning Systems team provides an end-to-end machine learning experience and machine learning resources for the company. The team builds heterogeneous ML training and inference systems based on GPUs and AI chips and advances the state of the art in ML systems technology to accelerate and stabilize the training of models such as Stable Diffusion and LLMs. The team is also responsible for research and development of hardware acceleration technologies for AI and cloud computing, via technologies such as distributed systems, communication compression, and quantization. The team is reinventing the ML infrastructure for large-scale language models. We have published papers at top-tier conferences such as ICML, NSDI, EuroSys, OSDI, SOSP, MLSys, and NeurIPS.
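One flavor of the communication compression mentioned above is gradient quantization: sending each gradient element in fewer bits to cut network traffic in data-parallel training. Below is a minimal, self-contained PyTorch sketch of an 8-bit uniform quantize/dequantize round trip; the function names are hypothetical and this is a toy illustration of the general technique, not the team's actual implementation.

    import torch

    def quantize_8bit(grad: torch.Tensor):
        # Map the gradient's value range onto 256 uint8 levels.
        lo, hi = grad.min(), grad.max()
        scale = (hi - lo).clamp(min=1e-8) / 255.0
        q = ((grad - lo) / scale).round().to(torch.uint8)
        return q, scale, lo

    def dequantize_8bit(q, scale, lo):
        # Invert the affine mapping on the receiving side.
        return q.to(torch.float32) * scale + lo

    grad = torch.randn(1024)
    q, scale, lo = quantize_8bit(grad)
    restored = dequantize_8bit(q, scale, lo)
    print((grad - restored).abs().max())  # per-element error is bounded by about scale/2

Sent over the wire, the uint8 payload is a quarter the size of the fp32 gradient (plus two scalars), which is the basic traffic/accuracy trade-off such systems tune.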
We are looking for talented individuals to join us for a Student Researcher opportunity in 2025. Student Researcher opportunities at ByteDance aim to offer students industry exposure and hands-on experience. Turn your ambitions into reality as your inspiration brings infinite opportunities at ByteDance.
The Student Researcher position provides unique opportunities that go beyond the constraints of our standard internship program, allowing for flexibility in duration, time commitment, and location of work.
Candidates can apply to a maximum of two positions and will be considered for jobs in the order in which they apply. The application limit applies to ByteDance and its affiliates' jobs globally. Applications will be reviewed on a rolling basis - we encourage you to apply early.

Responsibilities

  • Research and develop efficient machine learning systems, including efficient optimizers and parameter- and gradient-efficient training with rank reduction and communication compression (see the low-rank sketch after this list).
  • Develop a state-of-the-art asynchronous training framework that ensures convergence.
  • Implement both general-purpose training framework features and model-specific optimizations (e.g., LLMs, diffusion models).
  • Improve efficiency and stability for extremely large-scale distributed training jobs.
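The rank-reduction bullet above is in the spirit of low-rank (LoRA-style) parameter-efficient training: freeze a large weight matrix and train only a small rank-r update. A minimal PyTorch sketch, with an assumed class name and rank chosen purely for illustration:

    import torch
    import torch.nn as nn

    class LowRankLinear(nn.Module):
        # Frozen full-rank layer plus a trainable rank-r update: gradients
        # (and optimizer state) cover only 2*r*d values instead of d*d.
        def __init__(self, base: nn.Linear, rank: int = 8):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad_(False)  # freeze pretrained weight and bias
            self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at start

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.base(x) + x @ self.A.t() @ self.B.t()

    layer = LowRankLinear(nn.Linear(512, 512), rank=8)
    out = layer(torch.randn(4, 512))  # shape (4, 512); only A and B receive gradients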

Minimum Qualifications

  • Currently enrolled in a PhD program focused on distributed and parallel computing principles, with knowledge of recent advances in computing, storage, networking, and hardware technologies.
  • Familiar with machine learning algorithms, platforms, and frameworks such as PyTorch and JAX.
  • Have a basic understanding of how GPUs and/or ASICs work.
  • Expert in one or two programming languages in a Linux environment: C/C++, CUDA, Python.
  • Must obtain work authorization in country of employment at the time of hire, and maintain ongoing work authorization during employment.

Preferred Qualifications

The following experience will be a big plus:

  • GPU-based high-performance computing and RDMA high-performance networking (MPI, NCCL, ibverbs).
  • Distributed training framework optimizations such as DeepSpeed, FSDP, Megatron, and GSPMD.
  • AI compiler stacks such as torch.fx, XLA, and MLIR.
  • Large-scale data processing and parallel computing.
  • Experience designing and operating large-scale systems in cloud computing or machine learning.
  • Experience with in-depth CUDA programming and performance tuning (CUTLASS, Triton); see the kernel sketch below.
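As a taste of the Triton work listed above, here is the canonical vector-add kernel written in the style of the Triton tutorials; it is an illustrative sketch rather than team code, and it requires a CUDA-capable GPU to run.

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements  # guard the ragged tail block
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    x = torch.randn(4096, device="cuda")
    y = torch.randn(4096, device="cuda")
    out = torch.empty_like(x)
    grid = (triton.cdiv(x.numel(), 1024),)
    add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
    assert torch.allclose(out, x + y)

Performance tuning then revolves around block size, memory access patterns, and occupancy, which is where libraries like CUTLASS and Triton's autotuner come in.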
ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.

ByteDance Inc. is committed to providing reasonable accommodations in our recruitment processes for candidates with disabilities, pregnancy, sincerely held religious beliefs or other reasons protected by applicable laws. If you need assistance or a reasonable accommodation, please reach out to us at https://shorturl.at/cdpT2

By submitting an application for this role, you accept and agree to our global applicant privacy policy, which may be accessed here: https://jobs.bytedance.com/en/legal/privacy


REQUIREMENT SUMMARY

Experience: N/A (min) to 5.0 year(s) (max)
Industry: Information Technology/IT
Specialization: IT Software - System Programming
Role: Software Engineering
Education: PhD
Proficiency: Proficient
Openings: 1
Location: Seattle, WA, USA