Research Engineer (Foundation Model) - Machine Learning Systems

at ByteDance

Singapore

Start Date: Immediate
Expiry Date: 15 Sep, 2024
Salary: Not Specified
Posted On: 18 Jun, 2024
Experience: 3 year(s) or above
Skills: Large Scale Systems, Triton, MPI, Parallel Computing, Cloud Computing, Tuning, Data Processing, Machine Learning
Telecommute: No
Sponsor Visa: No

Description:

ByteDance will prioritize applicants who have a current right to work in Singapore and do not require visa sponsorship from ByteDance.
About ByteDance
Founded in 2012, ByteDance’s mission is to inspire creativity and enrich life. With a suite of more than a dozen products, including TikTok and Helo as well as platforms specific to the China market, including Toutiao, Douyin, and Xigua, ByteDance has made it easier and more fun for people to connect with, consume, and create content.
Why Join Us
Creation is the core of ByteDance’s purpose. Our products are built to help imaginations thrive. This is doubly true of the teams that make our innovations possible.
Together, we inspire creativity and enrich life - a mission we aim towards achieving every day.
To us, every challenge, no matter how ambiguous, is an opportunity: to learn, to innovate, and to grow as one team. Status quo? Never. Courage? Always.
At ByteDance, we create together and grow together. That’s how we drive impact - for ourselves, our company, and the users we serve.
Join us.
About the Team
The Seed Foundation Machine Learning (ML) Systems team provides an end-to-end (E2E) machine learning experience and machine learning resources for the company. The team builds heterogeneous ML training and inference systems based on GPUs and AI chips and advances the state of the art of ML systems technology to accelerate models such as Stable Diffusion and LLMs.
The team is also responsible for research and development of hardware acceleration technologies for AI and cloud computing, via technologies such as distributed systems, compilers, HPC, and RDMA networking. The team is reinventing the ML infrastructure for large-scale language models. We have published papers at top-tier conferences such as SIGCOMM, NSDI, EuroSys, OSDI, SOSP, MLSys, NeurIPS, etc.

Minimum Qualifications:

  • Bachelor's degree or above; familiar with distributed and parallel computing principles and with recent advances in computing, storage, networking, and hardware technologies;
  • At least 3 years of working experience;
  • Familiar with machine learning algorithms, platforms, and frameworks such as PyTorch and JAX;
  • Basic understanding of how GPUs and/or ASICs work;
  • Expert in one or more programming languages in a Linux environment: C/C++, CUDA, Python.

Preferred Qualifications:

The following experience will be a big plus:

  • GPU-based high-performance computing and RDMA high-performance networking (MPI, NCCL, ibverbs);
  • Distributed training framework optimizations such as DeepSpeed, FSDP, Megatron, and GSPMD;
  • AI compiler stacks such as torch.fx, XLA, and MLIR;
  • Large-scale data processing and parallel computing;
  • Experience in designing and operating large-scale systems in cloud computing or machine learning;
  • Experience with in-depth CUDA programming and performance tuning (CUTLASS, Triton).

ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.

Responsibilities:

  • Optimize large-scale parallel training for state-of-the-art deep learning algorithms such as large language models, multi-modality models, diffusion, and reinforcement learning;
  • Research and develop our machine learning systems, including accelerated computing architecture, management, and monitoring;
  • Deploy the machine learning systems for distributed machine learning training and inference;
  • Manage cross-layer optimization across systems, AI algorithms, and hardware for machine learning (GPU, ASIC).


REQUIREMENT SUMMARY

Experience: 3.0 to 8.0 year(s)
Industry: Computer Software/Engineering
Job Category: IT Software - Application Programming / Maintenance
Specialization: Software Engineering
Education: Graduate
Skills: Computing, storage, networking, and hardware technologies
Proficiency: Proficient
Vacancies: 1
Location: Singapore, Singapore