Machine Learning Researcher, Multimodal Foundation Models

at Apple

Seattle, Washington, USA

Start Date: Immediate
Expiry Date: 10 Sep, 2024
Salary: USD 284,900 Annual
Posted On: 15 Jun, 2024
Experience: N/A
Skills: Python, Computer Vision, Computer Science, Machine Learning, Computer Graphics, Deep Learning
Telecommute: No
Sponsor Visa: No

Description:

SUMMARY

Posted: Dec 7, 2023
Role Number: 200527006
Imagine what you could do here. At Apple, new ideas have a way of becoming extraordinary products, services, and customer experiences very quickly. Bring passion and dedication to your job and there’s no telling what you could accomplish. Multifaceted, amazing people and inspiring, innovative technologies are the norm here. The people who work here have reinvented entire industries with all Apple hardware products. The same passion for innovation that goes into our products also applies to our practices, strengthening our commitment to leave the world better than we found it. Join us in this truly exciting era of Artificial Intelligence to help deliver the next groundbreaking Apple products and experiences!

As a member of our dynamic group, you will have the unique and rewarding opportunity to shape upcoming research directions in the field of multimodal foundation models that will inspire future Apple products. We are continuously advancing the state of the art in Computer Vision and Machine Learning. You will work alongside highly accomplished and deeply technical scientists and engineers to develop state-of-the-art solutions to challenging problems. We touch all aspects of language and multimodal foundation models, from data collection and curation to modeling, evaluation, and deployment. This is a unique opportunity to be part of shaping the future of Apple products that will touch the lives of many people.

We (the Spatial Perception Team) are looking for a machine learning researcher to work in the field of Generative AI and multimodal foundation models. Our team has an established track record of shipping features that leverage multiple sensors, such as FaceID, RoomPlan, and hand tracking in VisionPro. We are focused on building experiences that harness the power of our sensing hardware as well as large foundation models. You will be part of a diverse, fast-moving team based in Cupertino.

DESCRIPTION

This position requires a highly motivated person who wants to help us advance the field of generative AI and multimodal foundation models. You will be responsible for designing, implementing, and evaluating foundation models based on the latest advancements in the field, taking into account future hardware design and product needs. In addition, you will have the opportunity to engage and collaborate with several teams across Apple to deliver the best products.

KEY QUALIFICATIONS

  • Strong academic and publication record (CVPR, ICCV/ECCV, NeurIPS, ICML, etc.)
  • Solid programming skills with Python
  • Deep understanding of large foundation models
  • Deep understanding of multi-task, multi-modal machine learning domain
  • Familiarity with deep learning toolkits
  • Familiarity with the challenges of training large models and working with large datasets
  • Ability to communicate the results of analyses in a clear and effective manner

EDUCATION & EXPERIENCE

PhD in Computer Science, Computer Vision, Computer Graphics, Machine Learning, or equivalent.

Responsibilities:

Please refer to the job description above for details.


REQUIREMENT SUMMARY

Min: N/A, Max: 5.0 year(s)

Information Technology/IT

IT Software - Other

Software Engineering

PhD

Proficient

1

Seattle, WA, USA