Applied Machine Learning Research Engineer - Multimodal LLMs for Human Understanding at Apple
Sunnyvale, California, United States
Full Time


Start Date

Immediate

Expiry Date

05 Feb 2026

Salary

Not specified

Posted On

07 Nov 2025

Experience

2 years or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Machine Learning, Computer Vision, NLP, Multimodal Fusion, Generative AI, Deep Learning, Python, Algorithm Design, Model Evaluation, AI Research, Data Science, Human Understanding, Multimodal LLMs, Collaboration, Real-Time Features, Innovation

Industry

Computers and Electronics Manufacturing

Description
We’re starting to see the incredible potential of multimodal foundation and large language models, and many applications in the computer vision and machine learning domain that previously appeared infeasible are now within reach. We are looking for a highly motivated and skilled Applied Machine Learning Research Engineer to join our team in the Video Computer Vision group and help us push the boundaries of human understanding. The Video Computer Vision org has pioneered human-centric real-time features such as FaceID, FaceKit, and gaze and hand gesture control, which have changed the way millions of users interact with their devices. We balance research and product requirements to deliver Apple-quality, pioneering experiences, innovating through the full stack and partnering with hardware, software, and AI teams to shape Apple's products and bring our vision to life.

DESCRIPTION
You’ll work on groundbreaking research projects to advance our AI and computer vision capabilities, contribute to both foundational research and practical applications of multimodal large language models, and design, implement, and evaluate algorithms and models for human understanding. You have a strong background in developing and exploring multimodal large language models that integrate diverse data modalities such as text, image, video, and audio. You’ll have the opportunity to collaborate with multi-functional teams, including researchers, data scientists, software engineers, human interface designers, and application domain experts. You’ll stay up to date on the latest advancements in AI, machine learning, and computer vision and apply this knowledge to drive innovation within the company.

MINIMUM QUALIFICATIONS
Experience in developing and training/tuning multimodal LLMs.
Programming skills in Python.
Master's degree with a minimum of 3 years of relevant industry experience.

PREFERRED QUALIFICATIONS
Expertise in one or more of: computer vision, NLP, multimodal fusion, generative AI.
Experience with at least one deep learning framework, such as JAX or PyTorch.
Publication record in relevant venues.
PhD in Computer Science, Electrical Engineering, or a related field with a focus on AI, machine learning, or computer vision.
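For a sense of the kind of work the multimodal fusion qualification refers to, below is a minimal, purely illustrative PyTorch sketch of late fusion of image and text features into a shared embedding space. The module name, dimensions, and fusion strategy are hypothetical examples, not Apple's actual architecture or this role's codebase.

# Illustrative sketch only: late fusion of two modalities into a
# shared space, a basic pattern behind many multimodal LLM adapters.
# All names and dimensions here are hypothetical.
import torch
import torch.nn as nn

class SimpleFusion(nn.Module):
    def __init__(self, image_dim: int = 768, text_dim: int = 512, shared_dim: int = 256):
        super().__init__()
        # Project each modality into a common space, then fuse by concatenation.
        self.image_proj = nn.Linear(image_dim, shared_dim)
        self.text_proj = nn.Linear(text_dim, shared_dim)
        self.fusion = nn.Sequential(
            nn.Linear(2 * shared_dim, shared_dim),
            nn.GELU(),
        )

    def forward(self, image_feats: torch.Tensor, text_feats: torch.Tensor) -> torch.Tensor:
        # Concatenate the projected features and mix them with a small MLP.
        fused = torch.cat([self.image_proj(image_feats), self.text_proj(text_feats)], dim=-1)
        return self.fusion(fused)

# Toy usage with random tensors standing in for encoder outputs.
model = SimpleFusion()
out = model(torch.randn(4, 768), torch.randn(4, 512))
print(out.shape)  # torch.Size([4, 256])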
Responsibilities
You will work on groundbreaking research projects to advance AI and computer vision capabilities. This includes contributing to both foundational research and practical applications of multimodal large language models.