AIML - ML Engineer, Siri Multi-modal Systems

at Apple

Seattle, Washington, USA

Start Date: Immediate
Expiry Date: 28 Jul, 2024
Salary: USD 243,300 Annual
Posted On: 30 Apr, 2024
Experience: N/A
Skills: Interpersonal Skills, Speech Recognition, Speech Processing, Creativity, Natural Language Understanding, Computer Vision, Computer Science, Programming Languages, Machine Learning
Telecommute: No
Sponsor Visa: No

Description:

SUMMARY

Posted: Apr 25, 2024
Weekly Hours: 40
Role Number: 200548661
Play a part in the next revolution in human-computer interaction. Contribute to a product that is redefining mobile computing. Build groundbreaking technology for large-scale systems, spoken language, computer vision, big data, and artificial intelligence. And work with the people who crafted the intelligent assistant that helps millions of people get things done - just by asking. Join the Siri multi-modal learning team at Apple! The Siri team is looking for a machine learning engineer to help develop Siri's next-generation multi-modal assistant and novel features on Apple's innovative devices. You should be eager to get involved in hands-on work researching and developing new Siri experiences with multiple input modalities such as speech, vision, and other sensors.

KEY QUALIFICATIONS

  • Machine learning research and development experience building systems for computer vision, speech recognition, and natural language understanding applications
  • Fluency in programming languages including but not limited to Python/Java
  • Proficiency in at least one major machine learning framework, such as TensorFlow or PyTorch
  • Strong understanding of machine learning across different modalities, such as computer vision, speech processing, and natural language understanding
  • Proven track record of researching, inventing and/or shipping advanced machine learning algorithms
  • Creativity and curiosity for solving highly complex problems
  • Outstanding communication and interpersonal skills with ability to work well in cross-functional teams

DESCRIPTION

You will be part of a team responsible for helping research and develop Siri's multi-modal experience across the full range of Apple devices. This position requires a passion for researching and developing multi-modal machine learning algorithms and systems! The role will partner with the speech, vision, and natural language understanding teams to deliver a phenomenal Siri user experience. You must have a "make this happen" attitude and a willingness to work hands-on on building machine learning tools, testing, data collection, and running experiments, as well as to work with pioneering computer vision, speech, and natural language processing algorithms.

Responsibilities:

  • Research, design, and implement machine learning/deep learning algorithms
  • Benchmark and fine-tune machine learning/deep learning algorithms
  • Optimize algorithms for the real-time and low-power constraints of embedded devices
  • Support algorithm integration into Apple products
  • Collaborate with multidisciplinary teams across Apple; you'll work closely with engineers from a number of other teams
  • Thrive as a standout colleague in a fast-paced environment with rapidly changing priorities

EDUCATION & EXPERIENCE

MS, PhD, or equivalent experience in Computer Science, Electrical Engineering, or a related field, with a focus on machine learning

Responsibilities:

Please refer to the job description for details.


REQUIREMENT SUMMARY

Min: N/A, Max: 5.0 year(s)

Information Technology/IT

IT Software - Application Programming / Maintenance

Information Technology

PhD

Proficient

1

Seattle, WA, USA