Software Engineer - Computer Vision at Apple
Sunnyvale, California, United States
Full Time


Start Date

Immediate

Expiry Date

05 Feb, 26

Salary

0.0

Posted On

07 Nov, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Robust API Design, Debugging, Performance Optimization, Collaboration, iOS Development, macOS Development, Swift, SwiftUI, Machine Learning, Computer Vision, Real-Time Video Pipelines, Object Detection, Segmentation, Tracking, Pose Estimation, Integration

Industry

Computers and Electronics Manufacturing

Description
We’re starting to see the incredible potential of multimodal foundation and large language models, and many applications in the computer vision and machine learning domain that previously appeared unfeasible are now within reach. We are looking for a highly motivated and skilled Senior Software Engineer to join our team in the Video Computer Vision group and help us enable that potential for real-time human understanding on Apple devices. The Video Computer Vision org has pioneered human-centric real-time features such as Face ID, FaceKit, and Gaze and Hand gesture control, which have changed the way millions of users interact with their devices. We balance research and product requirements to deliver Apple-quality, pioneering experiences, innovating through the full stack and partnering with HW, SW, and AI teams to shape Apple's products and bring our vision to life.

DESCRIPTION

You’ll work on groundbreaking projects to advance our AI and computer vision capabilities for human understanding. You have a strong background in integrating CV/ML algorithms in your code and efficiently running foundation and language models on device. You’ll have the opportunity to collaborate with multi-functional teams, including researchers, data scientists, software engineers, human interface designers, and application domain experts.

MINIMUM QUALIFICATIONS

Experience with robust API design: Proven ability to design developer-facing APIs with a clear understanding of architectural tradeoffs, design patterns, and anti-patterns. Strong intuition for maintainability and extensibility.

Exceptional debugging and performance optimization skills.

Track record of multi-functional collaboration and product delivery: Demonstrated success delivering high-performance, production-quality code in collaborative, multi-disciplinary environments.

Experience with iOS/macOS development: Familiarity with Swift, SwiftUI, modern concurrency (e.g., structured concurrency with async/await), and Apple system frameworks such as Cocoa/Cocoa Touch, Core ML, Metal, and Accelerate.

Foundational understanding of machine learning: Familiarity with ML algorithms and development pipelines, with the ability to work effectively with ML practitioners and integrate ML components into production systems.

PREFERRED QUALIFICATIONS

Experience building internal developer tools: Hands-on experience developing tools such as test data visualization systems, debugging enhancements, and robust unit/integration testing frameworks to support engineering workflows.

Experience with live camera streaming applications: Understanding of real-time video pipelines, image transformations, and rendering loops.

Experience integrating on-device CV/ML algorithms: Familiarity with common computer vision techniques (e.g., object detection, segmentation, tracking, pose estimation), sequence models for real-time inference, and foundation models/LLMs optimized for on-device performance.
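The qualifications above call out Swift structured concurrency (async/await) and real-time video pipelines. As a minimal, purely illustrative sketch of that combination — every type and function name here is hypothetical, and the detection step is a placeholder rather than a real Core ML call — a frame-processing loop might look like:

```swift
import Foundation

// Hypothetical stand-in for a camera frame buffer.
struct Frame {
    let index: Int
}

// Toy detection result; a real pipeline would wrap model outputs.
struct Detection {
    let frameIndex: Int
    let label: String
}

// Expose a finite sequence of frames as an AsyncStream,
// mimicking a live capture source feeding the pipeline.
func frameStream(count: Int) -> AsyncStream<Frame> {
    AsyncStream { continuation in
        for i in 0..<count {
            continuation.yield(Frame(index: i))
        }
        continuation.finish()
    }
}

// Placeholder async "inference" step; in practice this is where
// an on-device model invocation would go.
func detect(_ frame: Frame) async -> Detection {
    Detection(frameIndex: frame.index, label: "person")
}

// Consume frames with structured concurrency: each frame is
// awaited in order, keeping the loop simple and cancellable.
func runPipeline() async -> [Detection] {
    var results: [Detection] = []
    for await frame in frameStream(count: 3) {
        results.append(await detect(frame))
    }
    return results
}
```

This is only a sketch of the structured-concurrency shape such pipelines tend to take; production code would handle backpressure, dropped frames, and cancellation explicitly.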
Responsibilities
You will work on groundbreaking projects to advance AI and computer vision capabilities for human understanding, collaborating with multi-functional teams to integrate CV/ML algorithms and run models efficiently on device.