Research Scientist - Multimodal Sensing and Social Behavior at OCULUS
Redmond, Washington, USA
Full Time


Start Date

Immediate

Expiry Date

25 Oct 2025

Salary

$208,000

Posted On

26 Jul 2025

Experience

2+ years

Remote Job

Yes

Sponsor Visa

No

Skills

Open Source, Data Collection, Behavioral Science, Human Interaction, Deep Learning, Computer Engineering, Publications, Python, Computer Science, Informatics, Coding Experience, C++, Conferences, Data Processing, Human Computer Interaction, Human Behavior, Research

Industry

Information Technology/IT

Description

Meta’s Reality Labs Research (RL-R) brings together a team of researchers, developers, and engineers to create the future of Mixed Reality (MR), Augmented Reality (AR), and Wearable Artificial Intelligence (AI). Within RL-R, the ACE team solves complex challenges in behavioral inference from sparse information. We leverage multimodal, egocentric data and cutting-edge machine learning to deliver robust, efficient models that serve everyone. Our research provides core building blocks to unlock intuitive and helpful Wearable AI, empowering everyone to harness the superpowers of this emerging technology in their daily lives.

In this role, you will work closely with Research Scientists and Engineers from across RL-R to develop novel, state-of-the-art algorithms for wearables that incorporate social behavior dynamics and multimodal sensing platforms. You will design and implement data collection strategies, benchmarks, and metrics to validate and improve model efficiency, scalability, and stability. Your expertise in psychology and human-human interaction will be crucial in developing AI algorithms that can infer human behavior patterns from wearable devices. You will also have the opportunity to work with multiple egocentric sensor modalities to advance our understanding of human behavior in various contexts.

MINIMUM QUALIFICATIONS:

  • Bachelor’s degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
  • PhD degree in Informatics, Social/Behavioral Sciences, Computer Science, Human-Computer Interaction, or related field plus 2+ years of research scientist experience in industry
  • Documented understanding of social behavior dynamics, including expertise in psychology and human-human interaction
  • 2+ years of post-PhD research scientist experience in designing field experiments and data collection campaigns, and in observing human behavior
  • Proven track record of solving complex challenges with multimodal ML as demonstrated through grants, fellowships, patents, or publications in top journals or at conferences like CVPR, NeurIPS, CHI, or equivalent
  • 2+ years of documented experience with multimodal sensing platforms, data collection, multimodal data processing and analysis

PREFERRED QUALIFICATIONS:

  • PhD degree in Behavioral Science, Computer Science, or a related field, plus 3+ years of experience with biosignals, behavioral signals, or egocentric data from wearable sensors
  • 2+ years of coding experience (e.g., Python, C++, PyTorch) documented in publications or open-source repositories (e.g., GitHub)
  • Experience with Large Language Models
  • Experience working in Wearables, Augmented Reality/Virtual Reality
  • Experience with Multimodal Deep Learning approaches and research

RESPONSIBILITIES:

  • Characterize human behavior in the wild to derive behavioral signals for user states, in the form of quantitative insights from ethnographic observations
  • Identify use cases and experiences that leverage behavioral signals to provide user value in wearable AI assistance
  • Design and implement data collection strategies, benchmarks, and metrics to validate and improve model interpretability, scalability, and stability
  • Provide research results that accelerate the development and application of state-of-the-art AI algorithms to infer human behavior patterns from wearable devices
  • Translate the results of human data collection into datasets that can be effectively leveraged by ML tools and into language readily interpretable by foundation models
  • Collaborate with researchers and engineers across broad disciplines through all stages of project development
  • Contribute to research that can eventually be applied to Meta products and services
  • Create tools, infrastructure, and documentation to accelerate research
  • Learn constantly, dive into new areas with unfamiliar technologies, and embrace the ambiguity of Augmented Reality/Virtual Reality problem solving