Senior Researcher, Agentic Safety & Alignment at Huawei Technologies Co. Ltd - Qatar
Helsinki, Uusimaa, Finland
Full Time


Start Date

Immediate

Expiry Date

20 Jul, 26

Salary

0.0

Posted On

21 Apr, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Large Language Models, Reinforcement Learning, Python, Java, C++, PyTorch, TensorFlow, Agentic AI, AI Alignment, Neural Networks, Computer Vision, Red-teaming, Prompt Engineering, Machine Learning, Model Robustness, Neural Network Interpretability

Industry

Telecommunications

Description
Huawei Consumer Business Group is the leader in the all-scenario AI life, covering smartphones, PCs and tablets, wearables, mobile broadband devices, family devices, and device cloud services. Huawei Consumer Business Group is dedicated to delivering the latest technologies to consumers. We believe in creating safe, privacy-friendly, high-quality products with great user experiences for our consumers. The mission of the Cloud Content Security Lab at our Helsinki Research Center is to protect the online safety of Huawei mobile users within the HarmonyOS ecosystem through technological advancements. We build advanced AI detection solutions to ensure content safety and safety in AI, including agentic AI safety and machine unlearning.

We are looking for a passionate and motivated researcher with a solid track record of solving challenging problems and advancing the state of the art, and a demonstrated passion for ground-breaking research that can be scaled to production environments, with an emphasis on the intersection of AI and safety.

Your responsibilities:

· Develop and implement state-of-the-art alignment techniques (e.g., RLHF, RLAIF, Constitutional AI) specifically tailored for LLM chain-of-thought (CoT) reasoning, agentic workflows, and multi-step reasoning.
· Create automated red-teaming frameworks and "safety sandboxes" to test for agent-specific failure modes.
· Develop robust defenses against jailbreaking, prompt injection, and adversarial exploits that target a model's planning and tool-use capabilities.
· Build tools to understand why an agent made a specific decision, ensuring the "black box" of agentic reasoning becomes transparent and auditable.
· Turn cutting-edge AI safety papers into high-performance, scalable code, transforming theoretical breakthroughs into production-ready tools and frameworks.

Requirements:

· Ph.D. in Computer Science, Deep Learning, Machine Learning, Mathematics, or another related field.
· Research focus in the AI field, with a good track record and high motivation in the Agentic/Gen AI safety alignment domain.
· Strong proficiency in Large Language Models (LLMs), neural networks, computer vision architectures, and reinforcement learning.
· Strong background in Python, Java, or C++, with deep knowledge of ML frameworks such as PyTorch and TensorFlow. Familiarity with agentic frameworks (e.g., LangChain, AutoGPT, OpenClaw).
· Successful experience in aligning AI with human values and expectations, improving model robustness, controlled and continual learning, and neural network interpretability and editing techniques is highly valued.
· Prior work in AI Safety, Ethics, or Trust & Safety is a huge plus.
· Pioneering novel methods or neural networks that revolutionized machine learning, the AI field, or the industry is a huge plus.
· Strong publication record in top conferences (e.g., NeurIPS, ICLR, ICML, AAAI, ACL, CVPR, ICCV, EMNLP, NAACL).
· Candidates with experience in ICPC, IOI/IMO, IOAI, and other international competitions in computer science, machine learning, and AI are highly preferred.
· Good teamwork, enjoyment of working in multicultural teams, and a passion for challenging the status quo.

This is a full-time employee position based in Ruoholahti, Helsinki.

Why join us? Join a high-caliber lab: work in our Cloud Content Security Lab in Ruoholahti, Helsinki, alongside experienced international researchers and engineers, in a focused environment that values scientific excellence, depth, and real impact in protecting millions of Huawei mobile users. You'll enjoy access to extensive occupational healthcare, employee perks including culture & sports and phone benefits, team-building events and celebrations, employee recognition through awards and gifts, and daily office breakfasts and snacks. We are a team of multinational backgrounds, creating a unique working environment for developing imaging technologies.

How To Apply:

In case you would like to apply to this job directly at the source, please click here.

Responsibilities
Develop and implement advanced alignment techniques for LLMs and agentic workflows while creating automated red-teaming frameworks to test for failure modes. Build robust defenses against adversarial exploits and ensure agentic reasoning processes are transparent and auditable.