Algorithm Engineer, LLM (Safety First) – AI Safety at Binance
Remote, Singapore
Full Time


Start Date

Immediate

Expiry Date

26 Nov, 25

Salary

0.0

Posted On

26 Aug, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Machine Learning, Financial Systems, Data Governance, Risk Frameworks, Collaboration, Python, Computer Science, Crypto

Industry

Information Technology/IT

Description

Binance is a leading global blockchain ecosystem behind the world’s largest cryptocurrency exchange by trading volume and registered users. We are trusted by over 280 million people in 100+ countries for our industry-leading security, user fund transparency, trading engine speed, deep liquidity, and an unmatched portfolio of digital-asset products. Binance’s offerings range from trading and finance to education, research, payments, institutional services, Web3 features, and more. We leverage the power of digital assets and blockchain to build an inclusive financial ecosystem, advancing the freedom of money and improving financial access for people around the world.

REQUIREMENTS:

  • Master’s/PhD in Machine Learning, AI, Computer Science, or related field
  • Research track record (ICLR, NeurIPS, ACL, ICML) a plus
  • Hands-on experience building LLM/agent guardrails (policy design, refusal rules, filtering, permissions)
  • Practical experience with hallucination mitigation and safety evaluation
  • Proven ability to ship AI safety frameworks to production
  • Strong coding in Python (Java a plus); expertise in PyTorch/TensorFlow/JAX
  • Understanding of privacy, PII handling, data governance, and risk frameworks
  • Interest in crypto, Web3, and financial systems
  • Self-driven with strong ownership and delivery skills
  • Excellent communication and collaboration abilities
Responsibilities

ABOUT THE ROLE

We are seeking an LLM Algorithm Engineer (Safety First) to join our AI/ML team, with a focus on building robust AI guardrails and safety frameworks for large language models (LLMs) and intelligent agents. This role is pivotal in ensuring trust, compliance, and reliability in Binance’s AI-powered products such as Customer Support Chatbots, Compliance Systems, Search, and Token Reports.

RESPONSIBILITIES:

  • Design and build an AI Guardrails framework as a safety layer for LLMs and agent workflows
  • Define and enforce safety, security, and compliance policies across applications
  • Detect and mitigate prompt injection, jailbreaks, hallucinations, and unsafe outputs
  • Implement privacy and PII protection: redaction, obfuscation, minimisation, data residency controls
  • Build red-teaming pipelines, automated safety tests, and risk monitoring tools
  • Continuously improve guardrails to address new attack vectors, policies, and regulations
  • Fine-tune or optimise LLMs for trading, compliance, and Web3 tasks
  • Collaborate with Product, Compliance, Security, Data, and Support to ship safe features