AI Evaluation Specialist at Binance
Remote, Singapore
Full Time


Start Date

Immediate

Expiry Date

09 Dec, 25

Salary

0.0

Posted On

09 Sep, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Science, Test Driven Development, Analytical Skills, Communication Skills, Artificial Intelligence, Pipelines, Evaluation Methodologies, Computer Science, Evaluation Tools, Root Cause Analysis

Industry

Information Technology/IT

Description

Binance is a leading global blockchain ecosystem behind the world’s largest cryptocurrency exchange by trading volume and registered users. We are trusted by over 280 million people in 100+ countries for our industry-leading security, user fund transparency, trading engine speed, deep liquidity, and an unmatched portfolio of digital-asset products. Binance offerings range from trading and finance to education, research, payments, institutional services, Web3 features, and more. We leverage the power of digital assets and blockchain to build an inclusive financial ecosystem to advance the freedom of money and improve financial access for people around the world.
We are seeking a dedicated AI Evaluation Specialist responsible for designing, implementing, and managing comprehensive evaluation frameworks that span the entire lifecycle of LLM agents—from pre-deployment testing to post-deployment monitoring and iterative refinement. Your work will directly influence Binance’s AI adoption journey by ensuring the reliability, adaptability, and governance compliance of AI agents operating across various domains such as Customer Service, Growth, and Compliance.

Requirements

  • Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
  • Strong understanding of Large Language Models (LLMs), autonomous AI agents, and their system architectures.
  • Experience with AI evaluation methodologies, including offline benchmarking, online monitoring, and hybrid human-AI evaluation approaches (a minimal offline-evaluation sketch follows this list).
  • Familiarity with software engineering best practices such as Test-Driven Development (TDD), Behavior-Driven Development (BDD), and their limitations in AI contexts.
  • Proficiency in designing adaptive, lifecycle-spanning evaluation frameworks that incorporate both quantitative and qualitative metrics.
  • Experience with evaluation tools and frameworks (e.g., Opik, LangSmith) is a plus.
  • Ability to analyze complex system-level behaviors, including reasoning pipelines, tool integrations, and emergent agent actions.
  • Strong analytical skills with experience in data-driven diagnostics and root cause analysis.
  • Excellent communication skills to document evaluation plans, results, and recommendations clearly.
  • Experience working in cross-functional teams and managing feedback loops between evaluation and development.
  • Experience collaborating with infrastructure or platform teams to improve AI tooling and automation platforms.
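By way of illustration, below is a minimal, framework-agnostic sketch of the kind of offline evaluation harness referenced above, pairing an automatic keyword check (quantitative) with a queue for human review (qualitative). The `run_agent` callable, `EvalCase`, and the toy benchmark are hypothetical stand-ins, not Binance systems or any specific tool's API.

```python
# Minimal offline-evaluation harness sketch (framework-agnostic).
# `run_agent`, EvalCase, and the toy benchmark below are hypothetical stand-ins.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class EvalCase:
    prompt: str                        # input sent to the agent
    expected_keywords: List[str]       # quantitative check: terms the reply must contain
    needs_human_review: bool = False   # route this case to a qualitative review queue


def evaluate(run_agent: Callable[[str], str], cases: List[EvalCase]) -> Dict[str, object]:
    """Score an agent offline: automatic keyword accuracy plus a human-review queue."""
    passed = 0
    review_queue = []
    for case in cases:
        reply = run_agent(case.prompt)
        if all(kw.lower() in reply.lower() for kw in case.expected_keywords):
            passed += 1
        if case.needs_human_review:
            review_queue.append((case.prompt, reply))
    return {
        "keyword_accuracy": passed / len(cases) if cases else 0.0,
        "human_review_queue": review_queue,
    }


def toy_agent(prompt: str) -> str:
    """Stand-in for a real LLM agent, used only to make the sketch runnable."""
    return "Please complete security verification to reset 2FA; fees are listed on the fee page."


if __name__ == "__main__":
    benchmark = [
        EvalCase("How do I reset 2FA?", ["security", "verification"], needs_human_review=True),
        EvalCase("What is the maker fee?", ["fee"]),
    ]
    print(evaluate(toy_agent, benchmark))
```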
Responsibilities
  • Participate in the entire software development lifecycle, from requirements analysis and test planning through execution, defect tracking, product release, and maintenance.
  • Act as the go-to person for AI agent evaluation and continuous monitoring.
  • Create comprehensive, effective test strategies and perform hands-on testing to ensure the accuracy, reliability, and performance of AI and data applications.
  • Perform effective root cause analysis of test failures and product issues, and drive optimizations for future enhancements (a brief failure-triage sketch follows this list).
  • Design and develop internal tools that leverage AI technology to improve engineering and testing efficiency.
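As a companion to the root-cause-analysis responsibility, here is a brief sketch of failure triage: it groups failed test cases by a coarse signature so the most frequent root-cause candidates surface first. The failure-record fields (`component`, `error_type`) are illustrative assumptions, not an actual Binance schema.

```python
# Illustrative failure-triage sketch for root cause analysis.
# The record fields ("component", "error_type") are assumptions, not a real schema.
from collections import Counter
from typing import Dict, List


def triage_failures(failures: List[Dict[str, str]]) -> Counter:
    """Group failed test cases by a coarse signature so the most common
    root-cause candidates surface first."""
    signatures: Counter = Counter()
    for record in failures:
        # Signature = failing component + error type, e.g. "retrieval/Timeout".
        component = record.get("component", "unknown")
        error_type = record.get("error_type", "unknown")
        signatures[f"{component}/{error_type}"] += 1
    return signatures


if __name__ == "__main__":
    sample_failures = [
        {"component": "retrieval", "error_type": "Timeout"},
        {"component": "retrieval", "error_type": "Timeout"},
        {"component": "tool_call", "error_type": "SchemaMismatch"},
    ]
    for signature, count in triage_failures(sample_failures).most_common():
        print(f"{signature}: {count} failure(s)")
```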