AI Engineer, LLMs at Logical Intelligence
San Francisco, California, United States - Full Time


Start Date

Immediate

Expiry Date

20 Jul, 26

Salary

0.0

Posted On

21 Apr, 26

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Large Language Models, Python, C++, PyTorch, TensorFlow, JAX, Distributed Training, Machine Learning Infrastructure, DataOps, Kubernetes, Energy-based Models, Formal Verification, Latent Space Reasoning, High-performance Computing, Algorithms, Transformers

Industry

Software Development

Description
Who we are

At Logical Intelligence, we're revolutionizing software development with AI-powered formal verification. We've developed groundbreaking agents that provide mathematical guarantees of code correctness, ensuring that software behaves exactly as intended while proactively identifying bugs and security vulnerabilities. Our novel foundation model enables scalable, precise reasoning for formally verifiable code across Rust, Golang, and smart contract VMs. We've topped PutnamBench, a well-known formal verification benchmark consisting of 672 hard math problems from the William Lowell Putnam Mathematical Competition, the oldest collegiate mathematics competition in North America. Backed by a world-class team - including ICPC champions, a Fields Medalist, and an ACM Turing Award winner - we're building a future where all code is provably correct.

About the role

Join our team as an AI Engineer and help us push the boundaries of what's possible in logical reasoning! We're looking for a motivated individual to design, implement, and refine efficient Large Language Model (LLM) pipelines for scaled distributed training. You'll be at the forefront of designing and refining algorithms that go beyond the capabilities of traditional LLMs, working closely with a talented team of AI experts, energy-based model (EBM) specialists, formal verification engineers, and software developers to create groundbreaking solutions.
What you'll do

- Implement new reasoning algorithms and models
- Evaluate reasoning approaches, including latent space reasoning
- Pre-train, fine-tune, and modify state-of-the-art LLMs
- Optimize and scale LLM pipelines
- Adjust frameworks and interfaces to accelerate machine learning development
- Derive practical solutions and integrate them with the results of other teams to provide the best overall resolution

Qualifications

- Deep understanding of transformer internals, with the ability to make radical changes to the architecture and handle higher-order derivatives
- Expertise in programming languages and tools critical for high-performance computing in Python/C++, and in machine learning, including deep learning frameworks such as PyTorch, TensorFlow, and JAX
- Expertise in optimizing machine learning systems, including general techniques and LLM-specific optimizations
- Understanding of state-of-the-art approaches in LLM reasoning
- Ability to understand complex learning approaches, such as energy-based models
- 3+ years of production experience in ML infrastructure, DataOps, and distributed training
- Proficiency with Kubernetes clusters and distributed compute assets
- Strong communication and teamwork skills
- Readiness to explore and promote cutting-edge technologies in the ML infrastructure domain and beyond

Bonus points

- Publications in any of the major conferences
- Experience with EBMs or latent reasoning
- Mathematical reasoning: discrete math and logic

logicalintelligence.com
Responsibilities
You will design, implement, and refine efficient Large Language Model pipelines for scaled distributed training. Additionally, you will evaluate reasoning approaches and integrate practical solutions with other team members to advance formal verification technology.