Applied AI Engineer at Mem0
SFBA, California, USA - Full Time


Start Date

Immediate

Expiry Date

09 Nov, 25

Salary

180,000

Posted On

10 Aug, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Research, Data Models, Rapid Prototyping, Use Case

Industry

Information Technology/IT

Description

MINIMUM QUALIFICATIONS

  • Full-stack fluency: Next.js/React on the front end and Python backends (FastAPI/Django/Flask) or Node where needed.
  • Strong Python and TypeScript/JavaScript; comfortable building APIs, wiring data models, and deploying quick demos.
  • Hands-on with the LLM/RAG stack: embeddings, vector databases, retrieval strategies, prompt engineering (a minimal sketch of this stack follows the list).
  • Track record of rapid prototyping: moving from idea to demo in days, not months; clear documentation of results and trade-offs.
  • Ability to design small, meaningful evaluations for a use case (quality + latency) and iterate based on evidence.
  • Excellent communication with Research and Backend; crisp specs, readable code, and honest status updates.


Responsibilities

ROLE SUMMARY:

Own the 0→1. You’ll turn vague customer use cases into working proofs-of-concept that showcase what Mem0 can do. This means rapid full-stack prototyping, stitching together AI tools, and aggressively experimenting with memory retrieval approaches until the use case works end-to-end. You’ll partner closely with Research and Backend, communicate trade-offs clearly, and hand off winning prototypes that can be hardened for production.

WHAT YOU’LL DO:

  • Build POCs for real use cases: Stand up end-to-end demos (UI + APIs + data) that integrate Mem0 in the customer’s flow.
  • Experiment with memory retrieval: Try different embeddings, indexing, hybrid search, re-ranking, chunking/windowing, prompts, and caching to hit task-level quality and latency targets.
  • Prototype with Research: Implement paper ideas and new techniques from scratch, compare baselines, and keep what wins.
  • Create eval harnesses: Define small gold sets and lightweight metrics to judge POC success; instrument demos with basic telemetry (see the sketch after this list).
  • Integrate AI tooling: Combine LLMs, vector DBs, Mem0 SDKs/APIs, and third-party services into coherent workflows.
  • Collaborate tightly: Work with Backend on clean contracts and data models; with Research on hypotheses; share learnings and next steps.
  • Package & handoff: Write concise docs, scripts, and templates so Engineering can productionize quickly.