MID GenAI ENGINEER at TALENT MATCHMAKERS
Oslo, Norway
Full Time


Start Date

Immediate

Expiry Date

01 Jul, 26

Salary

0.0

Posted On

02 Apr, 26

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Python, FastAPI, LLM, LangChain, LangGraph, OpenAI, Prompt Engineering, Agent Orchestration, Docker, REST APIs, Distributed Systems, Backend Development, System Integration, Data Retrieval, Cloud Infrastructure

Industry

Human Resources Services

Description
ABOUT THE ROLE
We are looking for a Mid GenAI / LLM Developer to join our partner's team and help build a production-grade AI agent platform designed to integrate with enterprise engineering tools and processes in the energy industry. Their team supports every phase of the Software Development Life Cycle (SDLC), from developing detailed roadmaps and resolving complex software challenges to ensuring quick time-to-market and optimized ROI. Remote role in Romania. Collaboration available through CIM or B2B (SRL).

ABOUT THE PROJECT
The project focuses on developing an LLM-powered multi-agent infrastructure that can orchestrate tasks, interact with multiple engineering platforms, and support complex technical workflows used by engineers. You will build and integrate AI agents capable of interacting with existing systems, tools, and data sources, enabling engineers to work with their engineering ecosystem through intelligent interfaces. This is a hands-on engineering role focused on building real production systems, where strong engineering execution is essential. You should be comfortable with modern AI frameworks, backend development, and building integrations across multiple platforms within a distributed engineering environment.

WHAT MAKES THIS ROLE ATTRACTIVE
• Work on a large-scale enterprise AI platform used in the energy industry;
• Build LLM-powered agent systems integrated with engineering platforms;
• Work on complex integrations across multiple enterprise tools and systems;
• Collaborate in a multidisciplinary AI, infrastructure, and platform engineering team;
• Build production GenAI systems with real operational impact;
• High ownership and strong technical autonomy;
• Opportunity to grow into AI architecture or platform leadership roles.

DUTIES AND RESPONSIBILITIES
• LLM & Agentic System Development: Design and implement AI agents powered by LLMs that interact with engineering tools, APIs, and internal systems.
• Agent Orchestration & Workflows: Build multi-agent workflows that coordinate tasks across different systems, including prompt orchestration, tool calling, task routing, and agent collaboration.
• Platform Integration: Develop integrations with multiple engineering platforms, APIs, and services so that agents can execute workflows across the engineering ecosystem.
• Backend Development (Python-first): Build backend services using Python (e.g., FastAPI) that support agent execution, orchestration logic, and system integration.
• LLM Application Architecture: Design scalable LLM application components, including prompt pipelines, agent control logic, response validation, and interaction flows.
• Data & Knowledge Integration: Enable agents to retrieve and process information from structured and unstructured data sources across enterprise systems.
• API & Tooling Integration: Develop APIs and connectors that allow agents to interact with internal tools, services, and engineering platforms.
• Containerized Deployment: Package services using Docker and support deployment pipelines within a cloud-based infrastructure.
• Collaboration: Work closely with infrastructure engineers, frontend developers, cloud architects, DevOps engineers, and product stakeholders to iteratively improve the platform.

REQUIREMENTS
• Python Skills (Mandatory): 2-3 years of experience building backend systems with Python (FastAPI, async programming, API development).
• LLM & Agentic Systems Experience: Hands-on experience building LLM-powered applications and agent workflows using technologies such as LangChain, LangGraph, OpenAI / Azure OpenAI APIs, prompt engineering, and tool calling / agent orchestration.
• AI Application Integration: Experience integrating LLM-based services with external platforms, APIs, or enterprise systems.
• Backend Architecture & APIs: Strong understanding of REST APIs and system integration patterns.
• Distributed System Thinking: Ability to design systems that coordinate multiple services, agents, or tools within larger application ecosystems.
• Containerization: Experience with Docker.
• Clean Code & Production Mindset: Ability to write maintainable code with testing discipline and a production mindset.
• Multi-Language Openness: Willingness to collaborate in environments where other languages (such as Go) may be used.
• Language Skills: Fluent in English.

NICE TO HAVE
• Experience with Go;
• Experience with LlamaIndex, Haystack, HuggingFace Transformers, or LangSmith (tracing & evaluation);
• Experience working with multi-agent architectures;
• Familiarity with vector databases or semantic retrieval systems;
• Basic knowledge of Kubernetes;
• Experience integrating with complex enterprise platforms or engineering tools;
• Experience building AI-driven internal platforms or developer tools.
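To give candidates a feel for the tool-calling and task-routing work described above, here is a minimal sketch in plain Python. It is illustrative only, not the platform's actual code: the tool names, the `ToolRegistry` class, and the `lookup_asset` function are hypothetical stand-ins for the kind of structured tool calls an LLM agent emits and a backend service dispatches.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

# Hypothetical sketch of the tool-calling pattern: an agent receives a
# structured "tool call" (as an LLM would emit) and routes it to a
# registered backend function.


@dataclass
class ToolCall:
    name: str
    arguments: Dict[str, Any]


class ToolRegistry:
    """Maps tool names to backend functions and dispatches calls to them."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., Any]] = {}

    def register(self, name: str) -> Callable[[Callable[..., Any]], Callable[..., Any]]:
        def decorator(fn: Callable[..., Any]) -> Callable[..., Any]:
            self._tools[name] = fn
            return fn
        return decorator

    def dispatch(self, call: ToolCall) -> Any:
        if call.name not in self._tools:
            raise KeyError(f"unknown tool: {call.name}")
        return self._tools[call.name](**call.arguments)


registry = ToolRegistry()


@registry.register("lookup_asset")
def lookup_asset(asset_id: str) -> dict:
    # Placeholder for a real engineering-platform integration.
    return {"asset_id": asset_id, "status": "operational"}


result = registry.dispatch(ToolCall("lookup_asset", {"asset_id": "PUMP-42"}))
```

In a production system this dispatcher would sit behind a FastAPI endpoint and the registered functions would call out to the enterprise platforms the agents integrate with.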
Responsibilities
Design and implement LLM-powered AI agents that orchestrate complex workflows across engineering platforms and internal systems. Collaborate with multidisciplinary teams to build scalable backend services and integrate data sources into a production-grade AI agent platform.