AI Principal Engineer (m/f/n) at InPost Ireland
Warsaw, Masovian Voivodeship, Poland
Full Time


Start Date

Immediate

Expiry Date

19 Jul, 26

Salary

Not specified

Posted On

20 Apr, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Java, Generative AI, LLMs, Microservices, Cloud infrastructure, Kubernetes, Docker, API design, Vector databases, LangChain, CI/CD, SQL, NoSQL, System architecture, Data pipelines, Mentorship

Industry

Logistics; Transportation; Supply Chain and Storage

Description
Company Description InPost has revolutionised e-commerce parcel delivery in Poland and is now one of Europe’s leading OOH e-commerce enablement platforms. Founded in 1999 by Rafał Brzoska, InPost provides delivery services through our network of almost 60,000 Automated Parcel Machines (APMs) and almost 35,000 pick-up drop-off points (PUDO) in nine countries across Europe, as well as to-door courier and fulfilment services to e-commerce merchants. InPost’s lockers provide consumers with a cheaper and more flexible, convenient, environmentally friendly and contactless delivery option. Job Description We are seeking a skilled and innovative AI Principal Java Software Engineer, experienced in working with Generative AI (GenAI) models, such as Large Language Models (LLMs), and integrating these solutions into business applications. This role combines software engineering responsibilities with deep knowledge of LLMS, APIs, and cloud infrastructure - focused on building modern, AI‑enhanced business applications. Key Responsibilities Drive the technical architecture across the domain, with a focus on modernization, scalability and AI integration. Lead the design and implementation of microservices and cloud-native systems. Guide the transition from legacy systems to modern distributed systems. Collaborate with senior stakeholders (EMs, Staff and Principal Engineers, Directors) to align on technology direction. Champion engineering excellence, fostering a culture of autonomy, accountability, and quality. Provide mentorship and leadership across engineering teams. Model Integration & API Development Integrate LLMs and other GenAI models into web applications through efficient API design and implementation. Build and optimize API endpoints enabling seamless, real-time communication between front-end applications and back-end AI services. Design and develop secure, scalable, and high-performing Java-based microservices for AI model deployment. 
Back-End Development & AI Pipelines

- Develop robust back-end systems in Java to support deployment, scalability, and ongoing maintenance of GenAI models.
- Build and maintain data pipelines, including preprocessing input data and post-processing model outputs for application use.
- Implement best practices for handling sensitive data and maintaining high model performance.

Infrastructure & Deployment

- Use Kubernetes and Docker for containerization and orchestration to ensure scalable deployment of AI applications.
- Implement CI/CD pipelines for automated testing and delivery of code changes.
- Maintain scalable and secure cloud infrastructure on platforms such as Google Cloud Platform or Azure for model training, storage, and deployment.

LLM and GenAI Ecosystem Expertise

- Utilize vector databases (e.g., Pinecone, Weaviate, Faiss) for embedding management and similarity search.
- Work with frameworks supporting model development and deployment, including Hugging Face, LangChain, and OpenAI ecosystem tools.
- Optimize and fine-tune LLMs for specific application needs.

Qualifications

- Bachelor’s degree in Computer Science, Engineering, or a related field (minimum).
- Expertise in software development, with 7+ years of relevant experience, ideally with a focus on AI model integration.
- Strong knowledge of GenAI/LLMs, including model selection, tuning, and embedding strategies.
- Experience developing APIs that enable communication between front-end applications and AI systems.
- Working knowledge of Docker and Kubernetes.
- Familiarity with cloud platforms (AWS, GCP, Azure) for scalable AI deployment.
- Experience with vector databases and their integration with LLM-driven applications.
- Familiarity with SQL and NoSQL databases, as well as caching solutions (e.g., Redis).
- Experience with CI/CD pipelines, Git, and DevOps practices.
- Excellent command of English AND Polish.

Preferred Qualifications

- Knowledge of streaming architectures for real-time data processing (e.g., Apache Kafka).
- Familiarity with serverless architectures (e.g., AWS Lambda) for scalable AI features.
- Prior experience with ML frameworks such as TensorFlow, PyTorch, or ONNX.
- Strong understanding of data privacy and security in AI applications.

Soft Skills

- Strong problem-solving abilities in both independent and team-based work.
- Excellent communication skills, with the ability to translate technical requirements into actionable development tasks.
- Proactive approach to staying current with evolving AI technologies and frameworks.

Additional Information

Why Join InPost?

- The option to work from the office or 100% remotely.
- Opportunity to work in a diverse, international and cross-functional environment, alongside leading experts.
- Fulfilling careers with a range of employee benefits, with investment in training opportunities for your development.
- Involvement in technology monitoring and choices.
- Your impact will be visible instantly and you will be making a difference in our users’ lives.
- Participation in building a new Centre of Excellence at InPost.

Direction: Staff/Principal Engineers - InPost Tech
Organisation: InPost Group - Technology

How To Apply:

In case you would like to apply to this job directly from the source, please click here

Responsibilities
Drive the technical architecture for AI-enhanced business applications and lead the design of scalable microservices. Integrate Generative AI models into systems while mentoring engineering teams and championing technical excellence.