AI Full Stack Engineer at Port
Tel-Aviv, Tel-Aviv District, Israel - Full Time


Start Date

Immediate

Expiry Date

30 May, 26

Salary

0.0

Posted On

01 Mar, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

GenAI, Full Stack Engineering, Backend Systems, Frontend Systems, Agentic Workflows, LLM Integration, NodeJS, Python, Go, React, TypeScript, RAG Pipelines, Kafka, Redis, PostgreSQL, MongoDB

Industry

Technology; Information and Internet

Description
About Port

At Port.io, we are building an open and flexible Agentic Engineering Platform for modern engineering organizations. Following our recent $100M Series C funding round, we are in a phase of rapid hypergrowth with strong enterprise momentum. We act as the central nervous system for engineering, enabling platform teams to unify their stack and expose it as a governed layer through golden paths for developers and AI agents. By combining rich engineering context, workflows, and actions, we help organizations transition from manual processes to autonomous, AI-assisted engineering workflows while maintaining control and accountability. As a product-led company, we believe in building world-class platforms that fundamentally shape how modern engineering organizations operate.

About the role

We are looking for a highly motivated AI Full Stack Engineer with a production GenAI background to join our team and help us shape the future of the Agentic Engineering Platform (AEP).

What you'll do:

At Port, we're a platform by developers, for developers. Your role will encompass end-to-end design, implementation, and daily feature delivery across both backend and frontend systems.
You will:

- Implement high-scale AI-powered features deeply integrated into our platform
- Design and build production-grade backend systems serving a wide and growing user base
- Build agent-based workflows using frameworks such as AI SDK
- Integrate LLMs into real production systems with attention to reliability, latency, observability, and cost
- Work across frontend (React + TypeScript) and backend (NodeJS, Python, Go) to deliver complete AI-driven user experiences
- Own features end-to-end: design, implementation, testing, deployment, and monitoring
- Help define standards and best practices around AI reliability and evaluation
- Contribute to technical planning, mentor teammates, and help recruit top talent
- Develop retrieval-augmented generation (RAG) pipelines over structured and unstructured data

Our stack includes React + TypeScript on the frontend; NodeJS + TypeScript, Python, and Golang on the backend; and Vercel's AI SDK + AWS Bedrock + Azure OpenAI for GenAI. We use Kafka + Kafka Connect, Redis, PostgreSQL, MongoDB, and other modern infrastructure components.
Requirements

- 5+ years of professional software engineering experience
- Experience in NodeJS + TypeScript
- Strong experience designing and developing complex systems from design to production
- Experience dealing with scale and performance-related challenges
- Experience building or integrating AI/LLM-powered applications in production systems
- Experience building agent workflows and tool integrations
- Ability to think critically about model limitations, hallucinations, latency, and cost tradeoffs
- A collaborative team player with a can-do approach
- Strong written and verbal communication skills in English and Hebrew

Advantages:

- Experience with AWS or other cloud platforms
- Experience with Vercel's AI SDK
- Experience with embeddings, vector databases, or semantic search
- Experience with AWS Bedrock / Azure OpenAI
- Experience building tool-using agents or workflow engines
- Experience with AI evaluation, observability, and monitoring
- Experience with DevOps-related tools
- Experience with PostgreSQL, Kafka, DocumentDB, OpenSearch, Redis
Responsibilities
The engineer will be responsible for the end-to-end design, implementation, and daily feature delivery across both backend and frontend systems, focusing on implementing high-scale AI-powered features deeply integrated into the platform. This includes building agent-based workflows, integrating LLMs with attention to production quality, and developing Retrieval-Augmented Generation (RAG) pipelines.