AI Software/ Data Engineer at Vurvey Labs
Cincinnati, Ohio, United States
Full Time


Start Date

Immediate

Expiry Date

28 Apr, 26

Salary

0.0

Posted On

28 Jan, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Software Engineering, Data Pipelines, AI Feature Operationalization, Microservices, API Design, Kubernetes, Docker, Cloud Platforms, CI/CD, Testing, Code Reviews, LLMs, RAG Architectures, SQL, NoSQL, Airflow

Industry

Technology; Information and Media

Description
About the role

We are looking for a versatile AI Software Engineer to serve as the critical link between our strategic AI initiatives and our core production engineering team. In this unique role, you will partner directly with our data team to architect and implement data pipelines and intelligent functions. However, you are not just building prototypes; you are a software engineer first. You will be responsible for operationalizing AI features, ensuring they meet the strict architectural, scalability, and CI/CD standards of our Engineering team.

What you'll do

- Bridge the Gap: Translate high-level AI strategies and model prototypes into robust, production-grade microservices and libraries.
- Build Pipelines: Design and implement efficient data ingestion and processing pipelines (ETL/ELT) that feed our AI models.
- Integrate Systems: Work closely with our Backend team (Node.js/TypeScript) to expose AI capabilities via clean, well-documented APIs (an illustrative sketch follows this description).
- Infrastructure & Ops: Deploy and manage AI workloads using cloud services; own the deployment and orchestration of your services.
- Engineering Excellence: Enforce strong software development principles: comprehensive testing, code reviews, and well-maintained CI/CD pipelines.

Qualifications

- Software Engineering Foundation: 5+ years of professional experience in backend software development. You write clean, maintainable, and testable code.
- DevOps Mindset: Hands-on experience with Kubernetes, Docker, and cloud platforms (AWS/GCP/Azure). You know how to troubleshoot a crashing pod.
- CI/CD: Experience setting up and maintaining pipelines (GitHub Actions, GitLab CI, etc.).
- AI/ML Fluency: You understand the ecosystem: the difference between training and inference, what embeddings are, and how LLMs and RAG architectures fit together.
- API Design: Strong grasp of API design patterns.
- Database Skills: Proficiency in SQL (Postgres) and NoSQL environments.
- Data Handling: Experience designing pipelines using tools like Apache Airflow, Kafka, or cloud-native serverless functions.

FAQ

This position is not remote and is located at our Cincinnati, Ohio headquarters. Additionally, we are unable to support visa needs now or in the future.
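To give a concrete sense of the "expose AI capabilities via clean, well-documented APIs" responsibility, the sketch below shows one way a RAG-style capability might be wrapped in a Node.js/TypeScript endpoint. It is a minimal illustration only, assuming Express; the retrieveContext and generateAnswer helpers are hypothetical placeholders for a vector-store lookup and an LLM inference call, and are not part of Vurvey's actual stack.

// Minimal sketch: wrapping a RAG-style AI capability behind an HTTP API.
// Assumes Express; retrieveContext and generateAnswer are hypothetical
// stand-ins for a vector-store lookup and an LLM inference call.
import express, { Request, Response } from "express";

interface AskRequest {
  question: string;
}

// Hypothetical helper: embed the question and query a vector index.
async function retrieveContext(question: string): Promise<string[]> {
  return [];
}

// Hypothetical helper: prompt an LLM with the question plus retrieved context.
async function generateAnswer(question: string, context: string[]): Promise<string> {
  return "";
}

const app = express();
app.use(express.json());

app.post("/v1/ask", async (req: Request<{}, {}, AskRequest>, res: Response) => {
  const { question } = req.body;
  if (!question) {
    res.status(400).json({ error: "question is required" });
    return;
  }
  const context = await retrieveContext(question);
  const answer = await generateAnswer(question, context);
  // Return the answer along with the retrieved passages for traceability.
  res.json({ answer, sources: context });
});

app.listen(3000);

The point of the sketch is the shape of the integration work: the AI logic sits behind a small, typed, documented endpoint that the production backend can consume like any other service.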
Responsibilities
This role translates high-level AI strategies and model prototypes into robust, production-grade microservices and libraries, and designs and implements efficient data ingestion and processing pipelines that feed the AI models. The engineer also exposes these AI capabilities through clean, well-documented APIs and owns their deployment and orchestration on cloud services.