Full Stack Data Engineer at Applico Capital
Chicago, Illinois, United States
Full Time


Start Date

Immediate

Expiry Date

03 Feb, 26

Salary

0.0

Posted On

06 Nov, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Python, SQL, Data Engineering, Data Pipelines, Cloud Platforms, Automation, Data Modeling, APIs, Machine Learning, Data Quality, Open Source Tools, Data Governance, CI/CD, Data Integration, Graph Databases, AI Tools

Industry

Venture Capital and Private Equity Principals

Description
Applico Capital is bringing the tech-enabled private equity thesis to B2B distributors and building an all-star technical team to execute on an unprecedented opportunity. Our mission is to modernize the backbone of industrial commerce through automation, data, and intelligent systems. This is not a “lab” or “innovation center” set apart from the business; our success depends on working hand-in-hand with stakeholders at every level, from executives to frontline employees. We must build empathy for real-world challenges, co-create meaningful solutions, and ensure adoption across the organization. We are looking for highly technical leaders who thrive in entrepreneurial, scrappy, and collaborative environments, and who are comfortable moving fast and making an outsized impact. Every member of this team will be expected to use AI directly in their own work, automate workflows, and drive a data-driven, CI/CD, and automation-heavy culture that delivers measurable business outcomes.

About the Role

We’re looking for a Full-Stack Data Engineer to help build this foundation from the ground up. You’ll work across ingestion, transformation, enrichment, and delivery layers, connecting ERP, CRM, PIM, CMS, and external data sources into a unified, intelligent data environment. This is a hands-on builder role, ideal for someone who thrives in fast-moving, entrepreneurial environments. You’ll prototype, automate, and iterate quickly while helping establish engineering patterns that will scale across multiple operating companies.

Key Responsibilities

- Design and implement end-to-end data pipelines for ingestion, transformation, and enrichment using modern open-source tools (a minimal sketch follows this list)
- Integrate data from core enterprise systems (ERP, CRM, PIM, CMS) and third-party APIs
- Build automated ELT/ETL workflows with observability, testing, and monitoring baked in
- Partner with Data Architecture and AI teams to prepare data for analytics, machine learning, and agent-driven workflows
- Develop lightweight APIs and internal tools (e.g., FastAPI, Streamlit, or Retool) to expose clean data products to internal teams (see the sketch after the Qualifications list)
- Implement data quality, lineage, and governance frameworks to ensure reliability and transparency
- Contribute to the definition of open data models and schemas, following best practices for standardization and interoperability
- Use LLMs and AI-augmented tools to accelerate integration, cleaning, and mapping tasks
- Collaborate with product and business stakeholders to understand workflows and translate them into scalable data solutions
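To make the first responsibility concrete, here is a minimal sketch of an ingest-then-transform step in Python using DuckDB, one of the open-source tools named in this posting. The file name raw_orders.csv, the table names, and the column names are invented for illustration and are not part of the role description.

# Minimal sketch: ingest a raw extract and materialize a cleaned table.
# warehouse.duckdb, raw_orders.csv, stg_orders, fct_orders, and the
# columns are hypothetical placeholders, not from the posting.
import duckdb

con = duckdb.connect("warehouse.duckdb")

# Ingestion: load the raw extract into a staging table as-is.
con.execute(
    "CREATE OR REPLACE TABLE stg_orders AS "
    "SELECT * FROM read_csv_auto('raw_orders.csv')"
)

# Transformation: cast types and filter out obviously invalid rows.
con.execute("""
    CREATE OR REPLACE TABLE fct_orders AS
    SELECT
        CAST(order_id AS BIGINT)    AS order_id,
        CAST(order_date AS DATE)    AS order_date,
        CAST(order_total AS DOUBLE) AS order_total
    FROM stg_orders
    WHERE order_id IS NOT NULL AND order_total >= 0
""")

con.close()

In practice a step like this would typically live inside an orchestrated, tested workflow (for example in Dagster or Prefect, both mentioned below) rather than as a standalone script.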
Qualifications

- 4–7 years of experience in data engineering or full-stack data development, preferably in a modern cloud environment
- Strong skills in Python and SQL, with experience building production-grade data pipelines
- Hands-on experience with open-source data tools (e.g., dbt, Airbyte/Meltano, Dagster, Prefect, DuckDB, Postgres, Delta Lake, or Iceberg)
- Familiarity with data modeling and schema design (star/snowflake, normalized, or semantic/graph models)
- Experience working with cloud platforms (AWS, GCP, or Azure) and infrastructure as code (Terraform, GitHub Actions, etc.)
- Exposure to semantic or graph databases (Neo4j, Weaviate, ArangoDB) or an eagerness to learn
- Experience developing and consuming REST or GraphQL APIs
- Bonus: familiarity with LLM frameworks (LangChain, LangGraph, DSPy) or integrating AI enrichment into data pipelines
- Strong bias toward automation, testing, and documentation: you treat pipelines as products
- Comfortable operating in ambiguous, high-velocity environments where experimentation and impact matter most
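As a hedged illustration of the "lightweight APIs and internal tools" responsibility, here is a minimal FastAPI service exposing one clean data product. The DuckDB file, the product_margin table, and its columns are assumptions for illustration only.

# Minimal sketch of a FastAPI endpoint exposing a clean data product.
# warehouse.duckdb, product_margin, and gross_margin_pct are hypothetical.
import duckdb
from fastapi import FastAPI, HTTPException

app = FastAPI(title="Data Products API")

@app.get("/products/{sku}/margin")
def product_margin(sku: str) -> dict:
    con = duckdb.connect("warehouse.duckdb", read_only=True)
    try:
        row = con.execute(
            "SELECT sku, gross_margin_pct FROM product_margin WHERE sku = ?",
            [sku],
        ).fetchone()
    finally:
        con.close()
    if row is None:
        raise HTTPException(status_code=404, detail="SKU not found")
    return {"sku": row[0], "gross_margin_pct": row[1]}

Assuming the module is saved as app.py, it could be run locally with: uvicorn app:app --reload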
Responsibilities
Design and implement end-to-end data pipelines for ingestion, transformation, and enrichment using modern open-source tools. Collaborate with product and business stakeholders to understand workflows and translate them into scalable data solutions.