Lead Data Engineer at HealthCare.com
Connecticut, United States - Full Time


Start Date

Immediate

Expiry Date

08 Jul, 26

Salary

0.0

Posted On

09 Apr, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Python, Airflow, AWS, SQL, Snowflake, Data pipelines, Data modeling, DynamoDB, S3, Snowpipe, Data engineering, Mentoring, Code review, Observability, Infrastructure-as-code, CI/CD

Industry

Insurance

Description
Join Us!

HealthCare.com has become one of America’s fastest-growing insurtech companies, revolutionizing how consumers shop for health insurance. Leveraging advanced technology and data science, the company has developed customized proprietary products to better fit consumer requirements, enhance customer satisfaction, and take some of the guesswork and inefficiency out of buying insurance.

About the Role

We’re hiring a Data Engineer to help own and scale our core data platform. This role is hands-on and production-focused: you’ll design, build, and maintain data pipelines that power analytics, machine learning, and operational systems. We’re especially interested in candidates who combine strong data engineering fundamentals with either analytics depth or DevOps/platform experience.

What You’ll Do

Design, build, and maintain production-grade data pipelines using Airflow and AWS services such as Lambda and DynamoDB.
Own data ingestion from internal systems and third-party integrations (e.g., Google, Bing, external APIs).
Manage data storage and movement across S3, Snowflake, Snowpipe, and DynamoDB.
Write and maintain custom Python code that runs reliably in production.
Work across dev, staging, and production environments with proper deployment and rollback practices.
Partner with analytics, data science, and product teams to design reliable, usable data models.
Review code, mentor junior engineers, and help establish best practices for data quality, reliability, and observability.
Identify and fix performance, cost, or reliability issues in existing pipelines.

What We’re Looking For

Core Requirements

Strong experience building and maintaining production data pipelines.
Deep comfort with Python for data engineering (not just scripting).
Hands-on experience with Airflow in a real production environment.
Experience working in AWS, including S3 and managed services.
Solid SQL skills and experience working with analytical warehouses (Snowflake strongly preferred).
Experience operating systems across multiple environments (dev/prod).
Ability to mentor others through code review, design discussions, and troubleshooting.

Bonus Skills

Experience with Kubernetes or containerized workloads.
DevOps or platform experience (CI/CD, infrastructure-as-code, monitoring).
Analytics experience (data modeling, working directly with analysts or stakeholders).
Experience integrating with external APIs and managing schema or contract changes.
History of improving data reliability, latency, or cost at scale.
Experience as a technical mentor: reviewing code, setting standards, and helping the team make better engineering decisions.
Can explain why a system is built a certain way, not just how.

Benefits

Opportunity to work from home
Excellent work environment
Medical, dental, and vision insurance
Up to 15 days of paid time off
12 company-observed holidays
401(k) plan with company match
Life insurance
Professional growth opportunities
Most importantly, an inclusive company culture established by an incredible team!

Get to Know Us!

https://www.healthcare.com/
linkedin.com/company/healthcare-com
Responsibilities
You will design, build, and maintain production-grade data pipelines using Airflow and AWS services to support analytics and machine learning. Additionally, you will mentor junior engineers and establish best practices for data quality, reliability, and observability.