Data Engineer (Remote - US) at Jobgether
United States - Full Time


Start Date: Immediate
Expiry Date: 06 Feb, 26
Salary: 0.0
Posted On: 08 Nov, 25
Experience: 5 year(s) or above
Remote Job: Yes
Telecommute: Yes
Sponsor Visa: No
Skills: Data Engineering, Data Pipelines, ETL, ELT, NiFi, Kafka, Python, SQL, Data Security, Collaboration, Technical Documentation, Problem-Solving, Healthcare Data Standards, Event-Driven Architectures, Containerized Deployments, Integration Engines
Industry: Internet Marketplace Platforms

Description
This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Data Engineer in the United States.

As a Data Engineer, you will play a key role in modernizing the flow, transformation, and integration of public health data to drive actionable insights at scale. You will design and maintain high-performance, secure data pipelines; collaborate with cross-functional teams including data architects, data scientists, and product stakeholders; and lead the migration from legacy systems to modern architectures. This role offers the opportunity to work on meaningful, mission-driven projects that directly impact public health outcomes. Ideal candidates are technically strong, thrive in complex and fast-paced environments, and bring both hands-on engineering skills and the ability to mentor and elevate the data engineering team.

Accountabilities:
- Design, implement, and operate secure, scalable data pipelines for real-time and batch processing of sensitive health data.
- Architect ETL/ELT pipelines using tools such as NiFi, Kafka, Python, and SQL, optimizing for performance and reliability.
- Collaborate with interoperability, product, and data science teams to modernize legacy pipelines and ensure smooth data exchange.
- Define monitoring and alerting strategies and operational runbooks to support production teams.
- Troubleshoot production issues to maintain stable, high-performance pipelines.
- Drive continuous improvement across the data ecosystem through innovation and collaboration.
- Contribute to high-quality, scalable integrations that accelerate product development.
- Maintain technical documentation and share knowledge across teams to ensure efficient workflows.

Qualifications:
- Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- 3+ years implementing enterprise-grade data pipelines, or 7+ years of equivalent experience.
- Hands-on expertise with NiFi, Kafka, Python, and SQL in cloud environments.
- Experience with relational databases, CI/CD practices, and DevOps collaboration.
- Familiarity with OpenSearch/Elasticsearch and distributed data systems.
- Strong understanding of data security and compliance requirements for sensitive health data.
- Excellent cross-functional collaboration, communication, and technical documentation skills.
- Growth mindset and a proactive problem-solving approach in complex, mission-driven projects.

Preferred Qualifications:
- Experience with healthcare data standards (HL7 v2.x, FHIR) and terminologies (LOINC, CVX, SNOMED, ICD-10).
- Expertise in event-driven architectures, data de-identification, and containerized deployments (Docker, Kubernetes).
- Familiarity with integration engines (e.g., Rhapsody, Mirth) or Master Patient Index (MPI) solutions.

Benefits:
- Remote-first organization with flexible work arrangements.
- Paid Time Off (PTO) and company holidays.
- 401(k) retirement plan with corporate matching.
- Medical, prescription, vision, and dental coverage.
- Short-term and long-term disability coverage.
- Life insurance coverage for employees.
- Support for home office setup and remote work tools.

Jobgether is a Talent Matching Platform that partners with companies worldwide to efficiently connect top talent with the right opportunities through AI-driven job matching. When you apply, your profile goes through our AI-powered screening process, designed to identify top talent efficiently and fairly.

🔍 Our AI evaluates your CV and LinkedIn profile thoroughly, analyzing your skills, experience, and achievements.
📊 It compares your profile to the job's core requirements and past success factors to determine your match score.
🎯 Based on this analysis, we automatically shortlist the 3 candidates with the highest match to the role.
🧠 When necessary, our human team may perform an additional manual review to ensure no strong profile is missed.

The process is transparent, skills-based, and free of bias, focusing solely on your fit for the role. Once the shortlist is completed, we share it directly with the company that owns the job opening. The final decision and next steps (such as interviews or additional assessments) are then made by their internal hiring team. Thank you for your interest!

#LI-CL1
Responsibilities
Design and maintain secure, scalable data pipelines for processing sensitive health data. Collaborate with cross-functional teams to modernize legacy systems and drive continuous improvement in the data ecosystem.