Data Manager - Liver Transplant Research at Mount Sinai
New York, NY 10029, USA
Full Time


Start Date

Immediate

Expiry Date

12 Sep, 25

Salary

$65,885

Posted On

13 Jun, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

R, Data Cleaning, Critical Care, Informatics, Biostatistics, Version Control, Privacy Regulations, Epidemiology, HIPAA, Biomedical Informatics, Computer Science, SQL, Data Science

Industry

Hospital/Health Care

Description

The Icahn School of Medicine at Mount Sinai has an exciting full-time job opportunity to work in the PUMP Consortium, a multinational collaboration among leading liver-transplant centers in the United States, Canada, the United Kingdom, and continental Europe. The consortium is aggregating high-resolution clinical and physiologic data to accelerate research on machine preservation of donor livers and to improve transplant outcomes.
We are seeking a Data Manager / Data Quality Specialist to own the ingestion, harmonization, and ongoing quality-control (QC) of consortium data. Working within Mount Sinai’s high-performance computing environment, you will curate retrospective datasets and build automated QC pipelines for prospective data entering the cloud-based PUMP platform. You will be involved in delivering clean, analysis-ready datasets that meet HIPAA and GDPR standards, setting the foundation for downstream biostatistics, machine-learning, and clinical insights.
Your work will directly enable cutting-edge machine-learning models and clinical studies aimed at expanding the donor liver pool and improving patient survival worldwide. If you are passionate about advancing healthcare through collaborative research and innovation, we encourage you to apply and join our dynamic team at the Icahn School of Medicine at Mount Sinai.

Responsibilities

  • Data Curation & Harmonization: map and reconcile variable definitions from multiple centers; maintain detailed data dictionaries and change logs
  • Quality Control: design and run QC checks for completeness, plausibility, range and cross-field consistency in both retrospective and incoming prospective data
  • Pipeline Development: write reproducible ETL/QC scripts (primarily in R + tidyverse and Bash, optionally Python/SQL) that run on a schedule on Minerva and produce versioned, documented outputs
  • Regulatory Compliance: implement de-identification rules and auditing procedures that satisfy HIPAA (US) and GDPR (EU/UK) requirements; collaborate with the IRB and Privacy Office
  • Documentation & SOPs: draft standard operating procedures, data-handling manuals, and QC dashboards for consortium sites
  • Collaboration & Support: serve as the data liaison to transplant surgeons, research coordinators, and biostatisticians; provide occasional data-access troubleshooting but no primary analytic duties
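To make the QC responsibilities above concrete, here is a minimal sketch of completeness, range, and cross-field consistency checks in Python/pandas. The column names and clinical thresholds are illustrative assumptions, not the actual PUMP schema:

```python
# Illustrative QC sketch for a machine-perfusion dataset.
# Column names (donor_age, perfusion_temp_c, start_time, end_time)
# and thresholds are hypothetical -- the real PUMP schema may differ.
import pandas as pd


def run_qc(df: pd.DataFrame) -> pd.DataFrame:
    """Return a one-row-per-check summary of QC failures."""
    checks = {
        # Completeness: required fields must be non-null
        "donor_age_missing": df["donor_age"].isna(),
        # Plausibility / range: values must fall in a clinical range
        "donor_age_out_of_range": ~df["donor_age"].between(0, 100),
        "perfusion_temp_out_of_range": ~df["perfusion_temp_c"].between(0, 40),
        # Cross-field consistency: perfusion must end after it starts
        "end_before_start": df["end_time"] < df["start_time"],
    }
    return pd.DataFrame(
        {"check": name, "n_failures": int(mask.sum())}
        for name, mask in checks.items()
    )


demo = pd.DataFrame({
    "donor_age": [45, None, 130],
    "perfusion_temp_c": [12.0, 37.5, 55.0],
    "start_time": pd.to_datetime(
        ["2025-01-01 08:00", "2025-01-01 09:00", "2025-01-01 10:00"]),
    "end_time": pd.to_datetime(
        ["2025-01-01 12:00", "2025-01-01 08:30", "2025-01-01 14:00"]),
})
report = run_qc(demo)
print(report)
```

In a production pipeline these per-check failure counts would be written to a versioned log and surfaced on a QC dashboard rather than printed.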

Qualifications

  • Bachelor's degree preferred, in a related field such as Data Science, Biostatistics, Computer Science, Epidemiology, or Informatics
  • 3 years of professional experience in data management, quality assurance, data cleaning, or clinical research informatics
  • Proficiency in R (tidyverse) and Bash scripting within Linux/Unix environments
  • Demonstrated experience writing reproducible ETL or QC code for large, heterogeneous datasets
  • Familiarity with privacy regulations governing health data (HIPAA, GDPR) and best practices for de-identification and secure transfer
  • Excellent written documentation skills and the ability to communicate data issues clearly to both technical and clinical audiences
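The de-identification practices mentioned above can be illustrated with a small sketch: replacing a direct identifier (MRN) with a keyed pseudonym and generalizing dates to year only, in the spirit of HIPAA Safe Harbor. The field names and secret handling are illustrative assumptions, not a compliance recipe:

```python
# Hedged sketch of two common de-identification steps: a keyed hash
# turns an MRN into a stable pseudonym, and dates are truncated to
# year. Field names and secret storage are illustrative only.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-in-a-vault"  # placeholder secret


def pseudonymize_mrn(mrn: str) -> str:
    """Keyed hash: the same MRN always maps to the same pseudonym."""
    return hmac.new(SECRET_KEY, mrn.encode(), hashlib.sha256).hexdigest()[:16]


def generalize_date(iso_date: str) -> str:
    """Keep only the year, e.g. '1958-03-14' -> '1958'."""
    return iso_date.split("-")[0]


record = {"mrn": "12345678", "dob": "1958-03-14", "donor_age": 45}
deidentified = {
    "subject_id": pseudonymize_mrn(record["mrn"]),
    "birth_year": generalize_date(record["dob"]),
    "donor_age": record["donor_age"],
}
print(deidentified)
```

A keyed (HMAC) hash rather than a plain hash is used so that pseudonyms cannot be reversed by brute-forcing the small MRN space without the secret, which would be stored separately from the data.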

PREFERRED QUALIFICATIONS:

  • Master’s degree in biostatistics, biomedical informatics or related discipline
  • Working knowledge of SQL, Python/Pandas, and version control (git)
  • Prior experience harmonizing multi-site clinical datasets or REDCap/EMR exports
  • Exposure to Great Expectations, dbt, or similar data validation frameworks
  • Background in transplantation, critical care or physiological data is a plus