Data Engineer at West Cancer Center
Memphis, Tennessee, USA
Full Time


Start Date

Immediate

Expiry Date

08 Nov, 2025

Salary

Not specified

Posted On

09 Aug, 2025

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Computer Science, Relational Databases, Data Modeling, Analytical Skills, Python, Process Design, Version Control, Teams, System Migration, Data Engineering, Data Warehousing, Git, Communication Skills

Industry

Information Technology/IT

Description

ABOUT US

At West Cancer Center, we are dedicated to providing compassionate, patient-centered care while advancing groundbreaking research. Our team fosters collaboration, innovation, and professional growth, ensuring that every role contributes to making a difference in patients’ lives. Join us in our mission to provide comprehensive support to those navigating the challenges of cancer treatment.

POSITION OVERVIEW

The Data Engineer plays a pivotal role in leading platform modernization efforts by architecting scalable data solutions, optimizing data lakes and warehouses, and building efficient data pipelines to migrate off legacy systems. The role supports analytics and business intelligence by maintaining high-quality data systems and collaborating closely with internal teams and external partners. The ideal candidate brings strong technical expertise, a collaborative spirit, and a passion for driving innovation through data.

EDUCATION & EXPERIENCE

  • Bachelor’s degree in Computer Science, Data Engineering, or related field
  • Azure Data Engineer, Microsoft Fabric Data Engineer, or Microsoft Fabric Engineer certification required
  • Experience with healthcare data ecosystems, workflows, and compliance standards
  • Experience with relational databases, data modeling, and legacy system migration
  • Proficiency in Python and T-SQL, and familiarity with PySpark and big data frameworks (see the sketch after this list)
  • Knowledge of version control and DevOps pipelines using Git and Azure DevOps
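
For candidates wondering what this looks like in practice, here is a minimal sketch of the kind of Python/PySpark work described above: reading a legacy extract, normalizing it, and landing it in a data lake. All paths, column names, and date formats below are hypothetical placeholders, not actual West Cancer Center systems.

    # A minimal sketch, assuming a hypothetical legacy CSV extract and lake path.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("legacy-extract-normalize").getOrCreate()

    # Read a delimited extract exported from a legacy system (path is illustrative).
    raw = spark.read.option("header", True).csv("/landing/legacy/visits_extract.csv")

    # Light normalization: trim every string column, standardize a date column,
    # and drop exact duplicate rows before landing the data.
    clean = (
        raw.select([F.trim(F.col(c)).alias(c) for c in raw.columns])
           .withColumn("visit_date", F.to_date("visit_date", "MM/dd/yyyy"))
           .dropDuplicates()
    )

    # Land the cleaned rows as Parquet for downstream warehouse loads.
    clean.write.mode("overwrite").parquet("/lake/bronze/visits")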

SKILLS & ABILITIES

  • Expertise in ETL/ELT process design and implementation (a minimal upsert example follows this list)
  • Hands-on experience with data warehousing, modeling, and governance
  • Familiarity with Microsoft Fabric, Azure SQL, Azure Data Factory, and Synapse Analytics
  • Understanding of data lake and lakehouse architectures
  • Strong problem-solving and analytical skills
  • Excellent communication skills and ability to collaborate across teams
  • Ability to work independently and manage multiple priorities
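
As a flavor of the ETL/ELT design work listed above, the sketch below shows an idempotent "upsert" step driven from Python via pyodbc and a T-SQL MERGE. The connection string, staging table, and target table are hypothetical examples, not actual West Cancer Center objects.

    # A minimal sketch of an idempotent ELT upsert; all object names are hypothetical.
    import pyodbc

    MERGE_SQL = """
    MERGE dbo.dim_provider AS target
    USING stg.provider AS source
        ON target.provider_id = source.provider_id
    WHEN MATCHED THEN
        UPDATE SET target.provider_name = source.provider_name,
                   target.specialty     = source.specialty
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (provider_id, provider_name, specialty)
        VALUES (source.provider_id, source.provider_name, source.specialty);
    """

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};SERVER=example-server;"
        "DATABASE=warehouse;Trusted_Connection=yes;"
    )
    with conn:  # the connection context manager commits the transaction on exit
        conn.execute(MERGE_SQL)  # re-runnable: MERGE leaves the target consistent

Because MERGE updates matched rows and inserts unmatched ones in a single statement, the step can be re-run after a failure without producing duplicates.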

How To Apply:

If you would like to apply to this job directly from the source, please follow the application link in the original posting.

Responsibilities
  • Develop, monitor, and optimize data pipelines for performance and scalability (see the quality-gate sketch after this list)
  • Support AI/ML initiatives with structured, clean data
  • Use Jupyter Notebook for data processing, analysis, and reporting
  • Build and maintain cloud-based data infrastructure (Azure, AWS)
  • Design and implement pipelines for large-scale, distributed datasets
  • Collaborate with analysts and stakeholders to deliver technical solutions based on business needs
  • Lead migration of legacy systems and SQL databases to modern platforms
  • Maintain and troubleshoot SQL Server databases in production and development
  • Implement and manage ETL/ELT processes
  • Establish and enforce data governance and security protocols
  • Ensure data models and warehouse architecture align with business requirements
  • Support BI platforms with reliable, well-structured data
  • Provide technical documentation and ongoing support
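
To illustrate the pipeline-monitoring duty in the first bullet above, here is a lightweight data-quality gate a pipeline step could run before publishing data. The function name, threshold, and columns are illustrative assumptions only.

    # A lightweight quality gate; names and thresholds are illustrative assumptions.
    import logging
    import pandas as pd

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("pipeline.quality")

    def check_extract(df: pd.DataFrame, key_column: str, max_null_rate: float = 0.01) -> bool:
        """Return True if the frame is non-empty and the key column is mostly populated."""
        if df.empty:
            log.error("extract is empty; failing the pipeline step")
            return False
        null_rate = df[key_column].isna().mean()
        log.info("rows=%d null_rate(%s)=%.4f", len(df), key_column, null_rate)
        return null_rate <= max_null_rate

    # Toy usage; a real pipeline would read the frame from the lake or warehouse.
    frame = pd.DataFrame({"record_id": [1, 2, 3], "visit_date": ["2025-01-02"] * 3})
    if not check_extract(frame, "record_id"):
        raise RuntimeError("data quality gate failed")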