Data Engineer at Everest Technologies
Columbus, Ohio, USA
Full Time


Start Date

Immediate

Expiry Date

28 Nov, 25

Salary

0.0

Posted On

29 Aug, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Good communication skills

Industry

Information Technology/IT

Description

Job Type: Contract

Benefits:

  • 401(k)
  • 401(k) matching
  • Dental insurance
  • Health insurance
  • Life insurance
  • Paid time off

Work Location: In person

Responsibilities

ROLE OVERVIEW:

The Data Engineer will be a key builder on our AI journey, responsible for designing, constructing, and maintaining the data infrastructure required to support our AI initiatives. This role will focus on building robust, scalable data pipelines to extract data from a variety of sources, integrate it with our data lake/warehouse, and prepare it for analysis by our Data Analysts and for training custom AI models. This position is critical for enabling our current focus on vendor-provided capabilities and our eventual move to custom-built solutions.

KEY RESPONSIBILITIES:

  • Design, build, and maintain scalable and efficient ETL/ELT data pipelines to ingest data from internal and external sources (e.g., APIs from EPIC, Workday, relational databases, flat files).
  • Develop and maintain the data lake and data warehouse to ensure data is clean, accessible, and ready for analysis and model training.
  • Collaborate with the Data Analyst and other stakeholders to understand their data requirements and provide them with clean, well-structured datasets.
  • Implement data governance, security, and quality controls to ensure data integrity and compliance.
  • Automate data ingestion, transformation, and validation processes.
  • Work with our broader IT team to ensure seamless integration of data infrastructure with existing systems.
  • Contribute to the evaluation and implementation of new data technologies and tools.

REQUIRED SKILLS & QUALIFICATIONS:

  • ETL/ELT Development: Strong experience in designing and building data pipelines using ETL/ELT tools and frameworks. Experience with EPIC EMR and Informatica is required.
  • SQL: Advanced proficiency in SQL for data manipulation, transformation, and optimization.
  • Programming: Strong programming skills in Python (or a similar language) for scripting, automation, and data processing.
  • Data Warehousing: Experience with data warehousing concepts and technologies.
  • Cloud Computing: Hands-on experience with at least one major cloud platform’s data services (e.g., Microsoft Azure Data Factory, Azure Fabric, IICS). Informatica IDMC preferred.
  • Version Control: Proficiency with Git for code management and collaboration.
  • Problem-Solving: Proven ability to troubleshoot and resolve data pipeline issues.
  • Data Modeling: Experience with various data modeling techniques (e.g., dimensional modeling).
  • Real-time Processing: Familiarity with real-time data streaming technologies (e.g., Kafka, Azure Event Hubs).
  • Education: Bachelor’s degree in Computer Science, Engineering, or a related field.
  • API Integration: Experience building data connectors and integrating with APIs from major enterprise systems (e.g., EPIC, Workday).
  • CI/CD: Knowledge of Continuous Integration/Continuous Deployment practices for data pipelines.
  • AI/ML MLOps: A basic understanding of the machine learning lifecycle and how to build data pipelines to support model training and deployment.
  • Microsoft Fabric: Direct experience with Microsoft Fabric’s integrated data platform (OneLake, Data Factory, Synapse Data Engineering).