Engineer, Data at Holley Performance
Nashville, TN 37203, USA - Full Time


Start Date

Immediate

Expiry Date

15 Oct, 25

Salary

0.0

Posted On

16 Jul, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

AWS, Jenkins, Power BI, Data Analytics, SOAP, XML, Amazon Redshift, Talend, Azure, Security, MongoDB, JSON, NoSQL, Data Governance, Google Cloud, Scala, PostgreSQL, Java, Kafka, Relational Databases, Computer Science, Spark, Python, Programming Languages, Kubernetes

Industry

Information Technology/IT

Description

Overview:
This role focuses on backend development and integrations for building and maintaining enterprise data warehouses and data lakes. The ideal candidate will possess a deep understanding of data architecture, ETL pipelines, and integration technologies, ensuring seamless data flow and accessibility across the organization.

Key Responsibilities:

  • Design, develop, and maintain scalable backend systems to support data warehousing and data lake initiatives.
  • Build and optimize ETL/ELT processes to extract, transform, and load data from various sources into centralized data repositories (see the sketch after this list).
  • Develop and implement integration solutions for seamless data exchange between systems, applications, and platforms.
  • Collaborate with data architects, analysts, and other stakeholders to define and implement data models, schemas, and storage solutions.
  • Ensure data quality, consistency, and security by implementing best practices and monitoring frameworks.
  • Monitor and troubleshoot data pipelines and systems to ensure high availability and performance.
  • Stay up-to-date with emerging technologies and trends in data engineering and integration to recommend improvements and innovations.
  • Document technical designs, processes, and standards for the team and stakeholders.
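
For context only, the sketch below shows the general shape of the ETL work referenced in the second bullet. It is a minimal, self-contained Python example rather than Holley's actual pipeline: the source file, table, and columns (orders_export.csv, orders, order_id, sku, amount) are hypothetical, and sqlite3 stands in for a warehouse target such as Amazon Redshift or PostgreSQL.

import csv
import sqlite3
from pathlib import Path

# Hypothetical source extract and target database, used only for illustration.
SOURCE_CSV = Path("orders_export.csv")
TARGET_DB = "warehouse.db"

def extract(path: Path) -> list[dict]:
    """Read raw rows from a source extract (here, a CSV drop)."""
    with path.open(newline="") as fh:
        return list(csv.DictReader(fh))

def transform(rows: list[dict]) -> list[tuple]:
    """Normalize types and drop rows that fail a basic quality check."""
    cleaned = []
    for row in rows:
        try:
            cleaned.append((row["order_id"], row["sku"], float(row["amount"])))
        except (KeyError, TypeError, ValueError):
            continue  # a production pipeline would log or quarantine bad rows
    return cleaned

def load(records: list[tuple], db_path: str) -> None:
    """Append cleaned records to the target table."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS orders (order_id TEXT, sku TEXT, amount REAL)"
        )
        conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", records)

if __name__ == "__main__":
    load(transform(extract(SOURCE_CSV)), TARGET_DB)

In practice the same extract-transform-load pattern would typically be expressed as tasks in an orchestrator such as Apache Airflow or Talend, with logging, retries, and data-quality checks around each step.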

Qualifications:

  • Bachelor’s degree in Computer Science, Engineering, or a related field; equivalent experience considered.
  • Proven experience as a Data Engineer or in a similar backend development role.
  • Strong proficiency in programming languages such as Python, Java, or Scala.
  • Hands-on experience with ETL/ELT tools and frameworks (e.g., Apache Airflow, Talend, Informatica, etc.).
  • Extensive knowledge of relational and non-relational databases (e.g., SQL, NoSQL, PostgreSQL, MongoDB).
  • Expertise in building and managing enterprise data warehouses (e.g., Snowflake, Amazon Redshift, Google BigQuery) and data lakes (e.g., AWS S3, Azure Data Lake).
  • Familiarity with cloud platforms (AWS, Azure, Google Cloud) and their data services.
  • Experience with API integrations and data exchange protocols (e.g., REST, SOAP, JSON, XML); a minimal example follows this list.
  • Solid understanding of data governance, security, and compliance standards.
  • Strong analytical and problem-solving skills with attention to detail.
  • Excellent communication and collaboration abilities.
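
As an illustration of the API-integration experience listed above, the following sketch pulls records from a REST endpoint and decodes the JSON payload using only the Python standard library. The endpoint URL and the assumed response shape ({"items": [...]}) are placeholders for the example, not a real service.

import json
import urllib.request

# Hypothetical endpoint; a real integration would target a partner or internal API.
API_URL = "https://api.example.com/v1/inventory?updated_since=2025-07-01"

def fetch_inventory(url: str) -> list[dict]:
    """Call a REST endpoint and decode its JSON payload."""
    request = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(request, timeout=30) as response:
        payload = json.loads(response.read().decode("utf-8"))
    # Assumed response shape: {"items": [{"sku": ..., "qty": ...}, ...]}
    return payload.get("items", [])

if __name__ == "__main__":
    for item in fetch_inventory(API_URL):
        print(item["sku"], item["qty"])

A SOAP/XML integration follows the same request-parse pattern, with an XML body and a parser such as xml.etree.ElementTree in place of json.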

Preferred Qualifications:

  • Certifications in cloud platforms (AWS Certified Data Analytics, Azure Data Engineer, etc.).
  • Experience with big data technologies (e.g., Apache Hadoop, Spark, Kafka).
  • Knowledge of data visualization tools (e.g., Tableau, Power BI) for supporting downstream analytics.
  • Familiarity with DevOps practices and tools (e.g., Docker, Kubernetes, Jenkins).

How To Apply:

If you would like to apply to this job directly from the source, please click here.
