Senior Data Engineer at Huda Beauty
Dubai, United Arab Emirates
Full Time


Start Date

Immediate

Expiry Date

06 May 2025

Salary

0.0

Posted On

06 Feb 2025

Experience

8 year(s) or above

Remote Job

No

Telecommute

No

Sponsor Visa

No

Skills

Reliability, Encryption, Scala, Data Governance, Data Architecture, Java, Spark, Hive, Security, Devops, Distributed Systems, Security Tools, Privacy Regulations, Automation, Azure, Team Leadership, Kubernetes, Data Engineering, Computer Science, Kafka, Strategy, Python

Industry

Information Technology/IT

Description

Who We Are: At Huda Beauty, our Vision is to lead in creating a democratized beauty industry where power is given back to people to define, create, and enjoy beauty for themselves! Launched by award-winning beauty powerhouse Huda Kattan in 2013, Huda Beauty is one of the world’s fastest-growing beauty brands. As a company, we are fueled by purpose, not profit, which allows us to approach things differently so that we can create products, content, and a community like no other. A lot has changed since our launch in 2013, but something that will forever remain at the core of Huda Beauty is our focus on business excellence and our unwavering passion for kindness!
Summary: We are seeking an accomplished Senior Data Engineer with over 8 years of professional experience in data engineering and architecture. The ideal candidate will bring advanced expertise in designing, building, and optimizing large-scale data platforms using GCP, Azure, and Databricks. This role requires a deep understanding of AI/ML technologies and their practical applications in enterprise environments.

Essential Duties and Responsibilities:

  • Data Architecture & Strategy: Lead the design and implementation of end-to-end data solutions to support complex analytics, reporting, and AI initiatives.
  • Cloud Data Ecosystems: Architect and manage scalable, high-performing data systems on GCP, Fabric, and Azure platforms, ensuring cost-effectiveness and reliability.
  • Databricks & Advanced Analytics: Leverage Databricks for developing machine learning pipelines, real-time analytics, and advanced data processing workflows.
  • AI & ML Integration: Collaborate with AI teams to integrate predictive models and machine learning algorithms into production workflows.
  • Real-Time Data Processing: Build and optimize real-time data pipelines using streaming technologies such as Kafka, Apache Beam, or Spark Streaming.
  • Data Governance & Security: Implement robust governance frameworks and security protocols, including encryption, access control, and compliance with data privacy regulations.
  • Team Leadership: Mentor and guide junior data engineers, fostering a collaborative and high-performing engineering culture.
  • DevOps & Automation: Establish CI/CD pipelines, manage deployments, and optimize workflows using tools like Active Directory, Jenkins, Kubernetes, and Terraform.

REQUIREMENTS:

Educational Background:

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.

Experience:

  • 8+ years of professional experience in data engineering or related roles.
  • Proven expertise in cloud data platforms, particularly GCP, Fabric, Azure, and Databricks.
  • Extensive experience in real-time data processing and streaming technologies.
  • A track record of successfully implementing machine learning models in production.

Technical Skills:

  • Proficient in Python, .NET, Scala, or Java for data engineering tasks.
  • Advanced knowledge of distributed systems and big data frameworks like Spark, Hive, and Hadoop.
  • Hands-on experience with workflow orchestrators like Airflow or Oozie.
  • Strong familiarity with data security tools such as Hashicorp Vault, Kerberos, or Ranger.

Certifications:

  • Certifications such as GCP Professional Data Engineer, Azure Data Engineer Associate, or Databricks certifications are highly desirable.