Principal Data Engineer at Keurig Dr Pepper
Frisco, Texas, USA - Full Time


Start Date

Immediate

Expiry Date

07 Sep, 25

Salary

0.0

Posted On

08 Jun, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Wireframing, Testing, Power BI, Management Skills, Modeling, Azure, Alteryx, Computer Science, Platforms, KNIME, SQL, Data Integration, AWS, Data Science, Information Systems, dbt, Pipeline Development, Snowflake, Tableau, Integration, MicroStrategy, Data Warehousing

Industry

Information Technology/IT

Description

JOB OVERVIEW:

Keurig Dr Pepper (NASDAQ: KDP) is a modern, leading coffee and beverage company with a bold vision built to deliver growth and opportunity. We operate with a differentiated business model and a world-class brand portfolio, powered by a talented and engaged team that is anchored in our values. We seek diversity in our workforce and empower our team of ~28,000 employees to develop and grow. We care for our employees' health, wellness, and personal and financial well-being by offering robust benefits. At KDP, we work with big, exciting beverage brands and the #1 single-serve coffee brewing system in North America, and we have fun doing it! Come be a part of a company where you can feel valued, inspired, and appreciated at work.

WHAT WE ARE LOOKING FOR:

Are you passionate about harnessing data to unlock business insights and drive strategic growth? At Keurig Dr Pepper, we’re seeking a Principal Data Engineer to lead the development and optimization of our modern data ecosystem. You’ll play a pivotal role in designing scalable data solutions, enabling advanced analytics, and mentoring teams across disciplines to maximize the value of data.
Join our innovative team and help shape the future of enterprise data at one of North America’s leading beverage companies.

KEY SKILLS & EXPERTISE:

Technical Mastery (Deep, hands-on expertise expected)

  • Expert-level knowledge of Snowflake architecture, SnowSQL, and data transformation workflows.
  • Advanced proficiency with dbt for modeling, testing, versioning, and orchestrating ELT pipelines.
  • Strong command of SQL, Python, and scalable data pipeline development.
  • Proven experience designing and managing enterprise data warehouses and cloud-native data platforms.
  • Deep understanding of modern data modeling techniques (e.g., dimensional, data vault, star/snowflake schemas).
  • Experience delivering platforms that support advanced analytics and machine learning solutions.
  • Solid grasp of data architecture frameworks such as Data Warehouses, Data Lakes, and Data Hubs.
  • Experience with commercial data science tools like KNIME, Alteryx, or similar platforms.

Some Experience / Familiarity With (Preferred, but not required at expert level)

  • AI/ML platforms such as Databricks, SageMaker, AutoML, or TensorFlow.
  • Data integration and ELT tools like Informatica Cloud, Fivetran, or Azure Data Factory.
  • BI tools: Power BI, Tableau, or MicroStrategy.
  • Working with SAP ERP as a data source.
  • Cloud platforms such as Azure, AWS, or GCP, especially storage, compute, and orchestration services.
  • DevOps practices including Git workflows, CI/CD, and automation.
  • UNIX/Linux environments and shell scripting.

EDUCATION:

  • Bachelor’s or Master’s degree in Computer Science, Data Science, Information Systems, or a related field.

EXPERIENCE:

  • 10+ years in data management, including integration, modeling, and optimization.
  • Hands-on experience with analytics tools and platforms such as Snowflake or Informatica Cloud.
  • Expertise in SQL, SnowSQL, ETL, and data warehousing.

CORE SKILLS:

  • Expertise in conceptual architecture, data integration design, and wireframing.
  • Familiarity with DevOps and Agile technology environments (preferred).
  • Strong presentation and change management skills.

Responsibilities

Please refer to the job description for details.
