Senior Data Engineer (GCP, API, Kafka) at CVS Health
Wellesley, MA 02481, USA
Full Time


Start Date

Immediate

Expiry Date

08 Sep, 25

Salary

$222,480

Posted On

08 Jun, 25

Experience

1 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Query Optimization, NoSQL, Metadata, Data Structures, Data Engineering, Apps, UNIX Utilities, Python, Azure, Cloud, Data Warehouse, Data Analytics, Programming Languages, Machine Learning, Information Systems, Teams, Data Visualization, Kafka, Big Data, SOA

Industry

Information Technology/IT

Description

At CVS Health, we’re building a world of health around every consumer and surrounding ourselves with dedicated colleagues who are passionate about transforming health care.
As the nation’s leading health solutions company, we reach millions of Americans through our local presence, digital channels and more than 300,000 purpose-driven colleagues – caring for people where, when and how they choose in a way that is uniquely more connected, more convenient and more compassionate. And we do it all with heart, each and every day.

POSITION SUMMARY:

If you’re eager to make a real impact in the health care industry through your own meaningful contributions, explore a role in technology with CVS Health. Our journey calls for technical innovators and data visionaries: come help us pave the way.
At CVS Health, we possess an extensive repository of healthcare data that spans over 150 million individuals, providing an unparalleled foundation for ambitious Data Engineers. In this role, you will engage with complex business challenges, harnessing modern tools and technologies to securely store, process, transform, and enrich terabyte- to petabyte-scale healthcare data. Your work will underpin data-driven business decisions and contribute to our mission of delivering industry-best data products and software with a customer-first mindset and team-oriented approach.
As a Senior Data Engineer, you will be instrumental in designing, developing, and maintaining optimal data pipelines to assemble large and intricate datasets, catering to the business requirements of various CVS lines of business. Collaborating closely with teams, you will craft tools to provide actionable insights and integrate them with consumer touchpoints.

In this role, you will:

  • Architect and develop robust, scalable ETL/ELT pipelines using Cloud Dataflow, Cloud Composer (Airflow), and Pub/Sub for both batch and streaming use cases. Leverage BigQuery as the central data warehouse and design integrations with other GCP services (e.g., Cloud Storage, Cloud Functions). A minimal orchestration sketch follows this list.
  • Build and optimize analytical data models in BigQuery. Implement partitioning, clustering, and materialized views for performance and cost efficiency (see the table-definition sketch after this list). Ensure compliance with data governance, access controls, and IAM best practices.
  • Develop integrations with external systems (APIs, flat files, etc.) using GCP-native or hybrid approaches. Utilize tools like Dataflow or custom Python/Java services on Cloud Functions or Cloud Run to handle transformations and ingestion logic.
  • Build automated CI/CD pipelines using Cloud Build, GitHub Actions, or Jenkins for deploying data pipeline code and workflows. Set up observability using Cloud Monitoring, Cloud Logging, and Error Reporting to ensure pipeline reliability.
  • Lead architectural decisions for data platforms and mentor junior engineers on cloud-native data engineering patterns. Promote best practices for code quality, version control, cost optimization, and data security in a GCP environment. Drive initiatives around data democratization, including building reusable datasets and data catalogs via Dataplex or Data Catalog.
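The first bullet centers on Cloud Composer (Airflow) orchestration of batch loads into BigQuery. The sketch below is a minimal, illustrative DAG under assumed placeholder names (daily_claims_load, example-raw-bucket, example_project.staging.claims_raw, and so on); it is not a CVS pipeline, just one common way the Cloud Storage-to-BigQuery pattern is wired together.

    # Minimal Cloud Composer (Airflow) DAG sketch: land raw files from Cloud Storage
    # into a BigQuery staging table, then transform them into a curated table.
    # All project, bucket, dataset, and table names are hypothetical placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
    from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

    with DAG(
        dag_id="daily_claims_load",
        schedule_interval="@daily",
        start_date=datetime(2024, 1, 1),
        catchup=False,
    ) as dag:
        # Load the day's raw CSV drops from Cloud Storage into a staging table.
        load_raw = GCSToBigQueryOperator(
            task_id="load_raw_claims",
            bucket="example-raw-bucket",
            source_objects=["claims/{{ ds }}/*.csv"],
            destination_project_dataset_table="example_project.staging.claims_raw",
            source_format="CSV",
            write_disposition="WRITE_TRUNCATE",
        )

        # Transform staged rows into the curated reporting table with a SQL job.
        transform = BigQueryInsertJobOperator(
            task_id="transform_claims",
            configuration={
                "query": {
                    "query": "INSERT INTO `example_project.curated.claims` "
                             "SELECT * FROM `example_project.staging.claims_raw`",
                    "useLegacySql": False,
                }
            },
        )

        load_raw >> transform

A streaming variant of the same pattern typically swaps the Cloud Storage load for a Pub/Sub-to-Dataflow job feeding the same curated tables.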
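The second bullet asks for partitioned, clustered BigQuery models. Below is a minimal sketch using the google-cloud-bigquery Python client, again with assumed placeholder project, dataset, and column names: partitioning by a date column lets queries prune to only the days they touch, and clustering co-locates rows that are commonly filtered together, which is where most of the performance and cost gains come from.

    # Illustrative BigQuery table definition: date-partitioned and clustered.
    # Project, dataset, table, and column names are hypothetical placeholders.
    from google.cloud import bigquery

    client = bigquery.Client(project="example_project")

    table = bigquery.Table(
        "example_project.curated.claims",
        schema=[
            bigquery.SchemaField("claim_id", "STRING"),
            bigquery.SchemaField("member_id", "STRING"),
            bigquery.SchemaField("service_date", "DATE"),
            bigquery.SchemaField("allowed_amount", "NUMERIC"),
        ],
    )

    # Partition by service_date so queries scan only the partitions they need;
    # cluster by member_id to keep frequently co-filtered rows together.
    table.time_partitioning = bigquery.TimePartitioning(
        type_=bigquery.TimePartitioningType.DAY,
        field="service_date",
    )
    table.clustering_fields = ["member_id"]

    client.create_table(table, exists_ok=True)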

As leaders in healthcare, our analytics and engineering teams deliver innovative solutions to business problems by collaborating with cross-functional teams in a dynamic and agile environment. You will be part of a team that values collaboration and encourages innovative thinking at all levels. You will be intellectually challenged to solve problems associated with large-scale, complex, structured and unstructured data, which will allow you to grow your technical skills and engineering expertise.

REQUIRED QUALIFICATIONS:

  • 3+ years of experience with SQL and NoSQL
  • 3+ years of experience with Python (or a comparable scripting language)
  • 3+ years of experience with data warehouses (including data modeling and technical architecture) and infrastructure components
  • 3+ years of experience with ETL/ELT, and building high-volume data pipelines
  • 3+ years of experience with reporting/analytic tools
  • 3+ years of experience with Query optimization, data structures, transformation, metadata, dependency, and workload management
  • 3+ years of experience with Big data and cloud architecture
  • 3+ years of hands-on experience building modern data pipelines within a major cloud platform (GCP, AWS, Azure)
  • 3+ years of experience with deployment and scaling of apps in containerized environments (e.g., Kubernetes, AKS)
  • 3+ years of experience with real-time and streaming technology (e.g., Azure Event Hubs, Azure Functions, Kafka, Spark Streaming)
  • 1+ year(s) of experience soliciting complex requirements and managing relationships with key stakeholders
  • 1+ year(s) of experience independently managing deliverables

PREFERRED QUALIFICATIONS:

  • Experience in designing and building data engineering solutions in cloud environments (preferably GCP)
  • Experience with Git, CI/CD pipeline, and other DevOps principles/best practices
  • Experience with Bash shell scripts, UNIX utilities, and UNIX commands
  • Ability to leverage multiple tools and programming languages to analyze and manipulate data sets from disparate data sources
  • Knowledge of API development
  • Experience with complex systems and solving challenging analytical problems
  • Strong collaboration and communication skills within and across teams
  • Knowledge of data visualization and reporting
  • Experience with schema design and dimensional data modeling
  • Google Professional Data Engineer Certification
  • Knowledge of microservices and SOA
  • Formal SAFe and/or agile experience
  • Previous healthcare experience and domain knowledge
  • Experience designing, building, and maintaining data processing systems
  • Experience architecting and building data warehouse and data lakes

EDUCATION:

Bachelor’s Degree or equivalent work experience in Computer Science, Information Systems, Data Engineering, Data Analytics, Machine Learning, or related field required. Master’s Degree preferred.
