Data Engineer - Global Membership at Costco Wholesale
Seattle, WA 98134, USA - Full Time


Start Date

Immediate

Expiry Date

15 Sep, 2025

Salary

$85,000

Posted On

15 Jun, 2025

Experience

2 years or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Good communication skills

Industry

Information Technology/IT

Description

Costco IT is responsible for the technical future of Costco Wholesale, the third-largest retailer in the world, with wholesale operations in fourteen countries. Despite our size and explosive international expansion, we continue to provide a family-oriented, employee-centric atmosphere in which our employees thrive and succeed.
This is an environment unlike anything in the high-tech world, and the secret of Costco’s success is its culture. The value Costco puts on its employees is well documented in articles from a variety of publishers, including Bloomberg and Forbes. Our employees and our members come FIRST. Costco is well known for its generosity and community service and has won many awards for its philanthropy. The company joins with its employees to take an active role in volunteering by sponsoring many opportunities to help others.
Come join the Costco Wholesale IT family. Costco IT is a dynamic, fast-paced environment, working through exciting transformation efforts. We are building the next-generation retail environment, where you will be surrounded by dedicated and highly professional employees.
Data Engineers are responsible for developing and operationalizing data pipelines/integrations to make data available for consumption (e.g., reporting, data science/machine learning, data APIs). This includes data ingestion, data transformation, data validation/quality, data pipeline optimization, and orchestration, as well as deploying code to production via CI/CD. The Data Engineer role requires knowledge of software development/programming methodologies, various data sources (relational databases, flat files such as CSV and delimited text, APIs, XML, JSON), and data access (SQL, Python, etc.), along with expertise in data modeling, cloud architectures/platforms, data warehousing, and data lakes. This role also partners closely with Product Owners, Data Architects, Platform/DevOps Engineers, and others to design, build, test, implement, and maintain data pipelines.
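
As an illustration of the kind of pipeline work described above, the sketch below shows a minimal ingest-validate-transform-load step in Python. It is not part of the posting and does not reflect Costco's actual code or systems: the file name, column names, and the local SQLite target are assumptions for the example, whereas a production pipeline would typically load to a cloud warehouse and run under orchestration and CI/CD.

    # Minimal batch pipeline sketch: ingest -> validate -> transform -> load.
    # File, column, and table names are illustrative assumptions, not real systems.
    import sqlite3

    import pandas as pd


    def run_pipeline(csv_path: str = "members.csv", db_path: str = "warehouse.db") -> int:
        # Ingest: read a delimited flat file into a DataFrame.
        df = pd.read_csv(csv_path)

        # Validate: basic data-quality checks before anything is loaded.
        required = {"member_id", "signup_date", "country"}
        missing = required - set(df.columns)
        if missing:
            raise ValueError(f"missing required columns: {missing}")
        df = df.dropna(subset=["member_id"]).drop_duplicates(subset=["member_id"])

        # Transform: normalize types and derive a simple reporting column.
        df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
        df["signup_year"] = df["signup_date"].dt.year

        # Load: write the certified data set to a SQL target for consumption.
        with sqlite3.connect(db_path) as conn:
            df.to_sql("member_signups", conn, if_exists="replace", index=False)
        return len(df)


    if __name__ == "__main__":
        print(f"Loaded {run_pipeline()} rows")
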
If you want to be a part of one of the BEST companies in the world to work for, simply apply and let your career be reimagined.

Responsibilities

  • Develops complex SQL & Python against a variety of data sources.

  • Implements streaming data pipelines using event/message-based architectures.
  • Defines and maintains optimal data pipeline architecture.
  • Identifies, designs, and implements internal process improvements: automating manual processes, optimizing data delivery/orchestration.
  • Analyzes data to spot anomalies and trends, and correlates data to ensure data quality.
  • Identifies ways to improve data reliability, efficiency, and quality of data management.
  • Performs peer reviews of other Data Engineers’ work.
  • Develops and operationalizes data pipelines to create enterprise certified data sets that are made available for consumption (reporting, advanced analytics, APIs/Services).
  • Works in tandem with Architects, Data Analysts, and Software Engineers to design data requirements and recommend ongoing optimization of data storage, data ingestion, data quality, and orchestration.
  • Designs, develops, and implements ETL/ELT/CDC processes using Informatica Intelligent Cloud Services (IICS).
  • Uses Azure services, such as Azure SQL DW (Synapse), ADLS, Azure Event Hub, Cosmos, Databricks, and Delta Lake, to improve and speed delivery of data products and services (see the pipeline sketch after this list).
  • Implements big data and NoSQL solutions by developing scalable data processing platforms to drive high-value insights to the organization.
  • Builds required infrastructure for optimal extraction, transformation, and loading of data from various data sources using Azure and SQL technologies.
  • Develops and maintains scalable data pipelines and builds out new API integrations to support continuing increases in data volume and complexity.
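
For the streaming and Delta Lake items above, the sketch below is a rough illustration only, not Costco's stack or code: it reads JSON events from an Azure Event Hub through its Kafka-compatible endpoint with PySpark Structured Streaming and appends them to a Delta table. The namespace, event hub, connection string, schema, and paths are placeholder assumptions.

    # Streaming pipeline sketch: Event Hub (Kafka-compatible endpoint) -> Delta table.
    # Namespace, event hub, connection string, schema, and paths are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StringType, StructField, StructType, TimestampType

    spark = SparkSession.builder.appName("membership-events").getOrCreate()

    EH_SERVERS = "my-namespace.servicebus.windows.net:9093"  # placeholder namespace
    EH_TOPIC = "membership-events"                           # placeholder event hub
    # Databricks ships a shaded Kafka client; on vanilla Spark drop the "kafkashaded." prefix.
    EH_JAAS = (
        "kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule required "
        'username="$ConnectionString" password="<event-hub-connection-string>";'
    )

    event_schema = StructType([
        StructField("member_id", StringType()),
        StructField("event_type", StringType()),
        StructField("event_time", TimestampType()),
    ])

    # Ingest: subscribe to the event stream via Spark's Kafka source.
    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", EH_SERVERS)
        .option("subscribe", EH_TOPIC)
        .option("kafka.security.protocol", "SASL_SSL")
        .option("kafka.sasl.mechanism", "PLAIN")
        .option("kafka.sasl.jaas.config", EH_JAAS)
        .load()
    )

    # Transform: parse the JSON payload into the modeled columns.
    events = (
        raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
        .select("e.*")
    )

    # Load: append to a Delta table; the checkpoint makes the stream restartable.
    query = (
        events.writeStream.format("delta")
        .option("checkpointLocation", "/mnt/checkpoints/membership-events")  # placeholder
        .outputMode("append")
        .start("/mnt/delta/membership_events")                               # placeholder
    )
    query.awaitTermination()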