Sr. Data Engineer at General Motors
Remote, Oregon, USA
Full Time


Start Date

Immediate

Expiry Date

05 Dec, 25

Salary

$94,800

Posted On

07 Sep, 25

Experience

3 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Python, Hadoop, Data Engineering, Automation, Data Solutions, JSON, Cloud Services, Spark, XML, Scripting Languages, Azure, Computer Science, Airflow, Apache Kafka, Scala, Privacy Regulations, Java

Industry

Information Technology/IT

Description

YOUR SKILLS & ABILITIES (REQUIRED QUALIFICATIONS)

  • 5+ years of experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
  • Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field or equivalent experience
  • Strong hands-on experience in Databricks data engineering and management.
  • Experience setting up CI/CD environments in Azure and Databricks, leveraging Azure DevOps, ARM templates, GitHub Actions, Databricks Asset Bundles, Terraform, etc.
  • Strong understanding of generative AI tools and proven ability to integrate tools such as Microsoft Copilot and Databricks Genie into daily work in Databricks.
  • Experience with big data frameworks and tools like Apache Hadoop, Apache Spark, or Apache Kafka for processing and analyzing large datasets (see the Spark-Kafka sketch after this list).
  • Strong understanding of, and ability to mentor others on, ETL processes and tools for designing and managing data pipelines.
  • Experience designing streaming data pipelines using Fivetran, Azure Event Hubs, Auto Loader, and Delta Lake in Azure Databricks (a minimal Auto Loader sketch also follows this list).
  • Hands-on experience with data serialization formats such as JSON, Parquet, YAML, and XML.
  • Background in designing data solutions that are highly automated and provide consistent and accurate outcomes.
  • Hands-on experience orchestrating automated workflows and batch jobs on the Azure Databricks platform.
  • Understanding of data governance principles, data privacy regulations, and experience with implementing security measures for data protection.
  • Ability to work effectively in cross-functional teams, collaborate with data scientists, analysts, and stakeholders to deliver data solutions.
  • At least 3 years of hands-on experience with Big Data Tools: Hadoop, Spark, Kafka, etc.
  • Ability to identify tasks that require automation and automate them
  • A demonstrable understanding of networking/distributed computing environment concepts
  • Ability to multi-task and stay organized in a dynamic work environment
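
The posting itself includes no sample code; the following is a minimal PySpark Structured Streaming sketch of the Spark-Kafka skill named above. The broker address ("broker:9092") and topic name ("events") are illustrative placeholders, and running it requires the spark-sql-kafka connector package on the classpath.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

    # Subscribe to the topic as an unbounded stream; Kafka delivers raw bytes.
    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "events")
        .load()
    )

    # Cast the payload to a string and keep Kafka's record timestamp.
    events = raw.select(
        F.col("value").cast("string").alias("body"),
        F.col("timestamp"),
    )

    # Console sink, just to demonstrate the flow end to end.
    events.writeStream.format("console").start().awaitTermination()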
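
A similarly hedged sketch of the Auto Loader and Delta Lake pattern from the streaming-pipelines bullet. It assumes the Databricks runtime, where a spark session is already provided; the mount paths and table name are placeholders, not part of the role.

    # Incrementally ingest JSON files landing in cloud storage via Auto Loader.
    stream = (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", "/mnt/schemas/events")
        .load("/mnt/landing/events")
    )

    # Write to a managed Delta table; availableNow drains the backlog, then stops.
    (
        stream.writeStream
        .option("checkpointLocation", "/mnt/checkpoints/events")
        .trigger(availableNow=True)
        .toTable("bronze_events")
    )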

WHAT CAN GIVE YOU A COMPETITIVE ADVANTAGE (PREFERRED QUALIFICATIONS)

  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (an illustrative Airflow DAG sketch follows this list)
  • AWS cloud services: EC2, EMR, RDS, Redshift
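
As a point of reference for the workflow-management tools above, here is an illustrative Airflow 2.x DAG using the TaskFlow API; the DAG name and task bodies are stand-ins, not part of the role.

    from datetime import datetime

    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
    def nightly_ingest():
        @task
        def extract() -> dict:
            # Stand-in for a real extract step (e.g. an API pull).
            return {"rows": 100}

        @task
        def load(payload: dict) -> None:
            # Stand-in for a real load step (e.g. a warehouse write).
            print(f"loaded {payload['rows']} rows")

        load(extract())

    nightly_ingest()
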
This job is not eligible for relocation benefits. Any relocation costs would be the responsibility of the selected candidate.

Compensation:

  • The expected base compensation for this role is $94,800 - $159,600. Actual base compensation within the identified range will vary based on factors relevant to the position.
  • Bonus Potential: An incentive pay program offers payouts based on company performance, job level, and individual performance.
  • Benefits: GM offers a variety of health and wellbeing benefit programs. Benefit options include medical, dental, vision, Health Savings Account, Flexible Spending Accounts, retirement savings plan, sickness and accident benefits, life insurance, paid vacation & holidays, tuition assistance programs, employee assistance program, GM vehicle discounts and more.


Responsibilities

THE ROLE

As a Data Engineer, you will build industrialized data assets and optimize data pipelines in support of Business Intelligence and Advanced Analytics objectives. You will work closely with our forward-thinking Data Scientists, BI Developers, System Architects, and Data Architects to deliver value to our vision for the future. Are you ready to join a future-facing team?

WHAT YOU’LL DO

  • Communicate and maintain Master Data, Metadata, Data Management Repositories, Logical Data Models, Data Standards
  • Create and maintain optimal data pipeline architecture
  • Assemble large, complex data sets that meet functional / non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build industrialized analytic datasets and delivery mechanisms that utilize the data pipeline to deliver actionable insights into customer acquisition, operational efficiency and other key business performance metrics
  • Work with business partners on data-related technical issues and develop requirements to support their data infrastructure needs
  • Create highly consistent and accurate analytic datasets suitable for business intelligence and data science team members