Data Engineer at Cummins
Indiana, USA
Full Time


Start Date

Immediate

Expiry Date

15 Sep, 25

Salary

0.0

Posted On

15 Jun, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Snowflake, Hive, Storage Systems, Data Governance, Continuous Delivery, Relational Databases, Power BI, MongoDB, SQL, ETL Tools, Kafka, Metadata Management, Software Development, Agile Methodologies

Industry

Information Technology/IT

Description

Career Path: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Remote
Job Type: Exempt - Experienced
ReqID: 2414992

DESCRIPTION

Cummins is seeking a SKILLED DATA ENGINEER to support the development, maintenance, and optimization of our enterprise data and analytics platform. This role requires hands-on experience in SOFTWARE DEVELOPMENT, ETL PROCESSES, and DATA WAREHOUSING, with strong exposure to tools such as SNOWFLAKE, OBIEE, and POWER BI. The engineer will collaborate with cross-functional teams to transform data into actionable insights that enable business agility and scale.
Please Note: While the role is categorized as remote, it will follow a HYBRID WORK MODEL based out of our PUNE OFFICE.

EXPERIENCE

Must have skills:

  • 5–7 years of experience in data engineering or software development, preferably within a finance or enterprise IT environment.
  • Proficient in ETL tools, SQL, and data warehouse development.
  • Proficient in Snowflake, Power BI, and OBIEE reporting platforms, with hands-on implementation experience in these tools and technologies.
  • Strong understanding of data warehousing principles, including schema design (star/snowflake), ER modeling, and relational databases (a minimal schema sketch follows this list).
  • Working knowledge of Oracle databases and Oracle EBS structures.
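
To make the schema-design expectation concrete, here is a minimal star-schema sketch in Python using the snowflake-connector-python package. The warehouse, database, table, and column names are hypothetical placeholders chosen for illustration, not Cummins-specific objects.

import os

import snowflake.connector

# Star-schema DDL: one fact table and two dimensions.
# Every object name below is an illustrative placeholder.
DDL_STATEMENTS = [
    """
    CREATE TABLE IF NOT EXISTS dim_customer (
        customer_key  INTEGER AUTOINCREMENT PRIMARY KEY,
        customer_id   VARCHAR NOT NULL,   -- natural key from the source system
        customer_name VARCHAR,
        region        VARCHAR
    )
    """,
    """
    CREATE TABLE IF NOT EXISTS dim_date (
        date_key       INTEGER PRIMARY KEY,  -- e.g. 20250615
        full_date      DATE NOT NULL,
        fiscal_quarter VARCHAR
    )
    """,
    """
    CREATE TABLE IF NOT EXISTS fact_invoice (
        invoice_key    INTEGER AUTOINCREMENT PRIMARY KEY,
        invoice_id     VARCHAR NOT NULL,  -- natural key used for incremental loads
        customer_key   INTEGER REFERENCES dim_customer (customer_key),
        date_key       INTEGER REFERENCES dim_date (date_key),
        invoice_amount NUMBER(18, 2)
    )
    """,
]


def create_star_schema() -> None:
    # Credentials come from the environment; warehouse, database, and schema
    # names are placeholders to adjust for the actual account setup.
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ANALYTICS_WH",
        database="FINANCE_DW",
        schema="REPORTING",
    )
    try:
        cur = conn.cursor()
        for ddl in DDL_STATEMENTS:
            cur.execute(ddl)
    finally:
        conn.close()


if __name__ == "__main__":
    create_star_schema()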

Preferred Skills:

  • Experience with Qlik Replicate, data replication, or data migration tools.
  • Familiarity with data governance, data quality frameworks, and metadata management.
  • Exposure to cloud-based architectures, Big Data platforms (e.g., Spark, Hive, Kafka), and distributed storage systems (e.g., HBase, MongoDB).
  • Understanding of agile methodologies (Scrum, Kanban) and DevOps practices for continuous delivery and improvement.

Responsibilities

KEY RESPONSIBILITIES:

  • Design, develop, and maintain ETL pipelines using Snowflake and related data transformation tools (a minimal pipeline sketch follows this list).
  • Build and automate data integration workflows that extract, transform, and load data from various sources including Oracle EBS and other enterprise systems.
  • Analyze, monitor, and troubleshoot data quality and integrity issues using standardized tools and methods.
  • Develop and maintain dashboards and reports using OBIEE, Power BI, and other visualization tools for business stakeholders.
  • Work with IT and Business teams to gather reporting requirements and translate them into scalable technical solutions.
  • Participate in data modeling and storage architecture using star and snowflake schema designs.
  • Contribute to the implementation of data governance, metadata management, and access control mechanisms.
  • Maintain documentation for solutions and participate in testing and validation activities.
  • Support migration and replication of data using tools such as Qlik Replicate and contribute to cloud-based data architecture.
  • Apply agile and DevOps methodologies to continuously improve data delivery and quality assurance processes.
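
As a rough illustration of the ETL responsibility in the first bullet, the sketch below upserts rows from a hypothetical staging table (landed from an Oracle EBS extract, for example via a replication tool such as Qlik Replicate) into the fact table from the earlier schema sketch. All object names and connection settings are assumptions for illustration, not a prescribed Cummins design.

import os

import snowflake.connector

# Incremental load sketch: upsert staged invoice rows into the fact table
# defined in the schema sketch above. STG_INVOICE is a hypothetical staging
# table; every object name here is a placeholder.
MERGE_SQL = """
MERGE INTO fact_invoice AS tgt
USING (
    SELECT
        s.invoice_id,
        c.customer_key,
        d.date_key,
        s.invoice_amount
    FROM stg_invoice AS s
    JOIN dim_customer AS c ON c.customer_id = s.customer_id
    JOIN dim_date     AS d ON d.full_date   = s.invoice_date
) AS src
ON tgt.invoice_id = src.invoice_id
WHEN MATCHED THEN UPDATE SET
    invoice_amount = src.invoice_amount
WHEN NOT MATCHED THEN INSERT (invoice_id, customer_key, date_key, invoice_amount)
    VALUES (src.invoice_id, src.customer_key, src.date_key, src.invoice_amount)
"""


def run_incremental_load() -> None:
    # Same hypothetical connection settings as the schema sketch above.
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ANALYTICS_WH",
        database="FINANCE_DW",
        schema="REPORTING",
    )
    try:
        cur = conn.cursor()
        cur.execute(MERGE_SQL)
        print(f"Rows affected: {cur.rowcount}")  # simple load-quality check
    finally:
        conn.close()


if __name__ == "__main__":
    run_incremental_load()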