Services Finance Data Engineer at Apple
Cupertino, California, United States - Full Time


Start Date

Immediate

Expiry Date

22 Jun, 26

Salary

0.0

Posted On

24 Mar, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Pipelines, SQL, Python, Data Quality Checks, Data Transformation, Data Curation, ETL Process, Snowflake, PostgreSQL, Airflow, Jenkins, DBT, Dimensional Modeling, Data Warehousing, Data Lakehouse, CI/CD

Industry

Computers and Electronics Manufacturing

Description
The Services Finance Data Science and Engineering team is looking for a passionate and highly motivated Data Engineer to drive our financial data platform forward. You will play a key role in shaping the success of Apple's current and future products. As members of the Services Finance Data Science and Engineering team, we work with various business and engineering teams to understand current and future business initiatives. We need to be persistent and flexible in extracting data from various sources, cleaning and curating this data, and then clearly and concisely communicating insights.

The individual in this role will collaborate with data scientists, business analysts, and subject matter experts (SMEs) to acquire and transform raw data and develop sophisticated data products. This includes developing and maintaining data pipelines, extracting data from different sources (databases, APIs, etc.), and transforming raw data into normalized tables after running data quality checks. A successful data engineer is skilled at taking a business problem and translating it into a data solution in close collaboration with data scientists, business analysts, and SMEs.
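The extract, quality-check, and normalize flow described above can be sketched in plain Python. This is a minimal illustration only; the function, field, and table names (check_quality, normalize, transaction_id) are hypothetical and not taken from Apple's actual pipeline code.

```python
# Illustrative sketch: validate raw records, then project them into a
# normalized table shape. All names here are hypothetical.

def check_quality(rows):
    """Drop records that fail basic quality checks (missing key, bad amount)."""
    clean = []
    for row in rows:
        if not row.get("transaction_id"):
            continue  # reject records missing the primary key
        try:
            row["amount"] = float(row["amount"])
        except (KeyError, TypeError, ValueError):
            continue  # reject records with a non-numeric amount
        clean.append(row)
    return clean

def normalize(rows):
    """Project validated records into a consistent, normalized schema."""
    return [
        {
            "transaction_id": r["transaction_id"],
            "amount_usd": round(r["amount"], 2),
            "region": r.get("region", "UNKNOWN").upper(),
        }
        for r in rows
    ]

raw = [
    {"transaction_id": "t1", "amount": "19.99", "region": "us"},
    {"transaction_id": "", "amount": "5.00"},    # fails key check
    {"transaction_id": "t2", "amount": "n/a"},   # fails amount check
]
table = normalize(check_quality(raw))
```

In a production pipeline the same checks would run inside an orchestrated task (e.g. Airflow) against a warehouse table rather than an in-memory list, but the shape of the logic is the same.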
Minimum Qualifications
- Minimum of 5 years of working experience as a data engineer building end-to-end data pipelines
- Advanced SQL skills for complex transformations, optimization, and troubleshooting
- A Bachelor's degree in Statistics, Computer Science, Computer Engineering, or equivalent practical experience
- Python proficiency; comfortable writing clean, maintainable Python for data pipelines and automation
- Hands-on experience working with relational database management systems such as Snowflake, PostgreSQL, or similar cloud data warehouses
- Ability to work both independently and within a team environment
- Strong written and verbal communication skills, capable of explaining technical results to a non-technical audience

Preferred Qualifications
- Experience with scheduling systems and CI/CD tools such as Airflow, Jenkins, etc.
- Understanding of dimensional modeling, data warehousing concepts, and medallion and data lakehouse architectures
- Hands-on experience building ETL processes and writing unit tests using DBT or similar tools
Responsibilities
The individual will collaborate with data scientists, business analysts, and subject matter experts to acquire and transform raw data, developing sophisticated data products by building and maintaining data pipelines from various sources. This involves extracting data, running quality checks, and transforming it into normalized tables.