AIML - Data Engineer, Data and ML Innovation
at Apple
Cupertino, California, USA
| Start Date | Expiry Date | Salary | Posted On | Experience | Skills | Telecommute | Sponsor Visa |
|---|---|---|---|---|---|---|---|
| Immediate | 29 Jul, 2024 | USD 256,500 Annual | 02 May, 2024 | 3 year(s) or above | Java, Cs, Go, MapReduce, Scala, Business Requirements, Python, Statistics, Data Engineering, Presto, Spark, Kafka, SQL | No | No |
Required Visa Status: US Citizen, Green Card (GC), H1B, OPT, CPT, Student Visa, H4 (Spouse of H1B)

Employment Type: Full Time, Part Time, Permanent, Independent (1099), Contract (W2), C2H Independent, C2H W2, Contract (Corp-to-Corp), Contract to Hire (Corp-to-Corp)
Description:
SUMMARY
Posted: Apr 25, 2024
Weekly Hours: 40
Role Number: 200548947
The AIML Data organization seeks to improve products by using data as the voice of our customers. Within this organization, the Siri Data Engineering team builds systems that process data reliably at scale, generating high-quality datasets that support confident, data-informed decision-making and help make Siri an effective product. We’re looking for exceptional data engineers who are passionate about our product and values, who love working with data at scale, and who are committed to continuous improvement. As part of this group, you will work with petabytes of data daily using diverse technologies, and you will be expected to partner effectively with upstream engineering teams and downstream consumers, including data scientists and ML engineers.
KEY QUALIFICATIONS
- 7+ years of technical experience designing, building, and maintaining distributed data processing platforms.
- 5+ years of industry experience working with batch or streaming distributed data processing technologies (e.g. Hadoop, MapReduce, Spark, Flink, Kafka, Presto, etc.) for building efficient & large-scale data pipelines.
- 3+ years of data modeling experience designing data warehouse table schemas and logging schemas.
- Proficiency in at least one high-level programming language (Java, Scala, Python, Go or equivalent).
- Experience with large, complex, highly dimensional data sets; hands-on experience with SQL.
- Experience working with cross-functional teams to collect business requirements, build consensus, and manage expectations.
- You are self-directed and capable of operating amidst ambiguity.
- You are humble, continually growing in self-awareness, and possess a growth mindset.
- You are curious and have excellent written and verbal communication as well as problem-solving skills.
- You are excited about digging into massive petabyte-scale semi-structured datasets.
DESCRIPTION
In this role, you will build ultra-large-scale batch and streaming datasets to support analytics, experimentation, and machine learning, and you will help drive our self-serve reporting strategy on behalf of data scientists and product engineers as we collectively make the product better. You will help design the instrumentation required to log data from the device and server side, and validate that data flows into the Data Warehouse in the correct shape, frequency, and quality. You will curate a high-performance, easy-to-understand data model that meets the needs of many consumers; identify common patterns and build self-serve tools to scale data engineering; and automate the lifecycle of datasets to the highest standards of data quality. You will educate your consumers on how to access your products, ensuring transparency and understanding of logic definitions and enabling self-service.
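The shape and quality checks described above can be sketched in plain Python. This is a hypothetical, minimal illustration (the schema, field names, and threshold are invented for the example); production pipelines of the kind this role describes would typically run such gates in Spark or a dedicated data-quality framework:

```python
# Minimal sketch of a dataset quality gate: verify that incoming records
# match an expected schema ("shape") and that null rates stay within a
# threshold ("quality") before loading into the warehouse.
# All names here are illustrative, not any team's actual pipeline.

EXPECTED_SCHEMA = {"user_id": str, "event": str, "ts": int}
MAX_NULL_RATE = 0.05  # reject batches with >5% missing user_ids

def validate_batch(records):
    """Return (ok, reason) for a batch of event dicts."""
    if not records:
        return False, "empty batch"
    for rec in records:
        for field, ftype in EXPECTED_SCHEMA.items():
            if field not in rec:
                return False, f"missing field: {field}"
            if rec[field] is not None and not isinstance(rec[field], ftype):
                return False, f"bad type for {field}"
    null_rate = sum(r["user_id"] is None for r in records) / len(records)
    if null_rate > MAX_NULL_RATE:
        return False, f"null rate {null_rate:.0%} exceeds threshold"
    return True, "ok"

good = [{"user_id": "u1", "event": "play", "ts": 1714000000}]
bad = [{"user_id": None, "event": "play", "ts": 1714000000}]
print(validate_batch(good))  # accepted
print(validate_batch(bad))   # rejected: every user_id is null
```

The same gate pattern extends naturally to frequency checks (e.g. comparing a batch's event count against a rolling baseline) before data is admitted downstream.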
EDUCATION & EXPERIENCE
Surprise us! Many will have an MS or BS in CS, Engineering, Math, Statistics, or a related field or equivalent practical experience in data engineering.
Responsibilities:
Please refer to the job description for details.
REQUIREMENT SUMMARY
- Experience: Min 3.0 – Max 7.0 year(s)
- Industry: Information Technology/IT
- Category: IT Software - Other
- Specialization: Software Engineering
- Qualification: BSc in Engineering, Statistics, or Math
- Proficiency: Proficient
- Openings: 1
- Location: Cupertino, CA, USA