Staff Data Engineer, Ingestion Framework - Data & AI Platform at Rivian and Volkswagen Group Technologies
Vancouver, BC, Canada - Full Time


Start Date: Immediate

Expiry Date: 13 Dec, 25

Salary: 96,000.00 CAD

Posted On: 13 Sep, 25

Experience: 5 year(s) or above

Remote Job: Yes

Telecommute: Yes

Sponsor Visa: No

Skills: Data Engineering, Computer Science, Interpersonal Skills, Data Science, Collaboration, Spark

Industry: Information Technology/IT

Description
  • Vancouver, Canada
  • Software Engineering
    About Us
    Rivian and Volkswagen Group Technologies is a joint venture between two industry leaders with a clear vision for automotive’s next chapter. From operating systems to zonal controllers to cloud and connectivity solutions, we’re addressing the challenges of electric vehicles through technology that will set the standards for software-defined vehicles around the world.
    The road to the future is uncharted. By combining our expertise across connectivity, AI, security and more, we’ll map a new way forward. Working together, we’ll create a future that’s more connected, more intelligent, more sustainable for everyone.
    Role Summary
    As a Staff Data Engineer in the Data & AI Platform team, you will play a pivotal role in enabling our big data platform to operate seamlessly at petabyte scale. The successful candidate will be an architect and custodian of a custom-built, Go-powered vehicle data processing framework, ensuring its smooth integration with our Databricks platform.

Responsibilities

  • Framework Development & Maintenance: Design, develop, and enhance a cutting-edge, petabyte-scale vehicle data processing framework using Go. This framework is the backbone of our data operations; therefore, the ability to write clean, efficient, and maintainable code is essential.
  • Databricks Integration: Collaborate closely with the big data platform team to ensure seamless integration of the framework with our Databricks environment. This involves understanding the intricacies of both systems and optimizing their interaction for maximum efficiency.
  • CI/CD & Kubernetes Management: Take ownership of the framework’s deployment lifecycle. Implement and manage robust CI/CD pipelines on Kubernetes to ensure smooth, automated, and reliable deployments.
  • Performance Optimization: Continuously monitor the framework’s performance, identify bottlenecks, and implement optimizations to ensure it operates at peak efficiency even as data volumes grow.
  • Troubleshooting & Issue Resolution: Proactively identify and resolve issues that may arise with the framework. The ability to diagnose problems quickly and implement effective solutions will be critical to maintaining system uptime.
  • Collaborative Development: Collaborate with data producers and consumers to deliver dependable and scalable solutions.
  • Staying Ahead of the Curve: Keep abreast of the latest advancements in big data technologies, data engineering practices, and cloud computing. Explore new tools and techniques to improve our data infrastructure.

Qualifications

  • Education: Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field.
  • Experience: 5+ years of hands-on experience in data engineering or a similar role, with a proven track record of building and maintaining large-scale data processing systems.
  • Go Expertise: Deep proficiency in Go programming language, with a strong understanding of its concurrency model and best practices for building high-performance applications.
  • Big Data Technologies: Solid understanding of big data ecosystems, such as Spark and Databricks. Experience with data warehousing and data lake concepts is a plus.
  • Problem-Solving Skills: Exceptional analytical and problem-solving skills, with the ability to break down complex problems into manageable components and develop effective solutions.
  • Communication & Collaboration: Excellent communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams.

Nice to Have:

  • Kubernetes: Experience with Kubernetes container orchestration, deployment strategies, and cluster management.
  • Delta Lake: Familiarity with open-source or Databricks-proprietary Delta tables. Understanding of the Delta protocol is a bonus.

Pay Disclosure
Salary Range for British Columbia-Based Candidates: $96,000.00 - $128,000.00 CAD (actual compensation will be determined based on experience, location, and other factors permitted by law).
Benefits Summary: Rivian provides robust medical/Rx, dental and vision insurance packages for full-time employees, their spouse or domestic partner, and children up to age 26. Coverage is effective on the first day of employment, and Rivian covers most of the premiums.

How To Apply:

If you would like to apply to this job directly from the source, please follow the application link in the original posting.
