Software Engineer, Data Engineering

at Grammarly Inc

Germany

Start Date: Immediate
Expiry Date: 25 Oct, 2024
Salary: Not Specified
Posted On: 28 Jul, 2024
Experience: N/A
Skills: Kafka, Spark, Customer Value, Open Source, Features, Java, Scala, Python
Telecommute: No
Sponsor Visa: No
Required Visa Status:
Citizen, GC (Green Card), US Citizen, Student Visa, H1B, CPT, OPT, H4 Spouse of H1B

Employment Type:
Full Time, Part Time, Permanent, Independent - 1099, Contract – W2, C2H Independent, C2H W2, Contract – Corp 2 Corp, Contract to Hire – Corp 2 Corp

Description:

Grammarly is excited to offer a remote-first hybrid working model. Grammarly team members in this role must be based in Germany, and, depending on business needs, they must meet in person for collaboration weeks, traveling if necessary to the hub(s) where their team is based.
This flexible approach gives team members the best of both worlds: plenty of focus time along with in-person collaboration that fosters trust and unlocks creativity.

THE OPPORTUNITY

To achieve our ambitious goals, we are looking for an experienced Software Engineer, Data Engineering who can lead independently and drive projects end to end. The person in this role will build highly automated, low-latency core datasets to help engineers and end users across Grammarly work with analytical data at scale. They will also build and own backend software frameworks, platforms, and tools that other teams can use to develop analytics at scale.
Grammarly’s engineers and researchers have the freedom to innovate and uncover breakthroughs—and, in turn, influence our product roadmap. The complexity of our technical challenges is growing rapidly as we scale our interfaces, algorithms, and infrastructure.
The Data Engineering team has a critical mission to equip all Grammarlians with the data and tools they need to build analytical products and make decisions. To deal with the massive scale of data, our team employs software design principles to keep our data healthy and freely flowing.

In this role, you will:

  • Build data pipelines and infrastructure for optimal extraction, transformation, and loading of data from a wide variety of sources.
  • Design & improve data models, storage structures, and tools for easy data discovery, increased visibility, and accessibility to enable rapid development of Analytics Dashboards and ML experiments.
  • Work closely with ML and Analytics teams to streamline & optimize data delivery and processing and help land business-critical projects.
  • Address data latency and scalability issues and ensure frequent and reliable data refresh for downstream processes.
  • Model structure, storage, and access of data at very high volumes for our data lakehouse.
  • Improve developer productivity and self-serve solutions by contributing components to our stream data processing framework(s).
  • Own data engineering’s infrastructure-as-code for provisioning services that allow our engineers to deploy mature software installations within a few hours.
  • Build a world-class process that will allow our systems to scale.
  • Mentor other back-end engineers on the team and help them grow.
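The extract-transform-load loop in the first bullet can be sketched in a deliberately toy form. This is purely illustrative, not Grammarly's actual stack: an in-memory list of events stands in for a real source (e.g. a Kafka topic), and SQLite stands in for the warehouse; all names and fields here are invented.

```python
import sqlite3

# Toy batch of raw usage events, standing in for data extracted
# from a real source such as a Kafka topic or object store.
raw_events = [
    {"user": "a", "feature": "rewrite", "latency_ms": 120},
    {"user": "b", "feature": "rewrite", "latency_ms": 80},
    {"user": "a", "feature": "tone", "latency_ms": 40},
]

def transform(events):
    """Aggregate per-feature event counts and average latency."""
    agg = {}
    for e in events:
        count, total = agg.get(e["feature"], (0, 0))
        agg[e["feature"]] = (count + 1, total + e["latency_ms"])
    return [(f, c, t / c) for f, (c, t) in agg.items()]

def load(rows, conn):
    """Load aggregated rows into an analytics table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS feature_stats "
        "(feature TEXT, events INTEGER, avg_latency_ms REAL)"
    )
    conn.executemany("INSERT INTO feature_stats VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(raw_events), conn)
print(conn.execute("SELECT * FROM feature_stats ORDER BY feature").fetchall())
# → [('rewrite', 2, 100.0), ('tone', 1, 40.0)]
```

A production pipeline would replace each stage with scalable counterparts (streaming consumers, distributed transforms in Spark or Flink, and loads into a data lakehouse), but the extract → transform → load shape is the same.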

QUALIFICATIONS

  • Has experience building and owning Data Pipelines to structure, enrich, and aggregate data and generate features with technologies like Spark, Flink, Kafka, Kinesis, etc.
  • Leads design reviews and is a driving force to change how data is stored and accessed from the data platform.
  • Is familiar with Python, Scala, or Java.
  • Has experience with designing database objects and writing relational queries.
  • Has experience designing and standing up APIs and services.
  • Has experience with system design and building internal tools.
  • Can knowledgeably choose an open source or third-party service to accomplish what they need or can devise a quick and simple solution on their own.
  • Embodies our EAGER values—is ethical, adaptable, gritty, empathetic, and remarkable.
  • Is inspired by our MOVE principles: move fast and learn faster; obsess about creating customer value; value impact over activity; and embrace healthy disagreement rooted in trust.
  • Is able to meet in person for their team’s scheduled collaboration weeks, traveling if necessary to the hub where their team is based.


REQUIREMENT SUMMARY

Min: N/A; Max: 5.0 year(s)

Information Technology/IT

IT Software - Other

Software Engineering

Graduate

Proficient

1

Germany