Staff Software Engineer at Rapid7
Remote, Scotland, United Kingdom - Full Time


Start Date

Immediate

Expiry Date

03 Jul, 25

Salary

0.0

Posted On

03 Apr, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Software Development, Scala, GitHub, Communication Skills, Spark, Technical Leadership, Kafka, Jenkins, Storage, Continuous Improvement, SQL

Industry

Information Technology/IT

Description

The Staff Software Engineer on the Data Platform’s Data Mesh team will help set technical direction to deliver scalable data pipelines, retrieval processes and analytics jobs which process data at an enterprise scale. They will serve as an expert and owner for the Data Platform’s Data Mesh, participate in architectural discussions and contribute significant hands-on implementation to successfully deliver new capabilities from conception to release. In addition to hands-on development, they will work closely with the product management team, mentor engineers and help drive roadmap planning. The Staff Software Engineer’s role is responsible for providing technical leadership and does not have people-management responsibilities.

The skills you’ll bring include:

  • A minimum of 8 years' experience in software development, preferably including 5+ years actively building solutions with common data engineering technologies (e.g. Spark, SQL, Airflow).
  • Strong hands-on experience building and supporting analytics/transformation workloads in Spark, ideally in Scala.
  • Experience with technologies that support storage and high-performance access for very large analytic data sets, preferably Apache Iceberg and Parquet.
  • Hands-on expertise building high-performance data pipelines using Kafka.
  • Experience implementing systems that use Change Data Capture (CDC) tools and patterns to replicate data to other systems, preferably Debezium.
  • Experience continuously monitoring and optimising data pipelines for performance and cost-effectiveness.
  • Familiarity with CI/CD pipelines such as Jenkins and proficiency with version control systems such as GitHub.
  • Mentorship and guidance of junior engineers, providing technical leadership and fostering a culture of continuous improvement and innovation.
  • Excellent verbal and written communication skills.
  • Strong, creative problem-solving ability.
Responsibilities

We are seeking an innovative, self-motivated Staff Software Engineer with strong data engineering experience. The ideal candidate will act as technical leader for the Data Platform’s “Data Mesh” engineering team, building and supporting scalable data pipelines, retrieval processes and analytics jobs. The Staff Software Engineer on the Data Mesh team will work widely across product teams and collaborate within the Data Platform to drive product adoption and pipeline scalability. They will also take ownership of monitoring and testing strategies to ensure performance, resilience and cost optimisation.
You will both help set technical direction and directly contribute with significant hands-on development. The Staff Software Engineer’s role is responsible for providing technical leadership and does not have people-management responsibilities.

In this role, you will:

  • Build, maintain, and release our well-architected services and infrastructure by consistently writing correct, clean code and following best practices and conventions. You will understand and make well-reasoned design decisions and tradeoffs.
  • Work cross-functionally with internal product tech teams and product managers.
  • Take a lead role in the design and implementation of solutions to ensure pipeline performance, resilience and cost optimisation.
  • Help set technical direction defining and implementing data models, access controls, data governance and data retention strategies.


We know that the best ideas and solutions come from multi-dimensional teams. Teams reflecting a variety of backgrounds and professional experiences. If you are excited about this role and feel your experience can make an impact, please don’t be shy - apply today.
