Principal Software Engineer

at HawkEye 360

Herndon, VA 20170, USA

Start Date: Immediate
Expiry Date: 12 Feb, 2025
Salary: USD 200,000 Annual
Posted On: 13 Nov, 2024
Experience: N/A
Skills: Amazon EC2, Amazon S3, Python, Computer Science, ETL
Telecommute: No
Sponsor Visa: No
Required Visa Status:
Citizen, US Citizen, GC (Green Card), Student Visa, H1B, CPT, OPT, H4 Spouse of H1B
Employment Type:
Full Time, Part Time, Permanent, Independent - 1099, Contract - W2, C2H Independent, C2H W2, Contract - Corp 2 Corp, Contract to Hire - Corp 2 Corp

Description:

The Principal Software Engineer will be part of the Data Engineering team, under the Data & Analytics group. Data Engineering manages the transition to production for advanced machine learning and geolocation algorithms developed by both the Processing Algorithms and Data Science teams. This team also develops and manages scalable data processing platforms for exploratory data analysis and real-time analytics to support our analysts in their geospatial data exploration needs. As a Software Engineer, you will work closely with HawkEye 360’s scientists to optimize algorithms for low-latency, highly scalable production environments that directly support our customers.
We work in small teams to rapidly prototype and productize new ideas based on hands-on, in-the-weeds engineering. You’ll be responsible for designing and implementing distributed backend software systems. We support a broad range of software applications to accomplish our mission, especially favoring Python and C++ languages for batch processing within cloud deployments (Kubernetes + Docker).
Location: This position is hybrid with work from home flexibility.

As the Principal Software Engineer, your main responsibilities will be:

  • Lead the architecture, design, implementation, and maintenance of processing and data science algorithms, optimizing for scalable, low-latency deployment to a batch-processing cloud environment
  • Write clean, efficient, and well-documented Python code to implement data extraction, transformation, and loading processes
  • Work closely with the Processing Algorithms and Data Science teams to integrate, optimize, and deploy state-of-the-art algorithms into production-ready applications
  • Develop, maintain, and optimize AWS-based ETL solutions leveraging AWS services such as Lambda, S3, EC2, and RDS (see the illustrative sketch after this list)
  • Apply analytical, debugging, and problem-solving skills to support data-heavy applications in production and achieve long-term product goals for performance and reliability
  • Participate in collaborative software development practices, including merge request reviews and design feedback
  • Guide and mentor other individual contributors, providing technical leadership, code reviews, and guidance on best practices
  • Work in a fast-paced agile environment, communicating effectively and tracking development activities with agile tools such as JIRA and Confluence
  • Work independently and within a team environment with geographically distributed team members
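
For orientation only, the kind of Python ETL step referenced above might look roughly like the minimal sketch below, assuming an AWS Lambda handler that moves CSV data between S3 buckets; the bucket names, object keys, and transformation are hypothetical and not taken from the posting.

# Illustrative only: a minimal Python ETL step written as an AWS Lambda handler.
# Bucket names, keys, and the transformation are hypothetical placeholders.
import csv
import io
import json

import boto3  # AWS SDK for Python

s3 = boto3.client("s3")


def handler(event, context):
    # Extract: read a raw CSV object from S3 (bucket and key are hypothetical).
    raw = s3.get_object(Bucket="raw-data-bucket", Key=event["key"])["Body"].read()
    rows = list(csv.DictReader(io.StringIO(raw.decode("utf-8"))))

    # Transform: drop records with a missing value and cast the rest to float.
    cleaned = [
        {**row, "value": float(row["value"])}
        for row in rows
        if row.get("value") not in (None, "")
    ]

    # Load: write the cleaned records to a curated bucket as JSON lines.
    body = "\n".join(json.dumps(record) for record in cleaned)
    s3.put_object(
        Bucket="curated-data-bucket",
        Key=event["key"].replace(".csv", ".jsonl"),
        Body=body.encode("utf-8"),
    )
    return {"records_in": len(rows), "records_out": len(cleaned)}

In a production batch-processing environment, handlers like this would typically be triggered and sequenced by a workflow or orchestration layer rather than invoked by hand.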

Your skills and qualifications:

Essential education and experience:

  • Bachelor’s or Master’s degree in Computer Science, Electrical/Computer Engineering, or comparable experience
  • 7+ years of professional software development experience using Python
  • Strong background in designing and developing Extract, Transform, and Load (ETL) processes, particularly within a cloud-native architecture.
  • Extensive experience working in an AWS environment, including knowledge of AWS services and solutions (Amazon S3, Amazon EC2, AWS Lambda)
  • Experience with modern data orchestration tools (e.g., Apache Airflow, AWS Step Functions); a brief illustrative DAG sketch follows this list
  • Experience developing and supporting DevOps best-practices (e.g., GitLab-based CI/CD)
  • Demonstrated experience developing software in a Linux environment
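
As a hedged illustration of the orchestration-tools requirement above (not a detail from the posting), a minimal Apache Airflow DAG using the TaskFlow API might look like this; the DAG name, schedule, and task bodies are hypothetical.

# Illustrative only: a minimal Apache Airflow 2.x DAG (TaskFlow API) chaining
# extract -> transform -> load tasks. Names, schedule, and data are hypothetical.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_etl_pipeline():
    @task
    def extract() -> list:
        # A real task might list new objects in S3 or query RDS.
        return [{"id": 1, "value": "42"}, {"id": 2, "value": ""}]

    @task
    def transform(records: list) -> list:
        # Keep well-formed records and cast the numeric field.
        return [{**r, "value": float(r["value"])} for r in records if r["value"]]

    @task
    def load(records: list) -> None:
        # A real task might write to S3, a warehouse, or a downstream service.
        print(f"loaded {len(records)} records")

    load(transform(extract()))


example_etl_pipeline()

AWS Step Functions, the other tool named above, expresses a comparable pipeline as a state machine definition rather than a Python DAG.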

How To Apply:

In case you would like to apply to this job directly from the source, please click here.



REQUIREMENT SUMMARY

Experience: Min N/A, Max 5.0 year(s)
Industry: Computer Software/Engineering
Functional area: IT Software - System Programming
Role: Software Engineering
Education: Graduate (Computer Science, Engineering)
Proficiency: Proficient
Vacancies: 1
Location: Herndon, VA 20170, USA