Junior Software Engineer (Data Engineering) at First Resonance
Los Angeles, California, USA
Full Time


Start Date

Immediate

Expiry Date

13 Jun, 25

Salary

USD 80,000

Posted On

14 Mar, 25

Experience

1 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Design, Avro, Data Quality, Data Solutions, Tableau, GraphQL, Business Intelligence, Analytical Skills, Business Requirements, Data Models, SQL, ECR, Data Preparation, Training, Airflow, Python, Data Governance, Looker

Industry

Information Technology/IT

Description

JUNIOR SOFTWARE ENGINEER (DATA ENGINEERING)

We’re looking for a full-time Data Engineer to join us in our mission of bringing the ION Factory OS to next-generation hardware builders around the globe. Our newest recruit will join First Resonance in Los Angeles, CA (HQ in Downtown) and become a foundational member of the team.
Are you excited by the opportunity to assist manufacturers working on eVTOLs, rockets, robots, and autonomous vehicles? The most important characteristic of our product team is its interest and eagerness to support companies tackling some of society’s greatest challenges, such as climate change, space exploration, and autonomous transportation.
With this role, you will join a diverse team with a wide range of backgrounds and experiences. We pride ourselves on being fast learners, quick-thinkers, and agile executors. While a spur-of-the-moment ping pong tournament is occasionally required, our number one priority is always assisting our ION customers with our manufacturing platform. If you want to play a key role in the future of hardware and Industry 4.0, come join us!

QUALIFICATIONS

  • 1+ years of relevant Data Engineering technical experience.
  • Strong problem-solving and analytical skills.
  • Extensive proficiency in Python and SQL.
  • Experience with modern orchestration tools such as Airflow, Dagster, etc.
  • Proficiency in dbt (Data Build Tool) for data transformations, managing data models, and ensuring data quality within the analytics stack.
  • Experience with GraphQL, including querying, manipulating, and transforming data from GraphQL APIs to integrate into broader data pipelines.
  • Hands-on experience with modern analytics warehouses (e.g., Snowflake, BigQuery).
  • Strong proficiency in AWS services (S3, RDS, EKS, ECR, etc.).
  • Experience collaborating on large-scale datasets and integrating external APIs for data enrichment.
  • Understanding of data lifecycles, data computation principles, and data stores, plus a solid grasp of CI/CD principles.
  • Up to date on the latest industry trends; able to articulate trends and their potential clearly and confidently.
  • Must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.

DESIRED QUALIFICATIONS:

  • Ability to architect, design, and implement scalable and efficient data pipelines and systems.
  • Familiarity with data governance and how that relates to compliance frameworks such as SOC2, GDPR, etc.
  • Familiarity with Machine Learning (ML) workflows, including data preparation for ML models, training, and model deployment, with a focus on ML infrastructure and MLOps.
  • Expertise in Business Intelligence (BI) tools such as Sigma Compute, Looker, or Tableau.
  • Familiarity with open-source file formats for storing data, such as Parquet, Avro, etc.
  • Experience with lakehouse frameworks, such as Iceberg or Delta Lake.
  • Customer-facing experience with data solutions, with an ability to communicate effectively with stakeholders, understand business requirements, and translate technical data solutions into value-driven outcomes.

This role may be a fit for you if you enjoy solving problems with resourceful thinking, collaborating across departments, and flexing your creative muscle.

RESPONSIBILITIES

  • Design, build, and optimize data pipelines for processing and transforming large volumes of structured and unstructured data.
  • Manage and monitor data infrastructure for reliability, scalability, and performance, collaborating with DevOps for CI/CD support.
  • Leverage dbt to transform, clean, and structure data for analytics, ensuring high data quality across models and pipelines.
  • Integrate external APIs and data sources, including GraphQL APIs, to enrich datasets for analytics and reporting.
  • Utilize AWS services to deploy and manage data infrastructure, ensuring cost-efficient and high-performing cloud usage.
  • Collaborate with internal teams to support data-driven projects, create interactive reports, and enable data access for stakeholders.
  • Oversee and maintain customer-facing data products (e.g., ION Analytics and Autoplan) to meet client needs and quality standards.