Data Engineer (Backfill) at Piper Companies
Annapolis Junction, Maryland, USA
Full Time


Start Date

Immediate

Expiry Date

21 Nov, 25

Salary

$200,000

Posted On

21 Aug, 25

Experience

0 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Active TS/SCI Clearance, AWS, Pipeline Management, Python, SQL, Azure, Apache Spark

Industry

Information Technology/IT

Description

DATA ENGINEER – ANNAPOLIS JUNCTION, MD

Zachary Piper Solutions is seeking a Data Engineer to join a long-term Department of Defense (DoD) program based in Annapolis Junction, MD. This is an exciting opportunity to work on mission-critical data integration efforts within a high-impact environment.

POSITION OVERVIEW:

As a Data Engineer/Integrator, you will play a key role in designing and implementing data integration solutions in support of Intelligence Community Information Technology Enterprise (IC ITE) initiatives. You’ll work closely with a team of developers to build and maintain Extract-Transform-Load (ETL) pipelines, ensuring data is transformed and delivered in the required formats. Your work will involve interfacing with external systems using protocols such as HTTP and SFTP, and enhancing the ETL platform to streamline future integrations.
In addition to hands-on coding, you’ll contribute to the development and maintenance of software components, ensuring seamless integration into a fully functional system. You’ll also collaborate with external teams to validate data ingestion processes and produce comprehensive documentation covering system architecture, development, and enhancements.

REQUIRED QUALIFICATIONS:

  • Active TS/SCI clearance with CI Polygraph
  • Hands-on experience with Databricks for building and managing data and AI solutions
  • Proficiency in ETL development, data pipeline management, and working with large datasets
  • Strong experience with Apache Spark, Python, and SQL
  • Familiarity with Delta Lake, Delta Live Tables, and Databricks Workflows

PREFERRED QUALIFICATIONS:

  • Experience collaborating with data scientists on analytical projects
  • Familiarity with Advana or similar data platforms
  • Strong Python programming and SQL skills
  • Experience with cloud platforms such as AWS or Azure

RESPONSIBILITIES:

  • Design, develop, and maintain ETL pipelines using Databricks, Spark, Python, and SQL
  • Transform and integrate data from various sources to meet IC ITE standards
  • Interface with external systems and teams using protocols like HTTP and SFTP
  • Enhance the ETL platform to improve scalability and reduce integration timelines
  • Collaborate with developers, data scientists, and stakeholders to ensure data quality and usability
  • Document system architecture, development processes, and platform enhancements
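
To illustrate the extract-transform-load pattern central to this role, here is a minimal sketch of the three ETL stages. Note this is a generic standard-library illustration, not the program's actual stack (the role uses Databricks, Spark, and Delta Lake); all table and field names are invented for the example.

```python
# Minimal ETL sketch: extract raw CSV, normalize fields, load into a target store.
# Purely illustrative -- the actual role would use Spark/Databricks equivalents.
import csv
import io
import sqlite3

# Hypothetical raw feed with inconsistent whitespace and string-typed numbers.
RAW_CSV = """id,name,score
1, alice ,90
2, bob ,85
"""

def extract(text: str) -> list[dict]:
    """Read raw CSV text into a list of row dicts (one per record)."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows: list[dict]) -> list[tuple]:
    """Normalize each record: strip whitespace, cast numeric fields."""
    return [(int(r["id"]), r["name"].strip(), int(r["score"])) for r in rows]

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Write normalized rows into the target table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS scores (id INTEGER, name TEXT, score INTEGER)"
    )
    conn.executemany("INSERT INTO scores VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
result = conn.execute("SELECT name, score FROM scores ORDER BY id").fetchall()
print(result)  # [('alice', 90), ('bob', 85)]
```

In a Databricks setting, the same three stages would typically map onto Spark DataFrame reads, transformations, and Delta table writes, with Workflows orchestrating the pipeline.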