Mid-Level Data Engineer (Top Secret/SCI) at VivSoft Technologies
Arlington, VA 20598, USA
Full Time


Start Date

Immediate

Expiry Date

13 Jul, 25

Salary

$120,000

Posted On

14 Apr, 25

Experience

5 years or more

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Engineering, Scala, Python, Computer Science, SQL, Hadoop, Azure, Programming Languages, Information Technology, Java, Apache Spark, Data Processing

Industry

Information Technology/IT

Description

JOB SUMMARY:

The Mid-Level Data Engineer will play a crucial role in the DoD team, focusing on building and maintaining data pipelines and architectures that support the development of AI-enabled applications and analytics solutions. This position is essential for ensuring the effective integration and management of data across various platforms to enhance decision-making capabilities within the Department of Defense (DoD).

Key Responsibilities:

  • Data Pipeline Development: Design, implement, and optimize data pipelines to ingest, process, and store diverse datasets from multiple sources, ensuring high availability and reliability.
  • Data Integration: Collaborate with data scientists, analysts, and other engineers to integrate data from various systems into a cohesive architecture that supports advanced analytics and machine learning initiatives.
  • Performance Monitoring: Monitor data processing workflows to ensure optimal performance and troubleshoot any issues that arise in the data pipeline.
  • Data Quality Assurance: Implement data quality checks and validation processes to maintain the integrity and accuracy of data used in analytics.
  • Documentation: Develop and maintain comprehensive documentation for data architectures, processes, and workflows to facilitate knowledge sharing within the team.

Skills Required:

  • Education: Bachelor’s degree in Computer Science, Data Engineering, Information Technology, or a related field; master’s degree preferred.
  • Clearance Requirements: Active Top Secret (TS) clearance with Sensitive Compartmented Information (SCI) eligibility.
  • Minimum of 5 years of experience in data engineering or a related field with a focus on building data pipelines and architectures.
  • Proven experience with big data technologies such as Apache Spark, Hadoop, or similar frameworks.
  • Familiarity with cloud platforms (e.g., AWS, Azure) for data storage and processing is advantageous.
  • Proficiency in programming languages such as Python, Java, or Scala; experience with SQL for database management.
  • Strong understanding of data modeling concepts and best practices for managing large datasets.
  • Excellent problem-solving skills with the ability to analyze complex data issues.

Benefits:

  • Comprehensive Medical, Dental, and Vision Plans (Healthcare benefits are 100% employer-paid for employees only)
  • Life Insurance
  • Paid Time Off (Flexible/Combined PTO, Bereavement Leave, 11 Company Paid Holidays)
  • 401K Retirement Plan with employer match
  • Professional Development Training Reimbursement
  • Flexible/remote work schedules

Salary range: $110,000 - $120,000.

How To Apply:

If you would like to apply to this job directly from the source, please click here.
