Senior Data Engineer - AWS
at THE MITRE CORPORATION
McLean, Virginia, USA
Start Date: Immediate
Expiry Date: 30 Apr, 2025
Salary: Not Specified
Posted On: 01 Feb, 2025
Experience: 3 year(s) or above
Skills: Project Delivery, Kanban, SQL, Python, Optimization, Oracle, Data Modeling, Engineers, Scrum, Testing, Apache Spark, Developers, Hadoop, Kafka, Collaborative Environment, Design, Scalability, Data Manipulation, Computer Science
Telecommute: No
Sponsor Visa: No
Required Visa Status: US Citizen, Green Card (GC), H1B, OPT, CPT, Student Visa, H4 (spouse of H1B)
Employment Type: Full Time, Part Time, Permanent, Independent (1099), Contract (W2), Contract (Corp-to-Corp), Contract-to-Hire (W2, Independent, or Corp-to-Corp)
Description:
Why choose between doing meaningful work and having a fulfilling life? At MITRE, you can have both. That’s because MITRE people are committed to tackling our nation’s toughest challenges—and we’re committed to the long-term well-being of our employees. MITRE is different from most technology companies. We are a not-for-profit corporation chartered to work for the public interest, with no commercial conflicts to influence what we do. The R&D centers we operate for the government create lasting impact in fields as diverse as cybersecurity, healthcare, aviation, defense, and enterprise transformation. We’re making a difference every day—working for a safer, healthier, and more secure nation and world. Our workplace reflects our values. We offer competitive benefits, exceptional professional development opportunities for career growth, and a culture of innovation that embraces adaptability, collaboration, technical excellence, and people in partnership. If this sounds like the choice you want to make, then choose MITRE - and make a difference with us.
DEPARTMENT SUMMARY:
Join our Enterprise Data Warehouse team as we prepare for a major evolution in technology and process roll-out across MITRE. This is an exciting time to join the Enterprise Data Warehouse team: use of our data is growing exponentially, and the warehouse is evolving as its gold source. You will have a great opportunity to make an impact as you help our team move to cloud-based hybrid data and real-time data access. We are looking for a highly collaborative, team-oriented person to join us! A “can-do,” growth-focused attitude and the ability to work on more than one project at a time are musts!
BASIC QUALIFICATIONS:
- Typically, a Bachelor’s degree with 5 years of related experience, or a Master’s degree with 3 years of related experience, in a technical discipline such as engineering or computer science, or a related field applicable to the job.
- Self-motivated, curious, and collaborative, with a passion to learn new technologies and develop new skills
- Demonstrated experience developing ETL pipelines using Python and PySpark, with a strong understanding of data processing techniques (a minimal sketch of such a pipeline follows this list).
- Expertise in SQL for data manipulation, querying, and optimization across various database platforms, including Postgres, DynamoDB, Oracle, and Redshift.
- Hands-on experience with AWS Glue, EMR, Step Functions, and Lambda for building and orchestrating ETL workflows in a cloud environment.
- Experience implementing Continuous Integration/Continuous Deployment (CI/CD) pipelines using AWS CDK or similar tools to automate deployment and testing of ETL solutions (see the CDK sketch after this list).
- This hybrid position requires a minimum of 50% on-site work.
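For illustration only (this is not MITRE code), the Python/PySpark bullet above might look like the following minimal sketch in practice; the bucket paths and column names are hypothetical.

```python
# Minimal PySpark ETL sketch: extract raw CSV from S3, apply basic
# transforms, and load partitioned Parquet for warehouse ingestion.
# All bucket paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw files landed by an upstream source system
raw = spark.read.csv("s3://example-landing/orders/", header=True, inferSchema=True)

# Transform: enforce types, derive a partition column, drop rows with no key
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("order_id").isNotNull())
)

# Load: partitioned Parquet, a common hand-off format for Redshift or Glue
(clean.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated/orders/"))
```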
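Likewise, the CI/CD bullet could be grounded in an infrastructure-as-code definition along these lines; this is a hedged sketch assuming aws-cdk-lib v2 for Python, and the job name, role ARN, and script location are placeholders.

```python
# Sketch: defining a Glue ETL job as code with AWS CDK (aws-cdk-lib v2)
# so it can be deployed and tested through a CI/CD pipeline.
# The role ARN and S3 script location are placeholders.
from aws_cdk import App, Stack, aws_glue as glue
from constructs import Construct

class EtlStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # L1 construct for a Glue job that runs a PySpark script from S3
        glue.CfnJob(
            self, "OrdersEtlJob",
            name="orders-etl",
            role="arn:aws:iam::123456789012:role/example-glue-role",
            command=glue.CfnJob.JobCommandProperty(
                name="glueetl",
                script_location="s3://example-artifacts/jobs/orders_etl.py",
            ),
            glue_version="4.0",
        )

app = App()
EtlStack(app, "OrdersEtlStack")
app.synth()
```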
PREFERRED QUALIFICATIONS:
- Bachelor’s degree with 8 years of related experience, or a Master’s degree with 6 years of related experience, preferably in a technical major such as engineering or computer science, or a related discipline.
- Previous experience leading or mentoring a team of developers/engineers in a collaborative environment.
- AWS certifications such as AWS Certified Developer or AWS Certified Solutions Architect, demonstrating proficiency in AWS services and best practices.
- Familiarity with big data technologies such as Apache Spark, Hadoop, or Kafka for processing large-scale datasets.
- Experience in data modeling and schema design for optimizing database performance and scalability (a short example follows this list).
- Experience working in Agile development methodologies, such as Scrum or Kanban, for iterative and collaborative project delivery.
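As one hedged illustration of the schema-design point above (not a prescribed approach): declaring explicit schemas and choosing a bucketing key are two routine performance levers in Spark. The table and column names here are hypothetical.

```python
# Sketch: explicit schema declaration plus a bucketing strategy,
# two common schema-design levers for performance. Names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("schema-demo").getOrCreate()

# An explicit schema avoids a costly inference pass and catches drift early
order_schema = StructType([
    StructField("order_id", StringType(), nullable=False),
    StructField("customer_id", StringType(), nullable=False),
    StructField("amount", DoubleType(), nullable=True),
    StructField("order_ts", TimestampType(), nullable=True),
])
orders = spark.read.schema(order_schema).parquet("s3://example-curated/orders/")

# Bucketing by a frequent join key reduces shuffle in downstream joins
(orders.write.mode("overwrite")
       .bucketBy(16, "customer_id")
       .sortBy("order_ts")
       .saveAsTable("orders_bucketed"))
```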
Responsibilities:
- Design, develop, and implement robust ETL solutions using Python and PySpark to extract, transform, and load data from various sources into AWS data services.
- Optimize ETL processes for performance and scalability utilizing AWS Glue, EMR, Step Functions, and Lambda to ensure efficient data processing and timely delivery.
- Ensure data integrity and quality throughout the ETL process by implementing thorough data validation checks and error-handling mechanisms (see the validation sketch after this list).
- Manage AWS services such as Glue, EMR, Step Functions, and Lambda, including configuration, monitoring, and troubleshooting to maintain operational excellence.
- Collaborate with cross-functional teams including data engineers, data scientists, and business stakeholders to understand data requirements and deliver tailored ETL solutions.
- Troubleshoot complex technical issues and provide advanced operational support to internal MITRE customers in AWS.
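To make the validation responsibility above concrete, here is a minimal validate-and-quarantine sketch; this is an assumed pattern, not MITRE's actual pipeline, and the rules, paths, and 5% threshold are hypothetical.

```python
# Sketch: validate-and-quarantine pattern for ETL data quality.
# Rows failing checks go to an error path rather than being silently dropped.
# Paths, rules, and the failure threshold are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-validate").getOrCreate()
df = spark.read.parquet("s3://example-curated/orders/")

# Validation rules expressed as one boolean column
is_valid = (
    F.col("order_id").isNotNull()
    & (F.col("amount") >= 0)
    & F.col("order_ts").isNotNull()
)
checked = df.withColumn("_valid", is_valid)

# Load only valid rows; quarantine the rest for inspection and replay
checked.filter("_valid").drop("_valid") \
       .write.mode("append").parquet("s3://example-warehouse/orders/")
checked.filter(~F.col("_valid")) \
       .write.mode("append").parquet("s3://example-quarantine/orders/")

# Fail the job loudly if the rejection rate crosses the threshold
total, bad = checked.count(), checked.filter(~F.col("_valid")).count()
if total and bad / total > 0.05:
    raise RuntimeError(f"Validation failed: {bad}/{total} rows rejected")
```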
REQUIREMENT SUMMARY
Experience: Min 3.0, Max 8.0 year(s)
Industry: Information Technology/IT
Functional Area: IT Software - DBA / Data Warehousing
Role: Software Engineering
Education: Graduate
Specialization: Computer Science, Engineering
Proficiency: Proficient
Vacancies: 1
Location: McLean, VA, USA