Data Engineer at Steampunk
McLean, VA 22102, USA
Full Time


Start Date

Immediate

Expiry Date

25 Jul 2025

Salary

$160,000

Posted On

26 Apr 2025

Experience

8 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Optimization, Public Trust, Hadoop, Kafka, Amazon S3, SAP, Customer Service Skills, Cloud, Data Products, Project Teams, Scripting Languages, Solr, Relational Databases, Cloud Services, Developers, Elasticsearch, Enterprise Architecture, Pipelines, Spark, Data Engineering

Industry

Information Technology/IT

Description

CONTRIBUTIONS

We are looking for a seasoned Data Engineer to work with our team and our clients to develop enterprise-grade data platforms, services, and pipelines. We are looking for more than just a “Data Engineer”: a technologist with excellent communication and customer service skills and a passion for data and problem solving.

  • Lead and architect the migration of data environments with a focus on performance and reliability.
  • Assess and understand the ETL jobs, workflows, BI tools, and reports.
  • Address technical inquiries concerning customization, integration, enterprise architecture and general feature / functionality of data products.
  • Experience crafting database / data warehouse solutions in the cloud (preferably AWS; alternatively Azure or GCP).
  • Key must-have skills: Python and AWS.
  • Support an Agile software development lifecycle.
  • You will contribute to the growth of our Data Exploitation Practice!

QUALIFICATIONS

  • Ability to hold a position of public trust with the US government.
  • 8+ years of industry experience coding commercial software and a passion for solving complex problems.
  • 8+ years of direct Data Engineering experience with tools such as:
  • Big data tools: Hadoop, Spark, Kafka, etc.
  • Relational SQL and NoSQL databases, including Postgres and Cassandra
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • AWS cloud services: EC2, EMR, RDS, Redshift
  • Data streaming systems: Storm, Spark-Streaming, etc.
  • Search tools: Solr, Lucene, Elasticsearch
  • Object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
  • Amazon S3, Athena, Redshift Spectrum, AWS Glue, AWS Glue Catalog, AWS Functions, and Amazon EC2 with SQL Server Developer.
  • Advanced working SQL knowledge, including query authoring and optimization, as well as working familiarity with a variety of relational databases.
  • Experience with message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Experience manipulating, processing, and extracting value from large, disconnected datasets.
  • Experience manipulating structured and unstructured data for analysis.
  • Experience constructing complex queries to analyze results using databases or in a data processing development environment.
  • Experience with data modeling tools and process.
  • Experience architecting data systems (transactional and warehouses).
  • Experience aggregating results and/or compiling information for reporting from multiple datasets.
  • Experience working in an Agile environment.
  • Experience supporting project teams of developers and data scientists who build web-based interfaces, dashboards, reports, and analytics/machine learning models.
  • Experience with SAP.
  • Must hold a current Public Trust clearance.
Responsibilities

Please refer to the job description above for details.
