Senior Data Engineer (10+) - W2 Role at Narvee Technologies
Dallas, TX 75201, USA
Full Time


Start Date

Immediate

Expiry Date

05 Dec, 25

Salary

$70.00 per hour

Posted On

07 Sep, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Python, Performance Tuning, Jenkins, Communication Skills, Data Transformation, Version Control, Git, Processing, Kafka, SQL, Glue

Industry

Information Technology/IT

Description

We are seeking an experienced Data Engineer with strong expertise in Python, PySpark, AWS, Glue, Redshift, and Kafka. The ideal candidate will be responsible for designing and implementing large-scale data pipelines, ensuring data quality, and enabling real-time and batch data processing in a cloud environment. This is a highly technical role requiring strong problem-solving skills and the ability to collaborate with business and technology stakeholders.

REQUIRED SKILLS & QUALIFICATIONS

  • 5+ years of experience as a Data Engineer in cloud-based environments.
  • Strong hands-on expertise with Python and PySpark for data transformation.
  • Proven experience with AWS services (Glue, Redshift, S3, Lambda, EMR, IAM).
  • Expertise in Kafka for real-time event streaming and processing.
  • Solid knowledge of data warehousing concepts, schema design, and performance tuning.
  • Strong experience with SQL (Redshift/PostgreSQL/MySQL).
  • Knowledge of CI/CD pipelines and version control (Git, Jenkins, etc.).
  • Excellent analytical and communication skills.

Job Type: Contract
Pay: $60.00 - $70.00 per hour
Expected hours: 40 per week
Work Location: In person

How To Apply:

In case you would like to apply to this job directly from the source, please click here.

Responsibilities
  • Design, develop, and maintain scalable ETL/ELT pipelines using Python, PySpark, and AWS Glue.
  • Build and manage data warehouses and data lakes on AWS (Redshift, S3, Glue Catalog).
  • Implement real-time streaming pipelines using Apache Kafka.
  • Optimize performance of data pipelines and queries for large datasets.
  • Work closely with analysts, data scientists, and business teams to deliver clean, reliable, and well-structured data.
  • Ensure data governance, quality, and security standards are followed.
  • Automate data workflows and monitoring to improve reliability and efficiency.
  • Troubleshoot and resolve complex data pipeline issues in production.