AWS Data Engineer at Appex Innovation
Fort Mill, South Carolina, United States - Full Time


Start Date

Immediate

Expiry Date

03 Jun 2026

Salary

Not specified

Posted On

05 Mar 2026

Experience

10 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Python, PySpark, AWS Glue, AWS Lambda, Kafka, KSQL, Apache Flink, Data Quality Frameworks, ETL/ELT, Streaming, Data Validation, Data Monitoring, Data Reconciliation, Cloud Optimization, Data Integrity, Financial Services

Industry

Financial Services

Description
We have a job opportunity for an AWS Data Engineer with our partner in Fort Mill, SC for a hybrid role.

Job Details:

Job Title: Sr. Data Engineer (Mid-Senior Level) - AWS & Streaming
Experience Level: Mid-Senior (10+ years preferred)
Location: Fort Mill, SC (Hybrid, 3 days per week in office)
Type: Contract (W2)
Domain: Financial Services

Role Summary:

We are seeking a Mid-Senior Data Engineer with strong expertise in AWS-based data engineering, real-time streaming technologies, and enterprise-grade data quality frameworks. The ideal candidate will design, build, and optimize scalable batch and streaming data pipelines, implement robust data validation and monitoring processes, and support mission-critical analytics platforms.

Key Responsibilities:

- Develop and maintain scalable ETL/ELT pipelines using AWS Glue, PySpark, and Python (see the illustrative sketch after this description)
- Build event-driven workflows using AWS Lambda
- Design and manage real-time streaming solutions using Kafka, KSQL, and Apache Flink
- Implement and enforce comprehensive data quality frameworks, including validation, profiling, monitoring, and reconciliation
- Optimize data processing performance, scalability, reliability, and cost in cloud environments
- Collaborate with cross-functional teams to deliver reliable, production-grade data platforms and ensure data integrity across the pipeline

Required Skills:

- Strong hands-on experience with Python and PySpark
- Proven expertise in AWS Glue, Lambda, and other cloud-native data services
- Solid experience with the Kafka ecosystem (topics, partitions, consumer groups, streaming patterns)
- Demonstrated experience building and supporting data quality frameworks (validation rules, reconciliation checks, profiling, anomaly detection)
- Strong understanding of distributed data processing and scalable architecture patterns

Good-to-Have Skills:

- Experience with Apache Flink for real-time stream processing and stateful computations
- Knowledge of KSQL or other streaming SQL engines
- Exposure to CI/CD pipelines, IaC (Terraform/CloudFormation), and DevOps practices
- Familiarity with data lake/lakehouse architectures and table formats such as Iceberg, Delta, or Hudi
- Experience working in enterprise or financial data environments
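To make the first responsibility concrete, here is a minimal PySpark sketch of a batch ETL step with inline data quality checks. The S3 paths, column names (trade_id, executed_at), and the 1% null tolerance are all hypothetical placeholders, not details from this posting; a production AWS Glue job would wrap similar logic in a GlueContext and add profiling and monitoring.

# Minimal sketch of a batch ETL step with inline data quality checks.
# All paths, column names, and thresholds are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-dq-sketch").getOrCreate()

# Extract: read raw input from a placeholder location.
raw = spark.read.parquet("s3://example-bucket/raw/trades/")

# Validate: fail fast if the key column is too often null.
total = raw.count()
null_keys = raw.filter(F.col("trade_id").isNull()).count()
if total == 0 or null_keys / total > 0.01:  # hypothetical 1% tolerance
    raise ValueError(f"DQ check failed: {null_keys} null keys in {total} rows")

# Transform: deduplicate on the key and derive a partition column.
clean = (
    raw.dropDuplicates(["trade_id"])
       .withColumn("trade_date", F.to_date("executed_at"))
)

# Load: write partitioned output, then reconcile row counts.
clean.write.mode("overwrite").partitionBy("trade_date").parquet(
    "s3://example-bucket/curated/trades/"
)
written = spark.read.parquet("s3://example-bucket/curated/trades/").count()
assert written == clean.count(), "reconciliation failed: row counts differ"

The same validate-before-load pattern extends naturally to the reconciliation and anomaly-detection checks listed under Required Skills, for example by comparing source and target counts or aggregates after each write.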
Responsibilities
The role involves developing and maintaining scalable batch and streaming data pipelines with AWS Glue, building event-driven workflows with AWS Lambda, designing real-time streaming solutions with Kafka, KSQL, and Apache Flink, and implementing comprehensive data quality frameworks covering validation, profiling, monitoring, and reconciliation. A brief consumer-side sketch follows.
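As a rough illustration of the Kafka side of the role, the sketch below consumes from a single topic and applies a per-record validation rule. It assumes the kafka-python client, a local broker, and a hypothetical trades.raw topic with trade_id, symbol, and quantity fields; none of these names come from the posting.

# Minimal sketch of a consumer-side validation rule on a Kafka topic.
# Broker address, topic, group id, and schema fields are hypothetical.
import json
from kafka import KafkaConsumer  # kafka-python client, an assumed choice

consumer = KafkaConsumer(
    "trades.raw",                        # hypothetical topic
    bootstrap_servers="localhost:9092",  # placeholder broker
    group_id="dq-validator",             # consumer group for scaling out
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

REQUIRED_FIELDS = {"trade_id", "symbol", "quantity"}  # hypothetical schema

for message in consumer:
    record = message.value
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        # A production pipeline would route failures to a dead-letter topic.
        print(f"DQ failure at offset {message.offset}: missing {missing}")
    else:
        print(f"valid record {record['trade_id']} (partition {message.partition})")

Running multiple copies of this process under the same group_id lets Kafka spread the topic's partitions across them, which is the consumer-group pattern the required skills call out; Flink or KSQL would express the same validation declaratively over the stream.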