Data Engineer at Gainwell Technologies LLC
Andhra Pradesh, India - Full Time


Start Date

Immediate

Expiry Date

03 Jan, 26

Salary

0.0

Posted On

05 Oct, 25

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Engineering, ETL Processes, Apache Spark, AWS, Azure, GCP, Python, Scala, Databricks, Project Planning, Communication, Leadership, Data Architecture, Data Transformation, Data Cleansing, Data Normalization

Industry

IT Services and IT Consulting

Description
Summary

Your role in our mission

Essential Job Functions
- Design, develop, and deploy pipelines, including ETL processes, using the Apache Spark framework.
- Monitor, manage, validate, and test (including with synthetic data) the extraction, movement, transformation, loading, normalization, cleansing, and updating of data in product development.
- Coordinate with stakeholders to understand their needs and deliver with a focus on quality, reuse, consistency, and security.
- Collaborate with team members on various models and schemas.
- Collaborate with team members on documenting source-to-target mappings.
- Conceptualize and visualize frameworks.
- Communicate effectively with various stakeholders.

What we're looking for

Basic Qualifications
- Bachelor's degree in computer science or a related field
- 3 years of relevant experience in ETL processing/data architecture, or equivalent education
- 3+ years of experience working with big data technologies on AWS/Azure/GCP
- 2+ years of experience with the Apache Spark/Databricks framework (Python/Scala)
- Databricks and AWS developer/architect certifications a big plus

Other Qualifications
- Strong project planning and estimating skills related to area of expertise
- Strong communication skills
- Good leadership skills to guide and mentor the work of less experienced personnel
- Ability to be a high-impact player on multiple simultaneous engagements
- Ability to think strategically, balancing long- and short-term priorities

What you should expect in this role
Working environment: Remote
Responsibilities
Design, develop, and deploy data pipelines including ETL processes using Apache Spark. Collaborate with stakeholders to ensure data quality and effective communication throughout the development process.
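The extract-cleanse-normalize-load cycle described above can be sketched in miniature. This is a minimal illustration in plain Python; all record and field names are hypothetical, and a production pipeline for this role would express the same steps as Apache Spark/Databricks transformations rather than list operations.

```python
# Illustrative mini-ETL: extract raw records, cleanse and normalize them,
# then load them into a target collection. Field names are hypothetical.

def extract():
    # Raw source rows, as they might arrive from an upstream system.
    return [
        {"member_id": " 1001 ", "state": "ap", "amount": "250.00"},
        {"member_id": "1002", "state": "TS", "amount": None},  # incomplete row
        {"member_id": "1003", "state": " Ka", "amount": "75.5"},
    ]

def transform(rows):
    cleaned = []
    for row in rows:
        if row["amount"] is None:  # cleansing: drop incomplete rows
            continue
        cleaned.append({
            "member_id": row["member_id"].strip(),       # normalization:
            "state": row["state"].strip().upper(),       # trim and standardize
            "amount": float(row["amount"]),              # cast to numeric type
        })
    return cleaned

def load(rows, target):
    # Append the transformed rows to the target store; return the row count.
    target.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
```

On Spark the same source-to-target mapping would typically become a chain of DataFrame operations (`filter`, `withColumn`, `write`), with the mapping itself documented per column as the job description requires.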