ETL Data Engineer at Empower Annuity Insurance Company of America
Bengaluru, Karnataka, India
Full Time


Start Date

Immediate

Expiry Date

26 Mar, 26

Salary

0.0

Posted On

26 Dec, 25

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Python, SQL, AWS, ETL, Data Engineering, CI/CD, Data Processing, Glue, EMR, Lambda, S3, Kafka, Problem-Solving, Debugging, Troubleshooting, Code Reviews

Industry

Financial Services

Description
Our vision for the future is based on the idea that transforming financial lives starts by giving our people the freedom to transform their own. We have a flexible work environment and fluid career paths. We not only encourage but celebrate internal mobility. We also recognize the importance of purpose, well-being, and work-life balance. Within Empower and our communities, we work hard to create a welcoming and inclusive environment, and our associates dedicate thousands of hours to volunteering for causes that matter most to them. Chart your own path and grow your career while helping more customers achieve financial freedom. Empower Yourself.

The ETL Data Engineer is responsible for designing, developing, and maintaining backend data pipelines and ETL solutions. This role focuses on building scalable and reliable data processing using Python, SQL, and AWS, supporting analytics, reporting, and downstream applications. The engineer contributes across the development lifecycle, from requirements and design through implementation, testing, deployment, and ongoing support.

What you will do:

- Design, build, and maintain ETL pipelines and backend data workflows using Python and SQL.
- Develop and optimize data processing solutions on AWS using services such as Glue, EMR, Lambda, and S3, under guidance from senior engineers and architects.
- Implement and support CI/CD pipelines to automate build, test, and deployment processes.
- Collaborate with data engineers, architects, product owners, and business partners to understand requirements and translate them into clear technical tasks.
- Write clean, maintainable, and well-tested code that aligns with team standards and architectural guidelines.
- Troubleshoot and resolve data, performance, and reliability issues in lower and production environments, with support from senior team members as needed.
- Participate in code reviews to give and receive feedback and help uphold coding and quality standards.
- Contribute to design and technical discussions by sharing ideas and raising risks or improvement opportunities.
- Continuously learn and adopt tools, technologies, and development practices relevant to data engineering and cloud platforms.

What you will bring:

- Bachelor’s degree in Computer Science or a related field, or equivalent practical experience.
- 3+ years of experience in software development, with a focus on backend or data engineering.
- Proficiency in Python for backend development and data processing tasks.
- Strong SQL skills, including working with large datasets and query optimization.
- Hands-on experience with AWS services, ideally including Glue, EMR, Lambda, and S3, as well as Kafka topics.
- Experience working with CI/CD pipelines and tools that support automated build, test, and deployment.
- Solid understanding of software development methodologies and best practices, including version control and automated testing.
- Strong problem-solving, debugging, and troubleshooting skills.

What will set you apart:

- Experience with Java or JavaScript is a plus but not required.

We are an equal opportunity employer with a commitment to diversity. All individuals, regardless of personal characteristics, are encouraged to apply. All qualified applicants will receive consideration for employment without regard to age, race, color, national origin, ancestry, sex, sexual orientation, gender, gender identity, gender expression, marital status, pregnancy, religion, physical or mental disability, military or veteran status, genetic information, or any other status protected by applicable state or local law.

Joining our talent community will allow you to stay connected via email and receive news and updates. Want the latest money news and views shaping how we live, work and play? Sign up for Empower’s free newsletter and check out The Currency.
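To give a concrete sense of the SQL skills listed above (working with large datasets and query optimization), here is a minimal, hypothetical illustration using Python's built-in SQLite module as a stand-in for whatever database engine the team actually uses; the table and column names are invented for the example.

```python
import sqlite3

# Hypothetical table of transactions; SQLite stands in for a real warehouse.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE txns (id INTEGER PRIMARY KEY, account TEXT, amount REAL)"
)
conn.executemany(
    "INSERT INTO txns (account, amount) VALUES (?, ?)",
    [(f"A-{i % 100}", float(i)) for i in range(1000)],
)

# Without an index, filtering by account forces a full-table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM txns WHERE account = 'A-7'"
).fetchall()

# After adding an index, the engine can seek directly to matching rows.
conn.execute("CREATE INDEX idx_txns_account ON txns (account)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM txns WHERE account = 'A-7'"
).fetchall()

total = conn.execute(
    "SELECT SUM(amount) FROM txns WHERE account = 'A-7'"
).fetchone()[0]
```

Comparing the two query plans (a scan versus an index search) is the kind of before/after check that query-optimization work typically involves.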
Remote and Hybrid Positions

For remote and hybrid positions you will be required to provide reliable high-speed internet with a wired connection, as well as a place in your home to work with limited disruption. You must have reliable connectivity from an internet service provider that offers fiber, cable, or DSL internet. Other necessary computer equipment will be provided. You may be required to work in the office if you do not have an adequate home work environment and the required internet connection.

Follow Empower on Facebook, LinkedIn, X, Instagram, & Glassdoor.

How To Apply:

In case you would like to apply to this job directly from the source, please click here.

Responsibilities
The ETL Data Engineer is responsible for designing, developing, and maintaining backend data pipelines and ETL solutions. This role focuses on building scalable and reliable data processing to support analytics and reporting.
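The extract-transform-load pattern at the heart of this role can be sketched in a few lines of plain Python. This is a minimal, hypothetical example: the CSV data, field names, and in-memory "target" are invented for illustration; in practice the source and sink would typically be S3 objects handled via Glue or boto3.

```python
import csv
import io

# Invented sample source data; a real pipeline would read from S3 or a feed.
RAW_CSV = """account_id,balance
A-100,2500.50
A-101,
A-102,1200.00
"""

def extract(source: str) -> list[dict]:
    """Parse CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(source)))

def transform(rows: list[dict]) -> list[dict]:
    """Drop rows with missing balances and cast the rest to float."""
    return [
        {"account_id": r["account_id"], "balance": float(r["balance"])}
        for r in rows
        if r["balance"]
    ]

def load(rows: list[dict]) -> list[dict]:
    """Stand-in for writing to a target such as S3 or a warehouse table."""
    target: list[dict] = []
    target.extend(rows)
    return target

result = load(transform(extract(RAW_CSV)))
```

Keeping each stage a small, pure function like this is what makes a pipeline straightforward to unit-test and review, in line with the posting's emphasis on well-tested code.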