Spec, IT (ETL Developer) at Baxter International Inc.
Bangalore, Karnataka, India
Full Time


Start Date

Immediate

Expiry Date

26 Mar, 26

Salary

0.0

Posted On

26 Dec, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

ETL, Data Transformation, PySpark, IBM DataStage, SQL, Python, AWS, UNIX, Control-M, Troubleshooting, Documentation, Root Cause Analysis, Performance Tuning, Shell Scripting, Snowflake, Technical Support

Industry

Medical Equipment Manufacturing

Description
This is where your work makes a difference. At Baxter, we believe every person—regardless of who they are or where they are from—deserves a chance to live a healthy life. It was our founding belief in 1931 and continues to be our guiding principle. We are redefining healthcare delivery to make a greater impact today, tomorrow, and beyond. Our Baxter colleagues are united by our Mission to Save and Sustain Lives. Together, our community is driven by a culture of courage, trust, and collaboration. Every individual is empowered to take ownership and make a meaningful impact. We strive for efficient and effective operations, and we hold each other accountable for delivering exceptional results. Here, you will find more than just a job—you will find purpose and pride.

This role performs development work and technical support for our data transformation and ETL jobs in support of a global data warehouse, and communicates results to internal customers. It requires the ability to work independently as well as in cooperation with a variety of customers and other technical professionals.

What you'll be doing

- Develop new ETL/data transformation jobs using PySpark and IBM DataStage in AWS.
- Enhance and support existing ETL/data transformation jobs.
- Explain technical solutions and resolutions to internal customers and communicate feedback to the ETL team.
- Perform technical code reviews for peers moving code into production.
- Perform and review integration testing before production migrations.
- Provide a high level of technical support and perform root cause analysis for problems experienced within your area of functional responsibility.
- Document technical specs from business communications.

What you'll bring

- 5+ years of ETL experience.
- Experience with core Python programming for data transformation.
- Intermediate-level PySpark skills: can read, understand, and debug existing code, and write simple PySpark code from scratch.
- Strong knowledge of SQL fundamentals, including subqueries; can tune queries with execution hints to improve performance.
- IBM DataStage experience preferred.
- Able to write SQL sufficient for most business requirements: pulling data from sources, applying rules to the data, and loading target data.
- Proven track record in troubleshooting ETL jobs and addressing production issues such as performance tuning, reject handling, and ad-hoc reloads.
- Proficient in developing optimization strategies for ETL processes.
- Basic AWS technical support skills: able to log in, find existing jobs, and check run status and logs.
- Able to run and monitor jobs via Control-M; understands data dependencies and how to schedule jobs in Control-M.
- Can create clear and concise documentation and communications, and document technical specs from business communications.
- Ability to coordinate and aggressively follow up on incidents and problems, perform diagnosis, and provide resolution to minimize service interruption.
- Ability to prioritize and work on multiple tasks simultaneously; effective in cross-functional and global environments, managing multiple concurrent assignments with strong communication skills.
- A self-starter who works well independently and on team projects.
- Experienced in analyzing business requirements, defining granularity, source-to-target mapping of data elements, and full technical specifications.
- Experienced working at the command line in various flavors of UNIX, with a basic understanding of shell scripting in bash and Korn shell.
- Candidates with a Snowflake background or certification will be preferred.

Education and/or Experience

- Bachelor of Science in computer science or equivalent
- 5+ years of ETL and SQL experience
- 3+ years of Python and PySpark experience
- 3+ years of AWS and UNIX experience
- 2+ years of IBM DataStage experience
- 1+ years of Snowflake experience

Preferred certifications:

- AWS Certified Cloud Practitioner (amazon.com)
- Certified DataStage Professional
- Python and PySpark certifications
- SnowPro Certification

Equal Employment Opportunity
Baxter is an equal opportunity employer. Baxter evaluates qualified applicants without regard to race, color, religion, gender, national origin, age, sexual orientation, gender identity or expression, protected veteran status, disability/handicap status, or any other legally protected characteristic.

Reasonable Accommodations
Baxter is committed to working with and providing reasonable accommodations to individuals with disabilities globally. If, because of a medical condition or disability, you need a reasonable accommodation for any part of the application or interview process, please click on the link here and let us know the nature of your request along with your contact information.

Recruitment Fraud Notice
Baxter has discovered incidents of employment scams, where fraudulent parties pose as Baxter employees, recruiters, or other agents and engage with online job seekers in an attempt to steal personal and/or financial information. To learn how you can protect yourself, review our Recruitment Fraud Notice.

No matter your role at Baxter, your talent, skills, and time have a direct impact on people's lives. Since 1931, we have been at the forefront of innovation by bringing smarter, more personalized care to patients around the world. Now, we're more determined than ever to make a lasting impact as we redefine healthcare delivery across the care journey.
Our Mission to Save and Sustain Lives motivates us as we create a culture in which each of us can succeed. This is where you belong.
Responsibilities
The role involves developing new ETL/data transformation jobs and enhancing existing ones using PySpark and IBM DataStage in AWS. Additionally, the developer will provide technical support and perform root cause analysis for issues within their functional area.