Senior Hadoop Admin at Capco Singapore
Bengaluru, Karnataka, India
Full Time


Start Date

Immediate

Expiry Date

22 Mar, 26

Salary

0.0

Posted On

22 Dec, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Hadoop Administration, Automation, Ansible, Shell Scripting, Python Scripting, DevOps, Troubleshooting, Debugging, Capacity Planning, Performance Tuning, Security Remediation, Self-Healing, Incident Management, Problem Management, Change Management

Industry

Financial Services

Description
Hadoop Admin Location: Bangalore Experience - 8-10 yrs About Us “Capco, a Wipro company, is a global technology and management consulting firm. Awarded with Consultancy of the year in the British Bank Award and has been ranked Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With our presence across 32 cities across globe, we support 100+ clients across banking, financial and Energy sectors. We are recognized for our deep transformation execution and delivery. WHY JOIN CAPCO? You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry. The projects that will transform the financial services industry. MAKE AN IMPACT Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services. #BEYOURSELFATWORK Capco has a tolerant, open culture that values diversity, inclusivity, and creativity. CAREER ADVANCEMENT With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands. DIVERSITY & INCLUSION We believe that diversity of people and perspective gives us a competitive advantage. MAKE AN IMPACT Hadoop Admin Location: Bangalore Experience - 8-10 yrs Hadoop administration Automation (Ansible, shell scripting or python scripting) DEVOPS skills (Should be able to code at least in one language preferably python Program/Project Overview Role is part of PRE-Big Data team responsible for managing Hadoop platforms. Resource will work during IND hours, and it is hybrid role. Candidate will focus on improving performance, reliability and improving the efficiency of Big Data platforms. 
Engagement Deliverable(s)

The role involves performing Big Data Administration and Engineering activities on multiple open-source platforms such as Hadoop, Kafka, HBase, and Spark. The successful candidate will possess strong troubleshooting and debugging skills.

• Other responsibilities include effective root cause analysis of major production incidents and the development of learning documentation. The person will identify and implement high-availability solutions for services with a single point of failure.
• The role involves planning and performing capacity expansions and upgrades in a timely manner to avoid scaling issues and bugs. This includes automating repetitive tasks to reduce manual effort and prevent human errors.
• The successful candidate will tune alerting and set up observability to proactively identify issues and performance problems. They will also work closely with Level-3 teams in reviewing new use cases and cluster-hardening techniques to build robust and reliable platforms.
• The role involves creating standard operating procedure documents and guidelines on effectively managing and utilizing the platforms. The person will leverage DevOps tools, disciplines (incident, problem, and change management), and standards in day-to-day operations.
• The individual will ensure that the Hadoop platform can effectively meet performance and service-level-agreement requirements. They will also perform security remediation, automation, and self-healing as required.
• The individual will concentrate on developing automations and reports to minimize manual effort, using automation tools such as shell scripting, Ansible, or Python scripting, or any other programming language.
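To give a flavour of the capacity-planning and automation work described above, here is a minimal, hypothetical Python sketch that parses `hdfs dfsadmin -report`-style output and flags DataNodes whose disk usage exceeds a threshold. The sample report text, hostnames, and the threshold value are illustrative assumptions, not from this posting.

```python
import re

# Hypothetical sample of `hdfs dfsadmin -report` output; the field names
# mirror the real report format, but the values are made up for illustration.
SAMPLE_REPORT = """\
Name: 10.0.0.11:9866 (dn1.example.com)
DFS Used%: 91.20%

Name: 10.0.0.12:9866 (dn2.example.com)
DFS Used%: 47.85%
"""

def nodes_over_threshold(report: str, threshold: float = 85.0):
    """Return (hostname, used_pct) pairs for DataNodes above the threshold."""
    flagged = []
    host = None
    for line in report.splitlines():
        # Capture the hostname from lines like "Name: <ip:port> (<hostname>)".
        m = re.match(r"Name: \S+ \((\S+)\)", line)
        if m:
            host = m.group(1)
            continue
        # Capture the usage percentage from lines like "DFS Used%: 91.20%".
        m = re.match(r"DFS Used%: ([\d.]+)%", line)
        if m and host:
            used = float(m.group(1))
            if used > threshold:
                flagged.append((host, used))
    return flagged

print(nodes_over_threshold(SAMPLE_REPORT))  # → [('dn1.example.com', 91.2)]
```

In practice a script like this would read the live report (e.g. via `subprocess`) and feed an alerting or ticketing hook; the sketch only shows the parsing and thresholding step.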

How To Apply:

In case you would like to apply to this job directly from the source, please click here.

Responsibilities
The role involves performing Big Data Administration and Engineering activities on multiple open-source platforms such as Hadoop, Kafka, HBase, and Spark. The successful candidate will focus on improving performance, reliability, and efficiency of Big Data platforms.