Lead Software Engineer (Ref 001139)
at Wells Fargo
Charlotte, North Carolina, USA
Start Date | Expiry Date | Salary | Posted On | Experience | Skills | Telecommute | Sponsor Visa |
---|---|---|---|---|---|---|---|
Immediate | 23 Apr, 2025 | Not Specified | 24 Jan, 2025 | 1 year(s) or above | Shell Scripting, Java, Implementation Experience, Red Hat Linux, Hive, Spark, Python, Computer Science, High Availability, Perl, Kafka, Map | No | No |
Required Visa Status:
US Citizen | Green Card (GC) | H1B | CPT | OPT | H4 (Spouse of H1B) | Student Visa |
Employment Type:
Full Time | Part Time | Permanent | Independent - 1099 | Contract - W2 | C2H Independent | C2H W2 | Contract - Corp 2 Corp | Contract to Hire - Corp 2 Corp |
Description:
At Wells Fargo, we want to satisfy our customers’ financial needs and help them succeed financially. We’re looking for talented people who will put our customers at the center of everything we do. Join our diverse and inclusive team where you’ll feel valued and inspired to contribute your unique skills and experience.
Help us build a better Wells Fargo. It all begins with outstanding talent. It all begins with you.
Wells Fargo Technology sets IT strategy; enhances the design, development, and operations of our systems; optimizes the Wells Fargo infrastructure footprint; provides information security; and enables continuous banking access through in-store, online, ATM, and other channels to Wells Fargo’s more than 70 million global customers.
Wells Fargo Bank N.A. seeks a Lead Software Engineer in Charlotte, NC.
Job Role and Responsibility: Wells Fargo Bank N.A. is seeking a Lead Software Engineer who will be responsible for building and managing a world-class big data platform for the enterprise data lake in the Enterprise Information Technology group at Wells Fargo. The position involves the following job duties:
- Management of the Hadoop/Ezmeral Data Fabric (MapR) on-premises Linux instances, including configuration, capacity planning and expansion, and performance tuning and monitoring.
- Frequent collaboration with the data engineering team to support development and deployment of Spark and Hadoop jobs.
- Working with end users to troubleshoot and resolve service disruptions and data accessibility issues.
- Contribution to the architecture design of the cluster to support growing demands and requirements.
- Planning and implementation of software and hardware upgrades, with the ability to utilize disaster recovery related to Hadoop platforms as needed.
- Creation and implementation of standards and best practices for cluster administration using Kafka, HBase, Spark, Hive, etc.
- Design of cluster administration automation (see the monitoring sketch after this description).
- Configuration of YARN in a multi-tenancy environment with the YARN capacity scheduler.
- Acting as a technical lead within the Hadoop Administration Engineering team, providing direction to less experienced staff or developing highly complex support and remediation strategies.
- Leading incident, problem, and change management efforts, overseeing root cause analysis, preparation of test data, testing, and remediation efforts.
- Development of new documentation, departmental technical procedures, and user guides.
- Quality assurance.
- Security and compliance, ensuring that requirements are met for supported areas.
- Oversight of the creation and updates of the business continuation plan.
Telecommuting is permitted up to 4 days a week. The candidate must appear in person at the location listed as the work address.
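The automation and YARN capacity-scheduler duties above lend themselves to a short illustration. The following is a minimal sketch, in Python, of the kind of cluster-administration automation a role like this might involve: polling the YARN ResourceManager REST API (/ws/v1/cluster/metrics and /ws/v1/cluster/scheduler) to report node health and flag heavily used capacity-scheduler queues. The ResourceManager address and the alert threshold are illustrative assumptions, not details from the posting or Wells Fargo's actual tooling.

```python
"""Hedged sketch of cluster-administration automation against the YARN
ResourceManager REST API. The RM address and threshold are assumptions."""

import json
import urllib.request

RM_URL = "http://resourcemanager.example.internal:8088"  # assumed RM address
USED_CAPACITY_ALERT = 90.0  # percent; arbitrary threshold for illustration


def fetch_json(path: str) -> dict:
    """GET a ResourceManager REST endpoint and decode the JSON payload."""
    with urllib.request.urlopen(f"{RM_URL}{path}", timeout=10) as resp:
        return json.load(resp)


def check_cluster_health() -> None:
    """Print node counts from /ws/v1/cluster/metrics."""
    metrics = fetch_json("/ws/v1/cluster/metrics")["clusterMetrics"]
    print(f"active nodes:    {metrics['activeNodes']}")
    print(f"lost nodes:      {metrics['lostNodes']}")
    print(f"unhealthy nodes: {metrics['unhealthyNodes']}")


def check_queue_usage() -> None:
    """Flag capacity-scheduler queues running close to their absolute capacity."""
    info = fetch_json("/ws/v1/cluster/scheduler")["scheduler"]["schedulerInfo"]
    for queue in info.get("queues", {}).get("queue", []):
        used = queue.get("absoluteUsedCapacity", 0.0)
        if used >= USED_CAPACITY_ALERT:
            print(f"queue {queue['queueName']} at {used:.1f}% of absolute capacity")


if __name__ == "__main__":
    check_cluster_health()
    check_queue_usage()
```

In a multi-tenant cluster managed with the capacity scheduler, a script like this would typically run on a schedule and feed an alerting system rather than print to stdout.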
Travel required: NONE
REQUIRED QUALIFICATIONS:
Degree required: Bachelor’s degree in Computer Science, or related technical field.
Amount and Type of experience required: Five (5) years of experience in the job offered or in a related position involving application development and implementation experience.
Specific skills required:
- 5 years of Hadoop experience
- 5 years of Hadoop Administration experience
- 5 years of experience in deploying and administering Hadoop Clusters
- 5 years of Red Hat Linux or UNIX experience
- 5 years of advanced scripting experience using Unix Shell Scripting, Perl, Python, Java, or PL/SQL
- 3 years of Java experience
- 3 years of experience with Big Data or Hadoop tools such as Spark, Hive, Kafka and Map
- 3 years of experience in building and managing Hadoop cluster for High Availability (HA)
- 1 year of experience with machine learning tools
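As a hedged illustration of the Spark and Hive skills named in this list (not a Wells Fargo deliverable), the sketch below shows a minimal PySpark job that reads a Hive table and writes a daily aggregate back. It assumes a cluster with Hive support enabled; the database, table, and column names are hypothetical placeholders.

```python
"""Minimal PySpark sketch of the Spark-and-Hive skill area: read a Hive
table, aggregate, and write the result back. Names are hypothetical."""

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("hive-aggregation-sketch")
    .enableHiveSupport()  # requires a Hive metastore reachable from the cluster
    .getOrCreate()
)

# Hypothetical source table; replace with a real database.table in your environment.
txns = spark.table("analytics_db.transactions")

# Compute per-account daily totals and transaction counts.
daily_totals = (
    txns.groupBy("account_id", F.to_date("txn_ts").alias("txn_date"))
        .agg(F.sum("amount").alias("total_amount"),
             F.count("*").alias("txn_count"))
)

# Write the aggregate back to Hive, partitioned by date.
(daily_totals.write
    .mode("overwrite")
    .partitionBy("txn_date")
    .saveAsTable("analytics_db.daily_account_totals"))

spark.stop()
```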
APPLICANTS WITH DISABILITIES
To request a medical accommodation during the application or interview process, visit Disability Inclusion at Wells Fargo.
WELLS FARGO RECRUITMENT AND HIRING REQUIREMENTS:
a. Third-Party recordings are prohibited unless authorized by Wells Fargo.
b. Wells Fargo requires you to directly represent your own experiences during the recruiting and hiring process.
Responsibilities:
Please refer to the job description above for details.
REQUIREMENT SUMMARY
Experience: Min 1.0, Max 5.0 year(s)
Industry: Information Technology/IT
Category: IT Software - Other
Specialization: Software Engineering
Education: Graduate, Computer Science or related technical field
Skill level: Proficient
Openings: 1
Location: Charlotte, NC, USA