Developer
at Tata Consultancy Services
Charlotte, North Carolina, USA
| Start Date | Expiry Date | Salary | Posted On | Experience | Skills | Telecommute | Sponsor Visa |
|---|---|---|---|---|---|---|---|
| Immediate | 30 Apr, 2025 | USD 130,000 Annual | 31 Jan, 2025 | N/A | Query Optimization, Git, Java, Apache Kafka, Python, Scala, Data Processing, Hive, Apache Spark, Data Manipulation, Computer Science | No | No |
Required Visa Status:
- Citizen
- GC (Green Card)
- US Citizen
- Student Visa
- H1B
- CPT
- OPT
- H4 (Spouse of H1B)
Employment Type:
- Full Time
- Part Time
- Permanent
- Independent - 1099
- Contract – W2
- C2H Independent
- C2H W2
- Contract – Corp 2 Corp
- Contract to Hire – Corp 2 Corp
Description:
Job Summary:
- Seeking a talented and experienced Hadoop and Spark Developer with strong Java expertise to join our data engineering team.
- The ideal candidate will have a solid understanding of big data technologies, hands-on experience with the Hadoop ecosystem, and the ability to build and optimize data pipelines and processing systems using Spark and Java.
Key Responsibilities:
- Develop, test, and deploy scalable big data solutions using Hadoop and Spark.
- Write efficient and optimized code in Java to process large datasets.
- Design and implement batch and real-time data processing pipelines using Spark.
- Monitor, troubleshoot, and enhance the performance of Spark jobs.
- Work closely with cross-functional teams to integrate big data solutions into existing systems.
- Debug and resolve complex technical issues related to distributed computing.
- Collaborate on system architecture and contribute to technical design discussions.
Required Skills:
- Strong expertise in Java, with experience in writing optimized, high-performance code.
- Solid experience in the Hadoop ecosystem (HDFS, Hive) and Apache Spark (RDD, DataFrame, Dataset, Spark SQL, Spark Streaming).
- Proficiency in designing and building ETL pipelines for big data processing.
- Experience with query optimization and data manipulation using SQL-based technologies like Hive or Impala.
- Hands-on experience with Git or similar version control systems.
- Strong understanding of Linux/Unix based environments for development and deployment.
Preferred Skills:
- Experience with Apache Kafka.
- Exposure to DevOps practices, including CI/CD pipelines.
- Knowledge of Python or Scala is a plus.
Salary Range - $100,000-$130,000 a year
REQUIREMENT SUMMARY
Experience: Min: N/A, Max: 5.0 year(s)
Information Technology/IT
IT Software - Application Programming / Maintenance
Software Engineering
Graduate
Proficient
1
Charlotte, NC, USA