Developer

at Tata Consultancy Services

Charlotte, North Carolina, USA

Start Date: Immediate
Expiry Date: 30 Apr, 2025
Salary: USD 130,000 Annual
Posted On: 31 Jan, 2025
Experience: N/A
Skills: Query Optimization, Git, Java, Apache Kafka, Python, Scala, Data Processing, Hive, Apache Spark, Data Manipulation, Computer Science
Telecommute: No
Sponsor Visa: No
Required Visa Status:
Citizen, GC, US Citizen, Student Visa, H1B, CPT, OPT, H4 Spouse of H1B, GC Green Card
Employment Type:
Full Time, Part Time, Permanent, Independent - 1099, Contract - W2, C2H Independent, C2H W2, Contract - Corp 2 Corp, Contract to Hire - Corp 2 Corp

Description:

JOB DESCRIPTION

Job Summary:

  • Seeking a talented and experienced Hadoop and Spark Developer with strong Java expertise to join our data engineering team.
  • The ideal candidate will have a solid understanding of big data technologies, hands-on experience with the Hadoop ecosystem, and the ability to build and optimize data pipelines and processing systems using Spark and Java.

Key Responsibilities:

  • Develop, test, and deploy scalable big data solutions using Hadoop and Spark.
  • Write efficient and optimized code in Java to process large datasets.
  • Design and implement batch and real-time data processing pipelines using Spark.
  • Monitor, troubleshoot, and enhance the performance of Spark jobs.
  • Work closely with cross-functional teams to integrate big data solutions into existing systems.
  • Debug and resolve complex technical issues related to distributed computing.
  • Collaborate on system architecture and contribute to technical design discussions.

Required Skills:

  • Strong expertise in Java, with experience in writing optimized, high-performance code.
  • Solid experience with the Hadoop ecosystem (HDFS, Hive) and Apache Spark (RDD, DataFrame, Dataset, Spark SQL, Spark Streaming).
  • Proficiency in designing and building ETL pipelines for big data processing.
  • Experience with query optimization and data manipulation using SQL-based technologies like Hive or Impala.
  • Hands-on experience with Git or similar version control systems.
  • Strong understanding of Linux/Unix based environments for development and deployment.

Preferred Skills:

  • Experience with Apache Kafka.
  • Exposure to DevOps practices, including CI/CD pipelines.
  • Knowledge of Python or Scala is a plus.

Salary Range - $100,000-$130,000 a year




REQUIREMENT SUMMARY

Min: N/A, Max: 5.0 year(s)

Information Technology/IT

IT Software - Application Programming / Maintenance

Software Engineering

Graduate

Proficient

1

Charlotte, NC, USA