Developer at Tata Consultancy Services
Charlotte, North Carolina, USA
Full Time


Start Date

Immediate

Expiry Date

30 Apr, 25

Salary

$130,000

Posted On

31 Jan, 25

Experience

0+ years

Remote Job

No

Telecommute

No

Sponsor Visa

No

Skills

Query Optimization, Git, Java, Apache Kafka, Python, Scala, Data Processing, Hive, Apache Spark, Data Manipulation, Computer Science

Industry

Information Technology/IT

Description

Job Summary:

  • Seeking a talented and experienced Hadoop and Spark Developer with strong Java expertise to join our data engineering team.
  • The ideal candidate will have a solid understanding of big data technologies, hands-on experience with the Hadoop ecosystem, and the ability to build and optimize data pipelines and processing systems using Spark and Java.

Key Responsibilities:

  • Develop, test, and deploy scalable big data solutions using Hadoop and Spark.
  • Write efficient and optimized code in Java to process large datasets.
  • Design and implement batch and real-time data processing pipelines using Spark (see the sketch after this list).
  • Monitor, troubleshoot, and enhance the performance of Spark jobs.
  • Work closely with cross-functional teams to integrate big data solutions into existing systems.
  • Debug and resolve complex technical issues related to distributed computing.
  • Collaborate on system architecture and contribute to technical design discussions.
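
By way of illustration, a minimal sketch in Java of the kind of batch job these responsibilities describe, using the public Spark SQL API. The HDFS path, table name, and column names (status, event_date) are hypothetical, not taken from this posting:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import static org.apache.spark.sql.functions.col;

    public class DailyEventCounts {
        public static void main(String[] args) {
            // Hive support lets the job read and write managed Hive tables
            SparkSession spark = SparkSession.builder()
                    .appName("DailyEventCounts")
                    .enableHiveSupport()
                    .getOrCreate();

            // Read raw events from HDFS (hypothetical path)
            Dataset<Row> events = spark.read().parquet("hdfs:///data/raw/events");

            // Keep successful events and count them per day
            Dataset<Row> daily = events
                    .filter(col("status").equalTo("OK"))
                    .groupBy(col("event_date"))
                    .count();

            // Overwrite the results as a Hive table (hypothetical name)
            daily.write().mode("overwrite").saveAsTable("analytics.daily_event_counts");

            spark.stop();
        }
    }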

Required Skills:

  • Strong expertise in Java, with experience in writing optimized, high-performance code.
  • Solid experience with the Hadoop ecosystem (HDFS, Hive) and Apache Spark (RDD, DataFrame, Dataset, Spark SQL, Spark Streaming).
  • Proficiency in designing and building ETL pipelines for big data processing.
  • Experience with query optimization and data manipulation using SQL-based technologies like Hive or Impala (see the sketch after this list).
  • Hands-on experience with Git or similar version control systems.
  • Strong understanding of Linux/Unix based environments for development and deployment.
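
As a sketch of the query-optimization skill above: when a Hive table is partitioned (here by a hypothetical event_date column on a hypothetical analytics.events table), filtering on the partition column lets Spark skip whole partitions instead of scanning the full table, and explain() makes that visible in the physical plan:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import static org.apache.spark.sql.functions.col;

    public class PruningCheck {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("PruningCheck")
                    .enableHiveSupport()
                    .getOrCreate();

            // Filtering on the partition column (hypothetical: event_date)
            // lets Spark prune partitions rather than scan everything
            Dataset<Row> oneDay = spark.table("analytics.events")
                    .filter(col("event_date").equalTo("2025-01-31"));

            // When pruning applies, the physical plan lists the predicate
            // under PartitionFilters; a full-table scan here is usually the
            // first thing to fix
            oneDay.explain(true);

            spark.stop();
        }
    }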

Preferred Skills:

  • Experience with Apache Kafka (see the sketch after this list).
  • Exposure to DevOps practices, including CI/CD pipelines.
  • Knowledge of Python or Scala is a plus.
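
Since Apache Kafka and Spark Streaming both appear in this posting, here is a minimal Structured Streaming consumer in Java; the broker address and topic name are placeholders, and the console sink stands in for a production sink such as HDFS, Hive, or another Kafka topic:

    import java.util.concurrent.TimeoutException;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.streaming.StreamingQuery;
    import org.apache.spark.sql.streaming.StreamingQueryException;

    // Requires the spark-sql-kafka-0-10 connector on the classpath
    public class KafkaTail {
        public static void main(String[] args)
                throws TimeoutException, StreamingQueryException {
            SparkSession spark = SparkSession.builder()
                    .appName("KafkaTail")
                    .getOrCreate();

            // Subscribe to a topic (placeholder broker and topic names)
            Dataset<Row> stream = spark.readStream()
                    .format("kafka")
                    .option("kafka.bootstrap.servers", "broker:9092")
                    .option("subscribe", "events")
                    .load();

            // Kafka delivers the payload as binary; cast it to a string
            Dataset<Row> lines = stream.selectExpr("CAST(value AS STRING) AS line");

            // Print each micro-batch to stdout for demonstration
            StreamingQuery query = lines.writeStream()
                    .format("console")
                    .outputMode("append")
                    .start();

            query.awaitTermination();
        }
    }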

Salary Range: $100,000-$130,000 a year
