Java Backend Developer with PySpark at Qode
Pittsburgh, PA 15219, USA - Full Time


Start Date

Immediate

Expiry Date

17 Oct, 25

Salary

0.0

Posted On

19 Jul, 25

Experience

13 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Accountability, Kafka, Java, Spring Boot, Azure, Docker, Object-Oriented Design, Computer Science, AWS, Soft Skills, Hive, Platforms, Architecture, Java Frameworks, Apache Spark, Data Processing, Hibernate, Ownership, SQL

Industry

Information Technology/IT

Description

JOB SUMMARY:

We are seeking a skilled Java Backend Developer with PySpark experience to join our dynamic engineering team. The ideal candidate will have a strong foundation in backend development with Java and hands-on experience in large-scale data processing with PySpark. You will contribute to building scalable APIs, robust data processing pipelines, and backend systems that support complex business workflows and analytics.

REQUIRED QUALIFICATIONS:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
  • Strong experience in Java (Java 8 or above), with a solid understanding of object-oriented design and microservices architecture.
  • Hands-on experience with Apache Spark and PySpark for distributed data processing.
  • Good understanding of RESTful API design and implementation.
  • Familiarity with SQL and working knowledge of relational and/or NoSQL databases.
  • Experience with version control systems like Git and CI/CD practices.

PREFERRED QUALIFICATIONS:

  • Experience with Spring Boot, Hibernate, or similar Java frameworks.
  • Familiarity with big data tools (HDFS, Hive, Kafka, etc.).
  • Exposure to cloud platforms such as AWS, GCP, or Azure.
  • Understanding of containerization and orchestration tools (Docker, Kubernetes).
  • Knowledge of data lake/data warehouse architectures.

SOFT SKILLS:

  • Strong analytical and problem-solving skills.
  • Excellent communication and collaboration abilities.
  • Self-starter with a strong sense of ownership and accountability.

Responsibilities

  • Develop and maintain scalable backend services and APIs using Java (Spring Boot or similar frameworks).
  • Design and implement data pipelines using PySpark for processing and transforming large datasets (an illustrative sketch follows this list).
  • Integrate backend services with data platforms, databases, and external APIs.
  • Collaborate with data engineers, analysts, and product teams to understand requirements and deliver solutions.
  • Optimize performance of backend applications and Spark jobs.
  • Write unit tests and participate in code reviews to maintain code quality.
  • Ensure secure, reliable, and maintainable backend infrastructure.
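
For context on the PySpark pipeline work described above, the following is a minimal, illustrative sketch of a batch job; the paths, column names, and aggregation logic are hypothetical placeholders, not details taken from this posting:

    # Illustrative only: a minimal PySpark batch pipeline of the kind described
    # in the responsibilities above. All paths and column names are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

    # Read a hypothetical input dataset (path is a placeholder).
    orders = spark.read.parquet("/data/input/orders")

    # Transform: filter completed orders, derive a date column, aggregate per customer.
    daily_totals = (
        orders
        .filter(F.col("status") == "COMPLETED")
        .withColumn("order_date", F.to_date("created_at"))
        .groupBy("customer_id", "order_date")
        .agg(F.sum("amount").alias("total_amount"),
             F.count("*").alias("order_count"))
    )

    # Write the result back out, partitioned by date (path is a placeholder).
    daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
        "/data/output/daily_totals"
    )

    spark.stop()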