Data Software Engineer - AWS - Java
at Epam Systems
Work from home, Yucatán, Mexico
Start Date | Expiry Date | Salary | Posted On | Experience | Skills | Telecommute | Sponsor Visa
---|---|---|---|---|---|---|---
Immediate | 15 Nov, 2024 | Not Specified | 16 Aug, 2024 | N/A | Hive, Agile Methodologies, Hadoop, AWS, Kafka, Spark, Pair Programming, Kubernetes, Code, Scalability, grep, Algorithms, Docker, Design | No | No
Required Visa Status:
Citizen | GC (Green Card) | US Citizen | Student Visa | H1B | CPT | OPT | H4 (Spouse of H1B)
Employment Type:
Full Time | Part Time | Permanent | Independent - 1099 | Contract - W2 | C2H Independent | C2H W2 | Contract - Corp 2 Corp | Contract to Hire - Corp 2 Corp
Description:
We are looking for a dynamic Data Engineer with a blend of traditional Java engineering skills and experience in the Big Data space. This role requires strong knowledge of API development and AWS, along with a passion for working with big data technologies such as Hive, Hadoop, and Spark. If you enjoy working on an industry-leading Data Platform and contributing to open-source communities, this opportunity is for you!
EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.
REQUIREMENTS
- 7+ years of core and server-side Java programming (Spring, concurrency, streams, lambdas)
- Strong knowledge of the Hadoop ecosystem (Spark, Hadoop, Hive)
- Extensive experience with cloud computing platforms (AWS, EMR, S3, Kubernetes, Docker)
- Proficiency with infrastructure-as-code tools (Terraform, Helm)
- Experience with microservice architecture, design, and best practices for scalability
- Familiarity with Agile methodologies (Scrum, code reviews, pair programming)
- Experience with Kafka, including Spark Streaming or Flink
- Strong skills in performance and scalability tuning, algorithms, and computational complexity
- Proficiency with Linux and Python (bash scripting, grep, sed, awk)
- Passion for open-source contributions
Responsibilities:
- Write clear, efficient, and well-tested code
- Collaborate with other experienced software engineers to drive improvements to our technology
- Design and develop new services and software solutions
- Build and track metrics to ensure high-quality results
- Work independently with little to no guidance, and support and coach junior team members
- Build rapport with other engineering teams to ensure seamless integration
- Develop scalable and highly performant distributed systems, focusing on availability, monitoring, and resiliency
- Help shape the future of Data Lakes and take architectural ownership for various critical components and systems
- Evolve development standards and design patterns
- Deploy and maintain applications in production environments
- Communicate and document solutions and design decisions effectively
REQUIREMENT SUMMARY
Min: N/A | Max: 5.0 year(s)
Computer Software/Engineering
IT Software - Application Programming / Maintenance
Software Engineering
Graduate
Proficient
1
Work from home, Mexico