Principal Software Engineer at Hewlett Packard Enterprise
San Jose, CA 95002, USA - Full Time


Start Date

Immediate

Expiry Date

30 Nov 2025

Salary

$340,500

Posted On

01 Sep 2025

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Go, Enterprise Networking, Apache Storm, Solutions Design, Redis, DevOps, Analytical Skills, Design Thinking, Kubernetes, Computer Science, Testing, Python, Distributed Systems, User Experience, ETL, Automation, Data Processing, Kafka, Databases, Computer Engineering, Pandas

Industry

Computer Software/Engineering

Description

Principal Software Engineer
This role has been designed as ‘Hybrid’ with an expectation that you will work on average 2 days per week from an HPE office.

WHO WE ARE:

Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today’s complex world. Our culture thrives on finding new and better ways to accelerate what’s next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career our culture will embrace you. Open up opportunities with HPE.

JOB DESCRIPTION:

We are seeking a software developer to join our engineering team to design, develop, and test software for our cloud-based network configuration and reporting system.
This individual will be responsible for solving complex problems and designing subsystems that will make the Mist platform the premier enterprise networking solution in the industry, and is expected to take ownership of the various software subsystems running in the cloud.

REQUIREMENTS

  • Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related field
  • 10+ years of experience in software engineering with a focus on Python, Go, or Java
  • Strong understanding of RESTful API design and development
  • 2+ years of experience working with large-scale distributed systems based on either cloud technologies or Kubernetes
  • 2+ years of experience with event-driven technologies such as Kafka and Apache Storm/Flink
  • 2+ years of experience with big-data technologies such as Apache Spark/Databricks
  • Proficiency with Redis and databases such as Cassandra/DataStax
  • Excellent problem-solving and analytical skills
  • Strong communication and collaboration skills

ADDITIONAL PREFERRED QUALIFICATIONS

  • Knowledge of enterprise networking features, Wi-Fi protocols, and implementations
  • Knowledge of microservices architecture and gRPC
  • Experience with distributed systems and large-scale data processing
  • Knowledge of DevOps principles and practices
  • Knowledge of ETL pipelines
  • Knowledge of ML training and inference
  • Knowledge of Postgres, pandas/DuckDB
  • Knowledge of Linux

ADDITIONAL SKILLS:

Cloud Architectures, Cross Domain Knowledge, Design Thinking, Development Fundamentals, DevOps, Distributed Computing, Microservices Fluency, Full Stack Development, Security-First Mindset, Solutions Design, Testing & Automation, User Experience (UX)

HEWLETT PACKARD ENTERPRISE IS EEO PROTECTED VETERAN/INDIVIDUAL WITH DISABILITIES.

HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.

How To Apply:

If you would like to apply to this job directly from the source, please click here.

Responsibilities

  • Develop software for highly scalable and fault-tolerant cloud-scale distributed applications.
  • Develop microservices using Python and/or Go (Golang).
  • Develop event-driven systems using Python and Java.
  • Develop software for AIDE’s real-time data pipeline and batch processing.
  • Develop ETL pipelines that support the training and inference of various ML models, using big-data frameworks such as Apache Spark.
  • Build metrics, monitoring, and structured logging into the product, enabling fast detection and recovery during service degradation.
  • Write unit, integration, and functional tests that ensure your code is safe for refactoring and continuous delivery.
  • Participate in collaborative, DevOps-style lean practices with the rest of the team.