Lead GCP Network and Data Engineer at EPAM Systems Inc
Remote, Sicilia, Portugal
Full Time


Start Date

Immediate

Expiry Date

22 May, 25

Salary

Not specified

Posted On

19 Apr, 25

Experience

1 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Python, Cloud Computing, Data Engineering, Kubernetes, Cloud Storage, SQL, Infrastructure as Code

Industry

Information Technology/IT

Description

We are looking for an experienced Lead GCP Network and Data Engineer to join our team.
The role focuses on planning, deploying, and optimizing our network and data architecture on Google Cloud Platform (GCP), with foundational experience in Microsoft Azure also required. The ideal candidate combines deep data engineering expertise with strong network management skills in cloud environments.

REQUIREMENTS

  • Minimum of 5 years in network and data engineering
  • At least 1 year of relevant leadership experience
  • Profound knowledge and experience with GCP cloud computing, networking, and infrastructure
  • Basic skills in Azure Networking and Azure Identity/Principal Management
  • Proficiency in Python, PySpark, and SQL
  • Hands-on experience with Databricks and extensive expertise in Kubernetes
  • Familiarity with Cloud Dataflow, Cloud Pub/Sub, and Cloud Storage
  • Ability to build and maintain data pipelines in PySpark
  • Experience managing data quality and governance requirements
  • Expertise in Infrastructure as Code using Terraform
  • Background in CI/CD processes for data pipelines
RESPONSIBILITIES
  • Develop and manage secure, well-governed cloud infrastructure, primarily on GCP and Azure
  • Build and maintain scalable, reliable cloud network architectures using GCP Networking
  • Create and implement data pipelines using PySpark, ensuring data quality and governance
  • Employ Google Cloud Dataflow and Google Cloud Pub/Sub for data processing and event-based architectures
  • Apply infrastructure as code using Terraform for consistent and reproducible infrastructure setup
  • Oversee continuous integration and continuous deployment (CI/CD) strategies for data pipelines
  • Analyze and enhance the performance of SQL and Python applications
  • Work collaboratively with the team to develop our Kubernetes environment, focusing on scalability and security
  • Advance the organization’s data modeling practices and improve existing data architectures
  • Adhere to security best practices and organizational policies