Data Engineer at Gathern
Riyadh, Riyadh Region, Saudi Arabia - Full Time


Start Date

Immediate

Expiry Date

16 Dec, 24

Salary

Not specified

Posted On

18 Sep, 24

Experience

3 year(s) or above

Remote Job

No

Telecommute

No

Sponsor Visa

No

Skills

Relational Databases, AWS, Information Technology, Programming Languages, Airflow, Talend, Data Modeling, Kubernetes, Azure, Communication Skills, Statistics, Google Cloud Platform, SQL, Data Engineering, Docker, Snowflake, ETL Tools, Computer Science, Kafka

Industry

Information Technology/IT

Description
  • Develop and maintain scalable ETL (Extract, Transform, Load) pipelines that ingest, clean, and structure data from various sources (internal and external).
  • Data Infrastructure: Design, implement, and maintain cloud-based data storage solutions (e.g., GCP, AWS, Azure, Snowflake), ensuring data availability, reliability, and scalability.
  • Collaborate with data governance teams to ensure that all data adheres to quality standards, security protocols, and compliance with relevant regulations (e.g., GDPR, CCPA).
  • Continuously monitor and optimize data pipelines, database performance, and storage solutions to ensure efficient data processing and access.
  • Work closely with data scientists, analysts, and business intelligence teams to understand data requirements and provide reliable data solutions.
  • Automate recurring data tasks, such as data cleaning, transformation, and loading, to ensure the system operates smoothly and with minimal manual intervention.
  • Design and implement database schemas, dimensional models, and data lakes to support analytics and business intelligence efforts.
  • Create and maintain detailed documentation for data pipelines, infrastructure, and workflows to ensure transparency and ease of collaboration.
  • Implement and maintain data security measures such as role-based access control (RBAC), encryption, and backups to protect sensitive data.
  • Set up monitoring tools and proactively identify and resolve data-related issues (e.g., performance bottlenecks, failures).

REQUIREMENTS

  • Bachelor’s degree in a field such as computer science, information technology, statistics, or mathematics.
  • A minimum of 3 years of experience in data engineering or a related field.
  • Strong experience with SQL and relational databases (e.g., PostgreSQL, MySQL).
  • Understanding of data modeling, data lakes, and data warehouses.
  • Knowledge of data warehousing solutions such as BigQuery, Snowflake, or Redshift.
  • Proficiency with cloud platforms such as AWS or Azure; Google Cloud Platform (GCP) is preferred.
  • Proficiency in programming languages such as Python.
  • Experience with big data technologies such as Apache Hadoop, Spark, and Kafka.
  • Familiarity with ETL tools such as Airflow, Talend, or Apache NiFi.
  • Familiarity with containerization and orchestration tools like Docker and Kubernetes.
  • Ability to analyze and optimize large, complex datasets, with strong business acumen and an objectives-oriented mindset.
  • Strong problem-solving skills and ability to troubleshoot complex data and infrastructure issues.
  • Excellent communication skills to work effectively with cross-functional teams (e.g., analytics, product, marketing, and growth).

  • Familiarity with CI/CD practices for data pipelines.


