Data Engineer at KAISHI PARTNERS PTE LTD
Singapore, Singapore
Full Time


Start Date

Immediate

Expiry Date

30 Nov 2025

Salary

9,000

Posted On

31 Aug 2025

Experience

3 years or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

SQL, Python, Azure, AWS, Data Engineering, Data Science, Excel

Industry

Information Technology/IT

Description

We are working with a cutting-edge research firm to recruit a motivated Data Engineer for their team in Singapore. You'll architect and maintain the pipelines, data models, and infrastructure that power the firm's industry models, research, and consulting work.

Key Responsibilities

  • Design, develop, and maintain robust and scalable ETL pipelines in Python to power our industry models and analytics products.
  • Work with lead analysts to ensure data accuracy, completeness, and utility across multiple sources and formats.
  • Build scalable and reusable data workflows in cloud environments (GCP, AWS, or Azure).
  • Implement and maintain data quality monitoring.
  • Work effectively with SQL databases, including via automated cron jobs.
  • Maintain and extend dashboards and APIs that deliver data to both internal analysts and external clients.
  • Support the integration of new datasets, tools, and infrastructure components to enhance our analytics capabilities.

Qualifications – Must Haves

  • 1–3 years of experience in a Data Engineering, Data Science, or comparable role.
  • Proficient in Python, SQL, and Excel.
  • Strong ETL development experience.
  • Hands-on experience with at least one cloud platform (GCP, AWS, or Azure).
  • Highly autonomous—able to take a problem from definition to deployment with minimal oversight.

Nice to Haves

  • Experience with Flask, Redis, Dash, Airflow, GitHub Actions, and/or Kubernetes.
  • Familiarity with automated regression, smoke, or unit testing methodologies.
  • Experience working with messy, real-world data.