Software Consultant at SAKSOFT PTE LIMITED
Singapore, Southeast, Singapore
Full Time


Start Date

Immediate

Expiry Date

17 Jul, 25

Salary

8,500

Posted On

18 Apr, 25

Experience

10 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Structures, Hive, Spark, Reliability, Reporting, Exploratory Data Analysis, Python, ETL Tools, Databases, Data Warehouse

Industry

Information Technology/IT

Description

Experience: 10+ Years
Role: Software Consultant
Key Skills:
Data analytics using Python
Big Data
Data Architecture

KEY REQUIREMENTS:

  • Collaborate with cross-functional teams to understand data requirements and business objectives.
  • Extract, clean, and transform data from various sources to ensure accuracy and reliability.
  • Develop and maintain data pipelines using Python, Spark, and PySpark (a minimal PySpark sketch follows this list).
  • Perform exploratory data analysis to identify trends, patterns, and anomalies (a short pandas EDA sketch also follows this list).
  • Create and present clear and concise reports to communicate findings to non-technical stakeholders.
  • Work closely with data engineers to optimize and streamline data processing workflows.
  • Hands-on experience with Hadoop ecosystem components such as Hive, Impala, HDFS, and Spark.
  • Ability to clearly explain data and analytics strengths and weaknesses to both technical teams and senior business stakeholders.
  • Responsible for developing batch ingestion and data transformation routines using ETL tools or other ingestion techniques.
  • Experience in migrating historical data from external systems to the data lake.
  • Develop and maintain data pipelines that extract, transform, and load (ETL) data from various sources into a centralized data storage system, such as a data warehouse or data lake.
  • Integrate data from multiple sources and systems, including databases, APIs, log files, and external data providers.
  • Develop data transformation routines to clean, normalize, and aggregate data. Apply data processing techniques to handle complex data structures and missing or inconsistent data, and to prepare the data for analysis, reporting, or machine learning tasks.
  • Contribute to common frameworks and best practices in code development, deployment, and automation/orchestration of data pipelines.
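
To illustrate the pipeline and batch ingestion requirements above, here is a minimal PySpark ETL sketch. It is an illustration only: the source path, column names, and data lake location are assumptions, not part of this posting.

# Minimal PySpark batch ETL sketch for the pipeline requirements above.
# The landing-zone path, schema, and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("batch_ingestion_example").getOrCreate()

# Extract: read raw transaction data from a landing zone (path is an assumption)
raw = spark.read.option("header", True).csv("/landing/transactions/2025-04-18/")

# Transform: deduplicate, fix types, handle missing values
clean = (
    raw.dropDuplicates(["transaction_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("event_date", F.to_date("event_ts"))
       .na.fill({"amount": 0.0})
)

# Aggregate into a reporting-friendly daily summary
daily_summary = (
    clean.groupBy("event_date", "customer_id")
         .agg(F.sum("amount").alias("total_amount"),
              F.count("*").alias("txn_count"))
)

# Load: write partitioned Parquet into a curated data lake zone
daily_summary.write.mode("overwrite").partitionBy("event_date") \
    .parquet("/datalake/curated/daily_customer_summary/")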
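The exploratory data analysis requirement can be illustrated with a short pandas sketch; again, the file name and columns are hypothetical and only show the kind of trend and anomaly checks described above.

# Brief exploratory data analysis sketch in pandas.
# The input file and column names are assumptions for illustration.
import pandas as pd

df = pd.read_parquet("daily_customer_summary.parquet")

# Basic profile: shape, data types, and missing values
print(df.shape)
print(df.dtypes)
print(df.isna().sum())

# Trend: total amount per day
daily = df.groupby("event_date")["total_amount"].sum().sort_index()
print(daily.tail(10))

# Simple anomaly flag: days more than 3 standard deviations from the mean
z = (daily - daily.mean()) / daily.std()
print(daily[z.abs() > 3])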
Responsibilities

Please refer to the job description above for details.
