Senior Data Engineer - Databricks at Unison Group
Singapore, Singapore - Full Time


Start Date

Immediate

Expiry Date

11 Jun, 26

Salary

0.0

Posted On

13 Mar, 26

Experience

10 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Engineering, PySpark, Databricks Platform, Data Modelling, Delta Lake, Python, SQL, Cloud Platforms, Data Governance, Streaming Data Processing, DevOps, CI/CD, Git, Data Quality, Problem-Solving, Communication

Industry

Business Consulting and Services

Description
Essential Technical Skills

- Data Engineering: Strong foundation in data engineering principles, ETL/ELT processes, and data pipeline design patterns
- PySpark: Proven hands-on experience developing data pipelines using PySpark, including the DataFrames API, Spark SQL, and performance optimization
- Databricks Platform: Practical experience with the Databricks workspace, cluster management, notebooks, and job orchestration
- Workspace AI Agent: Knowledge of Databricks Workspace AI Agent capabilities and integration
- Data Modelling: Experience implementing data models, including dimensional modeling, data vault, or lakehouse architectures
- Delta Lake: Understanding of Delta Lake features, including ACID transactions, schema evolution, and optimization techniques
- Python: Strong Python programming skills for data processing and automation

Additional Technical Skills

- SQL proficiency for data querying and transformation
- Experience with cloud platforms (Azure, AWS, or GCP)
- Understanding of data governance and security best practices
- Knowledge of streaming data processing (Structured Streaming)
- Familiarity with DevOps practices and CI/CD pipelines
- Experience with version control systems (Git)
- Understanding of data quality frameworks and testing methodologies

Professional Experience

- Minimum 8 years in data engineering or related roles
- At least 2-3 years of hands-on experience with the Databricks platform
- Proven track record of refactoring legacy code to modern frameworks
- Experience building and maintaining production data pipelines at scale
- Background working across multiple data sources and formats
- Experience in agile development environments

Required Certifications

- Databricks Certified Data Engineer Associate OR Databricks Certified Data Engineer Professional

Additional Certifications (Preferred)

- Databricks Certified Associate Developer for Apache Spark
- Cloud platform certifications (Azure Data Engineer Associate, AWS Certified Data Analytics, or Google Cloud Professional Data Engineer)
- Relevant data engineering or big data certifications

Soft Skills

- Strong problem-solving and analytical thinking abilities
- Excellent communication skills to explain technical concepts clearly
- Ability to work collaboratively in cross-functional teams
- Self-motivated with strong attention to detail
- Adaptable to changing priorities and technologies
- Client-focused mindset with a commitment to quality delivery
Responsibilities
This role involves developing robust data pipelines using PySpark on the Databricks platform, with a focus on ETL/ELT processes and modern data pipeline design patterns. The engineer will also be responsible for implementing data models, such as dimensional modeling or lakehouse architectures, while ensuring data quality and optimization.