Design, develop, and maintain scalable data pipelines and ETL processes to ensure the efficient collection, storage, and processing of large volumes of structured and unstructured data.
Collaborate with BI developers, data scientists, software engineers, and business stakeholders to understand data requirements and implement robust data solutions.
Optimise data storage and retrieval mechanisms for performance and scalability, including implementing data partitioning, indexing, and compression strategies.
Implement data quality and data governance frameworks to ensure data accuracy, consistency, and compliance with industry standards.
Perform data modelling and schema design to support data analysis, reporting, and visualisation needs.
Monitor and troubleshoot data pipelines, identify and resolve performance bottlenecks, and ensure data integrity and availability.