Data Engineer at Hugo, Lagos State, Nigeria - Full Time


Start Date

Immediate

Expiry Date

26 May 2026

Salary

0.0

Posted On

25 Feb 2026

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

SQL, Python, dbt, Git, GitHub, CI/CD, BigQuery, GCP, IAM, Terraform, Data Modeling, Airflow, Prefect, GCP Workflows, Elementary, Troubleshooting

Industry

IT Services and IT Consulting

Description
ABOUT HUGO

Hugo Technologies, Inc. (“Hugo”) is committed to nurturing the best of Africa’s young talent. We build and manage remote teams across Africa for some of the world’s largest technology and media companies. We specialize in omnichannel customer support, digital and AI operations, and trust and safety solutions.

Our “Why?” is simple. Outsourcing generates billions in income and opportunities globally, but less than 2% reaches African communities. We’re changing that. By winning a share of the multi-billion-dollar global BPO market for Africa, we are investing in a brighter future for the continent.

As a company, we’re obsessed with excellence. We ask smart questions to build a thorough understanding of our clients’ needs, then pour ourselves into delivering not just great work but a seamless user experience, so that we stand out in a very crowded market. The only commitment greater than the one we have to our clients is the one we have to our community. We are dedicated to carving out a place in the digital economy for young Africans, and we work tirelessly to equip them with the skills needed to build meaningful careers.

WHAT YOU’LL BE DOING

Key Responsibilities:

- Data Pipeline Execution: Build and maintain reliable data pipelines that automate ingestion from both structured and unstructured sources. Leverage Python and SQL to ensure data flows are secure and traceable.
- Transformation Layer: Develop and manage transformation workflows using dbt, ensuring data models are modular, tested, and version-controlled via Git.
- Orchestration & Scheduling: Schedule data workflows using tools such as Airflow, Prefect, or GCP Workflows, ensuring automated and timely data delivery across our systems.
- Cloud Warehouse Support: Maintain the data warehouse environment (BigQuery), focusing on query performance, cost monitoring, and schema organization.
- Observability & Quality: Implement data validation tests and lineage tracking using frameworks like Elementary to ensure high levels of data integrity and trust.
- Infrastructure as Code (IaC): Assist in managing and deploying cloud resources (BigQuery datasets, IAM roles, GCS buckets) using Terraform to ensure a reproducible and documented environment.
- Version Control & CI/CD: Maintain the integrity of our codebase using GitHub. Ensure that every dbt change or Python script follows our CI/CD patterns (GitHub Actions) for automated testing and deployment.
- Analytics Support: Collaborate with BI analysts and product teams to provide clean, optimized data sets for reporting and internal tools.
- Operational Documentation: Maintain clear documentation (dbt docs/SOPs) for pipelines and models to support team-wide data discovery and “self-service” BI.

Illustrative sketches of the ingestion, orchestration, and data-quality workflows follow this list.
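For the pipeline-execution bullet above, here is a minimal, hedged sketch of what a Python ingestion step into BigQuery can look like, using the google-cloud-bigquery client. The project, bucket, dataset, and table names are placeholders, not details from the posting.

```python
# Minimal ingestion sketch: load CSV exports from GCS into BigQuery.
# All resource names below are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client()  # authenticates via Application Default Credentials

table_id = "my-project.raw.orders"                   # placeholder destination
source_uri = "gs://my-bucket/exports/orders_*.csv"   # placeholder source files

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,   # skip the header row
    autodetect=True,       # infer the schema; pin it explicitly in production
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(source_uri, table_id, job_config=job_config)
load_job.result()  # block until the load completes, raising on failure
print(f"Destination table now has {client.get_table(table_id).num_rows} rows")
```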
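The orchestration bullet leaves the tool open (Airflow, Prefect, or GCP Workflows). As one illustration, assuming Airflow 2.4+, a daily DAG could chain a placeholder ingestion script and a dbt build; the DAG id, file paths, and schedule are assumptions, not the posting's.

```python
# Illustrative Airflow DAG (2.4+): run ingestion, then build dbt models.
# dag_id, file paths, and schedule are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_warehouse_refresh",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_raw_data",
        bash_command="python /opt/pipelines/ingest.py",  # placeholder script
    )
    transform = BashOperator(
        task_id="dbt_build",
        # `dbt build` runs models and their tests in dependency order
        bash_command="dbt build --project-dir /opt/dbt",
    )
    ingest >> transform  # transform only after ingestion succeeds
```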
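On the observability bullet: Elementary declares its tests in dbt YAML rather than Python, so the snippet below is not Elementary's API. It is only a hand-rolled check illustrating the kind of assertion such tests encode (non-emptiness and freshness); the table and column names are assumptions.

```python
# Hand-rolled data-quality sketch: fail loudly if a table is empty or stale.
# `my-project.analytics.orders` and `updated_at` are hypothetical names.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT
      COUNT(*)        AS row_count,
      MAX(updated_at) AS latest_update
    FROM `my-project.analytics.orders`
"""
row = list(client.query(sql).result())[0]

assert row.row_count > 0, "orders table is empty"
assert row.latest_update is not None, "orders table has no update timestamps"
print(f"orders: {row.row_count} rows, last updated {row.latest_update}")
```

In practice, checks like these would live alongside the dbt models as schema tests so they run automatically in CI.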
WHAT QUALIFICATIONS YOU’LL NEED

Competencies:

- Proficiency in SQL and Python for data manipulation and automation.
- Practical experience with dbt for building and maintaining modular data models.
- Familiarity with Git/GitHub workflows and CI/CD principles for managing code.
- Hands-on experience with BigQuery (or a similar cloud data warehouse such as Snowflake).
- Practical experience with GCP (preferred) or AWS/Azure, including an understanding of IAM permissions and cloud storage.
- Familiarity with Terraform, or a strong desire to learn to manage infrastructure through code rather than manual console clicks.
- Solid understanding of data modeling, including star schemas and how to structure data for efficient reporting.
- Ability to troubleshoot broken pipelines and optimize slow-running queries.
- Strong communication skills and a desire to work collaboratively under the guidance of the Lead Data Engineer.
- Proactive about catching data issues and suggesting improvements to existing workflows.

Experience:

- 2 to 3 years of progressive experience in Data Engineering or Analytics Engineering.
- Prior experience in a scale-up, product-led, or data-centric organization.
- Familiarity with BI tools (Tableau, Power BI, Looker) is a plus, though not a core requirement.
- Proven track record of building and managing dbt models in a production environment.
- Experience with API-based ingestion (using Airbyte, dlt, and/or custom scripts) is a plus.
- Ability to work effectively with people at all levels of an organization.
- Excellent written and oral communication skills, with the ability to present complex ideas to varied audiences and distill them into clear key messages.

WHAT WE PROVIDE

Hugo offers a hybrid work environment that balances employee flexibility with a collegial, fun office culture. We pride ourselves on offering a dynamic environment where ambitious professionals can make a measurable impact and accelerate their careers. Our compensation and benefits are highly competitive.

PRIVACY STATEMENT

Any information you submit to Hugo as part of your application will be processed in accordance with Hugo’s Privacy Policy.

EQUAL OPPORTUNITY STATEMENT

Diversity, equity, and inclusion are part of our DNA. Promoting and, where possible, improving diversity, equity, and inclusion is both a values-based and a commercial necessity. We are an equal opportunity employer and welcome applications from all qualified individuals, regardless of race, sex, gender identity, sexual orientation, neurodiversity, disability, or any other legally protected status.
Responsibilities
The role involves building and maintaining reliable data pipelines in Python and SQL and developing transformation workflows with dbt, ensuring data flows remain secure and traceable. Responsibilities also include scheduling workflows with tools such as Airflow, maintaining the BigQuery data warehouse, and implementing data quality checks.