Cloud Data Engineer at Kepler Group
Costa Rica - Full Time


Start Date: Immediate

Expiry Date: 10 Feb, 26

Salary: 0.0

Posted On: 12 Nov, 25

Experience: 2 year(s) or above

Remote Job: Yes

Telecommute: Yes

Sponsor Visa: No

Skills: Cloud Data Engineering, ETL, ELT, Python, SQL, Google Cloud, BigQuery, API Integration, RESTful APIs, OAuth, DevOps, Data Quality, Security, Problem-Solving, Communication, Critical Thinking

Industry: Advertising Services

Description
Role Description: The Cloud Data Engineer is a key contributor to the Data Solutions team, focused on building, optimizing, and maintaining scalable data infrastructure and pipelines, primarily within Google Cloud. This role centers on leveraging data technologies and programming/scripting (typically in Python) to handle the entire ETL/ELT process, from extraction and transformation to loading data from diverse sources. The engineer will automate processes, improve data delivery, and ensure the scalability, performance, and security of data solutions. Furthermore, this position involves close collaboration with Napkyn's team to prototype, pilot, and document potential GCP-based use cases that address specific client needs and market opportunities, ensuring robust data quality and adherence to security and privacy best practices.

Primary Responsibilities:
● Develop data integration solutions for customer private cloud environments (most often Google Cloud), leveraging available tools in that platform, REST APIs, and scripting (typically in Python)
● Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
● Implement pipelines and data warehousing based on specifications provided by customers or Napkyn team members
● Assess existing customer pipelines and offer improvements for efficiency, security, and capabilities
● Participate in prototyping efforts to prove technologies against a client's objectives
● Participate and engage in pilot exercises with selected customers

Skills & Experience Required:
● Proficiency building ETL/ELT pipelines in private cloud environments (Google Cloud, AWS, Azure)
● Strong Computer Science (CS) fundamentals, problem-solving skills, and software engineering skills
● Ability to communicate effectively with stakeholders to define requirements
● A strong ability to understand and organize data from various sources
● Strong expertise in a programming language (preferably Python)
● Proficiency writing queries with SQL
● Experience with Google Cloud, especially BigQuery
● Experience building solutions via API integration, especially RESTful APIs
● Knowledge of OAuth protocols for API authentication
● Experience with quality assurance and DevOps processes
● Ability to communicate effectively with developers internally and in client organizations
● Strong understanding of security implications in data pipelines
● Ability to identify and resolve performance and data quality issues in data pipelines
● Strong critical thinking and problem-solving skills with attention to detail
● Strong writing and communication skills
● Ability to prioritize projects and handle multiple tasks efficiently
● A degree in Computer Science, Statistics, Information Systems, or another quantitative field, or comparable industry experience

Preferred:
● Experience with additional Google Cloud products for data pipelines, such as Cloud Dataflow and Google Dataproc
● Experience with Google APIs and SDKs, especially those for Google Marketing Platform products
● Experience with data privacy concerns and requirements
● Experience with Kotlin/JVM or JavaScript in addition to Python is an asset
● Google Cloud Professional certifications, particularly the Data Engineer, Cloud Database Engineer, or ML Engineer certifications, are an asset

Napkyn welcomes and encourages applications from people with disabilities. Accommodations are available upon request for applicants in all aspects of the selection process.
We are committed to diversity of background, thought, and experience, and we work to create an environment in which all our employees thrive by bringing their authentic selves to work.

MORE ABOUT KEPLER

Benefits & Perks
● Competitive health and dental benefits
● Tuition reimbursement and training stipend
● Hybrid Office/WFH schedules: office supplies, internet, and phone stipend
● "Work from Anywhere" 4 weeks per year
● Stocked kitchen and other team outings
● Collaborative and friendly workspace in an easily commutable location
● Volunteering & altruism opportunities
● Team building lunches and events, and company celebrations: Summer, Halloween, Holidays, and many multicultural holidays recognized/celebrated
● If something is important to you that's not listed here, let us know!

Career & Development Focus
● Ongoing learning and development for education opportunities such as webinars, books, classes, and relevant conferences and events
● Opportunities to pursue business-related side projects and Hackathons
● Environment of learning from peers, including a 30+ class training program, Kepler University, powered by the Center of Excellence and Tiger Team subject matter expert groups
● Opportunity to work with cutting-edge technology and industry thought leaders
● Kepler Rocket Mentorship Program: beneficial for the development of both mentors and mentees

**All applications must include a Resume & Cover Letter in English to be considered.

Kepler is a people-first organization. If this role piques your interest but you don't check every box, we still encourage you to apply! Studies show that imposter syndrome can prevent women and people of color from applying unless they meet every single qualification. We welcome all who are interested to apply; you just might be a great candidate for this role or others.

Protect yourself from recruitment fraud. The only way to apply for a position at Kepler is by submitting a direct application via the Keplergrp.com website or working with a recruiter employed by Kepler with a @keplergrp.com email address.
Responsibilities
The Cloud Data Engineer will develop data integration solutions and build the infrastructure for optimal data extraction, transformation, and loading. This role also involves assessing existing pipelines and collaborating with the team to prototype and pilot GCP-based use cases.