Freelance Principal Data Engineer - Remote Costa Rica at Seamless.AI
San José, Provincia de San José, Costa Rica
Full Time


Start Date

Immediate

Expiry Date

27 Aug, 25

Salary

Not specified

Posted On

28 May, 25

Experience

7 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Warehousing, Pipeline Development, ETL Tools, SQL, International Clients, Data Security, Data Governance, Spanish, Information Systems, English, Python, Data Integration, Computer Science, Data Modeling, Analytical Skills, Deduplication

Industry

Information Technology/IT

Description

THE OPPORTUNITY

At Seamless.AI, we’re seeking a highly skilled and experienced Freelance Principal Data Engineer with expertise in Python, Spark, AWS Glue, and other ETL (Extract, Transform, Load) technologies. The ideal candidate will have a proven track record in data acquisition and transformation, as well as experience working with large data sets and applying data matching and aggregation methodologies. This independent contractor role requires exceptional organizational skills and the ability to work autonomously while delivering high-quality solutions.
This is an independent contractor position. The selected individual will provide services as a self-employed professional and will not be classified as an employee of Seamless.AI. Contractors are responsible for managing their own taxes, social security contributions, and business expenses.

SKILLSET

  • Strong proficiency in Python and experience with related libraries and frameworks (e.g., pandas, NumPy, PySpark).
  • Hands-on experience with AWS Glue or similar ETL tools and technologies.
  • Solid understanding of data modeling, data warehousing, and data architecture principles.
  • Expertise in working with large data sets, data lakes, and distributed computing frameworks.
  • Experience developing and training machine learning models.
  • Strong proficiency in SQL.
  • Familiarity with data matching, deduplication, and aggregation methodologies (a brief PySpark sketch follows this list).
  • Experience with data governance, data security, and privacy practices.
  • Strong problem-solving and analytical skills, with the ability to identify and resolve data-related issues.
  • Excellent communication and collaboration skills, paired with the ability to work effectively on an independent basis.
  • Highly organized and self-motivated, with the ability to manage multiple projects and priorities simultaneously.
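
As a rough illustration (not an official requirement), the following minimal PySpark sketch shows the kind of matching, deduplication, and aggregation work the role involves; the column names and records are invented for the example.

    from pyspark.sql import SparkSession
    import pyspark.sql.functions as F

    spark = SparkSession.builder.appName("dedupe-sketch").getOrCreate()

    # Hypothetical contact records; in practice these would be read from S3 or a catalog.
    df = spark.createDataFrame(
        [("Ana", "ana@example.com ", "Acme"),
         ("ANA", "ana@example.com", "Acme Inc."),
         ("Luis", "luis@example.com", "Globex")],
        ["name", "email", "company"],
    )

    # Normalize the match key, then collapse duplicates while aggregating
    # the remaining fields so no information is silently dropped.
    normalized = df.withColumn("email_key", F.lower(F.trim(F.col("email"))))
    deduped = (
        normalized.groupBy("email_key")
        .agg(
            F.first("name", ignorenulls=True).alias("name"),
            F.collect_set("company").alias("companies"),
            F.count("*").alias("source_records"),
        )
    )
    deduped.show(truncate=False)

At production scale the same pattern would typically run over partitioned data in a data lake, with window-based survivorship rules rather than a simple first().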

EDUCATION AND REQUIREMENTS

  • Bachelor’s degree in Computer Science, Information Systems, or a related field, or equivalent work experience.
  • 7+ years of experience as a Data Engineer, with a focus on ETL processes and data integration.
  • Professional experience with Spark and AWS pipeline development required.
  • Fluency in English and Spanish is required, as this role involves regular communication with international clients and team members; applicants should have advanced written and verbal English.

HOW TO APPLY

If you would like to apply to this job directly from the source, please click here.

RESPONSIBILITIES

  • Design, develop, and maintain scalable ETL pipelines to acquire, transform, and load data from various sources into the data ecosystem.
  • Work with stakeholders to understand data requirements and propose effective data acquisition and integration strategies.
  • Implement data transformation logic using Python and relevant frameworks, ensuring efficiency and reliability.
  • Utilize AWS Glue or similar tools to create and manage ETL jobs, workflows, and data catalogs (see the job skeleton after this list).
  • Optimize ETL processes to improve performance and scalability, particularly for large datasets.
  • Apply data matching, deduplication, and aggregation techniques to enhance data accuracy and quality.
  • Ensure compliance with data governance, security, and privacy best practices within the scope of project deliverables.
  • Provide recommendations on emerging technologies and tools that enhance data processing efficiency.
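
For orientation only, a skeleton of a typical AWS Glue (PySpark) job covering the extract-transform-load responsibilities above; the catalog database, table name, and S3 path are placeholders, and the script assumes it is running inside a Glue job environment.

    import sys
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.dynamicframe import DynamicFrame
    from awsglue.job import Job
    from pyspark.context import SparkContext

    # Standard Glue job bootstrap; JOB_NAME is supplied by the Glue runtime.
    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Extract: read a source table registered in the Glue Data Catalog
    # ("raw_db" and "contacts_raw" are hypothetical names).
    source = glue_context.create_dynamic_frame.from_catalog(
        database="raw_db", table_name="contacts_raw"
    )

    # Transform: drop exact duplicates on the email column using plain Spark APIs.
    cleaned = source.toDF().dropDuplicates(["email"])

    # Load: write the curated output back to S3 as Parquet
    # (the bucket path is a placeholder).
    glue_context.write_dynamic_frame.from_options(
        frame=DynamicFrame.fromDF(cleaned, glue_context, "cleaned"),
        connection_type="s3",
        connection_options={"path": "s3://example-bucket/curated/contacts/"},
        format="parquet",
    )

    job.commit()

Scheduling, retries, and catalog updates would normally be handled by Glue triggers or workflows rather than inside the script itself.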