Data Engineer at WebCreek
Remote, Oregon, USA
Full Time


Start Date

Immediate

Expiry Date

27 Nov, 25

Salary

0.0

Posted On

27 Aug, 25

Experience

3 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Good communication skills

Industry

Information Technology/IT

Description

Available for the following offices: Latin America, Remote
Are you looking to join a highly regarded IT development firm?
Well, look no further! WebCreek is hiring a skilled Data Engineer with 3+ years of experience and a high level of English to work remotely from Latin America.
The role focuses on building and maintaining scalable data pipelines and models using Azure Databricks, Spark, Python, and SQL. The ideal candidate has strong knowledge of Azure data services and a solid understanding of data architecture and governance.

WHO WE ARE

WebCreek provides world-class software development teams and technical staff augmentation to Fortune 500 companies and other global industry leaders. With over 29 years of experience and a global presence spanning 20+ offices across North America, Latin America, Asia, and Europe, we deliver top-tier digital solutions to the companies that power the world.
WebCreek is committed to creating a diverse environment and is proud to be an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, nationality, genetics, pregnancy, disability, age, veteran status, or other characteristics.

Responsibilities
  • Design, develop, and maintain data pipelines and data models using Azure Databricks and related Azure data services.
  • Collaborate with data analysts and business stakeholders to understand data needs and deliver robust, high-performance solutions.
  • Build and optimize data architecture to support data ingestion, processing, and analytics workloads.
  • Implement best practices for data governance, security, and performance tuning in a cloud-native environment.
  • Work with structured and unstructured data from various sources including APIs, files, databases, and data lakes.
  • Create reusable code and components for data processing and modeling workflows.
  • Monitor and troubleshoot jobs, ensuring data quality, reliability, and efficiency.
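To give a flavor of the pipeline work described above, here is a minimal sketch in plain Python using only the standard library. It is illustrative only: real pipelines at this level would use Azure Databricks, Spark, and SQL, and all record fields and names below are hypothetical.

```python
from collections import defaultdict

# Illustrative raw records, as they might arrive from an API or file source.
# A None amount simulates a bad record that data-quality checks should drop.
raw_orders = [
    {"region": "LATAM", "amount": "120.50"},
    {"region": "LATAM", "amount": "80.00"},
    {"region": "NA", "amount": None},
    {"region": "NA", "amount": "199.99"},
]

def clean(records):
    """Drop records with missing amounts and cast string values to float."""
    for rec in records:
        if rec["amount"] is None:
            continue
        yield {"region": rec["region"], "amount": float(rec["amount"])}

def total_by_region(records):
    """Aggregate cleaned amounts per region."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec["region"]] += rec["amount"]
    return dict(totals)

totals = total_by_region(clean(raw_orders))
```

The same ingest-clean-aggregate shape scales up in Spark, where `clean` becomes a DataFrame filter/cast and `total_by_region` a `groupBy` aggregation.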