Data Engineer (Databricks)

at  FLUENT LLC

Remote, Oregon, USA

Start Date: Immediate
Expiry Date: 23 Jan, 2025
Salary: USD 160000 Annual
Posted On: 24 Oct, 2024
Experience: 1 year(s) or above
Skills: Data Engineering, Spark, Computer Science, SQL
Telecommute: No
Sponsor Visa: No

Description:

Fluent is building the next-generation advertising network, Partner Monetize & Advertiser Acquisition. Our vision is to build an ML/AI-first network of advertisers and publishers working toward a common objective: elevating relevance in e-commerce for everyday shoppers.
As a Data Engineer, you will bring your Databricks pipeline expertise to building the data products that power Fluent’s business lines. These data products will be the foundation for a sophisticated data representation of customer journeys and marketplace activity.
You are known as a strong and efficient IC Data Engineer, able to assist the Data Architect in vetting the translation of an Enterprise Data Model into physical data models and pipelines. You are familiar with the Databricks medallion architecture and know how to work backwards from enterprise models to Gold data products. You are considered an expert in Spark.
You will work with your counterparts to build and operate high-impact data solutions. This role is fully remote in the United States or Canada, with occasional travel to NYC.
Fluent is looking for an experienced Data Engineer who thrives on writing robust code in the Databricks ecosystem.

REQUIREMENTS

  • Bachelor's or Master's degree in Computer Science
  • 3+ years of industry experience in Data Engineering, including expertise in Spark and SQL.
  • 1+ years of experience with the Databricks environment
  • Nice to have: familiarity with real-time ML systems within Databricks

Responsibilities:

  • The majority of the role will be software engineering: tables, views, Spark jobs, and orchestration within the Databricks environment, following an enterprise data model design. You will help elevate standards for testing, code repositories, naming conventions, etc.
  • Develop, deploy, and manage scalable pipelines on Databricks, ensuring robust integration with a Feature Store leveraging online tables for machine learning models.
  • Investigate and leverage Databricks’ capabilities to implement real-time data processing and streaming, potentially using Spark Streaming, Online Tables, or Delta Live Tables.
  • Contribute to and maintain a high-quality code base with comprehensive data observability, metadata standards, and best practices.
  • Partner with the data science, UI, and reporting teams to understand data requirements and translate them into models.
  • Keep track of emerging tech and trends within the Databricks ecosystem.
  • Share your knowledge by giving brown bags and tech talks, and by evangelizing appropriate tech and engineering best practices.
  • Empower internal teams through clear communication on architecture, target Gold tables, execution plans, releases, and training.


REQUIREMENT SUMMARY

Experience: Min 1.0, Max 3.0 year(s)
Industry: Information Technology/IT
Category: IT Software - Other
Function: Software Engineering
Qualification: Graduate, Computer Science
Proficiency: Proficient
Openings: 1
Location: Remote, USA