Data Engineer II at Everspring Inc
Chicago, IL 60606, USA
Full Time


Start Date

Immediate

Expiry Date

09 Nov, 25

Salary

$90,000

Posted On

10 Aug, 25

Experience

3 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Airflow, Structured Data, Databases, Data Infrastructure, Python, Git, Data Solutions, dbt, Version Control, Performance Tuning

Industry

Information Technology/IT

Description

ABOUT EVERSPRING

Everspring is a leading provider of education technology and service solutions. Our advanced technology, proven marketing approach, research-based instructional design services, and robust faculty support deliver outstanding outcomes for our university partners, powering their success online.

REQUIREMENTS:

  • 3+ years of experience in a data engineering or software engineering role, with a strong track record of delivering robust data solutions
  • Proficiency in Python and advanced SQL for complex data transformations and performance tuning
  • Experience building and maintaining production pipelines using tools like Airflow, dbt, or similar workflow/orchestration tools
  • Strong understanding of cloud-based data infrastructure (e.g., AWS, GCP, or Azure)
  • Knowledge of data modeling techniques and data warehouse design (e.g., star/snowflake schemas)
  • Experience working with structured and semi-structured data from APIs, SaaS tools, and databases
  • Familiarity with version control (Git), CI/CD, and Agile development methodologies
  • Strong communication and collaboration skills

How To Apply:

If you would like to apply for this job directly from the source, please click here

Responsibilities
  • Design and implement scalable, maintainable ETL/ELT pipelines for a variety of use cases (analytics, operations, product enablement)
  • Build and optimize integrations with cloud services, databases, APIs, and third-party platforms
  • Own production data workflows end-to-end, including testing, deployment, monitoring, and troubleshooting
  • Collaborate with cross-functional stakeholders to understand business needs and translate them into technical data solutions
  • Lead technical discussions and participate in architecture reviews to shape our evolving data platform
  • Write clean, well-documented, production-grade code in Python and SQL
  • Improve data model design and data warehouse performance (e.g., partitioning, indexing, denormalization strategies)
  • Champion best practices around testing, observability, CI/CD, and data governance
  • Mentor junior team members and contribute to peer code reviews