Principal Data Engineer (MA or REMOTE) at The Hanover Insurance Group
Remote, Oregon, USA
Full Time


Start Date

Immediate

Expiry Date

03 Dec, 25

Salary

Not specified

Posted On

03 Sep, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Good communication skills

Industry

Information Technology/IT

Description

For more than 170 years, The Hanover has been committed to delivering on our promises and being there when it matters most. We live our values every day, demonstrating we CARE through our values, sustainability initiatives, and inclusive corporate culture.
Our Corporate Actuarial and Analytics team is currently seeking a Principal Data Engineer to join our growing team in our Worcester, MA office, with hybrid or remote work arrangements considered.
This is a full-time, exempt position.

POSITION OVERVIEW:

Data engineering is the aspect of data science that focuses on the practical application of data architecture, modeling, collection, and analysis. This role involves becoming proficient with all internal and external data produced and consumed by THG.
The Principal Data Engineer will independently drive larger, more complex projects while mentoring less experienced engineers, and will collaborate with business partners and cross-functional teams on data and architecture requests. Assignments will usually require originality and ingenuity.
This role is accountable for successful outcomes in designing, developing, implementing, optimizing, and maintaining data integrations, pipelines, and solutions.

How To Apply:

In case you would like to apply to this job directly from the source, please click here

Responsibilities and Qualifications
  • Proven experience in data engineering and best practices for data storage, integration, transformation, and access within Azure and on-premises platforms.
  • Expertise in Azure Data Services (Azure Synapse Analytics, Azure SQL Database, Azure Data Lake Storage).
  • Expertise in Databricks.
  • Strong experience with data integration, ETL processes, data modeling and data warehousing.
  • Advanced proficiency in SQL and Python.
  • Knowledge of data modeling, governance, and security best practices.
  • Hands-on experience with high-performance data ingestion pipelines.
  • Familiarity with integration connectors (APIs, File, SFTP/FTPS, Streaming Data, DB Connectors, SOAP, REST).
  • Strong problem-solving, communication, and collaboration skills.
  • Ability to create documentation for data processes and solutions.
  • Ensure data security and compliance with industry standards.
  • Experienced in agile development methodologies and DevOps best practices.
  • Git/GitHub experience for CI/CD.
  • Coaching, mentoring, and technical guidance for junior team members.
  • Analytical skills for profiling and analyzing large data sets.
  • Critical thinking and attention to detail.
  • Debugging skills for data solution issues.
  • Collaborate with data scientists, analysts, and stakeholders.
  • Monitor and maintain data pipelines.
  • Design, develop, and test data mapping with transformation rules.
  • Ensure data quality and reliability.
  • Develop and maintain documentation for data integration and ETL solutions.