Senior Data Engineer at Lexipol LLC
Remote, Oregon, USA
Full Time


Start Date

Immediate

Expiry Date

23 May, 25

Salary

0.0

Posted On

23 Feb, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

ORC, Automation, Metadata Management, Data Processing, Python, Azure, Kafka, Data Governance, Query Optimization, Glue, Data Solutions, Athena, Performance Tuning, Code, Communication Skills, Public Safety, Law Enforcement, EMS, Data Engineering, Structured Data

Industry

Information Technology/IT

Description

SENIOR DATA ENGINEER

This is a remote position; candidates must already live in the United States.

Applicants must be authorized to work for ANY employer in the United States.
No visa sponsorship: we are unable to sponsor or take over sponsorship of an employment visa (H-1B or student visa) at this time.
At Lexipol, our mission is to create safer communities and empower the individuals on the front lines with market-leading content and technology. Our top-notch team works closely with law enforcement, fire, EMS, corrections, and local government professionals to tailor our solutions to better address today’s challenges and keep first responders coming home safely at the end of each shift.
Working at Lexipol means making a difference – day in and day out.

Responsibilities

This role will contribute to Lexipol’s 2025 strategic data goals, including:

  • Building and scaling our data infrastructure, including both the data lake and Redshift warehouse.
  • Delivering reporting requirements for in-product reporting.
  • Maintaining data lake architecture diagrams and ensuring data flow documentation is up to date.
  • Documenting job schedules and automating data pipeline monitoring (an illustrative monitoring sketch follows this list).
  • Migrating all ETL processes, jobs, and data sources from Azure to Redshift.
  • Establishing backup and recovery schedules for all critical data.
  • Cost-optimizing the data lake for efficient storage and processing.
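
As an illustration of the pipeline-monitoring goal noted above, the sketch below polls recent AWS Glue job runs with boto3 and publishes an SNS alert when a run fails. The topic ARN and job name are hypothetical placeholders, not Lexipol resources, and a production version would cover all scheduled jobs.

```python
import boto3

glue = boto3.client("glue")
sns = boto3.client("sns")

# Hypothetical alert topic; substitute the real SNS topic ARN.
ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:data-pipeline-alerts"


def check_job_health(job_name: str, max_runs: int = 5) -> None:
    """Publish an SNS alert if any recent run of a Glue job has failed."""
    runs = glue.get_job_runs(JobName=job_name, MaxResults=max_runs)["JobRuns"]
    failed = [r for r in runs if r["JobRunState"] in ("FAILED", "TIMEOUT", "ERROR")]
    if failed:
        sns.publish(
            TopicArn=ALERT_TOPIC_ARN,
            Subject=f"Glue job {job_name} has failing runs",
            Message="\n".join(
                f"{r['Id']}: {r['JobRunState']} - {r.get('ErrorMessage', 'no details')}"
                for r in failed
            ),
        )


if __name__ == "__main__":
    # Hypothetical job name; in practice this would iterate over every scheduled job.
    check_job_health("daily-ingest-job")
```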

Role Responsibilities:

  • Design, develop, and maintain an AWS-based data lake and Redshift data warehouse to support enterprise-wide data needs.
  • Build and optimize data pipelines using AWS services such as Glue, Kinesis, Firehose, Lambda, Step Functions, and Athena (see the illustrative Glue sketch after this list).
  • Oversee data lake cost management, ensuring efficient resource utilization and cost reduction.
  • Design and document data lake architecture, including data flow diagrams, processing logic, and schema design.
  • Develop and maintain playbooks and configuration guides for data lake operations, troubleshooting, and best practices.
  • Design and maintain Redshift schemas, tables, and views, optimizing for performance and scalability.
  • Implement data ingestion and ETL/ELT processes, ensuring efficient transformation and integration of data sources.
  • Contribute to the migration from Azure to Redshift, ensuring seamless transition of ETL jobs, data sources, and workflows.
  • Document and maintain job schedules, data flow diagrams, and system architecture.
  • Define and enforce backup and disaster recovery policies for data infrastructure.
  • Collaborate with Engineering, DevOps, and Data Services teams to ensure smooth data integration and ongoing system optimization.
  • Set up roles, permissions, and security policies to ensure secure access and governance across the data lake.
  • Proactively troubleshoot and optimize data pipeline performance, identifying areas for automation and improvement.
  • Provide technical mentorship and guidance to junior data engineers on AWS best practices and Redshift optimization.
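
For a sense of the day-to-day pipeline work, here is a minimal Glue job sketch that converts raw JSON in S3 into partitioned Parquet for the data lake. Bucket names, paths, and the partition column are hypothetical; a real job would also handle schema evolution, job bookmarks, and error handling.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read semi-structured JSON from a hypothetical raw zone.
raw = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-raw-zone/events/"]},
    format="json",
)

# Write back as partitioned Parquet for cheaper storage and faster
# Athena / Redshift Spectrum scans; "event_date" is a hypothetical column.
glue_context.write_dynamic_frame.from_options(
    frame=raw,
    connection_type="s3",
    connection_options={
        "path": "s3://example-data-lake/events/",
        "partitionKeys": ["event_date"],
    },
    format="parquet",
)

job.commit()
```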

Requirements: To be considered for this role, you will have the following experience:

  • 5+ years of experience in data engineering, with a focus on AWS cloud-based data solutions.
  • Proven expertise with AWS data lake services, including Glue, S3, Kinesis, Firehose, Lambda, Redshift, and Athena.
  • Strong SQL and Redshift experience, including query optimization, workload management, and performance tuning (see the schema-tuning sketch after this list).
  • Hands-on experience with Terraform for infrastructure-as-code (IaC) deployment.
  • Experience in ETL/ELT processes, including ingesting and transforming structured and semi-structured data (e.g., JSON, Parquet, ORC).
  • Ability to lead large-scale migrations, particularly moving ETL pipelines and data sources from Azure to AWS/Redshift.
  • Experience with data security best practices, IAM roles, and encryption standards.
  • Strong problem-solving skills with the ability to diagnose and resolve data pipeline performance issues.
  • Excellent communication skills, with the ability to work cross-functionally and explain technical concepts to non-technical stakeholders.
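
As a small example of the Redshift performance-tuning experience expected, the sketch below creates a fact table with an explicit distribution key and sort key using the redshift_connector driver. All table, column, and connection details are hypothetical placeholders under assumed naming; real credentials would come from Secrets Manager rather than being hard-coded.

```python
import redshift_connector  # AWS's Python driver for Redshift; psycopg2 also works

# Hypothetical fact table illustrating DISTKEY / SORTKEY choices.
DDL = """
CREATE TABLE IF NOT EXISTS analytics.fact_product_usage (
    account_id    BIGINT       NOT NULL,
    product_code  VARCHAR(32)  NOT NULL,
    event_ts      TIMESTAMP    NOT NULL,
    event_count   INTEGER      NOT NULL
)
DISTSTYLE KEY
DISTKEY (account_id)   -- co-locate rows joined on account_id to avoid redistribution
SORTKEY (event_ts);    -- time-range filters can skip blocks via zone maps
"""


def create_fact_table() -> None:
    # Placeholder connection details for illustration only.
    conn = redshift_connector.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        database="dev",
        user="etl_user",
        password="change-me",
    )
    try:
        cursor = conn.cursor()
        cursor.execute(DDL)
        conn.commit()
    finally:
        conn.close()
```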