Senior Data Engineer at Fiduciary Tech
Bellevue, Washington, USA
Full Time


Start Date

Immediate

Expiry Date

13 Jun, 25

Salary

$80,000

Posted On

13 Mar, 25

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Jenkins, Amazon Redshift, Scripting, Amazon S3, Deliverables, Hadoop, Data Modeling, SQL, Airflow, Query Optimization, EC2, Data Solutions, Glue, ECS, Agile Methodologies, Computer Science, Data Warehousing, Communication Skills, Data Processing, Python, Data Loading

Industry

Information Technology (IT)

Description

BACKGROUND

We have developed a custom internal application using AWS for a large organization and collaborate with other vendors to continuously enhance and improve its features. The Data Engineer we hire will join an experienced team that has been working on this project for years. For more details, please refer to the responsibilities and qualifications section below.
This role can be either project-based or full-time, but we typically hire full-time employees to handle consulting work. Please note, we are not a staffing agency. The client and work are based in Bellevue, WA.
The Data Engineer will support ongoing IT projects by developing new features, optimizing existing ones, and occasionally working on quick iterations to deliver the best possible outcomes for our customers. You’ll ensure that our services are maintained and continue to evolve toward scalable solutions. We’re looking for someone who enjoys working on complex software systems in a customer-focused environment and is passionate about not only building high-quality software but also ensuring its success in real-world operations.

MINIMUM QUALIFICATIONS

  • Bachelor’s degree in Information Technology, Computer Science, Information Systems, or a related field, or equivalent experience
  • 5+ years of experience in data engineering or related roles with a focus on AWS.
  • Proven track record in designing, building, and deploying data solutions in AWS.
  • Experience working with large-scale data processing, data integration, and cloud-based data storage solutions.
  • Hands-on experience with data modeling, ETL processes, and cloud architectures.
  • Experience with cloud-based big data platforms and services, especially AWS.
  • Prior experience with Agile methodologies and collaborative team environments
  • Strong working knowledge of AWS Cloud Development (experience with ECS, RDS, Lambda, SQS, and SNS is a plus)
  • Ability to work independently and collaboratively in a fast-paced environment
  • Strong problem-solving skills and attention to detail
  • A consultant’s mindset, adaptable and eager to acquire new skills
  • Strong communication skills and the ability to work cross-functionally to align stakeholders on the goals and deliverables

DESIRED QUALIFICATIONS

  • Strong experience with AWS services, including but not limited to Amazon S3, Redshift, Glue, EC2, Lambda, RDS, DynamoDB, Kinesis, CloudFormation, CloudWatch, Athena, and EMR.
  • Infrastructure as Code (IaC) using tools like AWS CDK, CloudFormation, or Terraform.
  • Data pipelines: Experience with designing, building, and maintaining scalable and efficient ETL pipelines (using AWS Glue, Airflow, etc.).
  • Experience with SQL and NoSQL databases: Strong proficiency in SQL, data modeling, query optimization, and experience working with both relational and non-relational databases.
  • Big data technologies: Familiarity with Hadoop, Spark, Kafka, and other distributed data processing frameworks.
  • Experience with data warehousing: Expertise in working with data warehouses like Amazon Redshift, including schema design, data loading, and query performance tuning.
  • CI/CD: Familiarity with continuous integration and continuous deployment (CI/CD) pipelines, using tools like Jenkins, CodePipeline, or similar.
  • Scripting and Automation: Strong skills in scripting languages such as Python, Bash, or PowerShell.

Job Type: Full-time
Pay: $80,000.00 - $150,000.00 per year

Benefits:

  • 401(k)
  • Dental insurance
  • Health insurance

Compensation Package:

  • Bonus opportunities

Schedule:

  • Monday to Friday

Work Location: In person

Responsibilities
  • Handle installation, configuration, administration, and automation of data infrastructure in AWS Cloud.
  • Support multiple environments for development, testing, and production application releases.
  • Identify and implement process improvements and infrastructure optimizations for database management.
  • Contribute to back-end development as necessary, ensuring seamless integration with cloud services and maintaining system reliability.
  • Utilize AWS Cloud Development Kit (CDK) for Infrastructure as Code (IaC) to deploy and manage cloud infrastructure.
  • Implement and manage CI/CD pipelines to streamline deployment processes and ensure robust practices.
  • Create or update Data.net jobs to extract data from various teams and load it into our Redshift cluster for QuickSight dashboard processing.
  • Develop or update Glue jobs to pull data from APIs of platforms like BIM360, Tokenflex, Asana, Workato, and Workdocs, and store/process it in the Redshift cluster for QuickSight dashboards.
  • Manage and deploy infrastructure as code using Pipeline/CDK for AWS services.
  • Migrate and refactor existing pipeline, CDK, and code packages as needed.
  • Transition Glue Jobs to ECS, Airflow, or other AWS services.
  • Leverage GenAI, Machine Learning, and Amazon Bedrock to enhance AI services for the data in Redshift.