Data Technology Analyst III at Servus Credit Union
Alberta, Canada
Full Time


Start Date

Immediate

Expiry Date

15 May, 25

Salary

0.0

Posted On

16 Feb, 25

Experience

6 year(s) or above

Remote Job

No

Telecommute

No

Sponsor Visa

No

Skills

Code, ETL, Data Analysis, Utilization, Languages, Airflow, Testing, Programming Languages, Step, Indexing, Data Collection, Performance Tuning, Data Warehouse, Computer Science, Maintenance, Pentaho, Data Warehousing, Cloud, Information Systems, Python

Industry

Information Technology/IT

Description

EMPLOYMENT STATUS: FULL TIME

Additional Information: This is a full-time position and is fully remote. The successful candidate must reside in Canada and will be required to come to our office in Edmonton, Alberta periodically.
Servus is growing! We are currently looking for a Data Technology Analyst III - ETL Developer / SQL Developer in our Data Technology & Infrastructure Team.
Servus Credit Union is Alberta’s largest member-owned credit union, known for building strong, resilient communities by helping our members feel good about their money. One of Canada’s Best Managed Companies for 20 consecutive years and ranked as one of the top banks in Canada on Forbes World’s Best Banks list for two years in a row, we are a team of smart, gutsy and driven individuals.
Reporting to the Data Technology Lead and as a member of the Data Management team, you will provide solutions and best practices, and support all areas of the organization with the data and information flowing in and out of the credit union. While working in step with the rest of the Data Technology & Infrastructure team, this role will be accountable for the implementation of data movement across various systems.
As an experienced, senior-level Data Technology Analyst, you possess expert-level specialization in designing, developing, and maintaining ETL workflows for data extraction, transformation, and loading, and in writing and optimizing complex SQL queries, stored procedures, and functions to ensure efficient data processing.
This role will collaborate with the data architect, report developers, data engineers, and business teams to understand data needs and implement solutions.
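To give a flavour of the SQL work described above, here is a minimal sketch of a parameterized T-SQL extract written in Python with the pyodbc library; the connection string, table, and column names are illustrative assumptions, not actual Servus systems.

```python
# Minimal, hypothetical sketch of a parameterized T-SQL extract step.
# The connection string, table, and column names are illustrative assumptions only.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-sql-host;"
    "DATABASE=ExampleDW;"
    "Trusted_Connection=yes;"
)

def extract_transactions(start_date, end_date):
    """Run a parameterized query and return the rows (table/columns are hypothetical)."""
    query = (
        "SELECT TransactionId, MemberId, Amount, PostedDate "
        "FROM dbo.MemberTransactions "  # hypothetical table name
        "WHERE PostedDate >= ? AND PostedDate < ?"
    )
    conn = pyodbc.connect(CONN_STR)
    try:
        cursor = conn.cursor()
        # Parameter markers keep the query plan reusable and avoid SQL injection.
        cursor.execute(query, start_date, end_date)
        return cursor.fetchall()
    finally:
        conn.close()
```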

REQUIREMENTS

  • This role requires an individual who is highly proficient in using various data analysis tools, programming languages, and software applications to perform data analysis, interpretation, and data movement.
  • This role will be responsible for designing, coding, testing, and implementing code for data collection and utilization. This resource will also be required to contribute to the maturity of data practices across Servus.
  • This role will be responsible for monitoring, maintenance and incident resolution related to ETL data flows and query performance in both on-prem and cloud data platforms.
  • While working in step with the Data Technology team, this role will be accountable for the implementation of data movement across various systems.

EXPERIENCE:

  • 10+ years of hands-on experience in Data Technology, Cloud Data Platforms, or a related field.
  • 6+ years of hands-on experience in SQL development, writing complex queries, performance tuning, indexing, and stored procedures.
  • 6+ years of hands-on experience developing and troubleshooting ETL data pipelines.
  • Excellent programming skills in languages such as T-SQL and Python.
  • Expert-level knowledge of ETL and data pipeline tools such as Pentaho, SSIS, ADF (Azure Data Factory), Airflow, and Azure Databricks pipelines (a minimal illustrative Airflow sketch follows this list).
  • Experience migrating Data Warehouse and Analytics platforms to the cloud is desirable.
  • Familiarity with cloud-based data platforms.
  • Experience in data modelling, data warehousing, and best practices in ETL development.
  • Master Data Management experience and system support is desirable.
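As a minimal illustration of the pipeline tooling named in the experience list above, the sketch below outlines a simple extract-transform-load DAG using the Airflow TaskFlow API (Airflow 2.4+); the DAG name, schedule, and task bodies are placeholder assumptions, not actual Servus pipelines.

```python
# Hypothetical daily ETL DAG sketch (Airflow 2.4+ TaskFlow API); all names and bodies are placeholders.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False, tags=["example"])
def member_transactions_etl():
    @task
    def extract():
        # Placeholder: pull rows from a source system (e.g. via a parameterized SQL query).
        return [{"TransactionId": 1, "Amount": 125.50}]

    @task
    def transform(rows):
        # Placeholder: apply cleansing / business rules before loading.
        return [r for r in rows if r["Amount"] >= 0]

    @task
    def load(rows):
        # Placeholder: write the cleansed rows to a warehouse staging table.
        print(f"Loaded {len(rows)} rows")

    load(transform(extract()))


member_transactions_etl()
```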

EDUCATION:

  • Bachelor’s degree or diploma in Computer Science, Engineering, Management Information Systems, or equivalent is required.

RESPONSIBILITIES:

  • Collect and analyze data from various sources such as databases, logs, and other data repositories. Collaborate with and mentor junior data roles to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.
  • Lead the development and implementation of data processing and analysis procedures to improve data quality and accuracy. Bring forward data quality issues and collaborate with Data Governance teams on solutions and remedies. Work with specific stakeholders within the organization to understand the impact of their data issues.
  • Lead the assessment of current and future needs in data designs, structures, content, and inventory. Contribute towards the implementation of processes and systems to monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it (a brief illustrative quality check is sketched after this list). Collaborate with others to problem-solve production data issues.
  • Use programming skills, ETL tools, and MDM tools to develop, customize, and manage integration tools, databases, warehouses, and analytical systems. Assemble large sets of data that meet functional and non-functional business requirements.
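As a brief, hedged illustration of the data-quality monitoring described in the responsibilities above, the sketch below shows a simple post-load check in Python; the field names and threshold are arbitrary assumptions.

```python
# Hypothetical post-load data-quality check; field names and threshold are illustrative only.
def check_batch_quality(rows, required_fields=("TransactionId", "MemberId", "Amount"),
                        max_null_rate=0.01):
    """Return (passed, issues) for a batch of loaded rows (each row a dict)."""
    issues = []
    if not rows:
        return False, ["empty batch"]
    for field in required_fields:
        null_count = sum(1 for row in rows if row.get(field) is None)
        null_rate = null_count / len(rows)
        if null_rate > max_null_rate:
            issues.append(f"{field}: null rate {null_rate:.1%} exceeds {max_null_rate:.1%}")
    return not issues, issues


# Example usage inside a pipeline step: fail fast before data reaches downstream consumers.
ok, problems = check_batch_quality([{"TransactionId": 1, "MemberId": 7, "Amount": 12.0}])
if not ok:
    raise ValueError(f"Data quality check failed: {problems}")
```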