Start Date
Immediate
Expiry Date
02 Nov, 25
Salary
0.0
Posted On
03 Aug, 25
Experience
8 year(s) or above
Remote Job
Yes
Telecommute
Yes
Sponsor Visa
No
Skills
Disabilities, Systems Engineering, Excel, SQL, Relational Databases, Life Insurance, Savings Accounts, Eligibility, Complex Analysis
Industry
Information Technology/IT
Need Help?
If you have a disability and need assistance with the application, you can request a reasonable accommodation. Send an email to Accessibility (accommodation requests only; other inquiries won’t receive a response).
PLEASE REVIEW THE FOLLOWING JOB DESCRIPTION:
* This position is on-site 4 days per week *
Responsible for building, optimizing, and maintaining data pipelines and helping build the data ecosystem that delivers enterprise data for broad consumption, including developing data models, corresponding data architecture documents, and APIs. This individual is responsible for ensuring that target-state implementation of development efforts in this space is successful and for supporting data analysis needs during incident response.
REQUIRED QUALIFICATIONS:
The requirements listed below are representative of the knowledge, skill and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
1. Bachelor’s degree and 8 years of experience in systems engineering or administration, or equivalent education and related training or experience
2. Specialized knowledge of SQL, relational databases, ETL/ELT architecture and concepts, data integration concepts and big data concepts
3. Previous experience in planning and managing IT projects
ESSENTIAL DUTIES AND RESPONSIBILITIES:
Following is a summary of the essential functions for this job. Other duties, both major and minor, may be performed that are not mentioned below, and specific activities may change from time to time.
1. Builds, manages, and implements data and/or Big Data pipeline capabilities, including data modeling, process design, overall data pipeline architecture, and all phases of the ETL (extract, transform, and load) process.
2. Leads efforts to partially or completely automate repeatable data preparation and integration tasks. Partners with technology teams to understand data capture and testing needs and to build and test end-to-end solutions.
3. Partners with engineers, data scientists, and the data office leadership to define and refine data architecture and technology choices.
4. Takes a new perspective on existing solutions to solve problems of the highest complexity, exercises judgment based on analysis (e.g., modeling, testing) of multiple sources of information, and makes recommendations.
5. Leads larger, more complex data engineering projects and initiatives with significant risks and resource requirements.
6. Serves as technical implementation lead for Snowflake efforts and related tooling/processes that support data exploration and discovery.
No Sponsorship: For this opportunity, Truist will not sponsor an applicant for work visa status or employment authorization, nor will we offer any immigration-related support for this position (including, but not limited to, H-1B, F-1 OPT, F-1 STEM OPT, F-1 CPT, J-1, TN-1 or TN-2, E-3, O-1, or future sponsorship for U.S. lawful permanent residence status).