Senior Associate, Full-Stack Engineer at BNY
Wrocław, Dolnośląskie, Poland
Full Time


Start Date

Immediate

Expiry Date

07 Jun, 26

Salary

0.0

Posted On

09 Mar, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Oracle, Snowflake, PL/SQL, SQL, Data Engineering, AI Agentic Workflows, Automation, Unix, Python, Linux, ETL/ELT, RESTful Web Service, Microservices, Kafka, Shell Scripting, Splunk

Industry

Financial Services

Description
At BNY, our culture allows us to run our company better and enables employees’ growth and success. As a leading global financial services company at the heart of the global financial system, we influence nearly 20% of the world’s investible assets. Every day, our teams harness cutting-edge AI and breakthrough technologies to collaborate with clients, driving transformative solutions that redefine industries and uplift communities worldwide. Recognized as a top destination for innovators, BNY is where bold ideas meet advanced technology and exceptional talent. Together, we power the future of finance – and this is what #LifeAtBNY is all about. Join us and be part of something extraordinary.

We’re seeking a future team member for the role of Senior Associate, Full-Stack Engineer, to join our accounting and admin team. This role is based in Wrocław, Poland.

Title in the contract: Senior Associate, Full-Stack Engineer

In this role, you’ll make an impact in the following ways:

* Work closely with source system owners, reporting teams, and data consumers to gather requirements and translate them into scalable database solutions.
* Understand end-to-end data flows, key entities, attributes, and relationships across upstream and downstream systems.
* Convert analytical and reporting needs into logical and physical data models.
* Create and maintain model documentation (ERDs, entity/attribute definitions, mapping specs, naming standards).
* Develop and optimize database objects such as tables, views, indexes, materialized views, sequences, stored procedures/functions, and packages.
* Apply performance tuning techniques (SQL tuning, indexing strategy, partitioning, query plans) to ensure efficient processing at scale.
* Implement validation checks, reconciliation logic, and exception handling to detect and resolve data issues.
* Enforce standards for data definitions and ensure consistency across environments (e.g., dev/qa/prod, multiple platforms).
* Maintain data dictionaries, reference/master data rules, and metadata needed for governance and traceability.
* Promote standard naming conventions, code sets, and reusable patterns.
* Build/maintain database-side ETL/ELT logic and scheduling dependencies where applicable.
* Monitor loads, troubleshoot failures, analyze root cause, and implement corrective actions.
* Identify opportunities to improve data sharing and reduce duplicated data transformations across teams.
* Contribute to data stewardship practices: definitions, ownership, access controls, and auditability.
* Recommend long-term fixes for recurring data defects and performance bottlenecks.
* Support changes through impact analysis, backward/forward compatibility planning, and controlled deployments.

To be successful in this role, we’re seeking the following:

* Strong experience with Oracle and exposure to Snowflake (or similar cloud data platforms)
* Expert-level PL/SQL development and advanced SQL skills
* Strong data engineering capability with real-world, production-grade delivery experience
* Experience with AI Agentic Workflows and Automation
* Hands-on ability to build scalable workflows using Unix, PL/SQL, and Python
* Experience building and optimizing batch pipelines running in Unix/Linux environments
* Ability to write complex, performance-tuned SQL (execution plans, indexing awareness, tuning approach)
* Practical experience with ETL/ELT (extraction, transformation, load, controls, audit, restartability)
* Proven ability to handle large datasets with a strong focus on data quality, reconciliation, and reliability
* Strong troubleshooting skills for batch failures, data issues, and performance bottlenecks
* Comfortable working with cross-functional teams to understand requirements and deliver solutions
* Follows coding standards and best practices, and maintains documentation
* Nice to have: AI exposure, ETL tool experience, GitLab CI/CD, and JIRA
* Strong scripting knowledge: shell scripting and good Unix command-line understanding
* Must have RESTful web service experience
* Understanding of microservices-based, scalable architecture (previous experience working with Kafka)
* Experience implementing caching (using Hazelcast/Ehcache/Memcached/others) will be a plus
* Strong experience with SDLC and DevOps processes – CI/CD tools, Git, etc.
* Able to investigate production issues by reviewing code and Splunk logs

At BNY, our culture speaks for itself. Check out the latest BNY news at:

BNY Newsroom [https://www.bny.com/corporate/global/en/about-us/newsroom.html]
BNY LinkedIn [https://www.linkedin.com/company/bnyglobal/posts/?feedView=all]

Here are a few of our recent awards:

* America’s Most Innovative Companies, Fortune, 2025
* World’s Most Admired Companies, Fortune, 2025
* “Most Just Companies”, Just Capital and CNBC, 2025

Our Benefits and Rewards:

BNY offers highly competitive compensation, benefits, and wellbeing programs rooted in a strong culture of excellence and our pay-for-performance philosophy. We provide access to flexible global resources and tools for your life’s journey.
Focus on your health, foster your personal resilience, and reach your financial goals as a valued member of our team. We also offer generous paid leave, including paid volunteer time, to support you and your family through moments that matter.

BNY is an Equal Employment Opportunity/Affirmative Action Employer - Underrepresented racial and ethnic groups/Females/Individuals with Disabilities/Protected Veterans.
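The validation and reconciliation duties described in the posting can be pictured with a minimal sketch in Python (one of the role’s listed skills). The table names, row counts, and tolerance below are hypothetical illustrations, not details from the posting:

```python
# Hedged sketch of a source-to-target reconciliation check, as described in
# the responsibilities above. All table names and counts are made up.

def reconcile(source_counts: dict, target_counts: dict, tolerance: int = 0) -> list:
    """Compare per-table row counts between a source system and the
    warehouse; return (table, source, target) tuples for any mismatch
    exceeding the tolerance."""
    exceptions = []
    for table, src in source_counts.items():
        tgt = target_counts.get(table, 0)
        if abs(src - tgt) > tolerance:
            exceptions.append((table, src, tgt))
    return exceptions

# Example: one table drifted during a nightly load.
src = {"trades": 1_000_000, "positions": 250_000}
tgt = {"trades": 1_000_000, "positions": 249_990}
print(reconcile(src, tgt))  # -> [('positions', 1000000, 249990)] without the mismatch? No:
```

In practice a check like this would feed the exception-handling and alerting logic the posting mentions, rather than printing to stdout.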

How To Apply:

In case you would like to apply to this job directly from the source, please click here

Responsibilities
This role involves working closely with data stakeholders to translate requirements into scalable database solutions, designing logical and physical data models, and developing/optimizing database objects like tables, views, and stored procedures. The engineer will also implement validation checks, maintain data governance standards, and build/maintain database-side ETL/ELT logic while troubleshooting failures.
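The restartability expectation in the ETL/ELT requirement can be sketched as a checkpointed batch run: each completed step is recorded, so a rerun after a failure skips work already done. This is a hedged illustration in Python; the step names and in-memory checkpoint set are hypothetical stand-ins for a real scheduler and persistent checkpoint table:

```python
# Hedged sketch of a restartable batch pipeline: completed steps are
# checkpointed, so a rerun resumes from the first unfinished step.

def run_pipeline(steps, checkpoints: set) -> list:
    """Run (name, fn) steps in order, skipping any already checkpointed.
    Returns the names of steps executed on this run."""
    executed = []
    for name, fn in steps:
        if name in checkpoints:
            continue  # already completed in a previous run
        fn()
        checkpoints.add(name)
        executed.append(name)
    return executed

log = []
steps = [("extract", lambda: log.append("E")),
         ("transform", lambda: log.append("T")),
         ("load", lambda: log.append("L"))]
done = {"extract"}  # simulate a restart after the extract step succeeded
print(run_pipeline(steps, done))  # -> ['transform', 'load']
```

A production version would persist the checkpoints (e.g., in a control table) so that restarts survive process crashes.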