Big Data Engineer (Financial Services) Consultant/Senior Consultant, Technology Consulting at EY
Singapore 048583, Central, Singapore
Full Time


Start Date

Immediate

Expiry Date

02 Aug, 25

Salary

Not specified

Posted On

02 May, 25

Experience

3 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

IT, Service Delivery, Presentation Skills, Excel, PowerPoint

Industry

Information Technology/IT

Description

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
We are the only professional services organization with a separate business dedicated exclusively to the financial services marketplace. Join Financial Services (FSO) and you will work with multi-disciplinary teams from around the world to deliver a global perspective. Aligned to key industry groups, including asset management, banking and capital markets, insurance and private equity, we provide integrated advisory, assurance, tax and transaction services.

THE OPPORTUNITY

We are growing our Data & Analytics team in FSO, and now is an exciting time to join us on this journey. We are looking for Data Engineers who are passionate about technology and data and who have financial services experience to join our diverse team. Whether you are a Consultant at the start of your career or a Manager looking for your next role across the whole breadth and depth of Data & Analytics, we would love to talk to you.

SKILLS AND ATTRIBUTES FOR SUCCESS

  • A strong problem solver who is comfortable challenging the status quo
  • Knowledge and experience in end-to-end project delivery, whether through traditional SDLC, agile delivery methodologies or hybrid approaches
  • Ability to leverage technology to continually learn, improve service delivery and maintain our leading-edge best practices
  • Strong presentation skills and proficiency in the use of PowerPoint, Word and Excel
  • Good understanding of the financial services industry
Responsibilities

YOUR KEY RESPONSIBILITIES

  • Clearly explain data and analytics strengths and weaknesses to both technical and senior business stakeholders
  • Develop and maintain strong, effective working relationships with key management personnel and the internal client base, including data engineers, BAs and the services/businesses directly
  • Develop batch ingestion and data transformation routines using ETL tools or other ingestion techniques
  • Develop real-time ingestion and data transformation routines using Kafka and similar technologies
  • Migrate data from legacy data platforms to cloud data platforms
  • Present data in graphs, charts and tables, and design and develop relational databases for collecting data
  • Track trends, patterns and correlations in complex data sets
  • Be involved in all aspects of the project life cycle, including strategy, road-mapping, architecture and implementation, to gain maximum exposure and set you up for a successful consulting career
  • Configure and coordinate data pipelines across projects
  • Develop data pipelines using technologies such as Azure Data Factory, AWS Kinesis and Spark
  • Migrate and transform large, complex data sets from legacy systems such as Teradata to cloud platforms using Azure tools and other ETL tools such as DataStage (ideally 5+ years' experience)
  • Develop data and reporting platforms in environments using technologies such as Microsoft Azure, SAP BW, Teradata, Power BI or Cognos

TO QUALIFY FOR THE ROLE, YOU MUST HAVE

  • Bachelor's or Master's degree in Computer Science, Engineering or other related fields.
  • At least 3 years of relevant experience, preferably including at least a year in the consulting industry.
  • Understanding of, or ideally practical experience with, handling and manipulating semi-structured and unstructured data.
  • Deep understanding of big data technologies and concepts, and of the tools, features, functions and benefits of the different approaches available.
  • Experience with one of Java, C# or C++.
  • Hands-on experience with HiveQL.
  • Familiarity with data ingestion tools such as Kafka, Flume and Sqoop.
  • Knowledge of Hadoop-related workflow/scheduling tools such as Oozie.
  • A strong coding background, ideally with experience in Python, SAS, SQL or R.
  • Project delivery toolset experience in one or more batch ETL tools (such as DataStage, Informatica, Microsoft SSIS, Azure Data Factory or Talend) or open-source data integration tools (such as Kafka or NiFi).