Cloud Data Engineer at Thakral One
Bengaluru, Karnataka, India
Full Time


Start Date

Immediate

Expiry Date

16 Jun, 26

Salary

0.0

Posted On

18 Mar, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Warehouses, Data Lakes, AWS, Snowflake, Starburst, Data Virtualization, ETL, SQL, PL/SQL, Unix Scripts, IDQ, Hive QL, Python, Spark, Scala, Data Modeling

Industry

IT Services and IT Consulting

Description
Job Profile

Position details: Cloud Data Engineer with a strong technology background and hands-on experience in an enterprise environment designing and implementing data warehouses, data lakes, and data marts for large financial institutions. In this role you will work with technology and business leads to build or enhance critical enterprise applications both on-premises and in the cloud (AWS), along with a Modern Data Stack comprising the Snowflake Data Platform and the Starburst data virtualization tool for semantic layer build-out. Successful candidates will possess in-depth knowledge of current and emerging technologies and demonstrate a passion for designing and building elegant solutions and for continuous self-improvement.

Roles and Responsibilities:
- Manage data analysis and data integration of disparate systems.
- Work with business users to translate functional specifications into technical designs for implementation and deployment.
- Extract, transform, and load large volumes of structured and unstructured data from various sources into AWS data lakes or data warehouses.
- Work with cross-functional team members to develop prototypes, produce design artifacts, develop components, and perform and support SIT and UAT testing, triaging, and bug fixing.
- Optimize and fine-tune data pipeline jobs for performance and scalability.
- Implement data quality and data validation processes to ensure data accuracy and integrity.
- Provide problem-solving expertise and complex data analysis to develop business intelligence integration designs.
- Convert physical data integration models and other design specifications into source code.
- Ensure high quality and optimal performance of data integration systems to meet business solutions.
Job Requirements:
- Bachelor's degree (or foreign equivalent degree) in Information Technology, Information Systems, Computer Science, Software Engineering, or a related field.
- Experience in the financial services or banking industry is preferred.
- 5 years of experience working as a Data Engineer, with a focus on building data pipelines and processing large datasets using Informatica PowerCenter and IDMC.
- 2-3 years of experience working with Informatica Data Quality (IDQ).
- 5 years of experience working with data warehouses; able to write and understand complex SQL queries, PL/SQL scripts, and Unix scripts.
- 2-3 years of experience with the Snowflake Data Platform is highly desirable.
- Exposure to data virtualization platforms (Starburst, Denodo) is a plus.
- 1-2 years of proficiency in AWS services, including AWS Glue, Redshift, EMR, RDS, Kinesis, S3, Athena, DynamoDB, Step Functions, and Lambda.
- 1-2 years of expertise in Hive QL and Python programming, with experience using Spark and Scala for big data processing and analysis.
- Solid understanding of data modeling, database design, and ETL principles.
- Experience working with data lakes, data warehouses, and distributed computing systems.
- Familiarity with data governance, data security, and compliance practices in cloud environments.
- Strong problem-solving skills and the ability to optimize and fine-tune data pipelines and Spark jobs for performance.
- Excellent communication and collaboration skills, with the ability to work effectively in a team environment.
Responsibilities
Manage data analysis and integration across disparate systems, translating functional specifications into technical designs for implementation and deployment in cloud and on-premises environments. Responsibilities include extracting, transforming, and loading large data volumes, developing prototypes, performing testing, optimizing data pipelines, and ensuring data quality and integrity.