Start Date
Immediate
Expiry Date
10 Nov, 25
Salary
8500
Posted On
11 Aug, 25
Experience
0 year(s) or above
Remote Job
Yes
Telecommute
Yes
Sponsor Visa
No
Skills
Good communication skills
Industry
Information Technology/IT
Key Responsibilities:
Design, build, and manage scalable ETL pipelines using Talend, Spark, and SQL-based tools.
Manage data extraction, transformation, and loading from diverse sources including Oracle, Teradata, and SQL Server.
Perform database schema design, optimization, and performance tuning in Hive, Oracle, and SQL Server.
Develop and maintain data lakes and data marts using Hadoop, Hive, and HDFS.
Collaborate with cross-functional teams to define requirements and deliver timely, insightful reporting dashboards via Power BI.
Automate data ingestion and reporting processes using Python, Shell scripting, and Control-M.
Ensure data integrity, quality, and compliance with best practices and regulatory requirements.
Requirements:
8+ years of experience in Data Engineering or Database Development.
Strong expertise in Hive, Spark, Talend, HDFS, SQL, and Python.
Proficient in developing Power BI dashboards with DAX, Power Query, and complex KPIs.
Familiarity with Control-M, Unix/Linux environments, and Agile methodologies.
Knowledge of data modeling, metadata management, and large-volume data handling.
Hands-on experience with RDBMS (Oracle, Teradata, SQL Server) and NoSQL databases.
Experience in automation testing using Selenium and Java.
Knowledge of compliance standards such as FDA 21 CFR Part 11 and ISO 9001.
How To Apply:
In case you would like to apply to this job directly from the source, please click here
Please refer to the job description for details