Start Date
Immediate
Expiry Date
08 Dec, 25
Salary
$90,000.00 per year
Posted On
09 Sep, 25
Experience
5 year(s) or above
Remote Job
Yes
Telecommute
Yes
Sponsor Visa
No
Skills
Azure, Python
Industry
Information Technology/IT
PRIMARY OR MANDATORY SKILLS:
Azure, Python
GOOD TO HAVE SKILLS:
Data lake, Azure, Telecom Domain Knowledge
DETAILED JOB DESCRIPTION:
Minimum 5 years of experience in Enterprise Data Warehouse solutioning, with exposure to Databricks, Big Data technology stacks such as Cloudera and HBase, and analytics tools such as Python and PySpark.
· In-depth knowledge of Teradata utilities and macros
· Strong SQL analytical skills
· Experience in Databricks and DBT
· Knowledge of Power BI
· Good to have: knowledge of Airflow scheduling (see the Airflow sketch after this list)
· Involved in business requirement gathering, requirement analysis, design, solution walkthroughs, and workshops; identify gaps between the solution and business requirements together with business and IT teams
· Basic knowledge of Spark using Scala or Python, and the ability to optimize the performance of Spark applications on a Big Data platform (see the PySpark sketch after this list)
· Strong analytical mindset and the ability to work independently in a fast-paced, quickly changing environment
· Work on and continuously improve the DevOps pipeline and tooling to actively manage the continuous integration/continuous deployment (CI/CD) processes
· Good to have: experience with any ETL tool
· Experience working in an Agile delivery model
· Prepare test strategies, test plans, reports, manuals, and other documentation on the status, operation, and maintenance of data warehousing applications
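For illustration, a minimal PySpark sketch of the kind of aggregation and performance tuning the description asks for; the dataset paths, column names (subscriber_id, event_ts, data_mb), and the shuffle-partition value are hypothetical examples, not details from this posting.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("usage-aggregation").getOrCreate()

# Hypothetical Parquet dataset in the data lake.
events = spark.read.parquet("/data/lake/usage_events")

# Aggregate daily data volume per subscriber.
daily_usage = (
    events
    .groupBy("subscriber_id", F.to_date("event_ts").alias("event_date"))
    .agg(F.sum("data_mb").alias("total_mb"))
)

# Two common performance levers: tune shuffle parallelism and cache a
# DataFrame that is reused downstream (the values are illustrative).
spark.conf.set("spark.sql.shuffle.partitions", "200")
daily_usage.cache()

daily_usage.write.mode("overwrite").parquet("/data/lake/daily_usage")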
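Likewise, a minimal Airflow scheduling sketch, assuming Airflow 2.4 or later; the DAG id, schedule, and task callables are hypothetical placeholders.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder for a real extraction step (e.g., pulling from Teradata).
    print("extracting")

def load():
    # Placeholder for a real load step (e.g., writing to the warehouse).
    print("loading")

with DAG(
    dag_id="edw_daily_load",           # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                 # 'schedule' requires Airflow 2.4+
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task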
Job Type: Full-time
Pay: $76,000.00 – $90,000.00 per year
Work Location: In person
Please refer to the job description for details.