Senior Associate Developer - Databricks, PySpark at Datavail Career Site
Mumbai, Maharashtra, India - Full Time


Start Date

Immediate

Expiry Date

24 Jun, 26

Salary

0.0

Posted On

26 Mar, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Databricks, PySpark, Spark SQL, Delta Lake, Bronze–Silver–Gold Architecture, Lakehouse Patterns, AWS, Azure, GCP, Data Warehousing, Dimensional Modeling, Big-Data Concepts, ETL/ELT Pipelines, Spark Streaming, Unity Catalog, CI/CD Pipelines

Industry

IT Services and IT Consulting

Description
Job Title: Senior Associate Developer - Databricks, PySpark, and Spark SQL
Education: Any Graduate
Experience: 5+ years
Location: Mumbai

Key Skills:
* Strong hands-on experience with Databricks, PySpark, and Spark SQL.
* Expertise in Delta Lake, Bronze–Silver–Gold architecture, and Lakehouse patterns.
* Strong experience with cloud platforms (AWS/Azure/GCP).
* Solid understanding of data warehousing, dimensional modeling, and big-data concepts.

Job Description:
* Build scalable ETL/ELT pipelines using Databricks (PySpark, SQL, Spark Streaming).
* Develop and optimize Delta Lake tables, ACID transactions, schema evolution, and time travel.
* Implement Unity Catalog, data governance, and access control.
* Optimize cluster configurations, job workflows, and performance tuning in Databricks.
* Design and implement batch and streaming pipelines using Spark Structured Streaming.
* Integrate Databricks with multiple data sources (RDBMS, APIs, cloud storage, message queues).
* Develop reusable, modular, and automated data processing frameworks.
* Implement CI/CD pipelines for Databricks using GitHub Actions / Azure DevOps / GitLab.
* Automate cluster management and job orchestration using the Databricks REST APIs.
* Maintain code quality, unit tests, and documentation.
* Write and optimize complex SQL queries and statements to ensure high performance and efficient data retrieval.
* Strong database design skills, including normalization, data modeling, and relational schema creation.
* Conduct performance analysis, troubleshoot database issues such as slow queries or deadlocks, and implement solutions.
* Design and implement database structures, including tables, schemas, views, stored procedures, functions, and triggers.
* Optimize database performance through query tuning, indexing, and performance analysis.
* Ensure data integrity, security, and compliance standards.
* Strong Python skills combined with expertise in Apache Spark for large-scale data processing are required.
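To give a flavor of the job-orchestration duties above, here is a minimal sketch of creating a notebook job via the Databricks Jobs API 2.1. The notebook path, node type, and Spark runtime version are hypothetical placeholders, and `host`/`token` are assumed workspace credentials; this is an illustration, not the team's actual framework.

```python
import json
import urllib.request

def job_payload(notebook_path, node_type="i3.xlarge", workers=2):
    """Build a Jobs API 2.1 create-job payload for a single notebook task.
    Cluster settings here (runtime version, node type) are illustrative."""
    return {
        "name": f"etl-{notebook_path.rsplit('/', 1)[-1]}",
        "tasks": [{
            "task_key": "main",
            "notebook_task": {"notebook_path": notebook_path},
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",  # assumed LTS runtime
                "node_type_id": node_type,
                "num_workers": workers,
            },
        }],
    }

def create_job(host, token, payload):
    """POST the payload to /api/2.1/jobs/create and return the new job_id."""
    req = urllib.request.Request(
        f"{host}/api/2.1/jobs/create",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["job_id"]
```

Separating payload construction from the HTTP call keeps the payload logic unit-testable without a live workspace, which matches the posting's emphasis on reusable, tested automation.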
Core abilities include building efficient ETL pipelines, optimizing distributed jobs, and handling large-scale data transformations.
* Expertise in Python programming, Spark APIs, and parallel processing.
* Proficiency in Python (including Pandas and NumPy) for data manipulation and scripting.
* Deep knowledge of PySpark APIs such as DataFrames, RDDs, and Spark SQL for querying and processing.
* Familiarity with RESTful APIs, batch processing, CI/CD, and monitoring of data jobs.
* Optimize Spark jobs for performance, troubleshoot issues, and ensure data quality across systems.
* Collaborate with data engineers and scientists to implement workflows, conduct code reviews, and integrate with cloud platforms such as AWS or Azure.
* Design, develop, and maintain scalable data pipelines and ETL processes using Azure Databricks.
* Build data transformation workflows using Python or Scala.
* Work with data lakes using Delta Lake.
* Integrate data from multiple sources such as APIs, databases, and cloud storage.
* Monitor and optimize data workflows for performance and reliability.
* Collaborate with data scientists, analysts, and business teams.

About Datavail: Datavail is a leading provider of data management, application development, analytics, and cloud services, with more than 1,000 professionals helping clients build and manage applications and data via a world-class, tech-enabled delivery platform and software solutions across all leading technologies. For more than 17 years, Datavail has worked with thousands of companies spanning different industries and sizes, and is an AWS Advanced Tier Consulting Partner, a Microsoft Solutions Partner for Data & AI and Digital & App Innovation (Azure), an Oracle Partner, and a MySQL Partner.
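The Python data-manipulation proficiency listed above (deduplication, quality gates, aggregation) can be sketched with a small pandas example. The column names and records are hypothetical, chosen only to illustrate the kind of cleansing step a Bronze-to-Silver pipeline performs:

```python
import pandas as pd

# Hypothetical raw feed: a replayed event (duplicate id) and a missing timestamp.
raw = pd.DataFrame({
    "event_id": [1, 1, 2, 3],
    "event_ts": ["2026-03-01", "2026-03-01", None, "2026-03-02"],
    "amount": [10.0, 10.0, 5.0, 7.5],
})

clean = (
    raw.drop_duplicates(subset="event_id")   # keep the first copy of each event
       .dropna(subset=["event_ts"])          # drop records failing the quality gate
       .assign(event_ts=lambda d: pd.to_datetime(d["event_ts"]))
)

# Daily revenue rollup, the kind of aggregate a Gold layer would serve.
daily = clean.groupby(clean["event_ts"].dt.date)["amount"].sum()
```

The same chain translates almost line-for-line to PySpark (`dropDuplicates`, `dropna`, `groupBy(...).sum()`), which is why pandas fluency is a useful stepping stone to the Spark work described in this role.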
Datavail’s Data Management Services: Datavail’s Data Management and Analytics practice is made up of experts who provide a variety of data services, including initial consulting and development, designing and building complete data systems, and ongoing support and management of database, data warehouse, data lake, data integration, and virtualization and reporting environments. Datavail’s team is composed not just of excellent BI and analytics consultants, but of great people as well. Datavail’s data intelligence consultants are experienced, knowledgeable, and certified in best-in-breed BI and analytics software applications and technologies. We ascertain your business objectives, goals, and requirements, assess your environment, and recommend the tools that best fit your unique situation. Our proven methodology can help your project succeed, regardless of stage. The combination of a proven delivery model and top-notch experience ensures that Datavail will remain the on-demand data management experts you desire. Datavail’s flexible, client-focused services always add value to your organization.
Responsibilities
The role involves building scalable ETL/ELT pipelines using Databricks technologies like PySpark and Spark SQL, focusing on developing and optimizing Delta Lake tables, implementing governance, and tuning performance.