Module Lead - SQL Job at Yash Technologies
India
Full Time


Start Date

Immediate

Expiry Date

26 Dec, 2025

Salary

Not disclosed

Posted On

27 Sep, 2025

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

SQL, Snowflake, Data Visualization, Power BI, Tableau, Data Engineering, ETL Processes, Data Pipelines, Data Quality, Data Governance, Problem-Solving, Data Modelling, Performance Tuning, Cloud Platforms, Python, Agile Methodologies

Industry

IT Services and IT Consulting

Description
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation.

At YASH, we’re a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth: bringing real positive change in an increasingly virtual world, and it drives us beyond generational gaps and the disruptions of the future.

We are looking to hire SQL professionals in the following areas:

Job Description

This position is on the Connectivity Enabled Solutions Team in the Connectivity Department within the Client Digital organization. The team is accountable for building common datasets to support Connectivity initiatives, streamlining the reporting of data from various sources, enhancing business logic to improve data reporting, delivering visualized data in a consistent way for internal and external business users, and building solutions that help clients and dealers better support their customers.

We are seeking a skilled and motivated Data Engineer with 5-7 years of experience to join our team. The candidate should have hands-on experience in SQL development, Snowflake data warehousing, and data visualization tools such as Power BI and Tableau, and will be responsible for designing, building, and maintaining views, scalable data pipelines, and reporting solutions that support business intelligence and analytics initiatives.

Key Responsibilities
- Design, develop, and optimize robust data pipelines using SQL and Snowflake.
- Build and maintain data models and ETL processes to support analytics and reporting.
- Collaborate with business stakeholders to understand reporting needs and translate them into actionable dashboards.
- Develop and maintain interactive dashboards and reports using Power BI and Tableau.
- Ensure data quality, integrity, and governance across all data sources.
- Monitor performance and troubleshoot issues in data workflows and reporting layers.
- Document technical solutions and maintain best practices for data engineering.

Required Qualifications/Experience
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 5-7 years of experience in data engineering or a similar role.
- Strong proficiency in SQL for data manipulation and transformation, including relational databases such as MySQL and PostgreSQL as well as NoSQL databases.
- Big data and data modelling: experience with Snowflake or similar cloud data platforms.
- Hands-on experience in data modelling, ETL processes, data pipeline development, performance tuning, and data warehousing.
- Proven experience building dashboards with Power BI and Tableau.
- Power BI: data modelling, DAX, Power Query, report and dashboard design, data connectivity, performance optimization, row-level security.
- Tableau: data preparation, calculated fields, visual analytics, data blending and joins, parameters and actions, publishing and sharing.
- Advanced SQL, such as CTEs, functions, procedures, pivoting, window functions, and performance tuning (see the sketches after this list).
- Snowflake environment features such as tasks, procedures, streams, Snowpark, Streamlit, and AI/ML Studio.
- Excellent problem-solving skills and attention to detail.
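As a rough illustration of the "advanced SQL" named above, the sketch below combines a CTE with window functions; the sales table, its columns, and the dealer scenario are hypothetical and not part of this posting:

    -- Hypothetical table and columns, for illustration only.
    WITH monthly_sales AS (
        SELECT dealer_id,
               DATE_TRUNC('month', sold_at) AS sale_month,
               SUM(amount)                  AS total_sales
        FROM sales
        GROUP BY dealer_id, DATE_TRUNC('month', sold_at)
    )
    SELECT dealer_id,
           sale_month,
           total_sales,
           -- Rank dealers within each month by revenue.
           RANK() OVER (PARTITION BY sale_month ORDER BY total_sales DESC) AS month_rank,
           -- Running total of each dealer's revenue across months.
           SUM(total_sales) OVER (PARTITION BY dealer_id ORDER BY sale_month) AS running_total
    FROM monthly_sales
    ORDER BY sale_month, month_rank;

Likewise, a minimal sketch of the Snowflake stream-and-task pattern the posting mentions, with all object and warehouse names assumed for the example:

    -- Capture row-level changes on a source table (names hypothetical).
    CREATE OR REPLACE STREAM sales_stream ON TABLE raw.sales;

    -- A scheduled task that loads new rows only when the stream has data.
    CREATE OR REPLACE TASK load_sales_task
        WAREHOUSE = etl_wh
        SCHEDULE  = '15 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('sales_stream')
    AS
        INSERT INTO analytics.sales_clean (dealer_id, amount, sold_at)
        SELECT dealer_id, amount, sold_at
        FROM sales_stream
        WHERE METADATA$ACTION = 'INSERT';

    -- Tasks are created suspended; resume to start the schedule.
    ALTER TASK load_sales_task RESUME;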
Preferred (But Not Required) Experience
- Experience with cloud platforms such as AWS: message brokers such as Kafka, AWS SQS, AWS SNS, and Kinesis; AWS services such as Lambda functions, SQS, and Step Functions.
- Knowledge of scripting languages such as Python, including data libraries such as Pandas, NumPy, Airflow, PySpark, and Matplotlib.
- Understanding of CI/CD pipelines and version control systems: Git repositories, code versioning, and the development and integration of CI/CD pipelines.
- Container platforms and orchestration such as Docker, ECR, and Kubernetes.
- Exposure to Agile methodologies and tools like Azure DevOps.

At YASH, you are empowered to create a career that will take you where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided by technology, for continuous learning, unlearning, and relearning at a rapid pace and scale.

Our Hyperlearning workplace is grounded upon four principles:
- Flexible work arrangements, free spirit, and emotional positivity
- Agile self-determination, trust, transparency, and open collaboration
- All support needed for the realization of business goals
- Stable employment with a great atmosphere and an ethical corporate culture
Responsibilities
The role involves designing, developing, and optimizing data pipelines using SQL and Snowflake, as well as building and maintaining data models and ETL processes. The candidate will collaborate with stakeholders to create actionable dashboards and ensure data quality across all sources.