Big Data Scala Engineer, Vice President at Citi
Pune, Maharashtra, India
Full Time


Start Date

Immediate

Expiry Date

02 Mar, 26

Salary

0.0

Posted On

02 Dec, 25

Experience

10 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Scala, Apache Spark, Big Data, Distributed Systems, Cloud Platforms, Data Warehousing, ETL/ELT Processes, Data Modeling, Functional Programming, Problem-Solving, Leadership, Communication, Real-Time Data Processing, Machine Learning, Open-Source Contributions, Financial Services

Industry

Financial Services

Description
Lead the architecture, design, and development of high-performance, scalable, and reliable big data processing systems using Scala and Apache Spark.
Drive the technical vision and strategy for big data initiatives, ensuring alignment with overall company goals and industry best practices.
Evaluate and recommend new technologies and tools to enhance our big data capabilities and maintain a competitive edge.
Mentor and guide a team of talented big data engineers, fostering a culture of technical excellence, continuous learning, and collaboration.
Conduct code reviews, provide constructive feedback, and contribute to the professional growth of team members.
Participate in the recruitment and hiring of top engineering talent.

Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
12+ years of progressive experience in software development, with at least 5 years focused on big data technologies.
3+ years of experience in a leadership or senior architectural role.
Extensive hands-on experience with Scala for big data processing.
Demonstrated expertise with Apache Spark (Spark Core, Spark SQL, Spark Streaming).
Strong experience with distributed systems and big data ecosystems (e.g., Hadoop, Kafka, Cassandra, HBase, Delta Lake, Snowflake, Databricks).
Proficiency with cloud platforms (AWS, Azure, GCP) and their big data services (e.g., EMR, Redshift, Glue, Dataproc, BigQuery).
Experience with containerization technologies (Docker, Kubernetes) and CI/CD pipelines.
Solid understanding of data warehousing concepts, ETL/ELT processes, and data modeling.
Familiarity with functional programming paradigms in Scala.
Exceptional problem-solving and analytical skills.
Strong leadership, communication, and interpersonal skills.
Ability to work independently and collaboratively in a fast-paced, dynamic environment.
Proactive and results-oriented with a strong sense of ownership.
Experience with real-time data processing and stream analytics.
Knowledge of machine learning frameworks and their application in big data.
Contributions to open-source big data projects.
Experience in the financial services industry.

Design, develop, and optimize data pipelines for the ingestion, transformation, and storage of massive datasets from various sources.
Implement robust and efficient data processing jobs using Scala and Spark (batch and streaming).
Ensure data quality, integrity, and security across all big data platforms.
Work closely with DevOps and SRE teams to ensure operational excellence, monitoring, and troubleshooting of big data systems.
Contribute to the strategic roadmap for big data engineering, identifying opportunities for innovation and improvement.
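As an illustration of the pipeline work described above (ingest, transform, store with Scala and Spark), a minimal batch job might look like the following sketch. All dataset names, column names, and storage paths here are hypothetical examples, not details from the posting:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object TradeAggregationJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("TradeAggregationJob")
      .getOrCreate()

    // Ingest: read raw trade records (hypothetical path and schema)
    val trades = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3://example-bucket/raw/trades/")

    // Transform: aggregate notional value per instrument per day
    val daily = trades
      .groupBy(col("instrument_id"), to_date(col("trade_ts")).as("trade_date"))
      .agg(
        sum(col("notional")).as("total_notional"),
        count(lit(1)).as("trade_count")
      )

    // Store: write partitioned Parquet for downstream consumption
    daily.write
      .mode("overwrite")
      .partitionBy("trade_date")
      .parquet("s3://example-bucket/curated/daily_trade_summary/")

    spark.stop()
  }
}
```

A job in this shape is typically packaged as a fat JAR and submitted via spark-submit to a cluster (e.g., EMR or Databricks, both named in the posting).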
Responsibilities
Lead the architecture, design, and development of big data processing systems using Scala and Apache Spark. Mentor a team of engineers and drive the technical vision for big data initiatives.