Big Data Engineer at Müller's Solutions
Delhi, Delhi, India
Full Time


Start Date

Immediate

Expiry Date

02 Jun, 26

Salary

0.0

Posted On

04 Mar, 26

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

Yes

Skills

Apache Spark, Hadoop, Kafka, Java, Scala, Python, ETL, AWS, Azure, Google Cloud, HDFS, S3, BigQuery, Data Validation, Data Architecture

Industry

IT Services and IT Consulting

Description
Müller's Solutions is looking for an experienced Big Data Engineer to join our innovative team. In this role, you will be responsible for designing and developing scalable big data solutions that enable advanced analytics and insights across the organization. You will work with large data sets and utilize cutting-edge technologies to ensure data processing is efficient and reliable.

Key Responsibilities:
- Design, build, and maintain scalable data pipelines using big data technologies such as Apache Spark, Hadoop, and Kafka.
- Collaborate with data scientists, analysts, and IT teams to identify data requirements and deliver solutions that meet business needs.
- Ensure high data quality and integrity by implementing robust data validation and testing processes.
- Optimize data storage solutions for performance and cost-efficiency across cloud and on-premises environments.
- Monitor and troubleshoot data processing workflows to ensure timely data delivery.
- Document data architectures, processes, and workflows for knowledge sharing and compliance.
- Stay ahead of industry trends and best practices in big data technologies and methodologies.

Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3+ years of experience as a Big Data Engineer or in a similar data engineering role.
- Proficiency in big data technologies such as Apache Spark, Hadoop, and Apache Kafka.
- Strong coding skills in programming languages like Java, Scala, or Python.
- Familiarity with data processing frameworks and ETL tools.
- Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and data storage solutions (e.g., HDFS, S3, BigQuery).
- Analytical mindset and excellent problem-solving abilities.
- Strong communication skills to collaborate effectively with technical and non-technical stakeholders.
- A self-motivated and proactive approach to work, with the ability to manage multiple tasks and deadlines.
Preferred Qualifications:
- Experience with machine learning frameworks and algorithms is a plus.
- Knowledge of data governance and security best practices.
- Familiarity with containerization technologies and orchestration tools like Docker and Kubernetes.

Benefits:
1. Attractive Package.
2. Family Benefits.
3. Visa.
4. Air Tickets.
Responsibilities
The role involves designing and developing scalable big data solutions using technologies like Apache Spark, Hadoop, and Kafka to support advanced analytics. Key duties include collaborating with stakeholders, ensuring data quality, optimizing storage, and monitoring data workflows.