Data Architect (Remote - India) at Jobgether
India - Full Time


Start Date

Immediate

Expiry Date

15 Jan 2026

Salary

Not disclosed

Posted On

17 Oct 2025

Experience

5+ years

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Architecture, ETL Processes, Data Modeling, Big Data, Python, SQL, Hadoop, Spark, PySpark, Kafka, Cloud Platforms, Databricks, AI/ML, Problem-Solving, Analytical Mindset, Collaboration

Industry

Internet Marketplace Platforms

Description
This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Data Architect based in India. As a Data Architect, you will play a key role in designing, building, and maintaining scalable data solutions that support critical business operations and analytics initiatives. You will collaborate with cross-functional teams to implement robust data models, ETL pipelines, and big data solutions while ensuring performance, reliability, and security. The role offers the opportunity to work on cutting-edge projects involving cloud platforms, AI/ML integration, and modern data engineering tools. You will drive innovation in data management, help solve complex data challenges, and contribute to the development of next-generation data systems in a collaborative and dynamic environment. This position is ideal for individuals passionate about data architecture, technology evolution, and continuous learning.

Accountabilities

- Design, develop, and maintain large-scale data systems and architecture.
- Develop and implement ETL processes using tools such as Teradata, Informatica, ADF, or Snowflake.
- Collaborate with cross-functional teams to design and implement optimized data models.
- Work with big data technologies including Hadoop, Spark, PySpark, and Kafka to create scalable solutions.
- Build efficient, high-performance data pipelines and troubleshoot data-related issues.
- Transition and upskill into Databricks and AI/ML projects to support future initiatives.
- Ensure data quality, security, and compliance with industry best practices.

Requirements

- Proven experience in data engineering or data architecture roles.
- Strong proficiency in Python, SQL, ETL processes, and data modeling.
- Hands-on experience with one or more of the following: Teradata, Informatica, Hadoop, Spark, PySpark, ADF, Snowflake, Big Data, Scala, Kafka.
- Cloud platform knowledge (AWS, Azure, or GCP) is a plus.
- Willingness to learn and adapt to new technologies, specifically Databricks and AI/ML solutions.
- Strong problem-solving skills, an analytical mindset, and the ability to work collaboratively across teams.
- Nice to have: experience with Databricks, AI/ML tools, and relevant technology certifications.

Benefits

- Competitive salary and performance-based benefits.
- Opportunities to work on innovative and cutting-edge projects.
- Collaborative, dynamic, and inclusive work environment.
- Professional growth, upskilling, and continuous learning opportunities.
- Remote work options and flexible working hours.

About Jobgether

Jobgether is a talent-matching platform that partners with companies worldwide to efficiently connect top talent with the right opportunities through AI-driven job matching. When you apply, your profile goes through our AI-powered screening process, designed to identify top talent efficiently and fairly.

🔍 Our AI evaluates your CV and LinkedIn profile thoroughly, analyzing your skills, experience, and achievements.
📊 It compares your profile to the job's core requirements and past success factors to determine your match score.
🎯 Based on this analysis, we automatically shortlist the three candidates with the highest match to the role.
🧠 When necessary, our human team may perform an additional manual review to ensure no strong profile is missed.

The process is transparent, skills-based, and free of bias, focusing solely on your fit for the role. Once the shortlist is complete, we share it directly with the company that owns the job opening. The final decision and next steps (such as interviews or assessments) are then made by their internal hiring team. Thank you for your interest!

#LI-CL1
Responsibilities
Design, build, and maintain scalable data solutions that support business operations and analytics initiatives. Collaborate with teams to implement data models, ETL pipelines, and big data solutions while ensuring performance and security.