Foundry Python PySpark SSE at Impetus Technologies
Abu Dhabi, United Arab Emirates
Full Time


Start Date

Immediate

Expiry Date

14 Sep, 25

Salary

Not specified

Posted On

15 Jun, 25

Experience

4 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Version Control, Git, AWS, Python, Data Engineering, Algorithms, Software Development, Ontology, Computer Science, Optimization, Data Structures, Transformations, Object-Oriented Programming, Data Processing, Data Science, Azure

Industry

Information Technology/IT

Description

Abu Dhabi - United Arab Emirates
Qualifications:

  • Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, Data Science, or a related quantitative field.
  • Experience: 4+ years of professional experience in software development, with a strong focus on data engineering.
  • Python Proficiency: Expert-level proficiency in Python, including object-oriented programming, data structures, and algorithms.
  • PySpark Expertise: Strong experience with PySpark for large-scale data processing and transformations.
  • Palantir Foundry: Proven hands-on experience designing, developing, and deploying solutions within Palantir Foundry.
  • Familiarity with Foundry data integration patterns and best practices.
  • Experience with Foundry applications like Code Workbook, Pipeline Builder, Data Health, Ontology, and Transformations (Batch/Streaming).
  • SQL Skills: Excellent SQL skills for data querying, manipulation, and optimization.
  • Data Warehousing/Lakes: Experience with data warehousing concepts, data lake architectures, and ETL/ELT principles.
  • Cloud Platforms: Experience with at least one major cloud platform (AWS, Azure, GCP), particularly with data-related services.
  • Version Control: Strong experience with Git and collaborative development workflows.

Skills Required:
Big Data, PySpark, ETL, SQL, Palantir Foundry
Role:

Key Responsibilities:

  • Platform & Data Pipeline Development: Design, develop, test, and deploy robust, scalable, and efficient data pipelines using Python and PySpark within the Palantir Foundry ecosystem.
  • Foundry Expertise: Leverage Palantir Foundry’s capabilities (e.g., Code Workbook, Pipeline Builder, Contour, Data Health, Ontology) to ingest, transform, integrate, and manage complex datasets.
  • Data Transformation & Modeling: Implement complex data transformations and build sophisticated data models to support analytics, reporting, and machine learning initiatives.
  • Code Quality & Best Practices: Ensure high code quality, performance, and adherence to engineering best practices, including testing, documentation, and version control.
  • Troubleshooting & Optimization: Identify and resolve performance bottlenecks, data quality issues, and system failures within the data pipelines and Foundry environment.
  • Collaboration: Work closely with data scientists, data analysts, product managers, and other engineering teams to understand data requirements and translate them into technical solutions.
  • Security & Compliance: Ensure all data solutions comply with data governance, security, and privacy regulations.
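Much of the pipeline work described above reduces to filter-and-aggregate transformations over datasets. As a minimal sketch of that pattern, the snippet below uses plain Python only (no Spark runtime is assumed); in PySpark the same step would typically be expressed as `df.filter(...).groupBy(...).agg(...)`. All column names and sample values here are illustrative, not taken from any actual pipeline.

```python
# Minimal sketch of a filter-then-aggregate transform, the core pattern of
# many data pipelines. Column names ("region", "amount") and the sample
# rows are hypothetical.
from collections import defaultdict

def total_amount_by_region(rows, min_amount=0.0):
    """Keep rows with amount >= min_amount, then sum amounts per region."""
    totals = defaultdict(float)
    for row in rows:
        if row["amount"] >= min_amount:
            totals[row["region"]] += row["amount"]
    return dict(totals)

sample = [
    {"region": "AE", "amount": 120.0},
    {"region": "AE", "amount": 30.0},
    {"region": "SA", "amount": 75.0},
]
print(total_amount_by_region(sample, min_amount=50.0))  # → {'AE': 120.0, 'SA': 75.0}
```

Inside Palantir Foundry, the same logic would normally live in a Python transform registered against input and output datasets, so it can be scheduled, versioned, and monitored by the platform rather than run ad hoc.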

Experience:
4 to 6 years
Job Reference Number:
13112
