Data Engineer at Athens Technology Center
Thessaloniki, Macedonia and Thrace, Greece
Full Time


Start Date

Immediate

Expiry Date

24 Mar, 26

Salary

0.0

Posted On

24 Dec, 25

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Engineering, SQL, Python, Apache Spark, Data Warehousing, Data Modeling, Agile, Scrum, Data Cleaning, ETL, Machine Learning, AI, Custom Software Development, BI Tools, Databricks, Cloud Technologies

Industry

IT Services and IT Consulting

Description
Athens Technology Center seeks a Data Engineer (Athens / Thessaloniki / hybrid).

About Us
ATC is an Information Technology company offering solutions and services targeting specific sectors, including Media, Banking, Retail, Utilities, and Public Sector organisations. As a full-service software development company, we apply modern design principles along with the latest data science, machine learning, cloud, mobile, and desktop technologies. We strive to deliver quality software solutions for top clients and global leaders in numerous industries, while remaining at the forefront of research and innovation.

The Position
ATC is looking for a Data Engineer to join our Digital Solutions Unit. You will work with experienced developers to add value to ATC's line of products and services, following all stages of development, from design and implementation to testing and deployment. ATC's Digital Solutions (DS) Business Unit focuses mainly on projects in the private sector, both in Greece and abroad. DS covers a wide range of services, from AI and ML to Custom Software Development and Outsourcing.

Key Responsibilities
Assemble large, complex data sets that meet functional and non-functional business requirements.
Build the infrastructure required for optimal extraction, transformation, and loading (ETL/ELT) of data from a wide variety of data sources.
Design and implement big data processing jobs using Apache Spark.
Perform data cleaning, providing normalized and structured data for reporting and AI use.
Develop and maintain scalable data infrastructures with high availability, performance, and the capability to integrate new technologies.
Work with stakeholders to resolve data-related technical issues and support their data infrastructure needs.

Qualifications
Bachelor's degree in Computer Science, Engineering, Mathematics, Physics, Data Science, or a related field; a Master's degree is considered a plus.
2+ years of experience in a data-focused role such as Data Engineer, Analytics Engineer, Data Analyst, or Data Scientist.
Strong SQL skills, with the ability to navigate and analyze complex data models.
Working experience developing data pipelines using Python and Apache Spark (PySpark / Spark SQL).
Experience with data warehousing concepts and designing analytical data models.
Excellent spoken and written communication skills in English.
Familiarity with Agile and Scrum methodologies.

Nice to Have
Visualization tools: experience with BI tools such as Power BI or Tableau for creating dashboards and insights.
Data analytics cloud platforms: experience with Databricks is considered a strong asset; other cloud experience, such as Microsoft Fabric, is considered a plus.

Benefits
Competitive compensation package
Private health coverage
Experience and knowledge in diverse scientific areas, and the opportunity to explore a variety of topics
Tailored training programme and access to cutting-edge skills
Working with international teams and world-class institutions and clients
Flexibility in working conditions (a blend of teleworking and office work)
A friendly, pleasant, and creative working environment

If you are searching for a company and a team that takes your ideas and individual growth into account, recognizes you for your unique contributions, fills you with a sense of purpose, and provides a fun, flexible, and inclusive work environment, apply now. Interested candidates should send their CVs in English.
Responsibilities
The Data Engineer will assemble large, complex data sets and build the infrastructure for optimal data extraction, transformation, and loading. They will also design and implement big data processing jobs and maintain scalable data infrastructures.