Data Engineer at BrainRocket
Valencia, Valencian Community, Spain
Full Time


Start Date

Immediate

Expiry Date

08 Feb, 26

Salary

0.0

Posted On

10 Nov, 25

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

Yes

Skills

Python, SQL, Data Manipulation, Data Transformation, Data Warehouse, Data Modelling, Apache Airflow, Snowflake, Redshift, Kafka, Streaming Data, Real-Time Data Processing, Integration, Monitoring, Collaboration, Data Pipelines

Industry

Software Development

Description
BrainRocket is a global company creating end-to-end tech products for clients across Fintech, iGaming, and Marketing. Young, ambitious, and unstoppable, we've already taken Cyprus, Malta, Portugal, Poland, and Serbia by storm. Our BRO team consists of 1,300 bright minds creating innovative ideas and products. We don't follow formats. We shape them. We build what works, launch it fast, and make sure it hits.

❗️Please note that this role is office-based in Valencia, Spain (Carrer de Catarroja, 13, 46940 Manises).
❗️We can provide relocation assistance if you're outside of the city or country.

We are seeking a highly skilled Data Engineer with expertise in managing, designing, and optimizing data pipelines using Apache Airflow, Snowflake, and Apache Kafka. This individual will play a pivotal role in architecting robust, scalable, and efficient data solutions, ensuring the integrity, reliability, and accessibility of our data infrastructure.

✅ Responsibilities:
Develop and implement data models to support business requirements, optimizing for performance and scalability;
Design, build, and maintain scalable data pipelines using Apache Airflow (see the pipeline sketch after this description);
Implement and maintain Kafka-based streaming data pipelines for real-time data processing and integration with various systems;
Integrate with third-party databases and APIs;
Establish monitoring, alerting, and maintenance procedures to ensure the health and reliability of data pipelines;
Collaborate with cross-functional teams, including data scientists, analysts, and stakeholders, to understand data requirements.

✅ Requirements:
Proficiency in Python and SQL, with experience in data manipulation and transformation;
Knowledge of data warehouse and data modelling techniques;
Experience in designing, building, and maintaining complex data pipelines using Airflow;
Proven track record in data engineering roles, with a focus on designing and implementing scalable data solutions using Snowflake or Redshift;
In-depth understanding and practical experience in implementing Kafka-based streaming architectures for real-time data processing.

✅ We offer excellent benefits, including but not limited to:
🏥 Six additional undocumented sick leave days;
🏥 Medical insurance;
🥳 Celebrations of birthdays, milestones, and employee anniversaries;
🏢 Modern offices with snacks and all the essentials;
🎉 Social Club and more than 50 events per year;
🍳 Partial coverage of breakfasts and lunches;
💻 Learning and development opportunities and interesting, challenging tasks;
✈️ Relocation package (tickets, up to 2 weeks in a hotel, and visa/relocation support for our employees and their family members);
📚 Opportunity to develop language skills, with partial compensation for the cost of English lessons;
📈 Competitive remuneration level with annual review;
🤝 Team-building activities.

Bold moves start here. Make yours. Apply today!
By submitting your application, you agree to our Privacy Policy.
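For illustration, here is a minimal sketch of the kind of Airflow pipeline the role describes: a daily DAG that extracts data from an external source and loads it into a warehouse. The DAG name, task names, and placeholder callables are assumptions made for this example, not part of the posting; a production pipeline would use the team's own operators, connections, and Snowflake loading logic.

```python
# Minimal sketch of a daily extract-and-load DAG; names are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_events(**context):
    """Pull raw events from a third-party API (placeholder logic)."""
    # In a real pipeline this would call the external API and land the
    # payload in object storage or a staging table.
    return {"rows_extracted": 0}


def load_to_snowflake(**context):
    """Load the staged data into the warehouse (placeholder logic)."""
    # In production this step would typically run COPY INTO via a
    # Snowflake operator or hook rather than plain Python.
    return {"rows_loaded": 0}


with DAG(
    dag_id="events_daily",            # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                # use schedule_interval on older Airflow versions
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_events", python_callable=extract_events)
    load = PythonOperator(task_id="load_to_snowflake", python_callable=load_to_snowflake)

    extract >> load  # load runs only after extraction succeeds
```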
Responsibilities
The Data Engineer will develop and implement data models to support business requirements and design, build, and maintain scalable data pipelines. This role involves ensuring the integrity, reliability, and accessibility of the data infrastructure.
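As a rough sketch of the real-time side of these responsibilities, the snippet below consumes JSON events from a Kafka topic using the kafka-python client. The topic name, broker address, and consumer group are illustrative assumptions; downstream transformation and warehouse loading are only hinted at in the comments.

```python
# Minimal Kafka consumer sketch for the streaming work described above.
import json

from kafka import KafkaConsumer  # kafka-python client

consumer = KafkaConsumer(
    "user-events",                          # hypothetical topic name
    bootstrap_servers="localhost:9092",     # assumed broker address
    group_id="data-eng-demo",               # assumed consumer group
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Downstream, events like this would be transformed and written to a
    # warehouse table (e.g. Snowflake) or republished to another topic.
    print(message.topic, message.partition, message.offset, event)
```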