Kafka / S3 Engineer at Capco Singapore
Poland -
Full Time


Start Date

Immediate

Expiry Date

03 May, 26

Salary

0.0

Posted On

02 Feb, 26

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Apache Kafka, Amazon S3, Java, Python, Scala, Data Pipeline Orchestration, Airflow, Apache NiFi, Data Security, Governance, Compliance, Agile

Industry

Financial Services

Description
CAPCO POLAND *We are looking for Poland-based candidates*

Capco Poland is a leading global technology and management consultancy dedicated to driving digital transformation across the financial services industry. Our passion lies in helping our clients navigate the complexities of the financial world, and our expertise spans banking and payments, capital markets, wealth, and asset management. We pride ourselves on maintaining a nimble, agile, and entrepreneurial culture, and we are committed to growing our business by hiring top talent.

Role Overview

As a Kafka / S3 Engineer, you will be responsible for designing, implementing, and maintaining real-time data streaming and cloud-based storage solutions. You will work closely with Data Engineers, Architects, and Application Teams to deliver reliable, scalable, and secure data pipelines supporting complex client environments.

Key Responsibilities

- Design, configure, and manage Apache Kafka clusters, topics, and consumer groups for real-time data streaming
- Develop and maintain data ingestion, processing, and streaming pipelines
- Integrate Kafka-based solutions with Amazon S3 for durable data storage and efficient retrieval
- Monitor, troubleshoot, and optimize system performance, data flows, and latency
- Implement data security, compliance, and disaster recovery best practices
- Collaborate with cross-functional teams to ensure end-to-end data solution alignment
- Support production environments and contribute to continuous improvement initiatives

Essential Skills & Experience

- Hands-on experience with Apache Kafka (cluster setup, tuning, monitoring, and troubleshooting)
- Strong experience working with Amazon S3 and cloud-based storage services
- Proficiency in at least one programming language: Java, Python, or Scala
- Experience with data pipeline orchestration tools such as Airflow or Apache NiFi
- Solid understanding of data security, governance, and compliance in data engineering environments
- Experience working in Agile delivery teams

We offer a flexible collaboration model based on a B2B contract, with the opportunity to work on diverse projects.
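To give a flavour of the Kafka-to-S3 integration work described above: when streamed records are flushed to S3, each batch needs a deterministic object key derived from its topic, partition, and starting offset. Below is a minimal sketch of that key-layout logic in Python, loosely following the default naming convention used by the Kafka Connect S3 sink (`topics/<topic>/partition=<p>/<topic>+<p>+<startOffset>`); the function name, prefix, and 10-digit offset padding are illustrative assumptions, not details from this posting.

```python
# Sketch: deterministic S3 object keys for flushed Kafka record batches.
# Layout loosely follows the Kafka Connect S3 sink default; names and
# padding width here are illustrative assumptions.

def s3_key(topic: str, partition: int, start_offset: int,
           prefix: str = "topics", ext: str = "json") -> str:
    """Build the S3 key for a batch of records starting at start_offset.

    Partition-based prefixes keep objects from one topic-partition
    together, and the zero-padded start offset makes keys sort in
    consumption order within a partition.
    """
    return (f"{prefix}/{topic}/partition={partition}/"
            f"{topic}+{partition}+{start_offset:010d}.{ext}")


# Example: the key for a batch from topic "payments", partition 3,
# whose first record has offset 42.
print(s3_key("payments", 3, 42))
```

Keys built this way are idempotent: re-flushing the same batch after a failure overwrites the same object rather than duplicating data, which is one common building block for exactly-once delivery into S3.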

How To Apply:

In case you would like to apply for this job directly from the source, please click here

Responsibilities
As a Kafka / S3 Engineer, you will design, implement, and maintain real-time data streaming and cloud-based storage solutions. You will collaborate with Data Engineers, Architects, and Application Teams to deliver reliable and scalable data pipelines.