Software Engineer, Python/Go (Data+Backend) at Kronos Research
Taiwan - Full Time


Start Date

Immediate

Expiry Date

15 Feb, 26

Salary

0.0

Posted On

17 Nov, 25

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Python, Go, Data Pipelines, Data Warehousing, Cloud Technologies, AWS, Azure, GCP, Data Modeling, Database Design, Problem-Solving, Analytical Skills, C/C++, Java, Machine Learning, Data Integration

Industry

Software Development

Description
Job Description

We are looking for a talented Software Engineer to take over and improve our existing data pipelines. As a Software Engineer at Kronos, you will have the opportunity to learn how data drives success at the heart of the quantitative trading industry. Successful candidates should have experience building large-scale systems and data-processing pipelines using both on-premise and cloud-based technologies.

Responsibilities

- Design, build, and maintain all domain-related data pipelines and data warehouses, and optimize existing pipelines
- Develop solutions to easily import and integrate data from third-party data vendors and exchange private endpoints into our existing data inventory
- Deploy and manage the daily operation of our market data loggers, ensuring data is stored efficiently and securely across cloud and on-premise solutions (S3/NAS/DFS)
- Work closely with the Infra team and quant researchers to understand data needs
- Set up and maintain data pipelines that prepare raw market data for machine learning research and simulations
- Design and develop tools and interfaces that improve the quality, integrity, and availability of all our data

Qualifications

- Bachelor's degree in Computer Science, Mathematics, Machine Learning, or a related field
- 3+ years of relevant industry experience
- Excellent problem-solving and analytical skills, with a focus on identifying and addressing complex data engineering challenges
- Strong experience designing and developing scalable cloud-native software solutions, using cloud technologies (e.g., AWS, Azure, GCP) for data storage, processing, and analytics
- Proven track record of building large-scale systems and data-processing pipelines
- Strong written and verbal communication skills
- Resumes accepted in English only

Preferred Qualifications

- Expertise in data modeling and database design principles
- A solid foundation in programming languages such as C/C++, Go, Java, or Python
- Strong knowledge of building and maintaining data pipelines

How To Apply:

If you would like to apply to this job directly from the source, please click here
