Data Engineer at Alber Blanc Capital
Full Time


Start Date

Immediate

Expiry Date

27 Apr, 26

Salary

Not specified

Posted On

27 Jan, 26

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Python, Orchestrators, Databases, Machine Learning, Data Pipelines, Data Quality, Data Storage, Data Retrieval, Data Lakes, ETL/ELT, Orchestration, CI/CD, Probability, Statistics, Algorithms, Pandas

Industry

Technology; Information and Internet

Description
As a Data Engineer at Alber Blanc Capital, you will work with orchestrators and databases alongside our Machine Learning team. Strong Python skills and a deep understanding of its libraries and data formats make the best fit for this position.

Responsibilities
- Design, build, and maintain scalable data pipelines for large datasets
- Own data quality end-to-end
- Develop and optimize data storage and retrieval layers across data lakes
- Implement robust ETL/ELT workflows, orchestration, and CI/CD for data jobs
- Partner closely with other teams to define data requirements, improve data usability, and reduce time-to-insight
- Continuously evaluate and adopt modern data tools and architectures

Requirements
- STEM degree from a top-tier university OR a proven track record as a Data Analyst/Researcher/Data Scientist
- Advanced understanding of probability, statistics, mathematics, and algorithms
- Strong analytical skills for working with large datasets
- Confident, hands-on Python with common libraries (pandas, polars, etc.)

Nice to have
- Personal achievements such as ICPC, IMC, IOI, Codeforces red+, Kaggle, etc.
- Experience with distributed computing frameworks (e.g., Ray)

What we offer
- Result-oriented bonuses along with an exceptional salary
- Transparent processes and no red tape
- A competitive environment with the opportunity to make decisions and shape the company
- Some of the strongest expertise on the market
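To give a concrete flavor of the pipeline and data-quality responsibilities listed above, here is a minimal sketch in Python with pandas. It is not part of the posting: the file names (trades.csv, daily_summary.parquet), columns (ts, symbol, price, qty), and checks are hypothetical illustrations of an extract-validate-transform-load flow.

```python
# Illustrative extract -> validate -> transform -> load pipeline.
# All names and thresholds here are invented for the example.
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    # Read a raw dump; in practice this might be a data-lake query.
    return pd.read_csv(path, parse_dates=["ts"])

def validate(df: pd.DataFrame) -> pd.DataFrame:
    # Basic data-quality gates: required schema, sane values, no dupes.
    required = {"ts", "symbol", "price", "qty"}
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {missing}")
    if df["price"].le(0).any():
        raise ValueError("non-positive prices found")
    return df.dropna(subset=["ts", "symbol"]).drop_duplicates()

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Aggregate to a daily per-symbol summary table.
    return (
        df.assign(day=df["ts"].dt.date)
          .groupby(["day", "symbol"], as_index=False)
          .agg(volume=("qty", "sum"), avg_price=("price", "mean"))
    )

def load(df: pd.DataFrame, path: str) -> None:
    # Parquet is a common lake format; needs pyarrow or fastparquet.
    df.to_parquet(path, index=False)

if __name__ == "__main__":
    raw = extract("trades.csv")  # hypothetical input file
    load(transform(validate(raw)), "daily_summary.parquet")
```

A production version of the "own data quality end-to-end" duty would typically move these checks into a dedicated validation layer with alerting, rather than inline assertions.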
Responsibilities
The Data Engineer will be responsible for designing, building, and maintaining scalable data pipelines to handle large datasets, while also owning data quality end-to-end. This role involves developing and optimizing data storage and retrieval layers across data lakes and implementing robust ETL/ELT workflows.
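The role also calls for orchestrating data jobs. As a toy illustration of that idea (not any particular orchestrator's API), the sketch below runs pipeline steps in dependency order using Python's standard-library graphlib; the step names and DAG edges are invented for the example.

```python
# Toy task orchestration: execute pipeline steps in dependency order.
# Real systems use a production orchestrator (Airflow, Dagster, etc.)
# with retries, scheduling, and observability; this shows only the core.
from graphlib import TopologicalSorter

def extract():   print("extract: pull raw data")
def validate():  print("validate: run data-quality checks")
def transform(): print("transform: build derived tables")
def load():      print("load: write to the data lake")

# Each task maps to the set of tasks it depends on.
dag = {
    "validate":  {"extract"},
    "transform": {"validate"},
    "load":      {"transform"},
}
tasks = {"extract": extract, "validate": validate,
         "transform": transform, "load": load}

for name in TopologicalSorter(dag).static_order():
    tasks[name]()  # runs extract -> validate -> transform -> load
```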