Sr Data Engineer
at SCALEUP
Work from home, Río Negro, Argentina
Start Date | Expiry Date | Salary | Posted On | Experience | Skills | Telecommute | Sponsor Visa |
---|---|---|---|---|---|---|---|
Immediate | 30 Apr, 2025 | Not Specified | 31 Jan, 2025 | N/A | AWS, Apache Kafka, Platforms, Data Integration, SQL, Scripting, Data Analysis, Spark, Python, Big Data | No | No |
Description:
JOB OPPORTUNITY: SENIOR DATA ENGINEER
Our client, a global agency with over 200 team members specializing in interactive media strategy and development, is seeking a Senior Data Engineer to join their remote team. They believe in fostering innovation through inclusion, empowering their interdisciplinary teams of designers, developers, and strategists to solve complex business challenges. With a strong commitment to equity and meaningful collaboration, they build impactful solutions that connect businesses with their audiences.
QUALIFICATIONS:
- Advanced knowledge of Database Management Systems (DBMS) and data modeling tools like ER/Studio and IBM Data Architect.
- Hands-on experience with Python, Spark, and SQL for data analysis and scripting (see the sketch after this list).
- Strong background in Data Integration using platforms like Apache Kafka and Microsoft Azure Data Factory.
- Familiarity with Big Data, ETL processes, and cloud computing platforms such as AWS.
- High level of English proficiency (C1 or higher).
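For context, a minimal sketch of the kind of Python/Spark/SQL data analysis referenced above. The SparkSession setup is standard PySpark; the input path and column names (orders, country, amount) are hypothetical placeholders, not part of the posting:

```python
from pyspark.sql import SparkSession

# Minimal sketch: run plain SQL over a DataFrame with Spark.
# The Parquet path and column names are hypothetical examples.
spark = SparkSession.builder.appName("order-analysis").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/orders/")  # hypothetical dataset
orders.createOrReplaceTempView("orders")

# Aggregate revenue per country, largest first.
revenue = spark.sql("""
    SELECT country, SUM(amount) AS total_revenue
    FROM orders
    GROUP BY country
    ORDER BY total_revenue DESC
""")
revenue.show()

spark.stop()
```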
Responsibilities:
ABOUT THE ROLE:
The ideal candidate will play a key role in designing, implementing, and managing robust data infrastructures and pipelines for diverse, U.S.-based projects. This is an opportunity to work with cutting-edge tools and technologies in a fast-paced, collaborative environment.
KEY RESPONSIBILITIES:
- Build and maintain scalable data pipelines using Big Data technologies like Apache Hadoop and Apache Spark (a streaming example is sketched after this list).
- Design and optimize databases, including NoSQL systems like MongoDB and cloud solutions such as Amazon Redshift and Snowflake.
- Develop ETL workflows using tools like Apache NiFi, Talend, and Informatica to process and integrate data effectively.
- Ensure data security and compliance with tools such as Apache Ranger and HashiCorp Vault.
- Collaborate with cross-functional teams to implement cloud-based solutions using AWS Glue Jobs, Athena, and other modern platforms.
- Create dashboards and visualizations using Tableau, Power BI, or Looker to drive business decisions.
- Utilize version control systems like Git and manage projects with tools like Confluence and Jira.
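As a rough illustration of the pipeline-building responsibility above, here is a minimal PySpark Structured Streaming sketch that reads events from Apache Kafka and lands them as Parquet. The broker address, topic name, and S3 paths are hypothetical, and running it requires the spark-sql-kafka connector package on the Spark classpath:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Minimal sketch of a streaming ingestion pipeline: Kafka -> Parquet.
# Broker, topic, and output paths are hypothetical placeholders.
spark = SparkSession.builder.appName("events-ingest").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                      # hypothetical topic
    .load()
    .select(col("value").cast("string").alias("payload"))
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3://example-bucket/raw/events/")           # hypothetical sink
    .option("checkpointLocation", "s3://example-bucket/chk/")    # hypothetical checkpoint
    .start()
)
query.awaitTermination()
```

The checkpoint location is what lets the stream restart from its last committed offsets after a failure, which is the usual fault-tolerance pattern for this kind of pipeline.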
REQUIREMENT SUMMARY
Min: N/A | Max: 5.0 year(s)
Information Technology/IT
IT Software - Other
Software Engineering
Graduate
Proficient
1
Work from home, Argentina