Junior Data Engineer at Financial Information Technologies LLC
Tampa, FL 33607, USA - Full Time


Start Date

Immediate

Expiry Date

27 Jul, 25

Salary

0.0

Posted On

12 May, 25

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Azure, Computer Science, Performance Tuning, Spark, Access Control, Python, Git, Kafka, Apache Druid, Data Security, Graph Databases, Automation, Data Processing, Snowflake

Industry

Information Technology/IT

Description

JOIN FINTECH IN TAMPA AS A JUNIOR DATA ENGINEER!

We are seeking a talented Junior Data Engineer to join our data platforms team. The ideal candidate will have a strong background in programming and data engineering. The Junior Data Engineer will work closely with the data engineering lead and other developers to ensure projects are completed on time and meet stakeholder requirements. The position also assists Fintech clients, partners, third parties, and internal team members with development requests, troubleshooting, and technical inquiries.

QUALIFICATIONS:

  • Bachelor’s degree in Computer Science or a related technical field, or commensurate experience, is required.
  • 2+ years of experience in data engineering with a focus on large-scale cloud-based data processing.
  • Expertise in Snowflake / Databricks, including performance tuning, cost optimization, and best practices.
  • Experience in batch and streaming data processing.
  • Experience in Python/PySpark for data processing and automation.
  • Experience in data modeling for both Operational and Analytical data.
  • Hands-on experience with data security, access control, and governance frameworks in Snowflake or Databricks.
  • Good to have: Experience with Apache Druid, MSSQL, MySQL, MongoDB, Oracle, or PostgreSQL.
  • Good to have: Experience with graph databases to manage complex relationships within data.
  • Good to have: Experience working with the open-source community.
  • Familiarity with cloud platforms such as Azure is a plus.
  • Experience with source control tools such as Git is a plus.
  • Knowledge of big data technologies such as Spark and Kafka is a plus.
  • Familiarity with Agile development methodologies is a plus.
  • An understanding of software testing methodologies and tools is a plus.
RESPONSIBILITIES:
  • Design, develop, and optimize scalable ETL/ELT pipelines using Snowflake / Databricks for batch and streaming data processing.
  • Develop and maintain data models that support analytical and operational workloads.
  • Implement best practices for performance tuning, cost optimization, and query efficiency in Snowflake or Databricks.
  • Optimize storage, compute, and data-sharing strategies for cost-effective and performant solutions.
  • Leverage platform-specific features (e.g., Snowflake’s Time Travel, Zero-Copy Cloning, and Streams & Tasks, or Databricks’ Delta Lake, Apache Iceberg, Delta Sharing, and Unity Catalog).
  • Ensure data security, access control, and governance frameworks are implemented and adhered to.
  • Monitor data quality and performance, identifying areas for optimization and improvement.
  • Ensure code quality and implement test cases for data pipeline code.
  • Stay updated with industry trends in cloud data platforms and evaluate their application to improve our data ecosystem.
  • Contribute to the design of data architecture and delivery of innovative and engaging data engineering solutions.
  • Continuously learn and develop your skills to become a more proficient and valuable member of the development team.
  • Conduct data discovery activities and make recommendations for the remediation of data quality issues.
  • Carry out incident investigations to find and report root causes.
  • Learn and apply best practices in data engineering, data governance, and data security.