Freelance Data Engineer

at Red Badger

London, England, United Kingdom

Start Date: Immediate
Expiry Date: 21 Jan, 2025
Salary: Not Specified
Posted On: 22 Oct, 2024
Experience: N/A
Skills: Good communication skills
Telecommute: No
Sponsor Visa: No

Description:

We are working with a large client - a blue-chip price comparison website offering multiple financial services companies access to new UK customers for a wide variety of financial products. We’re working on a large initiative to transform the level of reporting capability across the business in the medium term. In the short term, we require the assistance of a capable freelance Data Engineer to help us unlock key data and management information relating to one product category, as a temporary fix whilst we work towards a much better medium-term solution.

  • Likely start date: Monday 11th or Monday 18th November
  • 6 months (may extend)
  • Outside IR35
  • Hybrid working (2 days/week, either at client offices or Red Badger offices near Old Street)
  • Competitive day rate

Technology:

  • Python, C#, or other relevant languages for data processing and ETL (a minimal sketch follows this list).
  • Big data tools: experience with Hadoop, Spark, Kafka, and related technologies.
  • Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
  • Cloud platforms: experience with AWS, Azure, or Google Cloud, especially data services such as Redshift, BigQuery, or Databricks.
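
As an illustration of the kind of data-processing and ETL work listed above, here is a minimal Python sketch: it reads quote records from a flat file, cleans them, and loads them into a SQL table. The file name, column names, and the sqlite3 target (standing in for a warehouse such as PostgreSQL or Redshift) are assumptions for illustration only, not part of the brief.

import csv
import sqlite3

def extract(path):
    # Read raw quote records from a flat file.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Normalise types and drop rows missing a product category.
    cleaned = []
    for row in rows:
        if not row.get("product_category"):
            continue
        cleaned.append((row["quote_id"], row["product_category"], float(row["premium"])))
    return cleaned

def load(rows, conn):
    # Idempotent load into a simple reporting table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS quotes ("
        "quote_id TEXT PRIMARY KEY, product_category TEXT, premium REAL)"
    )
    conn.executemany("INSERT OR REPLACE INTO quotes VALUES (?, ?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    with sqlite3.connect("reporting.db") as conn:
        load(transform(extract("quotes.csv")), conn)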

We bring together the best in strategic services, user experience and technical delivery using Lean and Agile processes.
Founded in 2010 by Stuart, Cain and David, we help large organisations improve their speed to market whilst focusing on delivering value to their users and customers. We work with our clients to understand their problems and validate ideas in order to deliver improved process efficiencies, strategic enhancements and new digital products and services (or make significant improvements to existing ones).
The best bit about Red Badger is, of course, the team. We’ve been around for 10+ years now and we are 120 strong. We are really proud of our people; we support and learn a lot from each other; we work really hard but have fun doing it. We are a diverse group made up of 22 different nationalities, speaking 17 different languages.
Our 3 founders have considerable tech and consultancy experience, and still own the company. We’ve been consistently profitable and have grown responsibly from the beginning. We are embarking on the next phase of growth and development.

RED BADGER VALUES:

  • PEOPLE PEOPLE: We respect and care for each other, giving us the space to feel safe and be our true selves.
  • FIND A WAY: We’re comfortable with uncertainty and accountability, whilst achieving great outcomes through shared goals.
  • ALWAYS LEARNING: We’re curious. It’s how we learn and grow as individuals, continuously testing and improving what we do, and how we do it.
  • OPEN & FAIR: We build trust by telling things as they are, being open, and seeking to achieve fair and equitable outcomes.
  • COLLABORATIVE: We are united by our desire to get to the best ideas. We are generous with our knowledge, actively listen to each other, and are open-minded.

How To Apply:

In case you would like to apply to this job directly from the source, please click here.

Responsibilities:

THE ROLE

As a Data Engineer, you will design, build, and maintain scalable data pipelines and architectures to support our organisation’s data processing needs. You will collaborate closely with data scientists, analysts, and other engineers to optimise data flow and collection for cross-functional teams. The ideal candidate is a technical expert in building and optimising large-scale data systems and can troubleshoot performance and reliability issues in big data environments.

KEY RESPONSIBILITIES

  • Develop and maintain data pipelines: Design and build robust, scalable, and efficient data pipelines for collecting, processing, and storing large datasets from various sources.
  • Data integration: Integrate data from multiple data sources, including APIs, databases, flat files, and external platforms, ensuring consistent and high-quality data availability.
  • Optimise data workflows: Identify and implement optimisations to improve data reliability, performance, and scalability. Automate manual processes, optimise data delivery, and design systems to scale as data grows.
  • Maintain data architecture: Create and manage data models, schemas, metadata, and ETL processes to ensure consistency, efficiency, and data integrity.
  • Database management: Manage and maintain databases, data warehouses (e.g., Amazon Redshift, Snowflake), and data lakes (e.g., AWS S3), ensuring high availability and security.
  • Collaborate with data teams: Work closely with data scientists, analysts, and stakeholders to ensure proper data infrastructure and meet data requirements for analytics, reporting, and machine learning.
  • Monitor and troubleshoot: Set up monitoring and logging systems to detect data issues, implement resolution protocols, and provide root cause analysis to resolve data flow problems (see the sketch after this list).
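
To ground the monitoring and troubleshooting point above, the sketch below logs row counts and missing-value counts after a load and flags failures for root-cause analysis. The table, column, and threshold are hypothetical assumptions; a real pipeline would push these metrics to whatever monitoring stack the client uses.

import logging
import sqlite3

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("pipeline.monitor")

def check_load(conn, table, min_rows=1):
    # Basic post-load quality check: total rows and rows missing a category.
    total = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    missing = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE product_category IS NULL"
    ).fetchone()[0]
    log.info("%s: %d rows loaded, %d missing product_category", table, total, missing)
    if total < min_rows or missing > 0:
        log.error("%s failed quality checks; investigate the upstream extract", table)
        return False
    return True

if __name__ == "__main__":
    check_load(sqlite3.connect("reporting.db"), "quotes")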


REQUIREMENT SUMMARY

  • Experience: N/A (min) to 5.0 year(s) (max)
  • Information Technology/IT
  • IT Software - DBA / Datawarehousing
  • Software Engineering
  • Graduate
  • Proficient
  • 1
  • London, United Kingdom