Senior Data Engineer at V2 Digital
Sydney, New South Wales, Australia
Full Time


Start Date

Immediate

Expiry Date

11 Jun, 26

Salary

Not specified

Posted On

13 Mar, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Engineering, Databricks, Spark, Cloud Technologies, Lakehouse Architecture, AWS, Azure, GCP, Data Pipelines, Data Platform, Batch Processing, Streaming Data, Data Modelling, Data Architecture, Orchestration, Performance Optimisation

Industry

IT Services and IT Consulting

Description
About V2 AI

V2 AI is a leading AI-Native consultancy backed by $30m in VC funding, allowing us to meet our customers' needs. We harness the power of AI to accelerate business outcomes for some of the world's largest brands. We bring decades of experience and a unique delivery model to partner with our customers on the most complex problems for immense, measurable impact.

Our Services

- AI Strategy & Governance
- AI Enablement
- Applied AI
- Data & AI Security

About the Role

As a Senior Data Engineer (Databricks), you'll contribute to the design and development of modern data platforms and scalable data pipelines. You will work on data engineering initiatives using Databricks and lakehouse architecture, building robust data pipelines and helping deploy scalable cloud data platforms across AWS, Azure, or GCP.

You'll collaborate with product teams, architects, AI engineers, and other data engineers to deliver end-to-end data solutions, including modern data platforms, advanced analytics capabilities, and data foundations that enable machine learning and AI applications. You'll also contribute to engineering discussions, support technical decisions, and help uphold data engineering standards and practices across V2 AI.
What you will do:

- Design, develop, and deploy data pipelines and data platform components using Databricks, Spark, and cloud technologies
- Build and optimise batch and streaming data pipelines to support analytics, machine learning, and AI workloads
- Collaborate with cross-functional teams (product, data, AI, and architecture) to implement data models and platform architectures
- Build scalable, secure, and maintainable cloud data solutions leveraging AWS, Azure, or GCP services
- Contribute to data engineering best practices, code quality, and architectural discussions
- Implement data platform workflows, including orchestration, monitoring, data quality, and performance optimisation
- Work with structured and unstructured data to enable advanced analytics and AI-driven capabilities
- Contribute to reusable data pipelines, frameworks, and engineering standards
- Stay up to date with modern data engineering technologies, Databricks capabilities, and cloud data platforms, sharing knowledge with the team

About You

- Strong experience designing and delivering data engineering solutions and modern data platforms
- Experience working with Databricks, Spark, and lakehouse architecture
- Experience building scalable data pipelines and distributed data processing systems
- Experience working with modern data platforms such as Databricks, Snowflake, or similar ecosystems
- Cloud experience across AWS, Azure, or GCP
- Good understanding of data modelling, data architecture, and data engineering best practices
- Experience working collaboratively with engineers and contributing to technical decisions
- Passionate about building scalable data pipelines and high-quality engineering solutions
- Excellent communicator and collaborator, able to work across engineering, product, and data teams

Benefits:

- Competitive salary package
- Pick your equipment
- Gifted day off ("VersionUp Day")
- Generous parental leave
- Well-funded start-up $$$
- Annual training budget
- Mentorship program
- Clear promotion pathways
- Flexible working
- Meet-ups & socials
- 97% rating on corporate social responsibility

APPLY NOW: If you are keen to join one of the fastest-growing consultancies in this space and interested in shaping the future of V2 AI, then APPLY NOW!
Responsibilities
The Senior Data Engineer will design, develop, and deploy data pipelines and platform components using Databricks, Spark, and cloud technologies to support analytics, machine learning, and AI workloads. This role involves collaborating with cross-functional teams to implement data models and platform architectures across AWS, Azure, or GCP.