DevOps / Platform Engineer at Epic IT
City of Vincent, Western Australia, Australia - Full Time


Start Date

Immediate

Expiry Date

13 May, 26

Salary

Not specified

Posted On

12 Feb, 26

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

CI/CD, GitHub Actions, Azure DevOps, Cloud Infrastructure, Azure, AWS, GCP, Docker, Kubernetes, PostgreSQL, Python, Bash, Node.js, API Integration, REST APIs, Terraform

Industry

IT Services and IT Consulting

Description
Most DevOps roles are about maintaining what exists. This one is about building what’s next.

What You’ll Do
• Own and build our CI/CD pipeline — from GitHub through to production deployment. Nothing exists yet. You’re designing it.
• Build and manage data pipelines that move data from our RMM, PSA, documentation, and accounting platforms into a central database (see the sketch after this description).
• Deploy and maintain the infrastructure behind our AI agents, web portals, automation workflows, and client-facing tools.
• Implement security, monitoring, logging, and governance across everything we build.
• Containerise and orchestrate our applications using Docker (and Kubernetes if appropriate).
• Collaborate directly with the CEO and senior leadership to turn AI ideas into shipped products.
• Help the team level up — we’re an MSP that builds software now, and we need someone who can bring engineering rigour to that transition.

What We’re Looking For

Must Have
• 3+ years in DevOps, Platform Engineering, SRE, or Cloud Engineering roles.
• Strong experience with CI/CD tools (GitHub Actions, Azure DevOps, or similar).
• Solid cloud infrastructure skills — Azure preferred, AWS/GCP also valued.
• Hands-on with Docker and container orchestration.
• Database management experience, particularly PostgreSQL or similar relational databases.
• Comfortable with scripting and automation (Python, Bash, or Node.js).
• API integration experience — you’ve connected systems together and know your way around REST APIs.
• Infrastructure as Code (Terraform, Ansible, or similar).

Great to Have
• Experience with LLM APIs or AI agent frameworks.
• React / frontend experience — our portal and internal tools are React-based.
• Familiarity with workflow automation platforms (e.g. n8n, Make, Power Automate, or similar).
• Experience in the MSP / IT services industry.
• Knowledge of monitoring and observability tools (e.g. Prometheus, Grafana, or similar).

Why This Isn’t a Normal DevOps Job
• You’re not inheriting legacy infrastructure. You’re building from scratch. Greenfields with input into design decisions.
• You’ll work directly with the CEO and senior leadership — not buried in a team of 50. Your work will have visible impact immediately.
• AI isn’t a side project here. It’s the core strategy. Everything we’re building runs on AI, and you’ll be at the centre of it.
• You’ll get to use LLM APIs and modern AI tooling as part of your daily workflow — not just maintain someone else’s pipelines.
• We’re a small, fast-moving team. No bureaucracy, no six-month approval cycles. Build it, ship it, iterate.

What We Offer

Compensation & Growth
• Competitive salary with clear progression criteria
• Funded certification paths
• Direct mentoring from Solutions Architects

Work Environment
• Work from home flexibility with collaborative team culture
• Some client site visits required for hardware deployments and complex implementations
• Access to lab environments and test systems
• Modern tooling and technology stack
• Autonomy to solve problems your way (within architectural standards)

The Team
• Collaborative environment where asking questions is encouraged
• Regular technical knowledge sharing sessions

Ready to Apply?
If you can read this job ad and think "I've done most of this, and I'm ready to learn the rest," we want to hear from you. Send through your CV and briefly tell us about the most complex technical problem you've solved recently.
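To give a concrete sense of the data-pipeline work described above, here is a minimal sketch of one such sync job: pulling records from a REST API and upserting them into a central PostgreSQL table. The endpoint, field names, table schema, and environment variables are hypothetical placeholders, not Epic IT's actual systems.

```python
import os

import psycopg2
import requests

# Hypothetical source: a PSA platform exposing a paginated REST endpoint.
PSA_BASE_URL = os.environ.get("PSA_BASE_URL", "https://psa.example.com/api/v1")
PSA_API_KEY = os.environ["PSA_API_KEY"]


def fetch_tickets(page_size: int = 100):
    """Yield ticket records from the (hypothetical) PSA REST API, page by page."""
    page = 1
    while True:
        resp = requests.get(
            f"{PSA_BASE_URL}/tickets",
            headers={"Authorization": f"Bearer {PSA_API_KEY}"},
            params={"page": page, "pageSize": page_size},
            timeout=30,
        )
        resp.raise_for_status()
        records = resp.json().get("items", [])
        if not records:
            return
        yield from records
        page += 1


def upsert_tickets(records):
    """Upsert ticket records into a central PostgreSQL table (assumed schema)."""
    conn = psycopg2.connect(os.environ["CENTRAL_DB_DSN"])
    with conn, conn.cursor() as cur:
        for r in records:
            cur.execute(
                """
                INSERT INTO psa_tickets (id, subject, status, updated_at)
                VALUES (%s, %s, %s, %s)
                ON CONFLICT (id) DO UPDATE
                SET subject = EXCLUDED.subject,
                    status = EXCLUDED.status,
                    updated_at = EXCLUDED.updated_at
                """,
                (r["id"], r.get("subject"), r.get("status"), r.get("updatedAt")),
            )
    conn.close()


if __name__ == "__main__":
    upsert_tickets(fetch_tickets())
```

In practice a job like this would be scheduled (cron, a CI workflow, or a workflow automation platform) and extended with incremental sync and error alerting; the sketch only shows the shape of the extract-and-upsert step.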
Responsibilities
The engineer will be responsible for owning and building the entire CI/CD pipeline from GitHub to production deployment, as well as building and managing data pipelines to centralize data from various platforms. They will also deploy and maintain infrastructure for AI agents, web portals, and client-facing tools, while implementing security and monitoring across all builds.
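As an illustration of the monitoring side of these responsibilities, below is a minimal health-check sketch: it probes a couple of service endpoints and emits JSON log lines that a log shipper or dashboard could consume. The service names and URLs are hypothetical placeholders.

```python
import json
import logging
import time

import requests

# Hypothetical endpoints for services the role would deploy and monitor.
SERVICES = {
    "client-portal": "https://portal.example.com/healthz",
    "automation-api": "https://automation.example.com/healthz",
}

logging.basicConfig(level=logging.INFO, format="%(message)s")


def check(name: str, url: str) -> dict:
    """Probe one service endpoint and return a structured result."""
    started = time.monotonic()
    try:
        resp = requests.get(url, timeout=5)
        ok = resp.status_code == 200
        status = resp.status_code
    except requests.RequestException as exc:
        ok, status = False, str(exc)
    return {
        "service": name,
        "url": url,
        "healthy": ok,
        "status": status,
        "latency_ms": round((time.monotonic() - started) * 1000, 1),
    }


if __name__ == "__main__":
    for name, url in SERVICES.items():
        # Emit JSON lines so a log shipper or dashboard can pick them up later.
        logging.info(json.dumps(check(name, url)))
```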