Speed and Inclusion: The real test of Australia’s APS AI rollout

In mid-March 2026, a 47-person Canberra firm called Reason Group won a $4.6 million contract to help the Department of Industry, Science and Resources (DISR) build out its artificial intelligence capability (InnovationAus, 2026a). On its own the number is small by Commonwealth standards, well under the threshold at which anyone outside procurement keeps track. What makes it worth noticing is who did not win it. Even eighteen months ago, a contract of that kind would almost certainly have gone to a multinational consultancy or one of the Big Four. The fact that it went to a local small-to-medium enterprise, and to a specialist one rather than a generalist, is the first tangible sign that Labor’s AI push inside the Australian Public Service is starting to have a procurement footprint that matches its rhetoric.

The broader architecture around that contract has been moving very quickly. The National AI Plan was released in early December 2025, followed by the second version of the Digital Transformation Agency’s Policy for Responsible Use of AI in Government a fortnight later (Australian Government, 2025; DTA, 2025). Mandatory AI training across the APS was announced alongside a requirement that every agency appoint a Chief AI Officer. A new AI Safety Institute was funded at $29.9 million. By March, up to 20,000 public servants were scheduled to take part in trials of GovAI Chat, the Commonwealth’s internal generative AI assistant, with Claude Sonnet models now available to agencies through GovAI as well (InnovationAus, 2026b). Round 19 of the Cooperative Research Centres Projects program opened with an explicit AI focus.

In other words, the machinery is real. It is not just a strategy document with a media release attached, which is where several previous attempts to lift Commonwealth digital capability have stopped. A reasonable observer, looking at training mandates alongside the procurement flow and the central enablement functions now being stood up, would conclude that the APS is genuinely trying to operationalise its AI ambitions rather than simply framing them.

The question is whether the operating model assumes a workforce, and a citizen base, that is ready to be on the receiving end of AI services deployed at pace.

A paper in the 13 March 2026 research digest gives that question a sharper edge than I had expected. The “Systematic Inequities” study (2026) analysed Australian health workforce data combined with digital access and service utilisation patterns, and used them to model what happens when AI systems are trained on the resulting distributions. Its headline finding is that Indigenous Very Remote populations are currently accessing public health services at roughly 12 per cent of the urban non-Indigenous baseline once workforce and digital access barriers are compounded with cultural ones. If AI triage and eligibility tools are trained on this historical data without correction, the authors argue, the resulting systems will treat that 12 per cent figure as the normal level of need for those communities. The under-service becomes the ground truth. The algorithms then reinforce the gap, not because anyone designed them to discriminate, but because the training data was already a record of who had been left out.

CSIRO’s earlier work on AI for healthcare in Australian Indigenous communities (CSIRO, 2025) sits in the same terrain. Its point is a related one: deployment without deliberate design for equity risks amplifying the disparities the technology is notionally meant to help close. Neither CSIRO nor the Systematic Inequities study is arguing against AI deployment. They are arguing that you cannot build a safe deployment on top of data that is already a record of systemic exclusion, and that the design work to correct for that cannot be retrofitted once the system is live.

This is where the tempo of the APS rollout matters. The government’s speed is real and, on most fronts, welcome. The mandatory training is welcome. The procurement flow toward local SMEs is welcome (and worth watching to see whether it holds up against the pull of bilateral deals with frontier AI developers, which is a separate piece of the architecture and a subject for another article). But speed generates its own risk profile. Chief AI Officers standing up in every agency over the same three-month window will reach for off-the-shelf vendor solutions because they will have to. Training curricula produced at pace will default to generic modules because the alternative is slower. And when GovAI Chat rolls out to 20,000 staff by mid-year, the feedback loop between early adoption and equity-safe design will necessarily run in parallel with deployment rather than ahead of it.

The implementation risks here are not theoretical and they are not new. Anyone who has worked on a large Commonwealth technology rollout has seen what happens when pace is set at the centre and fidelity is pushed down to agencies that do not yet have the capability to absorb it. The rollout starts to produce artefacts of compliance rather than outcomes of value. Agencies report training completion rather than capability uplift. Procurement reports spend rather than coverage. The story becomes one of velocity metrics, and velocity metrics almost never catch inclusion failures early. They are designed for the median user, and inclusion failures by definition occur at the edges.

There is a version of the APS AI rollout that avoids this trap, and it looks very similar to what the government has actually announced. The difference lies in sequencing. The Systematic Inequities paper’s practical recommendation is that community-led governance frameworks and training-data audits (including checks for constrained-access bias) happen before deployment, not after. The CSIRO work points in the same direction. These are not exotic asks. They are standard practice in any mature data governance regime. The question is whether the delivery tempo leaves enough room for them to be done seriously, or whether they become something Chief AI Officers sign off against a checklist on the way to a go-live date.
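To make the idea concrete, here is a minimal sketch of what a constrained-access audit check might look like in practice. This is an illustration, not the paper’s methodology: the cohort names, the individual utilisation figures other than the roughly 12 per cent Very Remote rate reported in the Systematic Inequities study, and the threshold value are all hypothetical placeholders.

```python
# Illustrative sketch of a pre-deployment training-data audit for
# constrained-access bias. Cohort names and most figures below are
# hypothetical; only the ~0.12 Very Remote rate comes from the
# Systematic Inequities paper's headline finding.

# Observed service-utilisation rate per cohort, expressed as a
# fraction of the urban non-Indigenous baseline (1.0).
observed_utilisation = {
    "urban_non_indigenous": 1.00,    # baseline
    "urban_indigenous": 0.71,        # hypothetical
    "remote_indigenous": 0.34,       # hypothetical
    "very_remote_indigenous": 0.12,  # figure reported in the paper
}

# Tolerance below which recorded utilisation is treated as evidence
# of constrained access rather than genuinely lower need. Where to
# set this line is itself a governance decision, not a technical one.
ACCESS_BIAS_THRESHOLD = 0.5

def audit_constrained_access(rates, threshold=ACCESS_BIAS_THRESHOLD):
    """Flag cohorts whose recorded utilisation sits so far below the
    baseline that training a model on the raw data would encode
    under-service as 'normal' demand for that cohort."""
    return sorted(
        cohort for cohort, rate in rates.items() if rate < threshold
    )

flagged = audit_constrained_access(observed_utilisation)
print(flagged)  # → ['remote_indigenous', 'very_remote_indigenous']
```

The point of the sketch is that the check itself is trivial; the hard work is deciding, with the affected communities, which low utilisation figures reflect constrained access and what correction to apply before any model sees the data. That is exactly the sequencing question the paper raises.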

The answer will be visible in a handful of decisions over the next six to twelve months. The first is whether the Responsible Use policy (DTA, 2025) is treated as a live document with consequences or as a reference artefact. The second is the operational authority of the new AI Safety Institute: a real remit over agency deployments, or an advisory voice with no teeth. The third is the procurement pipeline that brought Reason Group into DISR: if similar contracts open up for SMEs with Indigenous data governance expertise, the story looks very different than if the pipeline narrows back to the usual small set of preferred suppliers once the novelty wears off. None of these decisions will generate a media cycle, and all of them will shape the next five years.

For consultants and change practitioners working in or around this rollout, the more interesting professional question is which side of it you want to be on. The procurement wave is real and the funding is real, so the work will be there. The choice is between the kind of work that helps an agency stand up a Chief AI Officer and launch a chatbot in under a quarter, and the kind that helps the same agency figure out whether its training data is a record of inclusion or a record of exclusion. Both matter. One, at current pace, will be easier to sell.

Speed is the easy part of this story. Inclusion at pace is the part that will decide whether the APS AI rollout ends up as a genuine capability uplift or as a faster version of the same service gaps we already have. The difference between those two outcomes will be invisible in the launch-day metrics and very visible in the complaints data eighteen months later.

References

Department of Industry, Science and Resources. (2025, December 2). National AI Plan. Commonwealth of Australia.

Department of Finance, Digital Transformation Agency and Australian Public Service Commission. (2025). AI Plan for the Australian Public Service 2025. Commonwealth of Australia. https://www.digital.gov.au/policy/ai/australian-public-service-ai-plan-2025

Systematic inequities in Australian health workforce, digital access, and service utilisation: Implications for artificial intelligence deployment in public health. (2026). Zenodo. https://doi.org/10.5281/zenodo.18979127

CSIRO. (2025). Artificial intelligence for healthcare in Australian Indigenous communities: Scoping project to explore relevance. Commonwealth Scientific and Industrial Research Organisation.

Digital Transformation Agency. (2025, December 15). Policy for responsible use of AI in government v2.0. Commonwealth of Australia.

InnovationAus. (2026a, March 13). DISR enlists local tech firm for its AI drive. https://innovationaus.com/disr-enlists-local-tech-firm-for-its-ai-drive

InnovationAus. (2026b, March 2). Up to 20,000 public servants to join GovAI trials. https://innovationaus.com/up-to-20000-public-servants-to-join-govai-trials