Bilateral, National, Operational: Australia’s layered bet on AI governance

On 1 April 2026, the Australian Government signed its first memorandum of understanding under the National AI Plan with Anthropic. Dario Amodei, the company’s chief executive, met Prime Minister Albanese in Canberra to formalise the arrangement. The announcement, issued by the Minister for Industry and Science, set out four areas of cooperation: collaboration through the recently established AI Safety Institute; sharing of Anthropic’s Economic Index data to track AI adoption across Australian sectors; research partnerships worth around AUD 3 million covering disease diagnosis and computer science education; and a signalled investment in data centre infrastructure (Minister for Industry and Science, 2026). None of these elements is remarkable on its own, and several are the kind of cooperative gestures that routinely accompany bilateral policy announcements. What makes the MOU interesting is how it fits with the rest of the machinery the Commonwealth has been assembling over the preceding four months.

The timing is telling. In the same week as the Anthropic announcement, Paul Hubbard (the Commonwealth’s chief AI officer in the Department of Finance) spoke publicly about the delivery structures being built for whole-of-government AI adoption: central enablement functions, procurement frameworks, and governance architecture, the slow infrastructural work that determines whether a national AI strategy is a slideshow or a programme of actual deployment (Hubbard, 2026). Hubbard’s remarks and the Anthropic MOU are two parts of the same move: the shift from the strategy documents of late 2025 (the National AI Plan, the Responsible Use of AI policy, the APS AI Plan) into something closer to execution infrastructure. Anyone who has watched previous Commonwealth technology rollouts will recognise this phase as the point at which the actual quality of delivery becomes visible.

The execution challenge Australia is taking on is real because there is no proven template for it anywhere. The United States National AI Policy Framework, updated several times since 2020, scores reasonably well on deployment readiness but is lighter on coordination between agencies and across state lines. It lets motivated federal departments move quickly on AI applications where the political will is present, without requiring a heavy coordination layer to precede action. The European Union’s AI Act is the opposite: high on system coherence and risk management, with a tighter regulatory architecture and a more cautious deployment tempo. Its designers traded some speed for cleaner rules and more predictable compliance obligations. Neither model is clearly better, and practitioners in both jurisdictions will tell you their own approach has obvious limitations the other addresses more effectively.

Australia’s current approach aims to get both: the speed of the US deployment-ready model and the coherence of the EU risk-management model, layered together through a national plan and delivered through bilateral MOUs with major frontier AI developers. The theoretical case for this combination is attractive. The practical case is harder to make, because nothing quite like it has been demonstrated yet at national scale. If it works, Australia will have designed a governance approach that other middle powers with similar diplomatic reach can copy. If it does not, the layered structure will produce the worst of both worlds: the coordination overhead of the European style without regulatory teeth, alongside the vendor-driven deployment energy of the US style without the accountability mechanisms that are only just starting to catch up over there.

The most interesting question the MOU raises is how Anthropic’s role in the layered structure will actually work. A bilateral MOU with a single frontier AI developer is a novel instrument in the Australian context, and it sits uncomfortably with the APS Reform emphasis on arm’s-length relationships with major vendors. The announcement treats Anthropic as something closer to a research partner and safety collaborator than a commercial supplier, which is consistent with how the company has positioned itself internationally but creates an immediate procurement question for agencies choosing between Claude and the other frontier models now available on GovAI. If Anthropic is a preferred partner at the national level through the MOU, is it also a preferred supplier at the agency level through procurement? The answer is probably no (the Commonwealth’s procurement rules would not permit that cleanly), but the question is real and the current architecture does not answer it.

The procurement angle also links directly to a separate article I wrote on the APS AI rollout and the Reason Group DISR contract. That contract signalled a Commonwealth interest in local SME capability; the Anthropic MOU signals a Commonwealth interest in bilateral relationships with US frontier AI developers. Both moves are defensible on their own terms and both are consistent with different parts of the National AI Plan, but the tension between them will need to be resolved in practice by specific procurement decisions over the next eighteen months. Agencies will be asking whether they should build in-house capability with local SMEs, consume frontier model APIs directly from the major vendors, or pursue some mix of the two. The national and bilateral layers of the layered structure give them permission to do any of those things, and the coordination layer does not yet exist to tell them which combination the Commonwealth actually prefers.

A useful piece of context for this whole conversation comes from administrative law research on government technology adoption. Agencies faced with complex technology decisions under time pressure consistently default to procedural compliance over substantive oversight. They tick the boxes on the policy, commission the required risk assessment, append whichever ethics framework the rules call for, and deploy the system anyway, because the framework tells them what process to follow but not what to do when following the process still leaves them with an uncomfortable answer. The ethics assessment comes back yellow, the risk register flags an issue, and the project goes ahead with a footnote. That is not a failure of integrity. It is how agencies behave under pressure, and it is the default mode the Commonwealth needs its national AI architecture to work against rather than with.

This is where the operationalisation challenge matters most. Procurement rules and workforce adjustment plans are the obvious answers on paper. The harder problem is whether the governance structures being assembled have the authority to stop a deployment that ticks the boxes but fails the substantive test. I wrote separately about Australia’s new military AI policy and the implementation gap it has opened between published principles and operational decisions. The same structural issue applies on the civilian side, and the MOU does not really address it. It is a research and safety collaboration layer, not a decision layer. The decision layer still has to be built.

The hopeful reading is that the Commonwealth is moving deliberately. The strategy was published in late 2025. The procurement test cases are running now. The bilateral MOUs and the execution infrastructure are being set up in parallel over the first half of 2026. On that reading, the next six to twelve months will be when the coordination layer is built, the decision authority of the Chief AI Officers and the AI Safety Institute is clarified, and the relationship between the different layers is resolved in practice. The less hopeful reading is that the layered structure is trying to do too many things at once. If each layer ends up operating on its own terms (the National AI Plan at one level and the MOUs at another, with agency-level procurement running at a third), the combination will produce incoherence rather than coordination.

Which of these readings turns out to be accurate will be visible in the kinds of second-order decisions that rarely make it into ministerial speeches. A first test is whether the Anthropic MOU generates specific joint projects with named milestones and budgets, rather than remaining a general cooperation banner. A second is the operational authority of the AI Safety Institute: a real remit over agency deployments, or an advisory voice with no enforcement power. A third is whether the procurement framework handles the bilateral-MOU-preferred-partner question explicitly, because the question will not go away just because the framework is silent on it. A fourth is whether the Chief AI Officer role in each agency is resourced to exercise real oversight or becomes a compliance coordinator layered on top of existing delivery. The machinery is being built; whether it has teeth is the question that matters.

References

Australian Government. (2025, December). National AI Plan. Commonwealth of Australia.

European Union. (2024). Artificial Intelligence Act. Official Journal of the European Union.

Hubbard, P. (2026, April). Remarks on delivery structures for whole-of-government AI adoption. Department of Finance.

Minister for Industry and Science. (2026, April 1). Australian Government signs MOU with Anthropic. Commonwealth of Australia. https://industry.gov.au/news/australian-government-has-signed-memorandum-understanding-mou-global-ai-innovator-anthropic

United States Government. (2025). National AI Policy Framework. White House.
