Frameworks with Formatting: Australia’s military AI policy and the implementation gap

When the Digital Transformation Agency released its Policy for Responsible Use of AI in Government in September 2024, it applied across the Australian Public Service with one conspicuous exception. The Defence portfolio was carved out explicitly. The rationale at the time was that Defence operated in contexts the APS-wide rules would not fit cleanly, that military applications raised risk profiles the general text was not built to handle, and that the portfolio would publish its own subordinate guidance when ready. Eighteen months later, in March 2026, Defence filled that gap with its “Policy Settings for Responsible Use of Artificial Intelligence in Defence” (Department of Defence, 2026).

Read on its own terms, the new document is not unreasonable. It sets out three overarching requirements that any Defence AI application must meet: compliance with Australian and international law; individual accountability grounded in explainability and reliability, with specific attention to bias mitigation; and proportionate risk management across the testing and training pipeline, with ongoing evaluation. The Defence AI Centre, established in 2024, is named as the governance hub that will oversee these requirements across the portfolio. The policy references Article 36 of Additional Protocol I to the Geneva Conventions, which obliges states to conduct legal reviews of new means and methods of warfare before deployment. That reference is worth noting, because it distinguishes the document from its US and UK equivalents and signals that Defence is, at least on paper, taking the legal review obligation seriously.

All of that is good as far as it goes, and it is the reason the initial coverage of the release was broadly positive. The difficulty is that the document as published is a framework without a pathway, and on closer reading the pathway problem is substantial enough to change how the text should be assessed.

The core issue is the distance between what the policy says should happen and what it says about how any of it will actually be made to happen. No implementation pathway is written into the document. Compliance monitoring is mentioned only as a general coordinating role for the Defence AI Centre, without specifics. No resourcing is attached to the governance function, which is a significant tell in any government policy context because commitments without line items tend not to survive the next budget cycle. And almost nothing is said about what happens when an internal Defence unit or an external vendor fails to comply with the framework. A policy that describes aspirations without specifying enforcement is not really a governance instrument. It is a statement of intent formatted as one.

That distinction matters for AI in particular because Defence applications are not a single category of risk. A machine learning model that optimises maintenance scheduling on a fleet of aircraft carries a very different risk profile from a targeting recommendation system, which in turn carries different risks from an autonomous logistics drone or a simulator that uses generative models in training. Each of those cases has its own risk calculus, its own human-in-the-loop requirements, its own data provenance issues and its own acceptable failure modes. A single aspirational text cannot govern any of them specifically, because real oversight requires granular decisions about what counts as compliance and what counts as failure, and those decisions are technology-specific and context-specific. What the published policy does instead is set out principles everyone can broadly agree with, while leaving the hard specific decisions to be worked out later.
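To make that concrete, here is a minimal sketch, in Python and entirely hypothetical, of the kind of per-category decisions an operational directive would have to encode. None of the category names, oversight levels, or numeric thresholds below appear in the published policy; they are placeholders for the decisions the framework leaves unmade.

```python
from dataclasses import dataclass
from enum import Enum


class HumanOversight(Enum):
    """Degree of human involvement required before a system's output takes effect."""
    IN_THE_LOOP = "human approves every output"
    ON_THE_LOOP = "human monitors and can intervene"
    AUDIT_ONLY = "outputs reviewed after the fact"


@dataclass(frozen=True)
class ComplianceProfile:
    """One application category's compliance requirements (all values illustrative)."""
    category: str
    oversight: HumanOversight
    article_36_review: bool              # legal review of new means/methods of warfare
    bias_audit_interval_days: int        # how often the model is re-evaluated for bias
    max_unexplained_failure_rate: float  # tolerated rate of failures with no root cause


# Hypothetical profiles: the published policy specifies none of these numbers.
PROFILES = [
    ComplianceProfile("maintenance scheduling", HumanOversight.AUDIT_ONLY,
                      article_36_review=False, bias_audit_interval_days=365,
                      max_unexplained_failure_rate=0.05),
    ComplianceProfile("targeting recommendation", HumanOversight.IN_THE_LOOP,
                      article_36_review=True, bias_audit_interval_days=30,
                      max_unexplained_failure_rate=0.0),
    ComplianceProfile("autonomous logistics drone", HumanOversight.ON_THE_LOOP,
                      article_36_review=True, bias_audit_interval_days=90,
                      max_unexplained_failure_rate=0.01),
]

for p in PROFILES:
    print(f"{p.category}: {p.oversight.value}; Article 36 review: {p.article_36_review}")
```

Even a toy table like this forces the questions the framework defers: who sets the numbers, who audits against them, and what follows when a system breaches them.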

The Conversation’s analysis of the policy, published in the days after release, made a similar point in different language. The policy draws heavily on allied frameworks, it is aspirational in tone, and it is thin on the mechanics of how it will actually shape Defence AI deployment decisions (The Conversation, 2026). That framing is polite. The less polite version is that the document is doing alignment signalling rather than governance work, and the alignment it signals is with documents from other countries that have themselves progressed beyond principles to operational directives.

The comparison with allied progression is the part of this picture that makes Australia’s current position look least favourable. The US Department of Defense published its initial Ethical Principles for AI in 2020. It followed that with the Responsible AI Strategy and Implementation Pathway in 2022, a document that contained specific lines of effort alongside named owners and measurable milestones (United States Department of Defense, 2022). More recent US guidance has moved from implementation pathway to operational directive, with specific compliance requirements attached to specific categories of AI application. The United Kingdom has taken a slightly different route, appointing responsible AI officers inside each major component of the Ministry of Defence, and publishing “Laying the Groundwork: Responsible AI Senior Officers’ Report 2025” on how each component is tracking against its own implementation plan (United Kingdom Ministry of Defence, 2025). Both of those countries are now several years into the operational phase of military AI governance. Australia has just published a framework.

A framework is not a bad place to start, and starting is better than not starting. The concern is that Australia’s published document reads more like a point of arrival than a point of departure. No subordinate guidance was announced alongside it, and no commitment was made to publish implementation plans by a specific date. Nothing in the text establishes an equivalent to the UK responsible AI officer structure embedded in each service. The Defence AI Centre is named as the governance hub, but the Centre’s own resourcing and authority are not set out in the document, and previous coordinating hubs in Defence have had highly variable track records depending on the political weight behind them in any given year.

For anyone who has worked on policy implementation in the Commonwealth, this is a pattern that recurs with some regularity. A principles paper is published, widely praised, referenced in inter-agency coordination meetings and cited in ministerial speeches. A year later the principles are unchanged. Eighteen months later the subordinate guidance remains in development. Three years later a new minister or a new departmental head arrives, and the earlier statement is shelved in favour of a different approach that will itself spend two years being worked into something operational. The APS Reform debate about the sequencing of structures and change readiness (which I covered separately) applies directly here, because Defence is being asked to deploy oversight arrangements ahead of the workforce and technical maturity needed to enforce them.

The more hopeful reading is that Defence is aware of all this and the framework is intended as the first of several documents, with implementation guidance and compliance mechanisms to follow over the coming year alongside the necessary resourcing announcements. That would be the professionally responsible sequence and it would bring Australia back into line with allied practice. The less hopeful reading is that the framework is the last document. Frameworks are cheap, and the implementation guidance will be indefinitely delayed while the Defence AI Centre works through competing priorities on a thinner budget than its remit requires.

Which of these readings turns out to be accurate will be visible in specific, observable events over the next twelve months. The first is whether the Defence AI Centre publishes detailed implementation guidance for each major category of AI application; absent that, the framework remains decorative. The second is whether any resourcing announcement attaches real money to the governance function, because unfunded governance is governance in name only. The third is whether a responsible AI officer structure is announced inside the services, along the lines the UK has shown can work in practice. And then the first high-profile AI deployment decision under the new framework will either show evidence of the framework actually shaping it, or it will not. None of these events will generate a media cycle, which is the usual signature of decisions that matter.

Whether anyone notices when a policy isn’t followed tells you everything about whether the policy ever mattered. Australia’s military AI policy, as currently published, is a framework with formatting. Whether it becomes a framework with teeth is the part of the story the next twelve months will decide.

References

Digital Transformation Agency. (2024, September). Policy for responsible use of AI in government (v1.1). Commonwealth of Australia.

Defence AI Centre. (2024). Establishment announcement. Department of Defence.

Department of Defence. (2026, March). Policy settings for responsible use of artificial intelligence in Defence. Commonwealth of Australia.

The Conversation. (2026, March). Australia’s new military AI policy comes at a crucial time. The challenge is turning it into practice. https://theconversation.com/australias-new-military-ai-policy-comes-at-a-crucial-time-the-challenge-is-turning-it-into-practice-278992

United Kingdom Ministry of Defence. (2025, October). Laying the groundwork: Responsible AI Senior Officers’ Report 2025. UK Government. https://www.gov.uk/government/publications/laying-the-groundwork-responsible-ai-senior-officers-report-2025

United States Department of Defense. (2022). Responsible AI strategy and implementation pathway. US Department of Defense.
