The Skill You Outsource is the Skill You Lose

Most people who own a phone now have a route they used to drive from memory and can no longer navigate without the device. Two years of GPS delegation and your spatial memory of that route is gone. The phone produces the destination on demand, the destination is correct, and the journey works. Something about the experience of driving that route, however, has changed in a way that is not easily reversed by switching the phone off. A new arXiv paper on adaptive AI delegation makes the formal case that the same dynamic is now playing out across professional cognitive work, and adds an uncomfortable element: the dynamic is path dependent, so the longer it runs without intervention, the harder it becomes to reverse (Path dependence under adaptive AI delegation, 2026).

The paper formalises what practitioners have been observing. Repeated AI delegation can improve short-term task performance while eroding long-run human skill, with path-dependent dynamics meaning early reliance patterns can lock users into a low-skill equilibrium. Equilibrium here is not jargon. It is the technical claim that once a particular pattern of human-AI delegation is established, the costs of switching out of it grow over time, and the user may not even notice that the option of operating without the AI has narrowed. The GPS effect generalises to writing, analysis, decision-making and a range of other cognitive tasks where AI is now in the loop.
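The lock-in dynamic can be made concrete with a toy simulation. This is my own illustration, not the paper's actual model: all parameters (the AI's fixed performance level, the decay and practice rates) are invented for the sketch. Each period the user delegates whenever the AI currently outperforms them; delegation erodes skill, unassisted practice rebuilds it. Two users who start on either side of the AI's performance level end up in opposite equilibria.

```python
# Toy sketch of path-dependent delegation (illustrative only, not the
# paper's model). Delegation wins each individual period but erodes
# skill; practice loses the period but compounds.

def simulate(skill, ai_perf=0.9, decay=0.02, learn=0.05, periods=200):
    """Return the skill trajectory over the given number of periods."""
    history = [skill]
    for _ in range(periods):
        if ai_perf > skill:
            # Delegating looks better today, but the unused skill decays.
            skill *= (1 - decay)
        else:
            # Doing the task unassisted rebuilds capacity toward 1.0.
            skill += learn * (1 - skill)
        history.append(skill)
    return history

high = simulate(0.95)  # starts just above the AI: keeps practising
low = simulate(0.85)   # starts just below: locks into delegation

print(round(high[-1], 2), round(low[-1], 2))  # diverge to ~1.0 and ~0.0
```

A starting gap of 0.10 in skill produces a near-total gap in the long run, because the delegation decision feeds back into the skill that drives the next delegation decision. That is the path dependence: the early reliance pattern, not the user's underlying ability, selects the equilibrium.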

The Gartner number that arrived alongside the paper anchors the timeline. Gartner predicts that 50 per cent of organisations will require “AI-free” skills assessments by 2026, driven by concerns about critical-thinking atrophy linked to generative AI use (Gartner, 2026). The forecast is striking because it implies organisations are already noticing the problem in their own workforces and starting to put countermeasures in place. The atrophy is not theoretical. It is something HR and learning-and-development functions are now budgeting for, in advance of formal evidence about how much capability has actually been lost in any given role.

Deloitte’s 2026 Government Trends report frames the response challenge through three required shifts (Deloitte, 2026). The first is designing genuine human-machine teaming, in which the human does complementary cognitive work the AI cannot do well, alongside supervising AI output. The second is building workforce adaptability, so that staff can move between AI-assisted and unassisted tasks without losing the underlying skill. The third is developing AI fluency across roles, where fluency means understanding the model’s strengths and failure modes well enough to direct it productively. Underlying the Deloitte framing is a distinction worth holding onto: fluency and dependence are different conditions. A fluent user knows what the AI is good for and what it is not, and chooses accordingly. A dependent user has lost the option of choosing.

The mechanism behind the lock-in is what the cognitive science literature calls cognitive offloading. Cognitive offloading means strategically externalising mental effort onto a tool, and it is generally beneficial for efficiency in isolated tasks. The problem is that persistent offloading degrades the internal cognitive capacity that supplies the same effort. The brain treats the externalised capacity as no longer needed, and reallocates accordingly. Over time, the option of doing the task internally becomes difficult to access. The GPS analogy captures the dynamic at a personal scale, but the same pattern shows up in writing and in analysis, both at the individual practitioner level and at the team level across professional services firms that have moved heavily toward AI-assisted delivery over the past two years.

The deskilling question has specific operational implications for two settings I work in. The first is emergency services and defence, where human competency must be maintained as a fallback. If an AI system fails or is removed from the operational stack in a contested context, the human operator needs to be able to perform the task manually under time pressure. If the operator has been delegating for two years, whether they can still do the task is no longer a rhetorical question. Operational planning needs to account for the answer. I have written separately about the gap between Australia’s defence AI policy and the operational tempo of modern electronic warfare. The deskilling question sits inside that gap. A workforce that has degraded its manual capacity in peacetime is not the workforce that can absorb the manual fallback requirements of a contested operational environment.

The second setting is consulting itself. Firms that have integrated AI into delivery in the past 18 months have reported productivity gains, and those gains are real. The question that has not been asked publicly enough is what is happening to the analytical capabilities that originally differentiated those firms from each other. If AI is doing the analytical heavy lifting on most engagements, and the human consultants are increasingly editors of AI output rather than analysts in their own right, then the deep expertise that once justified consulting fees may be eroding faster than firm leadership realises. The firms that maintain genuinely deep human expertise alongside AI augmentation will hold competitive advantage as the market matures. The firms that have replaced expertise with augmentation will find that distinction harder to defend in a buyer’s market.

The path-dependence finding has a specific implication for how organisations should sequence AI adoption decisions. Early reliance patterns matter more than later ones, because they shape the equilibrium the workforce will settle into. An organisation that introduces AI on the assumption that it can always pull back if things go wrong is making an assumption the research now contradicts. The pull-back is harder than the push-forward, and the longer the delegation runs without active counter-balancing, the more expensive the pull-back becomes. That makes AI introduction a “decide now” problem about how to use AI and on what tasks, not a “monitor and review” problem to revisit in three years.

I am also planning to write about the launch of AI.gov.au and the broader question of whether Commonwealth procurement rules around AI are matched by the capability to comply with them. The deskilling question is the workforce equivalent of that procurement question. Rules require capacity to comply. Capacity, in this case, is the human cognitive capacity that AI delegation can erode. Without explicit countermeasures, the rules and the capacity move in opposite directions over time, and the gap between them is filled either by overstated compliance documentation or by undeclared exceptions that no one wants to write down.

Government leaders can take a practical step here. The step is to commission an honest internal review of where AI delegation has already settled into pattern, and to ask whether the operational fallback assumptions in current planning still hold. Consulting firm leadership has an equivalent step. Identify the analytical capabilities that genuinely differentiate the firm from competitors, and ensure those capabilities are being actively practised by humans rather than handed off to models. Individual practitioners face a more personal version of the question. What is the analytical task in your own work that you used to do in your head, and would now struggle to complete without the AI? That answer is your equivalent of the GPS route, and the work to recover it either begins now or gets harder later.

Cognitive offloading is fine until the offload is the only way back. The path dependence finding says the moment of “only way back” is closer than most organisations realise, and that the road there has been steadily built by the convenience of letting the model do the thinking. Letting that road continue to extend without active counter-balance is the choice with the highest long-term cost.

References

Deloitte. (2026). Scaling the public sector’s human edge: Making human-AI collaboration work. Government Trends 2026.

Gartner. (2026). Strategic predictions on AI workforce capability: AI-free skills assessments forecast.

Path dependence under adaptive AI delegation. (2026). arXiv preprint.

ACM Communications. (2026). The AI Deskilling Paradox.

San Diego Business Journal. (2026). AI Is Deskilling Your Workforce.
