Accelerating into the Future: Impact of Artificial Intelligence on Warfare

Ross Linnane
Australian Defence Force

Artificial Intelligence (AI) is already redefining the character of conflict, and its adoption marks a turning point in warfare. From autonomous intelligence, surveillance and reconnaissance (ISR) and precision targeting to rapid decision-making loops and cognitive electronic warfare, we're seeing the dawn of a new era in which the speed of thought may matter more than the speed of flight (Zhang et al., 2020).

Adversaries are investing heavily in AI-enabled platforms designed to out-cycle human operators. If we hesitate, we risk becoming predictable in an age where predictability is vulnerability. AI isn't a niche enabler. It's fast becoming the new centre of gravity, reshaping operational planning, targeting processes, and command and control hierarchies (Black et al., 2024).

The Royal Australian Air Force (RAAF) has already taken deliberate steps to modernise, but our trajectory must now steepen. Initiatives like Plan Jericho and the MQ-28 Ghost Bat show we're serious about AI-enabled force design, and AI applications have already enhanced areas such as search and rescue operations (Ferguson, 2020; Boeing, 2025). But these must be seen not as endpoints, but as launch platforms for deeper, more integrated transformation.

Our future force must be capable of seamlessly teaming human operators with autonomous systems. We need AI in our cockpits, our command chains, and our logistics tail – not to replace human judgement, but to enhance it, to out-think and out-manoeuvre our adversaries in compressed decision windows.

What happens when AI becomes something more? Artificial General Intelligence (AGI) may still feel abstract, but history suggests that disruption often arrives faster than expected. AGI won't just process information – it could autonomously generate strategy, outmatching even experienced commanders in speed and accuracy.

While a comprehensive exploration of AGI’s implications merits deeper study, its trajectory already demands immediate institutional consideration.

For the RAAF, this presents both a threat and an opportunity. We must explore AGI's implications now, not later. Doctrine, training, ethics, and accountability frameworks need to evolve before the technology does (Devitt & Copeland, 2021).

Leadership in this environment won't be about rank. It will be about mindset. Air Force leaders must become literate in AI just as much as they are in air power. We need commanders who understand not just what AI can do, but what it will demand of us – operationally, ethically, and culturally.

This also demands new partnerships: with academia, industry, and allies. Interoperability won't just be about platforms, but about algorithms, data standards, and human-machine teaming concepts.

To move beyond awareness and into preparedness, Air Force could consider establishing an AI Futures Taskforce to drive experimentation, wargaming, and ethical framework development. Embedding agile software development teams within operational units would also enable faster iteration of AI-enabled capabilities. Further, partnering with allies on AI interoperability and scenario planning can position the RAAF to influence, not just follow, coalition AI doctrine.

We are not passengers in this transformation. We are pilots. The decisions we make now – about how we fund, train, structure and lead – will define our operational advantage for decades.

AI will not wait for our policy cycles to catch up. It won't pause for our comfort zones. The RAAF must lean forward, not just to survive disruption, but to shape it. Because if we don't, someone else will.