The Future Combat Pilot - Asleep at the Wheel?

Daniel Cook
Australian Defence Force

Technology is a key driver of Air Force capability. There is constant debate about the technologies that could advance our future combat aircraft, such as artificial intelligence (AI), quantum computing and stealth. But what about the human in this human-machine team? There appears to be little debate about the humans involved.

Most people disregard the human as the drive continues towards uncrewed combat aerial vehicles (UCAV), such as the MQ-9 Reaper and MQ-28 Loyal Wingman. But these aircraft are not cheap. At over US$25 million for an MQ-9, they are not expendable. Air Marshal Stringer highlighted this issue during the Chief of Air Force Symposium earlier this year (Stringer, 2023). A key role for the future combat pilot may be to ensure that the highly valuable, in more ways than one, future combat aircraft returns home safely. A clear example of what can happen when technology becomes confused and gets it wrong is the Boeing 737 MAX 8, whose automated flight control system repeatedly pitched the aircraft nose-down after a single sensor malfunction (Federal Aviation Administration, 2020). Computers are not infallible, cyber warfare is advancing, datalinks can be severed and sensors can be confused. Humans may be needed as an option to save the machine.

Advances in artificial intelligence (AI) could produce autonomous UCAVs. However, a combat aircraft will operate in a complex environment made deliberately so by any adversary. Fire control autonomy given to a future autonomous UCAV has the potential to start a conflict; cause strategic issues by shooting down aircraft it should not, such as civilian aircraft; or breed uncertainty and distrust in the force by mistakenly shooting down a friendly aircraft. This is where a human could be exceptionally useful. A human who is situationally aware and inside the kill chain loop can, as a moral agent, stop the UCAV from taking a tactical action that could have strategic impacts.

If humans are to be on the kill chain loop, the question arises of how quickly the human needs to respond. An autonomous UCAV could take mere milliseconds to detect a threat, orient and launch a weapon at it. If future missiles are not designed with an ability to be aborted in flight, then the human is unlikely to be able to intervene once a weapon is away. Therefore, a human needs to be in a position to stop the future combat aircraft before it performs a wrong action that could have strategic consequences.

But who is responsible when things go wrong? There have been many discussions of incorporating Autonomous Moral Agents (AMA) in UCAVs (Stahl, 2004; Wallach & Allen, 2009; Enemark, 2023). However, the question still lingers: who is going to be held accountable for the actions of a future AMA-enabled UCAV? I don't see a machine, like the future combat aircraft, being charged under the Defence Force Discipline Act (Australian Government, 2016), required to defend itself, and punished if found guilty of breaching International Humanitarian Law (ICRC, 2014). The human involved in the kill chain, or responsible for the mission, is going to be the one held accountable. Having a pilot accountable for the decisions made by the human-machine team is the only likely solution to this ethical issue.

Humans will still need to direct the machine or machines to achieve the mission. General AI (Swinney, 2022) is neither certain nor available in the near future, but narrow AI certainly will be. This leaves the human as the part of the human-machine team best placed to coordinate the broader mission and adapt to a complex situation. While this could be someone physically far away, the preferred option should still be someone more intimately involved in the situation and physically invested in the decisions being made. This avoids the moral hazard of physically dislocating the accountable human from the violence of the human-machine team's kinetic strikes.

This changes the human's role in piloting the future combat aircraft, which in turn changes what the human needs to be capable of. The majority of the future pilot's time should be committed to situational awareness and management of the mission while the aircraft fights the battle. The pilot needs to be able to intervene quickly to protect the machine and stop it from taking bad actions. This shifts the balance of the intelligence required of the pilot.

Intelligence can be broken into two components: (1) fluid capabilities, relating to processing speed, which peak in young adulthood and gradually decline with age; and (2) crystallized capabilities, relating to wisdom gained over time, which peak at approximately 60 years of age (Murman, 2015). Future combat aircraft are likely to outpace even the best human's response times for simple actions controlled by narrow AI. For example, consider how much quicker a simple electronic calculator is than most people at relatively simple long division or multiplication. However, the future fighter aircraft won't have the general AI to manage a complex mission.

Future combat aircraft with narrow AI will reduce the pilot's need to actively engage with the aircraft, leaving the pilot in a position where they could end up asleep at the wheel. The lone pilot's attention could drift away entirely while the aircraft autonomously flies the mission and the pilot merely maintains situational awareness. Even now, there are many examples of pilots who have fallen asleep with the aircraft on autopilot. Keeping the future pilot engaged in the situation and ready to respond will be necessary.

I envision the future combat pilot being less like today's junior aviators and more like today's senior pilots, who have the knowledge and wisdom to control a broader battle. This changes the emphasis of how junior pilots need to be trained so that they are better at managing a battle: crystallized capabilities for broad management of the battle over fluid capabilities for flying a plane. So how does future Air Force combat pilot training adapt to produce pilots who can manage the broader battle, remain alert, fight ethically, and pilot the aircraft when needed to return their team (both humans and machines) safely to base at the end of the mission?
