Claims that a US Air Force-trained artificial intelligence (AI) powered drone “attacked” its human operators and communication towers when it was supposed to target enemy weapons have been shared widely on social media.
One tweet, shared hundreds of times, says: “The Air Force trained an AI drone to destroy SAM [surface-to-air-missile] sites.
“Human operators sometimes told the drone to stop.
“The AI then started attacking the human operators.
“So then it was trained to not attack humans.
“It started attacking comm towers so humans couldn't tell it to stop.”
This text is accompanied by a screenshot, which further describes how, in a simulation, the trained AI drone had “killed the operator because that person was keeping it from accomplishing its objective [eliminating the threat of an enemy surface-to-air missile]”.
Versions of this claim have also appeared a number of times on Facebook, and were reported by some media outlets.
But, as a number of media outlets have now reported, the US Air Force has since denied that the sequence of events described happened in real life, or even in a real US Air Force simulation.
US Air Force spokesperson Ann Stefanek told Full Fact: “The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology. This was a hypothetical thought experiment, not a simulation.
“It appears the colonel's comments were taken out of context and were meant to be anecdotal.”
While we’re not able to verify exactly what simulations the US Air Force or the organisations it works with may or may not have run, the original source of the claim has now backtracked, and there’s no evidence that the incident occurred as originally described.
Where did the claim originate?
The screenshot shared in the tweet comes from a report of a two-day summit in the UK held by the Royal Aeronautical Society.
The summit included a presentation by Colonel Tucker ‘Cinco’ Hamilton, the chief of AI test and operations at the US Air Force, during which he gave details of an apparently simulated test and “cautioned against relying too much on AI noting how easy it is to trick and deceive”.
His account of the AI-powered drone “killing” its operator was then widely shared online, leading the Royal Aeronautical Society to add a notice to its website on 2 June which says: “Col Hamilton admits he ‘mis-spoke’ in his presentation at the Royal Aeronautical Society FCAS Summit and the ‘rogue AI drone simulation’ was a hypothetical ‘thought experiment’ from outside the military.”
Colonel Hamilton further clarified the mistake, saying: “We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome.” He added that the US Air Force has not tested any weaponised AI in the manner described, either in real life or through a simulation.
We don’t have any further details about the “thought experiment” described by Colonel Hamilton, or where outside the military it originated.
A BBC report about the claims said that a number of experts in defence and AI had been “very sceptical” about initial reports of Colonel Hamilton’s presentation.
Misleading claims which circulate online have the potential to harm individuals, groups and democratic processes and institutions. Online claims can spread fast and far, and are difficult to contain and correct.
Image courtesy of US Air Force