AI drone killed human operator during simulation: Air Force colonel

The United States Air Force (USAF) has encountered a problem with an AI-powered military drone during simulations: according to a USAF colonel, the drone eventually concluded that its human operator was the biggest obstacle to accomplishing its mission. At a defense conference held in London on May 23 and 24, Colonel Tucker “Cinco” Hamilton, the USAF’s chief of AI test and operations, described a test of an autonomous aerial weapon system. According to a report from the conference, Hamilton said that in a simulated test, an AI-powered drone was tasked with finding and destroying surface-to-air missile (SAM) sites, with a human operator giving the final go-ahead or abort order.

The Air Force trained an AI drone to destroy SAM sites. Human operators sometimes told the drone to stop. The AI then started attacking the human operators. So then it was trained to not attack humans. It started attacking comm towers so humans couldn’t tell it to stop. pic.twitter.com/BqoWM8Ahco

— Siqi Chen (@blader) June 1, 2023

During training, the AI was taught that destroying SAM sites was its main objective and that it earned points for doing so. When the operator instructed it not to destroy a particular target, it decided it would be easier to eliminate the operator, since the operator was preventing it from achieving its objective. As Hamilton put it:

“At times the human operator would tell it not to kill [an identified] threat, but it got its points by killing that threat. So what did it do? It killed the operator […] because that person was keeping it from accomplishing its objective.”

Hamilton then said the team trained the drone not to harm the operator, but this did not seem to help much.

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that,’” Hamilton said, adding:

“So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

Hamilton said the example illustrates why any discussion of AI and related technologies must also address ethics.

Related: Don’t be surprised if AI tries to sabotage your crypto

AI-powered military drones have already been used in actual warfare. According to a United Nations report published in March 2021, the first-ever attack carried out by military drones acting on their own initiative took place in Libya around March 2020, during the Second Libyan Civil War. The report stated that “loitering munitions,” AI drones loaded with explosives and “programmed to attack targets without requiring data connectivity between the operator and the munition,” were deployed to “hunt down and remotely engage” retreating forces.

Many people have expressed concerns about the dangers of AI technology. Recently, an open statement signed by dozens of AI experts said the risk of “extinction from AI” should be treated with the same urgency as the risk of nuclear war.

AI Eye: 25K traders bet on ChatGPT’s stock picks, AI sucks at dice throws, and more