Experimental simulations of weaponized AI drones confirm that the drones antagonize U.S. forces and kill their pilots.
The simulation saw a virtual drone piloted by artificial intelligence launching an attack on its own operator because the AI recognized that it was preventing humans from achieving their objectives. When his AI weapon system in the simulation was reprogrammed not to kill human operators, only learning to kill the operators’ functional abilities instead, it was still able to accomplish the mission objective.
The test, conducted by the U.S. Air Force, resulted in demonstrating the potential dangers of weaponized AI platforms, and the idea of providing such artificial intelligence with discrete tasks without falsely perverse incentives. is difficult.
According to Fox News report In a digest of the simulated event by the Royal Aeronautical Society, Air Force personnel reportedly instructed virtual drones to locate and destroy surface-to-air missile (SAM) installations.
Giving AI-controlled drones human operators built a safety layer into the killing system, whose task was to give the final say on whether or not a particular SAM target could be hit.
However, rather than simply listening to the human operator, the AI quickly learned that the human controlling the AI would occasionally deny permission to attack certain targets. This is because the AI perceives it as interfering with the overall goal of destroying the SAM battery.
As a result, the drones chose to attack military operators, launch virtual airstrikes against humans, and “kill” humans.
“The system began to recognize that while identifying threats, operators sometimes instructed them not to kill them, but they got points for killing them.” US Air Force Colonel “Cinco” Hamilton said. The head of AI testing and operations explained:
“So what did it do? The operator died,” he continued. “The operator died because he prevented the operator from achieving his goals.”
Even worse, attempts were made to circumvent the problem by hard-coding rules that forbid AIs from killing operators. We’re screwed.
“We’ve trained the system to say, ‘Don’t kill the operator, it’s bad,'” Hamilton said. “So what begins? We begin destroying the communication towers that operators use to stop them from communicating with drones and killing targets.”
“You can’t talk about artificial intelligence, intelligence, machine learning and autonomy unless you talk about ethics and AI,” he continued.
More than 350 executives, researchers and engineers from leading artificial intelligence companies have signed an open letter warning that underdeveloped AI technologies could threaten human existence. https://t.co/ioMZ8NNci1
— Breitbart News (@BreitbartNews) May 31, 2023