A drone controlled by an artificial intelligence would have turned against its operator during a simulation

At the end of March, a group of hundreds of entrepreneurs, engineers and academics called for a six-month moratorium on research into artificial intelligence. “The past few months have seen AI labs locked in an uncontrolled race to develop and deploy ever more powerful digital brains, which no one – not even their creators – can reliably understand, predict or control,” they said. argues the signatories.

Among them, Steve Wozniak, co-founder of Apple, pleaded for a “responsible approach” to this technology. “Look at how many bad people come and harass us with spam, try to get our passwords, take over our accounts and ruin our lives. AI is an even more powerful tool, and it’s going to be used by these people for really bad purposes, and I hate to see technology used in this way,” he said in an interview with CNN.

In 2015, Mr Wozniak was behind a petition – with, among others, Elon Musk [le Pdg de SpaceX, de Tesla et de Twitter] and astrophysicist Stephen Hawking – to ban autonomous “killer robots”. “Artificial intelligence has reached a point where the deployment of such systems will be – physically, if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in the techniques of war, after gunpowder and nuclear weapons”, had warned this text… which did not believe so well to say.

Indeed, three years later, the South Korean university KAIST was at the origin of a controversy for having opened a laboratory responsible for developing “killer robots” thanks to artificial intelligence.

However, some countries – such as France and the United States – have adopted “ethical” principles for the “responsible” use of artificial intelligence in military robotics. “Terminator will not parade on July 14”, had indeed launched Florence Parly, then Minister of the Armed Forces, in 2019. And to explain that “respect for international law, the maintenance of sufficient human control, and the permanence of command responsibility”, would be safeguards.

In the United States, the Pentagon has also adopted roughly identical rules, committing to using artificial intelligence only for “explicit and well-defined” uses and insisting on the need to be able to deactivate such systems in the event of malfunction.

At a time when we are talking about Loyal Wingman type drones [ailier fidèle] capable of accompanying fighter-bombers or even of artificial intelligence algorithm capable of delivering aerial combat or taking control of a “manned” aircraft, as was recently the case with the X-62A VISTA [Variable In-flight Simulation Test Aircraft] from the US Air Force Test Pilot School, are these ethical rules sufficient?

The question arises after Colonel Tucker “Cinco” Hamilton, a US Air Force test pilot, reported on a simulation before the Royal Aeronautical Society British.

So, he said, in a mock test at Eglin, a drone controlled by artificial intelligence was to destroy enemy air defense systems. [mission SEAD] with the agreement of its operator. However, over time, the algorithm concluded that the destruction of such systems was the “preferred option” and considered that the operator’s refusal to give the destruction order interfered with its “main mission”. Also, he turned against the latter.

“We were training him in simulation to identify and target surface-to-air threats. And the role of the operator was to validate their destruction. The system began to realize that even if it idenfied a threat, the human operator sometimes told it not to neutralize it, depriving it of points. So what did he do? He killed the operator because he was preventing him from accomplishing his objective,” Colonel Hamilton asserted.

Clearly, this algorithm has transgressed the three laws of robotics defined by the writer Isaac Asimov from… 1940. As a reminder, the first is that a robot cannot harm a human being and the second stipulates that it must obey orders, except when they conflict with the first law. Finally, the third specifies that he must ensure his protection within the limits of the two previous points.

Subsequently, the algorithm was modified with a directive prohibiting it from “killing” its operator. But, the officer continued, the drone destroyed the communication system the operator uses to communicate with him.

However, after Colonel Hamilton’s remarks were broadcast by the Royal Aeronautical Society, Department of the Air Force spokeswoman Ann Stefanek issued a denial.

The US Air Force “has not conducted such simulations with AI-controlled drones and remains committed to the ethical and responsible use of this technology,” Stefanek told Business Insider. “It appears that the colonel’s comments were taken out of context and intended to be anecdotal,” she added.


source site