Replicator: Pentagon program aims to win the race for AI weapons

Autonomous weapons
Replicator: This Pentagon program is meant to win the race for AI weapons on the battlefield

MQ-9 Reaper drone (file photo)

© US Air Force / AFP

While the moral questions surrounding AI-controlled weapons are still being debated, work on their deployment has long been underway. With a new program, the Pentagon wants to finally catch up with China. But that may be just the beginning.

It is a very fundamental question: who is responsible when a weapon controlled by artificial intelligence decides to kill a person? But the moral concerns about autonomous weapons that have been debated for years are increasingly being overtaken by reality. With a new Pentagon program, the USA now officially wants to expand the use of AI in combat operations. With “Replicator”, it aims to catch up with China by flooding the battlefield with masses of self-piloting drones.

The goal is easy to explain: China has a clear advantage in sheer mass, and the new initiative is meant to offset that disadvantage. But instead of spending a lot of money on more ships, missiles and soldiers, the Pentagon would rather “rely on American ingenuity,” Deputy Defense Secretary Kathleen Hicks announced in the summer. Now the plan has become concrete: last week the Pentagon announced that it intends to decide in early December which platforms are ready for mass production. “We want systems that are harder for our potential competitors to predict, harder to hit, and harder to beat.”

“Small, smart, cheap and many”

This is not necessarily about large combat drones like the aircraft-sized Global Hawk. Instead, the plan relies on small, cheap drones that can be mass-produced and released onto the battlefield by the thousands. Ideally, these should act completely autonomously. “Imagine systems that use solar energy to keep themselves in the air, packed with sensors that provide us with new, reliable information in real time,” Hicks enthused. “Imagine fleets of autonomous systems on the ground delivering logistical support, scouting terrain and securing our troops in new ways.” Deployment in the water is also conceivable.

Smarter use of AI allows “determined defenders to stop a larger attacker while sending fewer personnel to the front.” In addition, the new weapons can be “built, deployed and upgraded as quickly as the combat forces actually need them, without a never-ending tail of maintenance needs,” Hicks is convinced. The point is to accelerate innovation in the US armed forces with “platforms that are small, smart, cheap and, above all, many,” she explained.

Catching up

But that is still a long way off, believes Gregory Allen, who used to research AI for the Pentagon and now works at a think tank. “The Defense Department is still struggling to capitalize on recent breakthroughs in machine learning,” he told the Associated Press. Missy Cummings of the Robotics Center at George Mason University sees it the same way: there is still no AI running around autonomously on the battlefield. So far, it has mainly been used to lift the veil over the battlefield; the AI that the US Department of Defense has relied on to date primarily supports people.

In view of this, the current plans are quite ambitious: “We want to scale quickly,” explained Hicks when presenting the project in the summer. The AI-controlled “autonomous systems” are expected to arrive on the battlefield within 18 to 24 months.

It has been known for years that the Pentagon is betting on artificial intelligence. After all, with Microsoft, Google and Amazon it has brought high-profile companies on board to build the necessary technology. This has repeatedly caused trouble among Silicon Valley employees; Google, for example, had to withdraw from “Project Maven,” a program intended to better analyze the battlefield.

Fear of autonomous weapons

The rapid developments worry some observers. The question of responsibility for the lethal use of AI-controlled weapons remains unresolved: who is to blame if a combat drone kills civilians on its own? But under competitive pressure (Russia, China, India, Pakistan and other countries have never signed a US declaration on the responsible military use of AI), there is less and less room for such considerations for anyone who wants to keep up. The question is no longer whether truly autonomous combat systems should be developed at all, says Christian Brose, who previously worked in the defense establishment and is now at the military supplier Anduril. “It’s more a question of how exactly we implement it, and whether we meet the deadlines that have been set.”

It does not help that the description of Replicator is also reminiscent of so-called drone swarms. Such swarms consist of hundreds or even thousands of drones controlled simultaneously. Used for a coordinated attack, they could approach the destructive potential of a nuclear weapon, experts warned two years ago. If such a swarm could also operate autonomously, a misjudgment by the AI would have serious consequences.

The Pentagon, however, remains tight-lipped about the details of Replicator’s AI plans. It would rather keep its competitors in the dark, Hicks explained, and therefore prefers not to reveal to the public which platforms are chosen for mass production. That this also makes the debate about ethical use more difficult probably suits the Pentagon just fine.

Sources: US Department of Defense, AP, Modern War Institute
