AI use in the Gaza war? Serious allegations against Israel’s army

As of: April 6, 2024, 3:52 p.m.

Is Israel’s army using AI in Gaza? An investigative journalist makes serious allegations: when selecting Hamas targets, artificial intelligence was used and civilian casualties were knowingly accepted. The military rejects this.

It was the confidential statements of six officers from the elite Israeli intelligence unit 8200 that led the Israeli investigative journalist Yuval Abraham to the conclusion that, shortly after the outbreak of war, the military used an AI program called “Lavender”, and did so on a very large scale.

Officially, the system was designed to use artificial intelligence to evaluate intelligence data on all members of the military wings of Hamas and Islamic Jihad and to mark them as potential bombing targets, above all the so-called “simple foot soldiers”, who had previously barely, or not at all, been tracked by the intelligence service.

“37,000 Palestinians marked at peak”

On the scale of the AI-based program, Yuval Abraham says: “How broadly did ‘Lavender’ flag Palestinians in Gaza as suspected or possible lower-ranking Hamas or Islamic Jihad militants? My sources told me that at its peak, ‘Lavender’ marked 37,000 Palestinians in Gaza as such suspects.”

In the first weeks of the war, the army gave approval for up to 15 to 20 civilians to be killed for every low-ranking Hamas gunman flagged by the Lavender AI program. When bombing these targets, Abraham’s sources from the elite Unit 8200 told him, so-called “dumb bombs” were primarily used, which destroyed entire houses and killed everyone inside. For higher-ranking Hamas officials, this threshold was reportedly raised significantly.

Yuval Abraham published his findings on Wednesday of this week in the Israeli-Palestinian online magazine 972, simultaneously with the British Guardian, with which he had previously shared the Israeli officers’ explosive statements.

Informants apparently feel responsible for killings

What motivated the officers, most of them reservists, to share their top-secret knowledge about the use of artificial intelligence in the Gaza war with him? These were largely people who had only returned to the military after October 7.

“They were shocked by the atrocities of October 7th and the massacres that took place that day. And I think that some of them, after serving in the military for weeks and acting under those guidelines, were even more shocked by things that were required of them.”

Some, he says, felt a responsibility to pass on this information, first to the Israeli public, but also to the international public.

But there were also other reasons why his sources from the elite Unit 8200 contacted him, says Abraham: “Several of them spoke to me about feeling directly responsible for the killing of Palestinian families.” They were of the opinion that many of these policies, “including the almost automatic reliance on artificial intelligence, are unjustifiable.”

Israeli armed forces reject the findings

In a detailed statement published verbatim by the British Guardian on Wednesday, the Israeli armed forces firmly rejected the findings: contrary to the claims, the army does not use an artificial intelligence system “that identifies terrorist operatives or tries to predict whether a person is a terrorist.”

According to the statement, the information systems are merely tools for analysts in identifying targets. Army regulations require analysts to conduct independent examinations to verify that identified targets meet the relevant definitions under international law and under army directives.

Minimal screening of targets?

In practice, however, things look significantly different, according to investigative journalist Abraham. His sources told him that there was one check per target, “but only a very minimal one. According to a source who spoke to me, the human review took about 20 seconds per AI-tagged person, and the only check they had to perform was to listen and determine whether the person was male or female.”

If it was a woman, they knew “that ‘Lavender’ had made a mistake, because it wasn’t supposed to mark women.” And if it was a man, they approved the AI’s result “without looking at the reasons for the machine’s decision and without examining the raw intelligence data. They were not obliged to do that.”

Clemens Verenkotte, ARD Tel Aviv, tagesschau, April 6, 2024, 1:54 p.m.