Does algorithmic video surveillance signal the end of our freedoms?

Smile, you’re being filmed! In April, Parliament passed the Olympics Law, which allows images from cameras and drones to feed algorithms. These are known as algorithmic video surveillance cameras, an unwieldy term for cameras backed by artificial intelligence and capable of recognizing behavior deemed “abnormal” and reporting it to law enforcement during sporting, recreational or cultural events.

Leaving the machine to detect suspicious behavior on its own raises many questions. How does it decide that a person moving against the flow, standing still or lying on the ground, for example, represents a danger? Even before the question of personal data comes that of individual freedoms. “Behind this tool there is a political vision of public space, a desire to control what happens there,” notes Noémie Levain, in charge of legal and political analysis at La Quadrature du Net. “A group that forms could be flagged as an incident when it is simply a group of friends. There are moral and political choices in the very design of these tools which, for us, are dangerous. Any surveillance tool is a source of control and repression for the police and the state.” The algorithms are trained by humans, who tell them what they should or should not consider dangerous.

Even more worrying, these algorithms rely on deep learning, which uses neural networks to solve complex tasks. “It is a type of statistical learning so complex that humans cannot follow every step of its reasoning. These algorithms will use personal and biometric data to identify situations without our being able to know exactly which data are used,” she continues. On one hand, humans are disempowered by delegating decision-making power to the machine. On the other, these infrastructures give law enforcement new surveillance capabilities. “We are giving the police an enormous power they did not have before: that of being omniscient, of seeing what they had not seen until now and of deciding what is suspicious,” observes Noémie Levain.

“Facial recognition is coming tomorrow”

Faced with the outcry, the Senate assured in April that it had strengthened the “safeguards.” Are the absence of facial recognition and the deletion of images after twelve months sufficient guarantees? “If, after eleven and a half months, the entire database is hacked and ends up on the darknet [a parallel, hidden network often associated with illicit or illegal activities], too bad,” says Hélène Lebon, a lawyer specializing in personal data protection law. “Under ordinary video surveillance rules, outside the 2024 Olympics law, when you have a camera on public roads, the images cannot be kept for more than a month. Here it is twelve times longer.”

An algorithm trained to recognize so-called suspicious behavior is then capable of facial recognition. Once it performs well enough, it is too late. Especially since the same cameras serve both algorithmic video surveillance and facial recognition, and the same companies are behind them. Once the infrastructure is in place, all it takes is a new legislative green light to take the next step.

“Decision-makers often hold up facial recognition as the scarecrow of the most intrusive technology, and it is indeed very dangerous, because in certain applications facial recognition can be traced back to an identity,” underlines Noémie Levain. “But in both cases, it is about spotting, controlling and knowing who is where. Either we link a face to an identity, or we identify the person by their behavior and find them on a video surveillance image. Both technologies are equally dangerous. Algorithmic video surveillance is already here, and facial recognition arrives tomorrow,” she insists.

“A great strategy for putting regulators to sleep and pushing through sensitive files is to do it in two stages. We say: ‘Today, the technology does not allow [facial recognition].’ Later, we say: ‘Now it allows it,’” adds Hélène Lebon. “Going there little by little is a persuasion maneuver. You start with an experiment and play down the operation,” adds Noémie Levain. “We fear that at the end of this experiment, on March 30, 2025, it will be decided to make this technology permanent. Switching to facial recognition will then be no more than a formality. The algorithms will have had time to train on a monstrous database.”

The problem of discriminatory algorithmic bias

The machine is not infallible, and such power is worrying. Need we remember that artificial intelligence is not free of bugs and discriminatory algorithmic biases? It is impossible to train algorithms on completely objective databases; they reproduce biases that already exist in society. If a society is racist, the algorithm will reflect that. It has been shown in the United States, for example, that facial recognition is most often wrong about racialized populations, particularly Asians, African Americans and Native Americans.

According to a US government report published in 2019, facial recognition misidentified Asian or Black people 100 times more often than white people. Should we fear that these populations will be wrongly reported by the machine? “Diverted from their initial use, these technologies risk, in the long term, targeting already marginalized groups. With these new technologies, the risks of discrimination are real,” writes Amnesty International in a petition published on September 9.

“Almost all countries that have hosted major sporting events have passed security laws, and the infrastructure has remained,” laments Noémie Levain. “We fear a move toward generalization, then toward other biometric surveillance technologies, such as facial recognition or emotion recognition, which have already been proposed by members of parliament.” Should the state grant itself such surveillance power? That is the real question.
