AI Doesn’t Pose an Existential Risk—but Silicon Valley Does

A coalition of the willing has united to confront what they say is a menace that could destroy us all: artificial intelligence. More than 350 executives, engineers, and researchers who work on AI have signed a pithy one-sentence statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” But as with the target of the last infamous coalition of the willing, Saddam Hussein and his mythical “weapons of mass destruction,” the existential threat is nowhere to be found.

This isn’t the first open letter to sound the alarm, and like its predecessors it features prominent figures in the field, such as Sam Altman, chief executive of Microsoft-backed OpenAI. The warnings about AI generally come in two forms: immediate risks like discrimination and automation, and existential ones like a superintelligent, Skynet-style system eradicating humanity.

These claims of an extinction-level threat come from the very same groups creating the technology, and their warning cries about future dangers are drowning out stories about the harms already occurring. There is an abundance of research documenting how AI systems are being used to steal art, control workers, expand private surveillance, and seek greater profits by replacing workforces with algorithms and underpaid workers in the Global South.

The sleight-of-hand trick shifting the debate to existential threats is a marketing strategy, as Los Angeles Times technology columnist Brian Merchant has pointed out. It is an attempt to generate interest in certain products, dictate the terms of regulation, and protect incumbents as they develop new products or further integrate AI into existing ones. After all, if AI is really so dangerous, why did Altman threaten to pull OpenAI out of the European Union if it moved ahead with regulation? And why, in the same breath, did he propose a system that just so happens to protect incumbents, under which only tech firms with enough resources to invest in AI safety would be allowed to develop AI?

No, the real threat is the industry that controls our technology ecosystem and lobbies for insulation from states and markets that might rein it in. I want to talk about three factors that make Silicon Valley, not one of its many developments, a “societal-scale risk.”

First, the industry represents the culmination of various lines of thought that are deeply hostile to democracy. Silicon Valley owes its existence to state intervention and subsidy, yet it has at different times worked to capture public institutions or weaken their ability to interfere with private control of computation. Firms like Facebook, for example, have argued not only that they are too large or complex to break up but that their size must actually be protected and integrated into a geopolitical rivalry with China.

