The dangers of AI: Why Geoffrey Hinton is leaving Google

The change of heart came quite suddenly. For decades, Geoffrey Hinton had pursued research in his specialty, in Cambridge and Edinburgh, in Toronto and finally at Google in Silicon Valley. And whenever people asked him whether the artificial intelligence (AI) he and his colleagues were working on wasn’t dangerous, the Turing Award winner deflected with a modified quote from Robert Oppenheimer, the father of the atomic bomb: “If something is technologically attractive, then you go ahead and do it.”

But now Hinton has dropped a kind of bombshell himself. He of all people, considered one of the most important pioneers of teaching machines to take over ever more complicated tasks, has quit his job at Google. Partly, he admitted in an interview with the magazine MIT Technology Review, because he can no longer keep track of the many details his work requires as well as he used to. Above all, however, he quit to serve as a voice of warning: a warning against a technology that could overwhelm people and even endanger them. “Sometimes I think it’s like aliens have landed and people don’t even realize it because they speak English so well,” he says in the interview.

Dangers from unscrupulous power-mongers

The now 75-year-old sees the greatest danger in the misuse of the technology by unscrupulous power-seekers such as Russian President Vladimir Putin. Many bad actors, he warns, want to use AI to win wars or manipulate elections. “Don’t think for a second that Putin wouldn’t build hyper-intelligent robots with the aim of killing Ukrainians. He wouldn’t hesitate.”

He can speak more freely about all of this now that he is no longer with Google. So far, he says, the company has been a good steward of the technology. But he could not talk about the dangers of AI while at Google, because doing so could have hurt the company’s business. He is obviously also worried about how things will go at Google now that Microsoft and OpenAI have presented ChatGPT and the GPT-4 model behind it, directly threatening Google’s core business, web search. The competition between the giants could flood the Internet with fake photos, videos and texts. Many ordinary users would then no longer be able to tell what is true and what is false.

However, the fact that Hinton has changed his mind only now upsets some. Meredith Whittaker, a longtime Google employee, left the company after disputes over discrimination and surveillance. “Where were these people when we spent months and thousands of dollars on lawyers?” she asks on Twitter. By “we” she also means Margaret Mitchell and Timnit Gebru, two AI researchers who were responsible for ethical questions surrounding artificial intelligence at Google and who were pushed out of the company after criticizing their employer’s conduct.

“…there was a moment to act together”

And she goes on to ask: “Where were you when we organized to stop it before it got to that point?” Google CEO Sundar Pichai, she says, lied about her and her fellow campaigners and played down the risks they had pointed out. “…there was a moment to act together when the power these men of AI hold could have been used in solidarity with an emerging movement to stop the worst of AI. For the most part, they did not use that power. And here we are.”

Hinton now also seems to regret his life’s work. And life’s work is the right term in his case. Since the 1980s, the computer scientist has persistently researched how neural networks can be made to learn. He is considered one of the fathers of what is still the most important method for doing so, so-called backpropagation. Although many other scientists saw no great future in it, he stuck with it, believing all those years that it would be a very long time before the technology worked satisfactorily. One of his students at the University of Toronto is a co-founder and the chief scientist of OpenAI, the company that developed ChatGPT.
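For readers unfamiliar with the method, the following is a minimal, generic sketch of backpropagation on a toy network, written in Python with NumPy. It is a textbook-style illustration only: the XOR task, the network size and all parameter choices are arbitrary assumptions for demonstration, not code from Hinton, Google or OpenAI.

```python
# Minimal illustration of backpropagation: a tiny two-layer network
# learns the XOR function by propagating the output error backwards
# through the layers (chain rule) and nudging the weights downhill.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic task a purely linear model cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer with 4 sigmoid units.
W1 = rng.normal(scale=1.0, size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=1.0, size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass: compute predictions layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: send the error signal from the output back through
    # the network -- this is the "backpropagation" of the name.
    d_out = (out - y) * out * (1 - out)   # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at the hidden layer

    # Gradient-descent update of all weights and biases.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# Predictions should approach [0, 1, 1, 0].
print(np.round(out, 3))
```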

But now Hinton sees things differently: “I suddenly changed my views on whether these things will be more intelligent than us,” he says in the interview with MIT Technology Review. “I think they’re very close now and they’ll be a lot smarter than us in the future. How do we survive?” Hinton believes there will soon be two types of intelligence in the world: biological brains and neural networks. Will the biological intelligence of humans be enough to stop the other kind? Hinton is rather skeptical: “The USA can’t even agree on keeping assault rifles away from teenage boys.”
