ChatGPT: What the AI behind the chatbot can do – and what the risks are


What is ChatGPT? The question is currently occupying people around the world, and of course the chatbot has an answer ready too. The software from the company OpenAI simulates human conversation, supported by artificial intelligence. From the text data it is fed, it can learn to produce virtually any kind of content, even fiction or poetry.

"GPT stands for Generative Pre-trained Transformer," says Andrew Patel of the Helsinki-based cybersecurity firm WithSecure. "The AI model is trained on enormous amounts of data. Basically, it continues writing whatever you put into it. If you ask it something, it answers; if you ask it to continue writing for you, it does." That this feels "like magic" to users, Patel says, comes down to how convincingly the chatbot pulls it off.

Machines learning from human feedback is actually nothing new. The first models of this kind were developed back in the 1990s, says Tim Scarfe of the British tech company XRAI. Since then there has been a quiet revolution. "The early versions had very, very small neural networks. Since then, the neural networks have gotten bigger and deeper." And they now learn from context: it is no longer necessary to retrain the model at every step. "People realized that you could give the model a Shakespearean sonnet and it just kept generating Shakespearean content. Then people got a little more experimental and said, 'Why don't we just give it questions? Why don't we ask it math problems, or things it wasn't trained to do?' And that's when this reasoning ability emerged."

Last November, OpenAI made the software available for free public testing. According to the company, more than a million users tried out the chatbot within a week, whether to chat with it or simply to knock out a few presentations in a hurry.

Again and again, one question comes up: what are the risks? Anyone with malicious intent could, for example, generate a script modeled on conversations between Twitter users, says Andrew Patel. "And with such a script, you could then influence the perceived political landscape on a social network. Or automate the harassment of individuals." The model's output can also be misleading or distorted if its training data is too one-sided. Tech giants such as Google and Amazon have already acknowledged that some of their AI experiments were "ethically sensitive", and at several companies humans had to step in to repair the damage.
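Patel's description, that the model simply continues whatever text it is given, can be tried out directly against OpenAI's API. The following is a minimal sketch, not part of the original article; it assumes the openai Python package (0.x API), a GPT-3-family completion model such as text-davinci-003, and an API key in the OPENAI_API_KEY environment variable.

```python
# Minimal sketch (not from the article): asking an OpenAI model to
# continue a piece of text, the behavior Patel describes.
# Assumes the openai Python package (0.x API) and an API key in the
# OPENAI_API_KEY environment variable; the model name is illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = "Shall I compare thee to a summer's day?\n"

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family completion model
    prompt=prompt,             # the model continues this text
    max_tokens=64,             # cap the length of the continuation
    temperature=0.7,           # allow some randomness in word choice
)

print(prompt + response.choices[0].text)
```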
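Scarfe's point about learning from context, without any retraining, is what is usually called in-context or few-shot prompting: worked examples are placed in the prompt itself, and the model picks up the pattern as it continues the text. A sketch under the same assumptions as above:

```python
# Sketch of in-context ("few-shot") prompting: the arithmetic examples
# below are part of the prompt, not of any training run. The model is
# not retrained; it infers the pattern while continuing the text.
# Same assumptions as the previous sketch (openai 0.x, OPENAI_API_KEY).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

few_shot_prompt = (
    "Q: What is 12 + 7?\n"
    "A: 19\n"
    "Q: What is 30 - 4?\n"
    "A: 26\n"
    "Q: What is 15 + 8?\n"
    "A:"
)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=few_shot_prompt,
    max_tokens=5,
    temperature=0,  # near-deterministic output for a factual pattern
)

print(response.choices[0].text.strip())  # expected continuation: "23"
```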
