Artificial intelligence in China: chatbot with “socialist core values”


Background

As of: 04/11/2023, 7:47 p.m.

China’s tech giant Alibaba wants to get into the artificial intelligence business. At the same time, Beijing is preparing government requirements. But governments in the West must also ask themselves: how many rules does the technology need?

By Antonia Mannweiler, tagesschau.de

It was a big announcement for the Chinese internet giant Alibaba. The cloud division of the e-commerce group today presented a competitor to the chatbot ChatGPT: the AI-powered language software “Tongyi Qianwen”, whose name translates roughly as “truth from a thousand questions”. But the developers’ joy is likely to have been dampened shortly afterwards. On the same day, the Chinese internet regulator, the Cyberspace Administration of China, published a first draft of planned rules for AI services.

In 21 points, the authority outlines the requirements that Chinese companies and developers of AI language models could soon face. According to Beijing, generated content must reflect the “core values of socialism”. In addition, no information may be disseminated that could disrupt the economic and social order. When designing their algorithms, developers must also take care to prevent discrimination based on, for example, gender or age.

Bot with “hallucinations”

The rule that all content must be truthful is also likely to pose a problem for developers. AI language models are still at an early stage of development; in many cases the software works imprecisely and is quite error-prone. Google, for example, suffered an embarrassing slip when presenting its chatbot “Bard”, which gave a wrong answer about the James Webb Space Telescope at its first public appearance. Alibaba’s chatbot, for its part, is initially aimed at business users and is intended to draft documents or e-mails, for example.

However, it remains to be seen how well the bot will fare against the competition, says George Karapetyan, an AI specialist at the consultancy LPA, in an interview with tagesschau.de. “According to initial user reports, Alibaba’s bot has also already shown ‘hallucinations’, which ultimately means that it confidently gives wrong answers.”

The Chinese regulator now wants to put a stop to such false content. Comments and suggestions on the catalog of rules can be submitted until May 10. “As the Chinese government begins to regulate and mandate what these bots can and cannot say, this could present an additional hurdle in the trade-off between innovation and compliance,” says Karapetyan.

Is technology evolving too fast?

On the other hand, from the expert’s point of view, introducing clear rules for companies early on can also help to reduce the risk of unforeseen outcomes. “If China succeeds in defining clear guidelines at an early stage, that also holds opportunities.” However, it can be difficult to regulate a technology that is developing so quickly and is so intelligent. Hardly a day goes by without reports of internet users circumventing the safeguards meant to keep the bots in check.

Alibaba is just the latest example of a Chinese company with its own chatbot. Only a day earlier, the Hong Kong-based AI company SenseTime had presented its chatbot “SenseChat” in a live demo, to which the stock market reacted with a sharp price jump. And not least, the Chinese search engine operator Baidu has also demonstrated its chatbot “Ernie Bot” – to less enthusiasm, and a falling share price.

“Chinese bots are still lagging behind at the moment and are primarily focused on the Chinese language,” says AI specialist Karapetyan. For now, ChatGPT, the software built by the start-up OpenAI and backed by Microsoft, is the “clear market leader and the gold standard” among chatbots.

How are governments responding?

With ChatGPT, Microsoft has put its rivals in the tech industry under pressure to push ahead with their own artificial intelligence business, even if the products are still immature. At the same time, the rapid pace of development is increasing the pressure on governments around the world to work out how legislators should respond.

In the USA, the NTIA (“National Telecommunications and Information Administration”) today announced public consultations on possible government measures. “Just as food and cars only enter the market when they are safe, AI systems should reassure the public, government and businesses that they are fit for purpose,” the agency said in a statement. In the end, it could recommend safety assessments or certification of artificial intelligence to policymakers.

Italy sets ChatGPT a deadline

Government rules for the new technology are also being sought in the EU. Most recently, the Italian data protection authority caused a stir by provisionally blocking ChatGPT in the country. Its main concerns were the massive collection of personal data and the protection of minors. Italy has given OpenAI 20 days to report on the measures the company will take. Otherwise it faces a fine of up to 20 million euros or four percent of annual turnover.

Two years ago, the EU Commission presented a draft AI regulation that could come into force this year. Regulation is urgently needed in this area too, Paul Lukowicz, head of the Embedded Intelligence research unit at the German Research Center for Artificial Intelligence (DFKI), tells tagesschau.de. The technology will change the world in ways that cannot even be imagined today. It therefore cannot simply be left to proliferate unchecked.

Possible “watermark” for bots

Stricter rules are needed wherever human life, health or freedom is at stake, Lukowicz believes – but not in areas where the technology causes no harm. The problem is not research or development, but the question of where a technology is deployed.

In the long term, Lukowicz could imagine a kind of “watermark” for content created by bots. When it comes to rules for artificial intelligence, the expert draws a parallel to the pharmaceutical industry: with medicines, too, very few people know how they work – only that they work. Drugs require studies and strict approval processes, and cases of harm still occur. In Lukowicz’s view, what matters most is a balanced cost-benefit analysis.
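The article does not say how such a watermark would work technically. One approach discussed in current research is statistical token watermarking: during generation, the model softly prefers a pseudo-randomly chosen “green” subset of its vocabulary at each step, and a detector who knows the seeding scheme can later test whether a text contains suspiciously many green tokens. The following Python sketch is purely illustrative – the whitespace tokenizer, the tiny vocabulary and the 50/50 green/red split are assumptions for demonstration, not part of any proposal mentioned here:

    import hashlib
    import random

    def green_list(prev_token, vocab, fraction=0.5):
        # Deterministically derive a "green" subset of the vocabulary
        # from the previous token. A watermarking generator would bias
        # its sampling towards these tokens; a detector re-derives the
        # same subset without needing access to the model.
        seed = int(hashlib.sha256(prev_token.encode("utf-8")).hexdigest(), 16)
        rng = random.Random(seed)
        shuffled = sorted(vocab)   # fixed order before shuffling, for reproducibility
        rng.shuffle(shuffled)
        return set(shuffled[: int(len(shuffled) * fraction)])

    def green_fraction(text, vocab):
        # Detection side: the share of tokens that fall into the green
        # list derived from their predecessor. Unwatermarked human text
        # should hover around the split fraction (here 0.5); watermarked
        # output scores markedly higher.
        tokens = text.split()      # toy whitespace tokenizer, illustration only
        if len(tokens) < 2:
            return 0.0
        hits = sum(
            tokens[i] in green_list(tokens[i - 1], vocab)
            for i in range(1, len(tokens))
        )
        return hits / (len(tokens) - 1)

Real schemes of this kind operate on model logits over tens of thousands of subword tokens and use statistical significance tests rather than a raw fraction; the point of the sketch is simply that detection requires no access to the model itself, only knowledge of the seeding scheme.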
