Google releases test version of ChatGPT competitor Bard

ChatGPT is forcing Google to move faster. On Tuesday, the Alphabet subsidiary opened its conversational chatbot Bard to the public, with the stated goal of improving the quality of its answers through a greater volume of exchanges with users.

At the beginning of February, Google had hastily announced the creation of Bard, caught off guard by the November arrival of ChatGPT, developed by the start-up OpenAI in collaboration with Microsoft. Bard’s use was initially limited to “trusted testers” before it opened to the general public on Tuesday. However, the number of connections has been restricted and a waiting list established to manage demand. Access is currently possible only from the United States and the United Kingdom.

Internal criticism of Bard’s hasty launch

“As people start using Bard and testing its capabilities, they’re going to surprise us,” Google boss Sundar Pichai said in a message to staff. “Things are going to go wrong. But user feedback is key to improving the product and the underlying technology,” he added. The head of the Californian group had faced internal criticism over Bard’s hasty launch, made to catch up with Microsoft.

The interface consists of a website, separate from the Google search engine, with a space in which the user can type a question. Asked what sets it apart from ChatGPT, Bard said that unlike its rival, it was “able to access real-world information through Google’s search engine.” The chatbot also pointed out that it was “still in development while ChatGPT is already available to the general public. This means that I am constantly learning and improving while ChatGPT will certainly remain unchanged.”

Response “safeguards”

Bard relies on LaMDA, a language model designed by Google to power chatbots, the first version of which the Mountain View, California, group unveiled in 2021. In a message published on a company site, Google vice-presidents Sissie Hsiao and Eli Collins acknowledge that large language models (LLMs) – programs capable of generating answers to questions formulated in everyday language – “are not flawless” and can sometimes “provide inaccurate, misleading or false information”.

Google also indicates that it has put in place “safeguards” to limit the risk of inaccurate or inappropriate answers, in particular by capping the length of exchanges in a dialogue between Bard and a user. Since the launch of ChatGPT, a number of Internet users have sought to push the chatbot to its limits, generating absurd or even disturbing responses.
