Competitive pressure from Microsoft is said to have led Google to ignore ethical principles when developing its own chatbot, Bard. That is the complaint of 18 current and former Google employees in a Bloomberg report. Bloomberg also had access to internal documents said to show that a rapid market entry was prioritized.
After competitor Microsoft launched the Bing chatbot, Google's internal ethics department is said to have been increasingly sidelined. The team's actual task is to review new AI developments for ethical problems and, if necessary, delay their release.
For certain issues, such as child safety, developers still need 100 percent approval from the ethics team members. In other areas, the approval threshold is said to have been lowered to 80 to 85 percent so as not to stand in the way of a release.
Ethics assessments said to be ignored
The employees in the ethics department complain that their opinions and guidelines no longer play a role in AI development. Instead, releasing AI products such as Bard at all costs takes priority.
Google was caught off guard by the release of the Microsoft chatbot. The company then pushed ahead with the development of its own chatbot. Before the first test version was released, employees had warned that Bard was a pathological liar and not a good product. Google introduced the bot anyway, labeling it an experiment.
The AI chatbot situation is said to have caused panic at Google in late 2022. Internally, a "Code Red" was declared and the development of Bard was prioritized.