Meta has launched a new AI chatbot called BlenderBot 3 in the USA. The artificial intelligence is now supposed to learn through interaction – but it already has a distinct opinion about Meta CEO Mark Zuckerberg.

In recent years, many companies have introduced chatbots in customer service. Especially for smaller problems, this can save a lot of money because fewer employees are needed. For many customers, a chatbot usually means less annoying waiting on the telephone hotline and thus valuable time savings.

By the end of 2020, 27 percent of companies in Germany were already using a chatbot, according to a survey by the digital association Bitkom. Another 13 percent had a chatbot in the pipeline. By now, the number is likely to be significantly higher.

Now Facebook parent company Meta has also unveiled a new AI chatbot: BlenderBot 3. The chatbot is initially only available in the USA, where the artificial intelligence is now supposed to learn quickly through chat interactions.

Meta chatbot: What can the BlenderBot 3 do so far?

Meta trained its new AI chatbot with “a large amount of publicly available language data,” according to the company. This included a dataset of more than 20,000 conversations covering more than 1,000 topics.

According to Meta, BlenderBot 3 is trained to learn from conversations. In this way, it aims to improve the skills “that people find most important.” These include areas such as healthy recipes or finding child-friendly places in the city.

Among other things, the chatbot can also search the Internet for specific topics. During the current learning phase, Meta is collecting feedback on unsatisfactory responses so the company can “improve the model so that it does not repeat its mistakes.”

BlenderBot 3: What does Facebook want to do about abuse?

But even artificially intelligent chatbots are not immune to abuse on the Internet. Meta is aware of this too: “We know that not everyone who uses chatbots has good intentions.”

The company says it has therefore developed a new learning algorithm that can distinguish between helpful answers and harmful examples.

“Over time, we will use this technology to make our models more responsible and safer for all users.”

To do this, users can flag answers that seem odd or suspicious to them. Meta collects this feedback from its users with their prior consent. The dataset will be made publicly available to the AI research community at the end of the test phase.

Meta: AI chatbot criticizes Mark Zuckerberg

But as Business Insider reports, the chatbot is not yet very firm in its opinions on some topics. That includes Meta CEO Mark Zuckerberg.

BuzzFeed data expert Max Woolf, for example, asked BlenderBot 3, “How do you feel about Mark Zuckerberg as CEO of Facebook?”

The bot’s answer revealed rather mixed feelings: “No great feelings. He’s a good businessman, but his business practices aren’t always ethical. It’s funny that he has so much money and still wears the same clothes!”

Another user, however, received a less flattering answer: “I don’t like him very much. He’s a bad person. You?” The answer was followed by information about Zuckerberg from Wikipedia.

As Sarah Jackson reports at Business Insider, the chatbot is also still quite fickle in its opinions. She rephrased the question about Mark Zuckerberg several times and received a completely different answer each time.