
LaMDA: Google AI wants to be smart in all circumstances

ARTIFICIAL INTELLIGENCE: Google’s artificial intelligence can already hold a complete and coherent conversation on virtually any topic

Where does the LaMDA project stand? Google used its annual I/O conference to unveil its latest advances in artificial intelligence.

LaMDA (Language Model for Dialogue Applications) is an AI capable of understanding a conversation. The bot can impersonate almost anything and strike up a discussion on any topic in a natural way. The idea behind LaMDA is to build chatbots that can pretend to be a company, a person, an object, or any other entity. To illustrate the concept, Google staged a conversation with the planet Pluto, and then with a paper airplane.

LaMDA is able to improvise answers while pretending to be Pluto, which is quite impressive. If the AI can engage in any conversation without needing a predefined script, it is thanks to the many concepts it has stored and learned. This allows it to keep the dialogue going without talking in circles. In the Pluto example, “no answer was predefined,” says Google CEO Sundar Pichai. The dialogue seems to flow naturally.

Still at an embryonic stage

This chatbot differs from what we already know with Alexa, Siri or the current Google Assistant, because the goal is to use LaMDA orally, in a fluid conversation: “Unlike other language models, LaMDA has been trained on dialogues. During its training, it spotted different nuances that distinguish conversations from other forms of language,” Google explains on its blog.

For now, the AI is still in its infancy and has only been trained on text. The next step will be for it to process text, audio, video and images simultaneously. This would make it possible to ask questions through different sources of information, just as in an ordinary messaging conversation.

In addition, Google said it is aware of the biases of such an AI and is trying to limit the risks. An AI simply interprets the data it is given, so it can absorb prejudices, hate speech and misinformation, as has already happened several times in the past: a Microsoft chatbot on Twitter became deeply racist just 24 hours after joining the social network.

The Mountain View firm said it is considering adding LaMDA to Google Assistant in the future, and the bot will likely also be offered to other services and businesses. In any case, LaMDA remains an internal project at Google, and the firm has not yet said whether the AI will be adapted for the general public.
