Last week, Alphabet-owned AI lab DeepMind launched its new chatbot offering, dubbed Sparrow.
Designed as a conversational and informative tool, Sparrow was trained using DeepMind’s language model Chinchilla and is integrated with live Google search so it can rapidly retrieve information to answer users’ questions. Reinforcement learning was also used to hone Sparrow’s capabilities, with human feedback folded into the tool’s development.
The new chatbot is envisioned as an answer to an ongoing challenge: building conversational AIs that scan the internet for information without relaying potentially harmful content, while retaining a degree of conversational autonomy. As the Sparrow team wrote in a blog post, dialogue is a complex task to model because it features “flexible and interactive communication.”
“However,” they added, “dialogue agents powered by large language models can express inaccurate or invented information, use discriminatory language, or encourage unsafe behavior.”
By using human responses to shape the tool, DeepMind hopes to have reduced the amount of useless or harmful information the bot relays. To test this, the team showed human participants multiple candidate answers to the same question and had them pick the most relevant or helpful one; those judgments were then used to train the system on the kinds of answers it should give.
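The preference-ranking step described above can be sketched in code. This is a minimal illustration, not DeepMind's actual method: it learns a scalar "helpfulness" score from pairwise human preferences using a Bradley-Terry-style logistic loss, with hypothetical answer features invented for the example.

```python
import math

def reward(w, features):
    """Linear reward model: score = w . features."""
    return sum(wi * fi for wi, fi in zip(w, features))

def train_reward_model(pairs, dim, lr=0.1, epochs=200):
    """Learn weights from (preferred, rejected) feature pairs by
    minimizing -log sigmoid(r_preferred - r_rejected)."""
    w = [0.0] * dim
    for _ in range(epochs):
        for pref, rej in pairs:
            margin = reward(w, pref) - reward(w, rej)
            # gradient of -log sigmoid(margin) w.r.t. margin
            grad = -(1.0 - 1.0 / (1.0 + math.exp(-margin)))
            for i in range(dim):
                w[i] -= lr * grad * (pref[i] - rej[i])
    return w

# Hypothetical features per answer: [cites_evidence, verbosity]
# Raters consistently preferred answers that cited evidence.
pairs = [
    ([1.0, 0.2], [0.0, 0.2]),
    ([1.0, 0.1], [0.0, 0.5]),
    ([1.0, 0.4], [0.0, 0.3]),
]
w = train_reward_model(pairs, dim=2)
```

In a full RLHF pipeline, a learned scorer like this would then serve as the reward signal for fine-tuning the dialogue model itself.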
Pulling information from external sources also improves the chatbot’s accuracy, the team says: Sparrow correctly cited supporting evidence 78% of the time when answering factual questions.
Fears over the potential dangers of intelligent language tools have grown as these systems have become more capable, and demand for explainable AI is now driving the market. The development also comes in the wake of the controversy over a now-fired Google engineer’s claim that the company’s chatbot was sentient. Though that episode posed a different kind of risk, it too has pushed demand for safer conversational AI models.