Facebook researchers test new approach to teach bots to chit-chat like humans

29 Jan 2018

Researchers at Facebook Artificial Intelligence Research (FAIR) have tested a new approach to teaching bots how to chit-chat like humans.

They taught the AI to look for patterns using a special data set titled "Persona-Chat", which contains around 164,000 utterances of dialogue.

"Persona-Chat" was sourced from workers found on Amazon's "Mechanical Turk" marketplace, The Verge reported today.

Amazon Mechanical Turk (MTurk), a crowdsourcing internet marketplace, allows individuals and businesses to coordinate the use of human intelligence to perform tasks that computers cannot at present.

In the Facebook test, the data was used to train the neural networks behind existing chatbots, and the results were then assessed by another group of Mechanical Turk workers.

In each case, these workers were asked to conduct a conversation with the persona-driven bot and to compare it with both other chatbots and humans.

The persona bot did not score as highly on criteria like "fluency" and "consistency" as the humans, but it outperformed the chatbot trained on movie dialogue.

According to commentators, the new tests are significant after Facebook last year had to shut down one of its AI systems when its chatbots started speaking in a language of their own, in defiance of the code provided (See: Facebook shuts down AI system after bots start chatting in incomprehensible language).

It may, however, be pointed out that chatbots cannot really chat. Researchers from Facebook's FAIR lab explain, in a pre-print paper published this week, that such bots fail at the task on a number of levels.

First, they do not display a "consistent personality": they fail to stick to the same set of facts about themselves throughout a conversation.

Second, they also do not remember what they or their conversational partners have said in the past. Third, when they are faced with a question they do not understand, they tend to fall back on diversionary or pre-programmed responses, like "I don't know."

According to commentators, the goal now is not just interrogation but conversation, and to try to recreate this attribute, researchers have turned to deep learning: instead of mapping out preprogrammed questions and answers, chatbots are taught by looking for patterns in large datasets.
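As a rough illustration of that data-driven idea (and only an illustration: FAIR's actual systems are neural networks trained on the full Persona-Chat corpus), the toy Python sketch below picks a reply by matching a new message against message-reply pairs in a tiny, invented dataset rather than against hand-written rules. Every name and dialogue line in it is hypothetical.

```python
import re
from collections import Counter

# Hypothetical miniature stand-in for a large dialogue corpus such as
# Persona-Chat (the real data set has roughly 164,000 utterances).
DIALOGUE_PAIRS = [
    ("hi how are you today", "i am great i just got back from a run"),
    ("what do you do for fun", "i love hiking and reading mystery novels"),
    ("do you have any pets", "yes i have two dogs that i walk every morning"),
]

def bag_of_words(text):
    """Turn a sentence into a word-count vector."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def overlap(a, b):
    """Crude similarity: how many words two bag-of-words vectors share."""
    return sum((a & b).values())

def reply(message):
    """Return the stored reply whose prompt best matches the message.

    The bot matches patterns found in data instead of following
    pre-programmed question-and-answer rules; FAIR's models learn such
    patterns with neural networks rather than this simple word overlap.
    """
    words = bag_of_words(message)
    best_pair = max(
        DIALOGUE_PAIRS, key=lambda pair: overlap(words, bag_of_words(pair[0]))
    )
    return best_pair[1]

if __name__ == "__main__":
    print(reply("hi, how are you?"))  # prints the reply stored for the greeting
```

The point of the sketch is only the contrast the article draws: the responses come from patterns in data, not from a fixed script.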