Facebook to use AI to flag offensive material in live video streams

02 Dec 2016

Facebook Inc is working on automatically flagging offensive material in live video streams, with the help of artificial intelligence to monitor content, according to Joaquin Candela, the company's director of applied machine learning.

The social media company found itself embroiled in several content moderation controversies this year, from drawing international criticism for removing an iconic Vietnam War photo over nudity, to accusations that it helped spread fake news on its site.

Facebook had earlier relied mostly on users to report offensive posts, which were then checked by Facebook employees against the company's "community standards". Decisions on especially thorny content issues that might require policy changes were made by top executives at the company.

Candela told reporters that Facebook was increasingly using artificial intelligence to find offensive material. It is "an algorithm that detects nudity, violence, or any of the things that are not according to our policies," he said.

Reuters had reported in June that the social network was flagging extremist video content.

According to Candela, using artificial intelligence to flag live video was still at the research stage and faced two challenges. "One, your computer vision algorithm has to be fast, and I think we can push there, and the other one is you need to prioritize things in the right way so that a human looks at it, an expert who understands our policies, and takes it down."
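The second challenge Candela describes, surfacing the riskiest streams to human reviewers first, amounts to a priority queue ordered by the model's confidence that a stream violates policy. The sketch below is purely illustrative (the stream names and scores are invented, and Facebook's actual system is not public); it shows the general idea using a max-heap:

```python
import heapq

# Hypothetical flagged streams: (video id, model confidence of a policy
# violation). The IDs and scores are invented for illustration only.
flagged = [
    ("stream_a", 0.42),
    ("stream_b", 0.91),
    ("stream_c", 0.67),
]

# Python's heapq is a min-heap, so we negate the score to pop the
# highest-confidence stream first for human review.
queue = []
for video_id, score in flagged:
    heapq.heappush(queue, (-score, video_id))

review_order = [heapq.heappop(queue)[1] for _ in range(len(queue))]
print(review_order)  # ['stream_b', 'stream_c', 'stream_a']
```

In practice such a queue would also weigh signals beyond a single classifier score, such as audience size or report volume, but the ordering principle is the same: human experts see the likeliest violations first.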

Meanwhile, as some people voiced concerns about the new wave of artificial intelligence, conjuring scenarios of Terminator-like sentient machines, Facebook was trying to dispel some of the pop-culture myths. The social network planned to address the concerns through six instructional videos that attempt to explain the complex subject.

"I think the more open we can be about it and the more we can demystify and explain how it actually works, the more quickly we can address concerns," said Candela.