Facebook To Use Artificial Intelligence to Flag Offensive Videos
Facebook is working on intelligent software that will automatically flag offensive material in live video streams.
The social media company has been embroiled in a number of content moderation controversies this year, from facing international outcry after removing an iconic Vietnam War photo because it contained nudity, to allowing fake news to spread on its site.
According to Joaquin Candela, Facebook's director of applied machine learning, the company is increasingly using artificial intelligence to find offensive material: "an algorithm that detects nudity, violence, or any of the things that are not according to our policies," he said.
The company had already been working on using automation to flag extremist video content. Now the automated system is also being tested on Facebook Live, the service that lets users broadcast live video.
Using artificial intelligence to flag live video is still at the research stage.
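Facebook has not described how such a system would work. As a rough illustration only, a screening system of this general kind might sample frames from a live stream and score each one against policy categories such as nudity or violence. The sketch below is a minimal example of that idea; the labels, threshold, and placeholder classifier are assumptions for illustration, not Facebook's actual implementation.

```python
# Minimal sketch of frame-level policy screening for a live stream.
# The classifier is a stand-in; a production system would use a trained
# vision model. All names and thresholds here are hypothetical.

from dataclasses import dataclass
from typing import Callable, Dict, Iterable, List

POLICY_LABELS = ["nudity", "violence"]  # categories named in the article
FLAG_THRESHOLD = 0.8                    # hypothetical confidence cutoff


@dataclass
class Frame:
    timestamp: float   # seconds into the stream
    pixels: bytes      # raw image data (placeholder)


def score_frame(frame: Frame) -> Dict[str, float]:
    """Stand-in for a trained classifier: returns a confidence per policy label."""
    # A real system would run a vision model here; we return zeros as a placeholder.
    return {label: 0.0 for label in POLICY_LABELS}


def screen_stream(frames: Iterable[Frame],
                  classifier: Callable[[Frame], Dict[str, float]] = score_frame,
                  threshold: float = FLAG_THRESHOLD) -> List[dict]:
    """Flag frames whose score for any policy label meets or exceeds the threshold."""
    flags = []
    for frame in frames:
        scores = classifier(frame)
        violations = {k: v for k, v in scores.items() if v >= threshold}
        if violations:
            flags.append({"timestamp": frame.timestamp, "violations": violations})
    return flags


if __name__ == "__main__":
    # Sample one frame per second from a 10-second clip (placeholder pixel data).
    clip = [Frame(timestamp=float(t), pixels=b"") for t in range(10)]
    print(screen_stream(clip))  # [] with the placeholder classifier
```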
Facebook said it also uses automation to process the reports it receives each week, recognizing duplicate reports and routing flagged content to reviewers with the appropriate subject matter expertise.
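The company did not detail how that automation works. The sketch below shows one plausible shape of such a pipeline, assuming it collapses duplicate reports about the same piece of content and routes each unique item to a review queue by topic; the queue names, report fields, and routing rule are hypothetical.

```python
# Illustrative sketch of report deduplication and routing (not Facebook's code).
# Reports about the same piece of content are collapsed into one work item,
# which is then routed to a review queue by subject area.

from collections import defaultdict
from typing import Dict, List, NamedTuple


class Report(NamedTuple):
    content_id: str    # identifier of the reported post or video
    reason: str        # e.g. "nudity", "violence", "fake_news"


# Hypothetical mapping from report reason to a specialist review queue.
QUEUE_BY_REASON = {
    "nudity": "graphic-content-review",
    "violence": "graphic-content-review",
    "fake_news": "misinformation-review",
}


def route_reports(reports: List[Report]) -> Dict[str, List[dict]]:
    """Collapse duplicate reports per content item and assign a review queue."""
    grouped: Dict[str, List[Report]] = defaultdict(list)
    for report in reports:
        grouped[report.content_id].append(report)

    queues: Dict[str, List[dict]] = defaultdict(list)
    for content_id, group in grouped.items():
        # Route by the most frequently cited reason for this content item.
        top_reason = max({r.reason for r in group},
                         key=lambda reason: sum(r.reason == reason for r in group))
        queue = QUEUE_BY_REASON.get(top_reason, "general-review")
        queues[queue].append({"content_id": content_id,
                              "report_count": len(group),
                              "reason": top_reason})
    return queues


if __name__ == "__main__":
    demo = [Report("video-123", "violence"),
            Report("video-123", "violence"),
            Report("post-456", "fake_news")]
    print(dict(route_reports(demo)))
```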