Facebook Outlines Measures to Remove Terrorist Content
Facebook has announced details of the steps it is taking to remove terrorist-related content, including behind-the-scenes work that uses artificial intelligence to keep such material off the platform.
"Our stance is simple: There's no place on Facebook for terrorism. We remove terrorists and posts that support terrorism whenever we become aware of them. When we receive reports of potential terrorism posts, we review those reports urgently and with scrutiny. And in the rare cases when we uncover evidence of imminent harm, we promptly inform authorities," said Monika Bickert, Director of Global Policy Management, Facebook.
Facebook has recently started using AI to keep potential terrorist propaganda and accounts off the social network.
When someone tries to upload a terrorist photo or video, Facebook's systems check whether it matches a known terrorism photo or video. This means that if Facebook previously removed a propaganda video from ISIS, they can work to prevent other accounts from uploading the same video to Facebook.
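Facebook has not published the internals of this matching system, but the core idea can be sketched as a lookup against fingerprints of previously removed media. The sketch below uses an exact cryptographic hash (SHA-256) for simplicity; a production system would more likely use a perceptual hash that survives re-encoding and cropping. The hash set and function names here are illustrative, not Facebook's.

```python
import hashlib

# Hypothetical store of fingerprints from media already removed.
KNOWN_TERROR_HASHES = {
    "placeholder_digest_of_previously_removed_video",
}

def fingerprint(media_bytes: bytes) -> str:
    # SHA-256 identifies an exact file; a real system would likely use
    # perceptual hashing so edited copies of the same video still match.
    return hashlib.sha256(media_bytes).hexdigest()

def should_block_upload(media_bytes: bytes) -> bool:
    # Reject the upload if it matches previously removed content.
    return fingerprint(media_bytes) in KNOWN_TERROR_HASHES
```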
Facebook has also recently started to experiment with using AI to understand text that might be advocating terrorism. The social network is analyzing text they have already removed for praising or supporting terrorist organizations such as ISIS and Al Qaeda, so they can develop text-based signals that such content may be terrorist propaganda. That analysis feeds into an algorithm that is in the early stages of learning how to detect similar posts.
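As a rough illustration of that pipeline, one could train a text classifier on posts already removed for supporting terrorist organizations and then score new posts against it. The sketch below uses scikit-learn's TF-IDF features and logistic regression purely as stand-ins; Facebook has not said which model or features it uses, and the training examples are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpus: label 1 = posts previously removed for praising
# or supporting terrorist organizations, label 0 = benign posts.
texts = [
    "join the fighters and support the caliphate",  # removed (illustrative)
    "we praise the attack and its martyrs",         # removed (illustrative)
    "photos from our family trip to the lake",      # benign
    "trying a new pasta recipe tonight",            # benign
]
labels = [1, 1, 0, 0]

# TF-IDF text features feeding a linear classifier; the learned
# weights play the role of the "text-based signals" described above.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new post; a high probability would route it for review.
score = model.predict_proba(["support the fighters"])[0][1]
print(f"propaganda likelihood: {score:.2f}")
```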
Terrorists tend to radicalize and operate in clusters, and this offline trend is reflected online as well. So when Facebook identifies Pages, groups, posts or profiles as supporting terrorism, they also use algorithms to "fan out" and try to identify related material that may also support terrorism. Facebook uses signals like whether an account is friends with a high number of accounts that have been disabled for terrorism, or whether an account shares the same attributes as a disabled account.
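Facebook has not described the "fan out" step beyond those signals, but it can be pictured as a traversal of the friendship graph that starts from known violating accounts and flags neighbors whose connections look suspicious. The graph, threshold and scoring signal below are all illustrative assumptions.

```python
from collections import deque

# Toy friendship graph: account -> set of friend accounts.
graph = {
    "seed": {"a", "b"},
    "a": {"seed", "c"},
    "b": {"seed"},
    "c": {"a"},
}
disabled_for_terrorism = {"seed"}   # accounts already actioned
SUSPICION_THRESHOLD = 0.5           # illustrative cutoff

def disabled_friend_ratio(account: str) -> float:
    # Signal from the article: fraction of an account's friends
    # that were already disabled for terrorism.
    friends = graph.get(account, set())
    if not friends:
        return 0.0
    return len(friends & disabled_for_terrorism) / len(friends)

def fan_out(seeds):
    # Breadth-first traversal from known violating accounts,
    # surfacing related accounts for human review.
    flagged, seen, queue = set(), set(seeds), deque(seeds)
    while queue:
        for neighbor in graph.get(queue.popleft(), set()):
            if neighbor in seen:
                continue
            seen.add(neighbor)
            if disabled_friend_ratio(neighbor) >= SUSPICION_THRESHOLD:
                flagged.add(neighbor)
                queue.append(neighbor)  # keep expanding from new hits
    return flagged

print(fan_out({"seed"}))   # accounts flagged for review
```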
Facebook says they have also gotten much faster at detecting new fake accounts created by repeat offenders. Through this work, the social network has been able to reduce the time that recidivist terrorist accounts remain on Facebook.
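The article does not say how recidivist accounts are recognized, but the signal quoted earlier - an account that "shares the same attributes as a disabled account" - suggests a similarity check against previously disabled profiles. The attribute names and threshold below are hypothetical.

```python
def attribute_overlap(new_account: dict, disabled_account: dict) -> float:
    # Jaccard similarity over (attribute, value) pairs.
    a, b = set(new_account.items()), set(disabled_account.items())
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical attributes; Facebook's real signals are not public.
new = {"device": "d-17", "phone": "555-0100", "bio": "new alias"}
old = {"device": "d-17", "phone": "555-0100", "bio": "old alias"}

if attribute_overlap(new, old) >= 0.5:   # illustrative threshold
    print("flag as possible recidivist account")
```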
Facebook is also taking action against terrorist accounts across all their platforms, including WhatsApp and Instagram.
Since AI can't catch everything, Facebook also needs human expertise.
Facebook's community helps by reporting accounts or content that may violate Facebook's policies - including the small fraction that may be related to terrorism. Facebook's Community Operations teams around the world review these reports and determine the context.
Facebook has also grown a team of counterterrorism specialists: more than 150 people focus exclusively or primarily on countering terrorism. The team includes academic experts on counterterrorism, former prosecutors, former law enforcement agents and analysts, and engineers.
In order to more quickly identify and slow the spread of terrorist content online, Facebook joined with Microsoft, Twitter and YouTube six months ago to announce a shared industry database of "hashes" - unique digital fingerprints for photos and videos - for content produced by or in support of terrorist organizations.
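The consortium has not published the database's design, so the following is a minimal sketch, assuming a simple shared key-value store of contributed fingerprints. The company names and hash values are placeholders.

```python
# Shared industry database: media hash -> company that contributed it.
shared_hashes: dict = {}

def contribute(company: str, media_hash: str) -> None:
    # A member company adds the fingerprint of content it removed.
    shared_hashes.setdefault(media_hash, company)

def matches_known_content(media_hash: str) -> bool:
    # Every member can screen uploads against all contributed hashes.
    return media_hash in shared_hashes

contribute("Facebook", "e3b0c44298fc1c149afbf4c8996fb924")  # placeholder
print(matches_known_content("e3b0c44298fc1c149afbf4c8996fb924"))  # True
```

The value of sharing is that a video removed by one member can be blocked by all of them without each company re-reviewing the same content.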