Facebook has been fighting misinformation on the platform using third-party fact-checkers, and is now expanding fact-checking for photos and videos.
Similar to Facebook's work for articles, the company has built a machine learning model that uses various engagement signals, including feedback from people on Facebook, to identify potentially false content. Facebook then sends those photos and videos to fact-checkers for review, or fact-checkers can surface content on their own. Visual verification techniques include reverse image searching and analyzing image metadata, such as when and where the photo or video was taken. Fact-checkers can assess the truth or falsity of a photo or video by combining these skills with other journalistic practices, such as using research from experts, academics, or government agencies.
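The metadata check described above can be sketched roughly as follows. This is a minimal illustration, not Facebook's actual system: it assumes EXIF-style fields have already been parsed into a dictionary (real parsing would use an image library such as Pillow), and the field names and claim structure are hypothetical.

```python
from datetime import date

def metadata_consistent(claimed: dict, exif: dict) -> bool:
    """Check whether a photo's claimed context matches its embedded metadata.

    `claimed` and `exif` are illustrative dictionaries; the field names
    here are assumptions, not a real EXIF schema.
    """
    # A photo cannot depict an event that happened after it was captured.
    if exif.get("capture_date") and claimed.get("event_date"):
        if exif["capture_date"] < claimed["event_date"]:
            return False
    # A mismatched location tag is a strong "out of context" signal.
    if exif.get("gps_country") and claimed.get("location_country"):
        if exif["gps_country"] != claimed["location_country"]:
            return False
    return True

# Example: a photo claimed to show a 2018 event but captured in 2015.
claim = {"event_date": date(2018, 9, 1), "location_country": "US"}
exif = {"capture_date": date(2015, 6, 12), "gps_country": "US"}
print(metadata_consistent(claim, exif))  # False: the photo predates the event
```

In practice such signals only flag content for human review; fact-checkers still make the final call.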
Facebook is also leveraging other technologies to better recognize false or misleading content. For example, the company uses optical character recognition (OCR) to extract text from photos and compare that text to headlines from fact-checkers' articles. Facebook is also working on new ways to detect if a photo or video has been manipulated.
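The OCR-and-compare step might look something like the sketch below. This is a simplified illustration using Python's standard library, assuming the text has already been extracted from the image (a real pipeline would run an OCR engine such as Tesseract first); the sample strings and function name are hypothetical.

```python
from difflib import SequenceMatcher

def best_headline_match(ocr_text: str, headlines: list[str]) -> tuple[str, float]:
    """Find the fact-checked headline most similar to text extracted from a photo."""
    def similarity(a: str, b: str) -> float:
        # Case-insensitive character-level similarity in [0, 1].
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    best = max(headlines, key=lambda h: similarity(ocr_text, h))
    return best, similarity(ocr_text, best)

# Text a hypothetical OCR pass pulled from a shared meme image.
extracted = "Scientists confirm chocolate cures the common cold"
debunked = [
    "No, chocolate does not cure the common cold",
    "Viral photo of shark on highway is fake",
]
headline, score = best_headline_match(extracted, debunked)
print(headline, round(score, 2))
```

A production system would use more robust text matching (tokenization, embeddings) rather than raw string similarity, but the idea is the same: link text found inside an image to claims fact-checkers have already reviewed.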
Based on research and testing with a handful of partners since March, Facebook says that misinformation in photos and videos usually falls into three categories: (1) Manipulated or Fabricated, (2) Out of Context, and (3) Text or Audio Claim.