Google AI Tool Lets You Identify Malicious Comments on Your Website's Articles
On Thursday, Google and its subsidiary Jigsaw launched Perspective, a new technology designed to help news organizations and online platforms identify abusive comments on their websites.
The technology reviews comments and scores them based on how similar they are to comments that people have rated as "toxic" or likely to make them leave a conversation.
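As a rough illustration of how this scoring is exposed to publishers, the sketch below builds the kind of JSON request Perspective's comment-analysis endpoint accepts and reads a toxicity probability out of a response. The endpoint URL and field names follow Google's public Perspective API documentation; the sample response and score here are made up for illustration, not real API output.

```python
import json

# Endpoint from the public Perspective API docs (requires an API key in practice).
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(comment_text):
    """Build the request body asking Perspective to score one comment for toxicity."""
    return {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response):
    """Extract the summary TOXICITY probability (0.0 to 1.0) from a response body."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Illustrative response shaped like the API's output; the value is invented.
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
}

payload = build_request("You are a total idiot!")
print(json.dumps(payload, indent=2))
print(toxicity_score(sample_response))  # 0.92
```

A score near 1.0 means the comment closely resembles ones readers judged toxic; a publisher's moderation tooling would decide what to do with high-scoring comments.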
It has been tested with The New York Times, and the companies hope to extend it to other news organizations, such as The Guardian and The Economist, as well as to other websites.
"News organizations want to encourage engagement and discussion around their content, but find that sorting through millions of comments to find those that are trolling or abusive takes a lot of money, labor, and time. As a result, many sites have shut down comments altogether," Jared Cohen, President of Jigsaw, which is part of Alphabet, wrote in a blog post.
"But they tell us that isn't the solution they want. We think technology can help."
Perspective will not decide what to do with the comments it identifies; publishers will have to flag them to their own moderators.
The Perspective technology is still in its early stages and "far from perfect", Cohen said, adding that he hoped it could eventually be rolled out for languages other than English as well.