New Google Tool Spots Child Abuse in Photos
Google on Monday released the Content Safety API, a developer toolkit that uses deep neural networks to process images so that fewer human reviewers need to be exposed to them.
The technique can help reviewers identify 700 percent more child abuse content, Google said.
"Quick identification of new images means that children who are being sexually abused today are much more likely to be identified and protected from further abuse," engineering lead Nikola Todorovic and product manager Abhi Chaudhuri wrote in a company blog post Monday. "We're making this available for free to NGOs and industry partners via our Content Safety API, a toolkit to increase the capacity to review content in a way that requires fewer people to be exposed to it."
Artificial intelligence is used for everything from speech recognition to spam filtering. The term generally refers to machine learning or neural network technology that is loosely modeled on the human brain. A neural network is first trained on real-world data; it can then recognize new examples on its own, learning to spot a spam email, for instance.
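As a minimal illustration of that train-then-predict pattern (not Google's system), the sketch below trains a small neural network on a handful of labeled messages and then scores an unseen one; the toy emails, labels, and model settings are all assumptions.

```python
# Minimal sketch: train a tiny neural network to flag spam-like text,
# illustrating training on labeled data followed by classification.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

# Toy labeled examples (1 = spam, 0 = not spam).
emails = [
    "win a free prize now", "cheap meds limited offer",
    "meeting moved to 3pm", "lunch tomorrow?",
]
labels = [1, 1, 0, 0]

# Convert text into numeric features the network can learn from.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)

# Train a small neural network on the labeled examples.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, labels)

# After training, the network can score unseen messages.
print(model.predict(vectorizer.transform(["claim your free prize"])))  # likely [1]
```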