Researchers Advance Image Recognition Technology
Google Research scientists have created artificial intelligence software capable of recognizing and describing the content of photographs and videos with greater accuracy than ever before. Google's machine-learning system can automatically produce captions that accurately describe images the first time it sees them. This kind of system could eventually help visually impaired people understand pictures, provide alternate text for images in parts of the world where mobile connections are slow, and make it easier for everyone to search on Google for images.
The idea comes from recent advances in machine translation between languages, where a Recurrent Neural Network (RNN) transforms, say, a French sentence into a vector representation, and a second RNN uses that vector representation to generate a target sentence in German.
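To make the translation analogy concrete, here is a minimal encoder-decoder sketch in PyTorch. The vocabulary sizes, hidden size, and random token ids are purely illustrative, and the paper's actual architecture may differ in detail; the point is only that one RNN compresses the source sentence into a vector and a second RNN generates the target sentence from it.

```python
import torch
import torch.nn as nn

class EncoderRNN(nn.Module):
    """Encodes a source sentence (e.g. French) into a fixed-size vector."""
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)

    def forward(self, src_tokens):
        _, hidden = self.rnn(self.embed(src_tokens))
        return hidden  # the sentence's vector representation

class DecoderRNN(nn.Module):
    """Generates the target sentence (e.g. German) from that vector."""
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, tgt_tokens, hidden):
        output, _ = self.rnn(self.embed(tgt_tokens), hidden)
        return self.out(output)  # per-step scores over the target vocabulary

# Toy usage with made-up vocabulary sizes and random token ids.
encoder = EncoderRNN(vocab_size=1000, hidden_size=256)
decoder = DecoderRNN(vocab_size=1200, hidden_size=256)
src = torch.randint(0, 1000, (1, 7))   # a "French" sentence of 7 token ids
tgt = torch.randint(0, 1200, (1, 9))   # the "German" reference sentence
logits = decoder(tgt, encoder(src))    # scores used for cross-entropy training
```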
The researchers replaced that first RNN and its input words with a deep Convolutional Neural Network (CNN) trained to classify objects in images. Normally, the CNN's last layer is used in a final Softmax over known classes of objects, assigning a probability that each object is present in the image. By removing that final layer, the researchers instead fed the CNN's rich encoding of the image into an RNN designed to produce phrases. The whole system was trained directly on images and their captions, maximizing the likelihood that the descriptions it produces match the training descriptions for each image. The result combines a vision CNN with a language-generating RNN, so the model can take in an image and generate a fitting natural-language caption.
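The sketch below swaps the text encoder of the previous example for a CNN, roughly as the paragraph describes. It is an assumption-laden illustration, not the paper's implementation: resnet18 stands in for the paper's classifier CNN, the final classification layer is replaced with an identity so the network emits an image encoding, and that encoding seeds the caption-generating RNN. All sizes and data are made up.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CaptionModel(nn.Module):
    """Rough CNN-encoder / RNN-decoder captioner; sizes are illustrative."""
    def __init__(self, vocab_size, hidden_size=512):
        super().__init__()
        cnn = models.resnet18(weights=None)  # stand-in for the paper's classifier CNN
        cnn.fc = nn.Identity()               # drop the final Softmax/classification layer
        self.cnn = cnn                       # now outputs a 512-d image encoding
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, images, captions):
        # Use the image encoding as the RNN's initial hidden state,
        # then score each next word of the caption.
        h0 = self.cnn(images).unsqueeze(0)
        output, _ = self.rnn(self.embed(captions), h0)
        return self.out(output)

# Training maximizes the likelihood of the reference caption given the image,
# i.e. minimizes cross-entropy between predicted and actual next words.
model = CaptionModel(vocab_size=10000)
images = torch.randn(2, 3, 224, 224)         # a random mini-batch of "images"
captions = torch.randint(0, 10000, (2, 12))  # matching caption token ids
logits = model(images, captions[:, :-1])     # predict each next word
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 10000),
                             captions[:, 1:].reshape(-1))
```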
Google says that its experiments with this system on several openly published datasets, including Flickr8k, Flickr30k and SBU, showed promising qualitative results. It also performed well in quantitative evaluations using the Bilingual Evaluation Understudy (BLEU), a metric used in machine translation to evaluate the quality of generated sentences.
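For readers unfamiliar with BLEU, the short example below shows how a generated caption is scored against a reference by n-gram overlap. It uses NLTK's implementation and invented captions rather than anything from the paper's data.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# One reference caption per image (in practice there are usually several).
reference = [["a", "group", "of", "people", "riding", "bikes", "on", "a", "beach"]]
candidate = ["a", "group", "of", "people", "riding", "bicycles", "on", "the", "beach"]

# BLEU measures n-gram overlap between the generated and reference sentences;
# smoothing avoids zero scores when some higher-order n-grams never match.
score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```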
To get more details about the framework used to generate descriptions from images, as well as the model evaluation, read the full paper here.