Google Announces New Visual Features in Search and Lens
Today at I/O, Google announced features in Google Search and Google Lens that use the camera, computer vision and augmented reality (AR) to overlay information and content onto your physical surroundings.
With new AR features in Search rolling out later this month, you can view and interact with 3D objects right from Search and place them directly into your own space, giving you a sense of scale and detail. For example, it’s one thing to read that a great white shark can be 18 feet long. It’s another to see it up close in relation to the things around you. So when you search for select animals, you’ll get an option right in the Knowledge Panel to view them in 3D and AR.
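For developers, the closest public analogue to this kind of AR placement on Android is ARCore's Scene Viewer, which any app can launch with an intent. Below is a minimal Kotlin sketch, assuming a hosted glTF/GLB model at a hypothetical URL; Search's own AR viewer is internal to Google.

```kotlin
import android.content.Context
import android.content.Intent
import android.net.Uri

// Ask Scene Viewer (shipped with Google Play Services for AR) to place a
// 3D model in the user's space. The model URL is a hypothetical placeholder.
fun viewInAr(context: Context) {
    val uri = Uri.parse("https://arvr.google.com/scene-viewer/1.0").buildUpon()
        .appendQueryParameter("file", "https://example.com/great_white_shark.glb")
        .appendQueryParameter("mode", "ar_preferred") // fall back to 3D if AR unavailable
        .build()
    val intent = Intent(Intent.ACTION_VIEW).apply {
        data = uri
        setPackage("com.google.ar.core")
    }
    context.startActivity(intent)
}
```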
Google is also working with partners like NASA, New Balance, Samsung, Target, Visible Body, Volvo, Wayfair and more to surface their own content in Search. So whether you’re studying human anatomy in school or shopping for a pair of sneakers, you’ll be able to interact with 3D models and put them into the real world, right from Search.
New features in Google Lens
Lens taps into machine learning (ML), computer vision and tens of billions of facts in the Knowledge Graph to answer questions about what you see. Now, Google is evolving Lens to provide more visual answers to visual questions.
Say you’re at a restaurant, figuring out what to order. Lens can automatically highlight which dishes are popular, right on the physical menu. When you tap on a dish, you can see what it actually looks like and what people are saying about it, thanks to photos and reviews from Google Maps.
To pull this off, Lens first has to identify all the dishes on the menu, looking for things like the font, style, size and color to differentiate dishes from descriptions. Next, it matches the dish names with the relevant photos and reviews for that restaurant in Google Maps.
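To make that pipeline concrete, here is a minimal sketch of the first stage using ML Kit's public text-recognition API, with line height standing in crudely for the font, style, size and color cues Lens actually uses. The second stage, matching dish names to Google Maps photos and reviews, runs on Google's servers and is reduced here to a callback.

```kotlin
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Sketch of the first stage: recognize menu text, then separate dish names
// from descriptions. Line height is a rough proxy for font size here.
fun extractDishNames(menuPhoto: InputImage, onResult: (List<String>) -> Unit) {
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
    recognizer.process(menuPhoto)
        .addOnSuccessListener { result ->
            val lines = result.textBlocks.flatMap { it.lines }
            val heights = lines.mapNotNull { it.boundingBox?.height() }
            if (heights.isEmpty()) {
                onResult(emptyList())
                return@addOnSuccessListener
            }
            val avgHeight = heights.average()
            // Heuristic: dish names are usually set larger than descriptions.
            val dishNames = lines
                .filter { (it.boundingBox?.height() ?: 0) > avgHeight * 1.2 }
                .map { it.text }
            onResult(dishNames) // next: match names to Maps photos and reviews
        }
        .addOnFailureListener { onResult(emptyList()) }
}
```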
Lens can be helpful when you’re in an unfamiliar place and you don’t know the language. Now, you can point your camera at text and Lens will automatically detect the language and overlay the translation right on top of the original words, in more than 100 languages.
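Lens uses its own on-device models for this, but the same detect-then-translate flow can be approximated with ML Kit's public language-identification and on-device translation APIs. A sketch, with the AR overlay itself omitted:

```kotlin
import com.google.mlkit.common.model.DownloadConditions
import com.google.mlkit.nl.languageid.LanguageIdentification
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

// Detect the language of recognized text, then translate it to English
// with an on-device model, downloading the model first if necessary.
fun detectAndTranslate(text: String, onTranslated: (String) -> Unit) {
    LanguageIdentification.getClient().identifyLanguage(text)
        .addOnSuccessListener { tag ->
            // "und" means the language could not be identified.
            val source = TranslateLanguage.fromLanguageTag(tag)
                ?: return@addOnSuccessListener
            val translator = Translation.getClient(
                TranslatorOptions.Builder()
                    .setSourceLanguage(source)
                    .setTargetLanguage(TranslateLanguage.ENGLISH)
                    .build()
            )
            translator.downloadModelIfNeeded(DownloadConditions.Builder().build())
                .addOnSuccessListener {
                    translator.translate(text).addOnSuccessListener(onTranslated)
                }
        }
}
```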
Google is also working on other ways to connect digital information to things in the physical world. For example, beginning next month at the de Young Museum in San Francisco, you can use Lens to see hidden stories about the paintings, directly from the museum’s curators.
For people who struggle with reading, Lens can now read text aloud when you point your camera at it. It highlights the words as they are spoken, so you can follow along and understand the full context of what you see. You can also tap on a specific word to search for it and learn its definition. This feature is launching first in Google Go, Google's Search app for first-time smartphone users. Lens in Google Go is just over 100KB and works on phones that cost less than $50.
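Google Go's reader is its own lightweight implementation, but Android's public TextToSpeech API exposes the same follow-along mechanism: on API 26 and later, UtteranceProgressListener.onRangeStart reports the character range currently being spoken, which a UI layer can highlight. A minimal sketch:

```kotlin
import android.content.Context
import android.speech.tts.TextToSpeech
import android.speech.tts.UtteranceProgressListener

// Read recognized text aloud and report the word range being spoken,
// so a UI layer can highlight it as the narration progresses.
class ReadAloud(context: Context) : TextToSpeech.OnInitListener {
    private val tts = TextToSpeech(context, this)

    override fun onInit(status: Int) {
        if (status != TextToSpeech.SUCCESS) return
        tts.setOnUtteranceProgressListener(object : UtteranceProgressListener() {
            override fun onStart(utteranceId: String?) {}
            override fun onDone(utteranceId: String?) {}
            override fun onError(utteranceId: String?) {}

            // Called for each word as it is about to be spoken (API 26+).
            override fun onRangeStart(utteranceId: String?, start: Int, end: Int, frame: Int) {
                // Highlight characters [start, end) of the source text here.
            }
        })
    }

    fun speak(text: String) {
        tts.speak(text, TextToSpeech.QUEUE_FLUSH, null, "lens-read-aloud")
    }
}
```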