Google is now letting US beta users search for very specific things they can’t quite put into words, with its Multisearch feature. You can take a picture, or a screenshot of one, and then refine the search further by typing in a word.
“At Google, we’re always dreaming up new ways to help you uncover the information you’re looking for—no matter how tricky it might be to express what you need. That’s why today, we’re introducing an entirely new way to search: using text and images at the same time. With multisearch in Lens, you can go beyond the search box and ask questions about what you see,” said Google.
With multisearch, you first screenshot or snap a photo of something, then add extra words to your Google search. For example, if you screenshot a specific dress but would like to find it in a different colour—green, for example—you can do so by adding the word “green” to your search.
You can also take a photo of your dining set and add the words “coffee table” to find a matching table. And if you want to find out how to care for a specific plant but don’t know its name, you can take a picture of it and type “care instructions”.
“All this is made possible by our latest advancements in artificial intelligence, which is making it easier to understand the world around you in more natural and intuitive ways,” continued Google.
The feature, however, might not work with everything. The Verge reports that to match the pattern of a leafy notebook, you have to get close to it; otherwise, Google Lens will think you’re looking for notebooks, not the pattern. To improve its accuracy, Google says that multisearch might be further enhanced by MUM—Google’s latest AI model in Search.
According to Google, multisearch is best used for shopping searches. The feature is not yet available outside the US; it is currently rolling out to iOS and Android there, but will hopefully expand to more countries.