Google announced today that its updated “multisearch” feature is rolling out globally on mobile devices, wherever Google Lens is already available, alongside other A.I.-focused announcements. The feature, which lets users search with text and images simultaneously, was first introduced in April of last year as part of an effort to make Google Search work more naturally with the capabilities of smartphones.
Over the next few months, we’ll also see the rollout of multisearch for the web and a new Lens feature for Android users, as well as a variant called “multisearch near me,” which narrows search results to local businesses.
According to a Google blog post, “multisearch is powered by A.I. technology called Multitask Unified Model, or MUM,” which is able to understand information across multiple formats (such as text, photos, and videos) and draw insights and connections between topics, concepts, and ideas. Using MUM, Google Lens users can include text in their image searches.
Lens, Google says, has completely transformed how people think about search. “We’ve brought Lens directly to the search bar and we continue to bring new capabilities like shopping and step-by-step homework help,” Prabhakar Raghavan, Google’s senior vice president in charge of Search, Assistant, Geo, Ads, Commerce, and Payments products, said at a press event in Paris.
A user could pull up a shirt they like in Google Search, then use Lens to find the same pattern on other items of clothing, such as skirts or socks. Or they could photograph a broken part on, say, a bicycle, then search Google for instructions on how to repair it. Google stands to benefit from this method of input, since it can process and understand search queries it couldn’t before, or that would have been harder to express in text alone.
This method is most useful when shopping for clothing online, since it makes it easy to find an item you like in a different colour or style. It can also surface complementary pieces of furniture, such as a coffee table, from a photo of a whole set taken, say, from above. Google says multisearch also lets users filter results by brand, colour, and other visual attributes.
The feature launched for users in the United States in October of last year, then expanded to India in December. As of today, Google says, multisearch is available to all mobile users worldwide, in every language and country where Lens is available. The company also announced that its “multisearch near me” variant will expand in the near future.
In May of this year, Google said it may soon be able to route “multisearch near me” queries directly to local businesses, giving users results for products that match the stock of nearby stores. For the broken bicycle part, for instance, a search combining a picture with the words “near me” would help you locate a nearby bike shop or hardware store that carries the right replacement part.
Google has stated that this update will roll out over the next few months to all languages and countries where Lens is currently available. In the coming months, it will also add support for multisearch across all devices on the web, not just mobile ones.
As part of its new lineup of search products, the search giant also previewed an upcoming Google Lens feature that will let Android users search what they see in photos and videos across apps and websites, without leaving the app they’re in. This “search your screen” feature will be made available in all regions where Lens is offered. Google also shared a new milestone for Lens: it is now used more than 10 billion times every month.