Rolling out to Google’s iOS and Android apps, the feature is said to work best with shopping-related queries.
Google’s search engine will offer a new feature that mimics the way users ask questions about things they see in the real world. With it, users can identify an object in the physical world using Google Lens or select an existing image from their camera roll.
Then, users can access visual matches and further tailor the results with follow-up questions. Called Multisearch, the feature may even help Google maintain an edge against a wave of more privacy-centric search engines that have so far focused on text-based queries.
But because Google Lens is not available on web browsers, making Multisearch a fundamental part of people’s search habits could prove challenging. The company is also working to handle visual queries competently so that Multisearch does not come across as a mere gimmick.