Google is launching “multisearch” globally today. The Google Lens feature lets users search using a combination of images and text. Previously available only in the US, it is now rolling out everywhere on mobile where Google Lens is available.
Google also plans to bring multisearch to the mobile web globally in the next few months. The announcement was one of many that Google made at its “Google presents: Live from Paris” event on Wednesday. Multisearch near me, which lets you find relevant items from local businesses and was similarly US-only until now, will likewise be available globally wherever Google Lens is available. Additionally, Google Lens’s AR translation feature, which blends translated text into the image it was taken from, is also expanding globally.
Google Maps’ immersive view feature, which gives users a richer, more detailed view of a location, is starting to roll out in five cities: London, Los Angeles, New York, San Francisco, and Tokyo. It will soon come to Florence, Venice, Amsterdam, and Dublin, among others. Google said it is also bringing information such as ETAs and turn-by-turn directions to the lock screen for Google Maps users.
Last but not least, Google is adding more context-based information to Translate so users can get more appropriate translations for words and phrases with more than one meaning. The feature will initially be available in English, French, German, Japanese, and Spanish.
Google also teased that it is working to integrate generative artificial intelligence (AI) features into search results as it looks to take on ChatGPT. It recently announced Bard, its potential ChatGPT rival, which will become more widely available in the coming weeks.