Following in Microsoft’s footsteps, Mountain View, California-based Google announced AI-focused updates to Search, Maps and Translate today at its Live from Paris event, as reported by The Verge.
Just two days ago, on Monday, February 6th, Google announced that its ChatGPT competitor ‘Bard’ will be available to the public in the “coming weeks,” with Google CEO Sundar Pichai describing it as an “experimental conversational AI service” powered by LaMDA. The chatbot made an appearance at the event, where Prabhakar Raghavan, senior vice president at Google, said that users would be able to interact with Bard to explore complex topics, collaborate in real time, and get new and creative ideas.
Today, live from Paris 🇫🇷, we’re sharing a few new ways we’re applying our advancements in AI to make exploring information even more natural and intuitive.
Join us at 2:30pm CET ⬇️ #googlelivefromparis https://t.co/452k7Rc7Hn
— Google Europe (@googleeurope) February 8, 2023
Google then explained that some questions have ‘no one right answer,’ which it abbreviates as ‘NORA.’ This applies to queries like “what is the best constellation to look at when stargazing?” The answer to such questions is subjective, so there is no single right answer. To help answer such queries, Google is introducing generative AI directly into Search results.
Soon, when you ask Google Search a NORA question, the new generative AI features will organize complex information, multiple viewpoints and opinions, and combine them in your search results.
Here are some of the other new announcements made across Google’s platforms, with some features releasing in the near future, and others in the coming weeks and months:
The combination of Street View and Live View, called Immersive View, is now beginning to roll out in five cities: London, Los Angeles, New York City, San Francisco, and Tokyo. The feature will next expand to Florence, Venice, Amsterdam, Dublin, and more.
The multisearch tool, which lets users initiate a search using an image and a few words of text, is also receiving an update. Users can take photos of objects like food, supplies or clothes, and add the phrase “near me” in the Google app to get search results showcasing local businesses, restaurants or retailers that carry that specific item. Previously limited to the United States, multisearch is now rolling out globally wherever Google Lens is available, and will come to mobile web worldwide in the next few months.
In Google Maps, the company is adding new features to assist EV drivers, including suggested charging stops for shorter trips, filters for “very fast” charging stations, and indications in search results of which places, like hotels and grocery stores, have chargers.
Further, ‘Translate with Lens’ for images is now rolling out globally. Previously, when you translated the text in an image, the translation was overlaid on top of the image as extra text rather than blended in, blocking or distorting the image behind it. Now, using a machine learning technique called generative adversarial networks, or GANs (the same technology that powers Magic Eraser on Pixel phones), Google Lens can translate the text and blend it back into the background image without distortion, so the image retains its natural look.
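For readers curious what the “adversarial” in GAN means: two models are trained against each other, a generator that produces candidate outputs and a discriminator that tries to tell them apart from real data. The toy one-dimensional sketch below illustrates only that core idea and is in no way Google’s implementation; every name and number in it is invented for the example. The generator learns a single shift `theta` that moves random noise toward the “real” distribution:

```python
import math
import random

random.seed(0)

REAL_MEAN = 4.0   # "real" samples come from a Gaussian centred at 4
theta = 0.0       # generator: G(z) = z + theta (starts far from the data)
w = c = 0.0       # discriminator: D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

for _ in range(2000):
    real = [random.gauss(REAL_MEAN, 1.0) for _ in range(batch)]
    fake = [random.gauss(0.0, 1.0) + theta for _ in range(batch)]

    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    d_real = [sigmoid(w * x + c) for x in real]
    d_fake = [sigmoid(w * x + c) for x in fake]
    w += lr * sum((1 - dr) * x for dr, x in zip(d_real, real)) / batch
    w -= lr * sum(df * x for df, x in zip(d_fake, fake)) / batch
    c += lr * (sum(1 - dr for dr in d_real) - sum(d_fake)) / batch

    # Generator step: ascend log D(fake), i.e. try to fool the discriminator
    fake = [random.gauss(0.0, 1.0) + theta for _ in range(batch)]
    theta += lr * sum((1 - sigmoid(w * x + c)) * w for x in fake) / batch

# After training, theta should sit near REAL_MEAN: the generator's output
# has become hard to distinguish from the real data.
```

In Lens’s case the generator produces image pixels (plausibly reconstructing the background behind the erased text) rather than a single number, but the adversarial training objective is the same idea.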
ETAs and turn-by-turn navigation in Google Maps will now be visible on your lock screen; the feature is also compatible with iOS 16’s Live Activities. Google Translate will be introducing additional context and information for certain words or phrases, starting with English, French, German, Japanese, and Spanish in the coming weeks. The new Google Translate design for Android will come to iOS in the near future.
Follow the links to learn more about the new Search, Maps and Translate features.
Image credit: Google