Google on Thursday began rolling out a new Search feature that will let users search for information using both text and images at the same time. The new multisearch feature is part of Google’s ongoing efforts to use AI to “create information experiences that are truly conversational, multimodal and personal,” as Google CEO Sundar Pichai said recently.
The multisearch feature is embedded in Google Lens, the image recognition tool accessible via the Google app. For now, the feature is available in beta to US users searching with text in English, and it’s geared primarily toward shopping searches.
For instance, a user could take a screenshot of an orange dress and add “green” to their query to try to find the same dress in that color. It’s also helpful for non-commercial searches; a user could take a picture of a rosemary plant and add the query “care instructions” to figure out how to take care of their new plant.
To use the feature, you open the Google app, tap the Lens camera icon, and then search using a screenshot or take a new picture. From there, you swipe up and tap the “+ Add to your search” button to add text to the query.
In its blog post Thursday, Google said it’s also exploring ways the feature could be improved with MUM (Multitask Unified Model), Google’s latest AI model. The tech giant recently shared how it’s using MUM and other AI models to more effectively deliver information on crisis assistance to people searching for help.
Back in February, during Google’s Q4 earnings call, Pichai expounded on the company’s investments in AI models that enable multimodal search.
“In 2022, we’ll stay focused on evolving our knowledge and information products, including Search, Maps and YouTube, to be even more helpful,” he said. “Investments in AI will be key, and we’ll continue to make improvements to conversational interfaces like the Assistant.”
He added, “From MUM to Pathways to BERT and more, these deep AI investments are helping us lead in search quality.”