Google Adds Voice Search, Visual Search and Results Prerendering

Two of the most important Google mobile services, voice search and visual search, are now coming to your computer.

Voice search, a feature built into Android, now works in Google Chrome as well, letting you speak your queries instead of typing them. Chrome added support for the Speech Input API back in April, and it’s still the only browser that has implemented it. For now, Chrome’s speech input feature is only available in English.

“We first offered speech recognition on mobile search, but you should have that power no matter where you are. You should never have to stop and ask yourself, ‘Can I speak for this?’ — it should be ubiquitous and intuitive. So we’ve added speech recognition into search on desktop for Chrome users. If you’re using Chrome, you’ll start to see a little microphone in every Google search box. Simply click the microphone, and you can speak your search,” explains Google. The feature is being rolled out gradually, so you may not see it yet.
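If you’re curious how that microphone ends up in a text field, Chrome exposes the Speech Input API to Web developers through a vendor-prefixed attribute. Here’s a minimal sketch, assuming Chrome’s non-standard x-webkit-speech attribute and the prefixed webkitspeechchange event that reports a recognition result; both are Chrome-specific and subject to change:

```typescript
// Minimal sketch: a speech-enabled text box using Chrome's vendor-prefixed
// Speech Input support (non-standard, Chrome-only at the time of writing).
const searchBox = document.createElement("input");
searchBox.type = "text";
searchBox.setAttribute("x-webkit-speech", ""); // shows the microphone icon in Chrome

// Assumption: Chrome fires this prefixed event once recognition fills in the value.
searchBox.addEventListener("webkitspeechchange", () => {
  console.log("Recognized text:", searchBox.value);
});

document.body.appendChild(searchBox);
```

Pages that don’t need any scripting can get the microphone just by adding the attribute to an existing input element.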


Google Goggles is a full-fledged visual search engine that’s trapped in a mobile application. Why should you have to buy a smartphone to use it when you could simply upload an image to Google and find related pages and images on the Web? The new “Search by Image” feature does just that, and it goes beyond TinEye, the “reverse image search engine” that finds the places where an image appears on the Web.

“Google uses computer vision techniques to match your image to other images in the Google Images index and additional image collections. From those matches, we try to generate an accurate ‘best guess’ text description of your image, as well as find other images that have the same content as your search image. Your search results page can show results for that text description as well as related images,” mentions Google.

You can drag and drop an image onto the search box, paste an image URL into the search box, or click the camera icon and upload an image. Google then generates a hybrid results page that shows both related images and Web search results for the equivalent text query.
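If you already have the image’s address, the “paste an image URL” option boils down to a single URL you can construct yourself. Here’s a small sketch; the searchbyimage endpoint and its image_url parameter are assumptions based on how the feature behaves, not a documented API:

```typescript
// Sketch: construct a Search by Image URL for a publicly accessible image.
// Assumption: the /searchbyimage endpoint and image_url parameter simply mirror
// what the "paste an image URL" option does; they are not a documented API.
function searchByImageUrl(imageUrl: string): string {
  return "https://www.google.com/searchbyimage?image_url=" +
    encodeURIComponent(imageUrl);
}

// Hypothetical example image URL.
console.log(searchByImageUrl("https://example.com/photo.jpg"));
```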


Google also developed two extensions, one for Chrome and one for Firefox, that let you right-click an image and use it as a query. “With these extensions, you can initiate a search on Google using pictures on the web. You can discover photos of places, learn more about art pieces, identify landmarks, and more.”

While voice search and visual search are useful, the most impressive search feature Google launched today is Instant Pages. The new feature only works in Chrome 13+ (currently available in the Canary and Dev channels, and coming soon to the beta channel), but it will radically improve your search experience: Chrome prerenders the top search result when it’s likely that you’ll click it, so you no longer have to wait for the page to load. You might remember “prefetching”, a feature first supported by Firefox; prerendering is a lot more powerful.

According to a Chrome developer, “prefetch is Firefox style prefetching of resources specified (just populating the cache). In Chrome, with prerender, we don’t just download the URL specified, but render the whole page including running all the JavaScript and downloading and rendering all the embedded resources.”

For most users, Instant Pages will look like magic. They’ll search for [nytimes] or [amazon], click the first result and be surprised to see that the page loads instantly. Google says that this feature saves 2-5 seconds on a typical search.

But Chrome’s prerendering is not limited to Google searches. Any Web developer can use it by inserting a link element with a special value for the “rel” attribute. “Sometimes a site may be able to predict with reasonable accuracy which link the user is most likely to click on next — for example, the ‘next page’ link in a multi-page news article. In those cases, it would be faster and better for the user if the browser could get a head start loading the next page so that when the user clicks the page is already well on its way to being loaded,” suggests Google.
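In markup terms, the hint is a single link element in the page’s head. Here’s a sketch that injects it from script for a hypothetical multi-page article; only the rel="prerender" value matters, and the URL is a placeholder:

```typescript
// Sketch: hint Chrome to prerender the likely next page of a multi-page article.
// Equivalent to placing <link rel="prerender" href="..."> in the document head.
const hint = document.createElement("link");
hint.rel = "prerender";                           // Chrome fetches and fully renders the page in the background
hint.href = "https://example.com/article/page-2"; // placeholder "next page" URL
document.head.appendChild(hint);

// For comparison, Firefox-style prefetching only warms the cache:
// <link rel="prefetch" href="https://example.com/article/page-2">
```

If the reader then clicks through to that page, Chrome can swap in the already-rendered copy instead of starting the load from scratch.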


