Why Google thinks the GPU is the engine for the web of the future

For years, the internet served users static pages of information stored and refreshed in databases on the back end. But as interactive games, animations and slick scrolling effects have become popular, graphics have grown more elaborate and screens richer. Throughout this evolution, the hardware in users’ devices has gotten more capable, and now Google seems to think the GPU is the best tool for the internet of tomorrow.

At a talk at the Google I/O conference on Thursday, Googlers Colt McAnlis, a developer advocate working on Chrome games and performance, and Grace Kloba, the technical lead on Chrome for Android, gave developers some tips for making better use of the GPU. Following those tips can help websites display graphics as quickly as possible and handle “touch events” such as scrolling without sacrificing performance.

Chrome developers can split a page’s components into GPU layers, each of which is subdivided into tiles (think of a grid overlaid on top of the page). Instead of asking the CPU to upload the pixels for the whole screen area, the GPU caches those tiles in its memory when a page is accessed and then serves up select tiles in response to user behavior, such as scrolling. This approach “allows the CPU to drink margaritas and essentially chill out while the GPU does all the heavy lifting,” McAnlis said.
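
McAnlis did not walk through markup on stage, but the idea maps onto a familiar pattern: a developer hints that an element deserves its own compositor layer, and Chrome then caches and re-composites that layer on the GPU. Here is a minimal, hypothetical sketch in TypeScript; the element ID is invented for illustration, and the "will-change" hint is one common way to request layer promotion (the older "transform: translateZ(0)" hack works similarly).

```typescript
// Hypothetical sketch: hint that an element should get its own compositor
// (GPU) layer so it can be cached as tiles and re-composited without
// repainting. The "#scrolling-panel" ID is assumed for illustration.
const panel = document.querySelector<HTMLElement>("#scrolling-panel");

if (panel) {
  // "will-change" tells Chrome this element is likely to move, which
  // typically earns it its own GPU layer; scrolling it then only
  // re-composites cached tiles instead of repainting them on the CPU.
  panel.style.willChange = "transform";
}
```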

But there’s a tradeoff to this layering approach. Making many layers can result in entirely too many tiles, and the GPU “has a static, non-growable memory resource in its texture cache,” McAnlis said. “If the cache is full, you have to push old tiles out of the cache before you put new tiles in.” And that can result in a decrease in performance.
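
To picture why an oversized working set hurts, it helps to think of the texture cache as a fixed-size map of tiles. The sketch below is purely conceptual, assuming a least-recently-used policy that the talk did not specify; the point is only that once the cache is full, adding tiles evicts old ones, and an evicted tile must be re-rasterized and re-uploaded the next time it scrolls into view.

```typescript
// Conceptual illustration only; this is not Chrome's actual eviction policy.
// A fixed-capacity tile cache: when it is full, the least-recently-used
// tile is pushed out to make room for a new one.
class TileCache {
  // Map preserves insertion order, so the first key is the oldest entry.
  private tiles = new Map<string, ImageData>();

  constructor(private readonly capacity: number) {}

  get(key: string): ImageData | undefined {
    const tile = this.tiles.get(key);
    if (tile) {
      // Re-insert to mark this tile as the most recently used.
      this.tiles.delete(key);
      this.tiles.set(key, tile);
    }
    // A miss means the tile must be re-rasterized and re-uploaded.
    return tile;
  }

  put(key: string, tile: ImageData): void {
    if (this.tiles.size >= this.capacity) {
      // The cache cannot grow, so evict the oldest tile first.
      const oldest = this.tiles.keys().next().value;
      if (oldest !== undefined) {
        this.tiles.delete(oldest);
      }
    }
    this.tiles.set(key, tile);
  }
}
```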

In short, developers have to figure out the right number of layers for each page. For example, if a tile is loaded and cached on the GPU but the user never ends up needing it, that GPU work is wasted. Developers can learn more about the use of the GPU inside Chrome in the Chromium Project’s design documents and get insight into GPU activity with the Trace Event Profiling Tool. Developers can also run experiments through Chrome, McAnlis said.

To demonstrate good use of layers, McAnlis pointed, perhaps unsurprisingly, to a Google site: the mobile version of the Google I/O conference site. “Look at the source code,” he said. “It’s a great example.” The header is its own layer, he said, and it expands and contracts, adjusting the times of conference sessions, as the user scrolls up and down the page.
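
The talk did not show the site’s JavaScript, but the general pattern is easy to sketch: keep the header on its own layer and move it only with a transform as the user scrolls, so the GPU re-composites cached tiles instead of repainting them. The element ID and the shrink distance below are invented for illustration.

```typescript
// Hypothetical sketch of a scroll-driven header, not the actual
// Google I/O site source.
const header = document.querySelector<HTMLElement>("#header"); // assumed ID
const MAX_SHIFT = 60; // pixels the header may slide up; arbitrary value

if (header) {
  // Hint that the header should live on its own compositor layer.
  header.style.willChange = "transform";

  window.addEventListener("scroll", () => {
    const offset = Math.min(window.scrollY, MAX_SHIFT);
    // A transform avoids relayout and repaint; the GPU simply
    // composites the header's cached tiles at the new position.
    header.style.transform = `translateY(${-offset}px)`;
  });
}
```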

The winners on the web over the next few years will be the sites that can serve rich, compelling content as fast as possible. It looks like Google believes taking full advantage of the GPU might be the best way to accomplish that goal.
