How Google is using OpenFlow to lower its network costs

Google is trying out a new networking protocol known as OpenFlow in the communications networks that run between its data centers. The search giant is testing software-defined networks built on OpenFlow in order to lower the cost of delivering information between its facilities, says Urs Hölzle, SVP of technical infrastructure and Google Fellow, in an interview with me.

Google’s infrastructure is the stuff of engineers’ dreams and nightmares. The company’s relentless focus on infrastructure has given it a competitive advantage because it can deliver search results faster and for less money than the next guy. Much like Dell conducts studies showing that lowering the table where people assemble its computers saves seconds and costs, Google understands that investing in infrastructure can help it shave a few cents off delivering its product. And the next area that’s ripe for some infrastructure investment might be networking, specifically using the OpenFlow protocol to create software-defined networks.

For Google, at a certain point, communication between servers at such a large scale becomes a problem, notes Hölzle, who is speaking at the Open Networking Summit in Santa Clara next week. He explains that the promise of OpenFlow is that it could make networking behave more like applications, thanks to its ability to move the intelligence associated with networking gear onto a centralized controller.

Previously, each switch or piece of networking gear had its own intelligence that someone had to program. There was no holistic view, nor a way to abstract network activities away from the networking hardware. But with OpenFlow, the promise is that you can program the network and run algorithms against it to achieve certain business or technical goals.
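
To make the centralized-controller idea concrete, here is a minimal, illustrative sketch using the open-source Ryu controller framework. Ryu is not mentioned in the article, and Google runs its own controller software; this is just an example of the general pattern. It shows the “programming the network” step: a software controller reacting to a switch connecting and writing a forwarding rule into that switch’s flow table, rather than the switch carrying its own pre-programmed intelligence.

```python
# A minimal sketch, not Google's implementation: an OpenFlow controller app
# built on the open-source Ryu framework. The forwarding decision lives in
# this centralized software, which pushes flow rules down to the switches.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class CentralizedPolicy(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_connected(self, ev):
        # Fires when a switch connects to the controller.
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Install a lowest-priority "table-miss" rule: any packet the switch
        # cannot match on its own is sent up to the controller for a decision.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        # The FlowMod message is the act of programming the network: the
        # controller writes forwarding state into the switch's flow table.
        mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                match=match, instructions=inst)
        datapath.send_msg(mod)
```

An app like this would be launched with Ryu’s ryu-manager and pointed at OpenFlow-capable switches; the same pattern, at much larger scale, is what lets a controller with a global view of the network run the kind of traffic-engineering algorithms Hölzle describes.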

“I think what we find exciting is OpenFlow is pragmatic enough to be implemented and that it actually looks like it has some legs and can realize that promise,” he said. However, he added, “It’s very early in the history and too early to declare success.”

Google is trying the protocol out between data centers, although Hölzle didn’t disclose how much Google is saving or how widespread the implementation is. Hölzle said the search giant was trying to see how it could make its wide-area, long-distance network more flexible and speed up the delivery of services to users without adding costs. However, costs for Google aren’t measured just in terms of bandwidth, but also in terms of the people required to operate and configure the network.

“The cost that has been rising is the cost of complexity — so spending a lot of effort to make things not go wrong. There is an opportunity here for better network management and more control of the complexity, and that to me is worth experimenting with,” Hölzle said. “The real value is in the [software-defined network] and the centralized management of the network. And the brilliant part about the OpenFlow approach is that there is a practical way of separating the two things: where you can have a centralized controller and it’s implementable on a single box and in existing hardware that allows for a range of management and things that are broad and flexible.”

But OpenFlow still requires some work. Hölzle said there are issues with how you communicate between OpenFlow networks and those that aren’t running the protocol. So, today, Google takes its OpenFlow traffic and hands it over to what Hölzle calls a “normal” router, one running established protocols such as MPLS or BGP. He expects that to change over time as things standardize within OpenFlow and among vendors. As he said multiple times throughout the conversation, these are early days for OpenFlow and software-defined networks.
