Do BYO data centers make sense anymore?

In this era of cheap and reliable rent-a-data-centers, does it make sense for a company to build a new data center on its own anymore?

Amazon’s data center guru James Hamilton is pretty clear that he sees no reason for most companies to build new data centers from scratch now. If they have a huge compute load and really have to build, they should go overboard: build way more capacity than they need and sell off the excess, a la Amazon itself.

While Hamilton has a vested interest in people moving their compute loads to Amazon’s infrastructure, his build-big-or-don’t-build-at-all mantra resonates with other IT experts. The consensus: it makes sense for most companies to trust their data center needs to the real experts in data centers. More companies will start trusting their new compute loads — maybe not all the mission-critical stuff — to the big cloud operators. That roster includes Amazon as well as Google, Microsoft, IBM, Hewlett-Packard, Oracle and others that are building out more of their own data center capacity for use by customers.

And for startup companies, the decision not to build is a no-brainer; connectivity to the cloud is the real issue for them. “If I was starting a greenfield company, the data center would be the size of my bathroom — there wouldn’t necessarily even be a server, maybe a series of switches, and all my back-office apps, my salesforce automation, my storage would be handled in the cloud,” said Dave Nichols, CIO services leader for Ernst & Young, the global IT consultancy.

David Ohara, GigaOM Pro analyst and co-founder of Greenm3, holds a more nuanced view. Companies with mid-sized loads really have to think things through, he said. (Data center size is typically described in terms of megawatt, or MW, consumption.) “Once you get to a 5 to 7.5MW data center, that’s just big enough to be super complex, but the economics are weird. At that point you should probably build a 15MW data center and sell off the other 7.5MW to someone else, or partner with Digital Realty Trust or some other company to share costs,” Ohara said.

It’s in that 5 to 7.5MW range that a company starts having to learn the niceties of chillers and power systems, he said.

“When you break through the 10,000 server barrier — that’s when you start needing 3 to 5MW of power, and now you’re getting into major facility costs where you have to have multiple diesel generators and complex power and cooling systems. And it’s in that 10,000 to 100,000 server zone where costs soar.” At that point, there aren’t many companies on the planet that can achieve the scale of an Amazon, a Rackspace, a Google or a Microsoft. So why not trust your loads to the experts?
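To put the server-count-to-megawatt relationship in rough perspective, here is a back-of-envelope sketch. The per-server wattage and PUE (power usage effectiveness) figures are illustrative assumptions, not numbers from Ohara or the article, chosen only so the output lands in the ballpark he describes.

```python
# Back-of-envelope estimate of facility power from server count.
# watts_per_server and pue are assumed, illustrative values.

def facility_power_mw(servers, watts_per_server=300, pue=1.3):
    """Estimate total facility power in megawatts.

    watts_per_server: assumed average IT draw per server (hypothetical)
    pue: power usage effectiveness = total facility power / IT power (assumed)
    """
    it_load_watts = servers * watts_per_server
    return it_load_watts * pue / 1_000_000

for n in (10_000, 25_000, 100_000):
    print(f"{n:>7} servers -> ~{facility_power_mw(n):.1f} MW")
```

Under these assumptions, 10,000 servers works out to roughly 4MW, consistent with the 3 to 5MW threshold quoted above, and 100,000 servers pushes well past the point where dedicated generators, chillers and power distribution dominate the budget.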

There will always be pushback on this point, but attitudes are starting to change. Asked what type of data or task should not be entrusted to a cloud provider, the CIO of one big company said, “The formula for Coke. But that’s about it.”

Database guru Michael Stonebraker, co-founder and CTO of VoltDB, fully backs Hamilton’s thesis: there is simply no way for more than a handful of huge companies to achieve Amazon’s data center scale, its low electricity costs or its experience standing up data centers. As long as companies are okay with running in the public cloud, their decision is simple. “Sooner or later, if you’re a small guy, there will be huge incentive to move to the public cloud. You’ve either got to be really big or run on someone else’s data center,” he said.

Photo courtesy of Flickr user jphilipg
