The word is out. Legislators in Iowa say Facebook is the company behind what could be a $1.5 billion data center project in Altoona, Iowa, that will result in 1.4 million square feet of data center space for the social network. Not only is that a lot of money and a lot of space, it’s also mind-boggling to think about a company spending $1.5 billion on physical infrastructure when it pulled in just $5.01 billion in revenue in 2012.
But the servers, storage and networking gear inside the millions of square feet Facebook has dedicated to data centers are where the bits that comprise Facebook’s online products are assembled. Every click, every upload and every message sent via the web passes through a data center somewhere.
An economy built on digital bits
And the relationships are getting more complicated with the rise of cloud computing and federated applications composed of multiple services wrapped up in one program. So a data visualization service that ties your company’s Salesforce data to internal business data relies on servers hosted by Salesforce, possibly on servers located in-house or, if the data visualization app is using Amazon as its back end, on one of Amazon’s data centers.
This is no longer a call-and-response approach, where I call up a website and a server sends it to me. And the value of those services is increasing in line with their complexity. Intel’s purchase of Mashery last week, for example, was evidence of the chip giant realizing that this web of relationships is the new digital supply chain. And the ports of call are the data centers, as Mark Thiele, an EVP of technology at Switch, told me.
So what does this have to do with Facebook? Or Google? In many ways they have pioneered a new model of data center and computing, where the data center is the computer. They did this because when offering a web-based service, their cost of goods was directly tied to their infrastructure. Knowing what its servers cost to perform and deliver each search is as important for Google as knowing the cost of fuel is for an airline.
And yet… for the most part, the way we discuss and think about data centers hasn’t become much more sophisticated than calling them rooms full of servers. Yes, we have metrics such as power usage effectiveness (PUE) ratios, but that’s not the most important measure for everyone. If data centers really are going to be the manufacturing floor of the digital economy, we need to start thinking about them at a higher level.
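For context, PUE is a simple ratio: the total energy drawn by the facility divided by the energy that actually reaches the IT gear. Here’s a minimal sketch, with made-up numbers, of what that captures and what it leaves out:

```python
# Minimal sketch of power usage effectiveness (PUE). PUE = total facility
# energy / IT equipment energy; a perfect facility scores 1.0. The numbers
# below are made up for illustration, not figures for any real data center.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return the PUE ratio for a given reporting period."""
    return total_facility_kwh / it_equipment_kwh

ratio = pue(total_facility_kwh=1_200_000, it_equipment_kwh=1_000_000)
print(f"PUE: {ratio:.2f}")  # 1.20 -- says nothing about what the servers actually produced
```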
Fortunately, some people already are. Here are two places we can start: understanding the operators and defining the metrics associated with success.
Understanding the market
All data centers are not equal. For years people have broken down data centers based on their redundancy and security, with a Tier IV class of data center defined as a high-availability facility that has lots of redundancy and secured premises. This is where you host your financial information and NSA files, and it’s also the most expensive to build.
But there’s another breakdown worth exploring, and that’s the party that is building and operating the data center. Just as there are different types of computing out there requiring different gear and performance, there are also different types of data centers. I’ve broken it down into three categories, but I’m on the fence about whether there should instead be four.
- Masters of their domain: This includes Google, Facebook and even enterprise customers. These operators control the hardware, the building and the apps running on the hardware, so they can optimize the heck out of their infrastructure. Facebook and Google are the best known for doing this, but there’s no reason anyone with the ability to control everything at a big enough scale can’t learn from Facebook and apply its techniques to their corporate operations. In the case of banks looking at Open Compute, this is already happening.
- Masters of their servers: This includes most hosting companies, like Peer1, Rackspace and others, which build out their own hardware and servers but can’t control what people run on them. I’m struggling a little with this category because I’m not sure whether Amazon Web Services or Microsoft Azure fits here, since they don’t control the end applications either. However, they are able to limit the services they offer in such a way that their infrastructure is optimized for them, putting them on similar footing with the masters of their domain.
- Masters of their cages: Companies such as Equinix, Switch and Digital Realty Trust that operate co-location space fall into the last category. These companies operate huge data centers that are like server hotels. People lease space, buy connectivity and pop their own gear into the space. They tend to offer customers multiple providers for connectivity or easy interconnects to other data centers. For example, Equinix has a program through which it can offer a direct connection to AWS. This cuts the distance digital goods have to travel and can also offer some additional guarantees.
Defining the metrics for success
Once you break out the different builders of data centers, it’s possible to try to figure out the right metrics for how they run their data centers. David O’Hara, a GigaOM Research analyst and consultant in the data center space, breaks this down into three categories (a rough back-of-the-envelope sketch follows the list):
- Capital expenditures: How much does it cost to build the data center, per megawatt?
- Operational expenditures: How much does it cost to run the data center, per megawatt?
- Availability and redundancy: What is the availability in the data center in terms of networking, power and cooling?
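As a rough illustration of how the first two metrics normalize facilities of different sizes, here’s a back-of-the-envelope sketch; the dollar figures and the 20 MW capacity are invented for illustration, not drawn from O’Hara’s research:

```python
# Back-of-the-envelope sketch of the first two metrics: cost per megawatt of
# critical IT load. All dollar figures and the 20 MW capacity are invented.

def capex_per_mw(build_cost_usd: float, critical_load_mw: float) -> float:
    """Capital cost to build, normalized per megawatt of IT capacity."""
    return build_cost_usd / critical_load_mw

def opex_per_mw(annual_operating_cost_usd: float, critical_load_mw: float) -> float:
    """Annual cost to operate, normalized per megawatt of IT capacity."""
    return annual_operating_cost_usd / critical_load_mw

site_mw = 20  # hypothetical 20 MW facility
print(f"Capex/MW: ${capex_per_mw(200_000_000, site_mw):,.0f}")          # $10,000,000
print(f"Opex/MW:  ${opex_per_mw(30_000_000, site_mw):,.0f} per year")   # $1,500,000 per year
```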
He suggested companies start thinking of data centers as a portfolio that should match the value and needs of the services they are trying to run. For example, he pointed to Apple’s data center in Maiden, N.C., as a beautiful, state-of-the-art data center with high reliability. However, building such data centers is expensive, so a service that doesn’t need high availability doesn’t need to be hosted there. Showing someone a trailer for an iTunes movie might be better served from a less reliable data center, while transaction processing should stay in Maiden.
A company that is way ahead of the pack on this front is eBay, which has broken down its applications into specific workloads and assigned values to them to develop its own miles-per-gallon, or MPG, metric for data centers. While the rest of the industry is obsessing over power usage effectiveness ratios (which matter to companies operating their own data centers for cost reasons), eBay is tracking code, servers and overall infrastructure efficiency relative to the transactions (specifically URL requests) associated with users buying and selling on the site.
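To make the contrast concrete, here’s a minimal sketch of a work-per-energy metric in the spirit of eBay’s MPG; the function name and numbers are my own illustration, not eBay’s actual methodology:

```python
# Illustrative sketch of a "miles per gallon" style metric: useful work
# delivered per unit of energy, rather than how efficiently the building
# moves power around. The function and numbers are hypothetical, not eBay's
# actual methodology.

def transactions_per_kwh(url_requests_served: int, facility_kwh: float) -> float:
    """Business output (URL requests served) per kilowatt-hour consumed."""
    return url_requests_served / facility_kwh

# Two facilities with identical PUE can still differ wildly on this measure
# if one runs leaner code or better-utilized servers.
print(transactions_per_kwh(url_requests_served=90_000_000, facility_kwh=50_000))  # 1800.0
```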
So if Facebook is prepping to spend 30 percent of last year’s revenue on a new data center in Altoona, as the Iowa legislators maintain (Facebook didn’t return my request for comment), let’s start talking about data centers not just as rooms full of servers, but as the manufacturing floor of the digital economy. To do that, we need to develop a better understanding of the different operators, work out the right metrics for each business and start collecting more data on them overall.