All computing isn’t equal: Here are the four types

The world of data centers, servers and networking cables looks pretty monolithic to most people, but, like Darwin's finches, the buyers turn out to have evolved into different creatures once you spend time talking to them. And because the types of machines and software that enterprise customers buy are very different from what Amazon might purchase to run its cloud, it's worth understanding the differences if you're buying from, selling to or investing in infrastructure companies.

This is how I have broken them down, and where I think things are heading based on my talks with vendors and customers in all of these industries, but I hope to hear from others who may have different opinions. Let’s get to it.

Enterprise: This is the traditional IT setup: a mix of conventional servers that may or may not be virtualized. This is the world where people buy HP, Dell or IBM servers, specialized data-warehousing appliances and oodles of Cisco and Juniper networking gear. These companies probably also keep a few specialty Sun or PowerPC machines running older applications that they don't want to touch.

Their legacy applications depend on this stuff, and they have a lot of legacy applications! There is plenty of money to be made here, but it's not where the growth in infrastructure spending will come from. For startups, especially those commercializing open-source technology that interests the enterprise, this is a market many overlook at first, only to find themselves pulled into it later by requests for proposals for Hadoop distributions or software-defined networking help. These companies will buy some SaaS offerings and maybe test a few infrastructure-as-a-service efforts for non-crucial apps, but they aren't leaving their old infrastructure behind. They also don't want to mess with tinkering, so issues like openness aren't as essential as ease of deployment and management.

Webscale: Companies such as Google, Facebook, Yahoo, parts of Microsoft and other large online properties fit in this category. For these companies, IT isn't just a cost of doing business; it's the enabler of their business, much as a kitchen is for a restaurant. So just as Taco Bell streamlines a few ingredients into many cheap menu options in order to keep costs low, these companies streamline their hardware into a few highly optimized pools of computing for what will likely become many services offered on their platforms.

These companies buy servers by the rack, and they have the engineering resources to write code and implement new technologies that can save them money or speed up their ability to deliver services. Issues like interoperability and openness matter to them because they want their pieces to be as modular and programmable as possible, so they can tweak them to their needs. While these companies may be few in number, they are a huge and growing segment of the market.

Cloud: I divide cloud into two categories: the more modern clouds, such as Microsoft Azure or Amazon Web Services, which look more like the webscale architectures, and the telco and service-provider clouds, which often run more enterprise gear (or gear, such as Cisco's Unified Computing System, that enterprise vendors built specifically for the cloud). In general, the clouds following the webscale model may have some equipment closer to that of the data-specific or HPC (high-performance computing) clouds, but they are content to put in the engineering effort to make their clouds work optimally themselves. They also share similar cost imperatives, continually worrying about how to keep scaling out without driving up costs.

As for the telco clouds and hosting companies that are getting into cloud environments, the industry seems to be in a state of flux, as these providers realize that their first efforts were often built around replicating enterprise environments and trying to force them to scale. Some, like Rackspace, have taken a different tack, throwing their hats in with Open Compute and the webscale crowd, while others have tried to pair their existing gear (and the expertise of outside engineers) with cloud-optimized software such as OpenStack or the products from Joyent or Eucalyptus.

HPC and Data: This is the area where I'm having the most trouble, in part because data analytics is so new. I lumped the two together in the last few years mostly because both styles of computing can take advantage of massively parallel compute architectures and require fast interconnects. But there are differences, especially as low-power ARM-based chips tied together with fabrics enter the equation. Those smaller chips can be more energy-efficient for processing data, because many data problems can be broken into very small, independent pieces, whereas in high-performance computing powerful cores are still the norm: the machines may be massively parallel and distributed, but they run some of the brawniest cores out there. As we continue through 2013, I think we'll see these categories split.
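To make that contrast concrete, here's a minimal sketch in Python (purely illustrative; the workloads are toy stand-ins I've chosen, not anything from the vendors above). The word count splits into fully independent chunks, so it maps cleanly onto many small, low-power cores; the stencil computation must pass neighbor values between steps, which is why that style of work still rewards brawny cores and fast interconnects.

```python
# Sketch: two shapes of parallel problem. Assumes only Python's stdlib;
# workload names are hypothetical stand-ins, not real benchmarks.
from multiprocessing import Pool

# --- Data-style workload: embarrassingly parallel ---------------------
# Each chunk is independent, so it can land on any small core; partial
# results are merged once at the end (the map/reduce pattern).
def count_words(chunk):
    counts = {}
    for word in chunk.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def merge(partials):
    total = {}
    for partial in partials:
        for word, n in partial.items():
            total[word] = total.get(word, 0) + n
    return total

# --- HPC-style workload: tightly coupled ------------------------------
# A Jacobi-style stencil: every cell needs its neighbors' values from
# the previous step, so parallel workers would have to exchange boundary
# data on every iteration.
def jacobi_step(grid):
    interior = [(grid[i - 1] + grid[i + 1]) / 2.0 for i in range(1, len(grid) - 1)]
    return [grid[0]] + interior + [grid[-1]]  # boundaries stay fixed

if __name__ == "__main__":
    chunks = ["big data is big", "data wants parallel cores", "cores count data"]
    with Pool(processes=3) as pool:  # three "small cores"
        print(merge(pool.map(count_words, chunks)))

    grid = [0.0] * 9 + [100.0]
    for _ in range(50):      # iterations are inherently sequential;
        grid = jacobi_step(grid)  # only the work inside a step parallelizes
    print([round(x, 1) for x in grid])
```

The first workload scales out almost for free, which is why lots of weak, energy-efficient cores suit it; the second spends its parallelism inside each step and synchronizes between steps, which is why HPC keeps buying the brawniest cores and fastest fabrics it can.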

I'm also curious how many categories we'll have in the next three years or so, and which pieces each style of computing will borrow from the others. As I said before, this is my mental map of computing, shaped by the needs for performance, low cost and fast interconnects, combined with who the people on the ground at the buyer are and how willing they are to tinker. I'd love to hear from y'all about your take on these categories and where they might be headed.
