What we’ll see in 2013 in cloud computing

The cloud has moved from concept to reality. Sure, startups have been buying computing and storage on demand for years, while enterprises talked up virtualization and hoped it was the same thing. But big companies are finally getting the hang of on-demand computing, and over the next year we'll see big IT companies buy up startups that help move enterprise workloads to the cloud, more enterprise-class infrastructure-as-a-service (IaaS) providers win real applications, and a more viable model of hybrid cloud that enables cloud bursting. Let's see what's ahead.

1: Proving the public cloud can handle enterprise apps

Anecdotally speaking, most Fortune 1000 companies have at least some test and development running in Amazon's public cloud. And a subset of those companies run actual applications there; heck, even NASDAQ is an AWS customer. Yet, when it comes to truly mission-critical applications in the heavily regulated finance and healthcare sectors, many companies will not put any data or applications in a public cloud. Companies like Diebold and the big banks won't even allow staff to use AWS for development, let alone deployment.

That's a huge hurdle for Amazon (and Microsoft Azure). AWS cut a deal with Eucalyptus last year to make it easier for companies to run Eucalyptus private clouds that interoperate with AWS on certain jobs in a hybrid model. Startups like CloudVelocity claim they can "clone" on-premises workloads onto AWS without modification and provide complete security. That's a big promise, and one that needs to be vetted. But look for many more such announcements next year. Any company that can successfully make the public cloud a safe and secure repository for even regulated applications will be able to print money.
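Part of why that Eucalyptus deal matters: Eucalyptus speaks the EC2 API, so the same tooling can drive both clouds. Here is a minimal sketch using the open-source boto library (2.x era); the endpoint, credentials and image IDs are hypothetical placeholders, not a vetted CloudVelocity-style migration.

```python
# Minimal sketch: the same boto code can target public AWS or a
# Eucalyptus private cloud just by swapping connection details.
# All keys, hostnames and image IDs below are placeholders.
import boto

def connect(access_key, secret_key, euca_host=None):
    """Connect to AWS by default, or to a Eucalyptus endpoint if given."""
    if euca_host is None:
        return boto.connect_ec2(access_key, secret_key)  # public AWS
    return boto.connect_ec2(access_key, secret_key,
                            host=euca_host, port=8773,
                            path="/services/Eucalyptus", is_secure=False)

# Burst a job into whichever cloud policy allows for this workload.
conn = connect("AKIAEXAMPLE", "secret-example",
               euca_host="euca.internal.example.com")
reservation = conn.run_instances("emi-12345678", instance_type="m1.small")
print(reservation.instances[0].id)
```

The point isn't the dozen lines of code; it's that a single API surface is what makes hybrid, cloud-bursting setups operationally plausible in the first place.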

Meanwhile, enterprise software giants VMware and Microsoft likewise have to prove that their cloud technologies are up to snuff for their legacy customers as well as new prospects.

2: Make-or-break for HP

For the past few years, all the legacy hardware powers (Dell, EMC, Hewlett-Packard and IBM) have scrambled to prove their relevance in a new world where cloud computing makes hardware branding irrelevant.

But HP is in the hottest seat this coming year, with CEO Meg Whitman pleading with investors to wait out a "multiyear turnaround." It's by no means clear that they will wait. HP is in the crosshairs after years of management turmoil and questionable acquisitions, most recently of Autonomy. That $11.1 billion purchase was meant to build HP's credibility both in big data and in cloud computing. It's safe to say it has not done so.

Now that HP has launched its OpenStack-based compute cloud, we'll see if HP's enterprise customer base, which is still huge, has enough confidence to move into an HP-branded cloud. Otherwise that base will move elsewhere.

3: It’s time for OpenStack to stand (or not) on its own

Now that Rackspace has stepped back from its paternal role as OpenStack backer and turned governance over to a multivendor body, it's time to see if OpenStack has what it takes to compete with Amazon Web Services on the public cloud side as well as with the other open-source options: CloudStack, Eucalyptus, and OpenNebula.

2012 was a big year for OpenStack, with HP, Internap, Red Hat, and Rackspace itself all standing up OpenStack-based clouds. Nebula is getting close to general availability for its OpenStack appliance. And there are other options coming soon, including CloudScaling's Amazon- and Google-API-compliant private cloud. Watch for more companies to build services around OpenStack as well; Mirantis, for example, just launched its own build-your-own OpenStack service.
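To ground what "standing up an OpenStack cloud" means for users, here is a minimal sketch of booting a server with python-novaclient, the Python client these clouds expose. It assumes the 2012-era v1_1 client API; the credentials, auth URL, and image and flavor names are hypothetical placeholders.

```python
# Minimal sketch: boot a VM on an OpenStack cloud via python-novaclient.
# Assumes the 2012-era v1_1 client; all credentials and names below are
# placeholders for whatever a given provider hands you.
from novaclient.v1_1 import client

nova = client.Client("demo-user", "demo-password", "demo-tenant",
                     "https://identity.example.com:5000/v2.0")

image = nova.images.find(name="ubuntu-12.04")   # pick a base image
flavor = nova.flavors.find(name="m1.small")     # pick an instance size

server = nova.servers.create(name="test-vm", image=image, flavor=flavor)
print(server.id)
```

Because every OpenStack cloud exposes this same API, code like this should run against HP's cloud, Rackspace's, or a private Nebula appliance alike, and that portability pitch is exactly what OpenStack has to prove in 2013.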

While many have said that OpenStack is late to the party, it's important to remember that despite all the hype and all the cloud washing of last year, we're still very much in the early days of cloud computing. Anything can happen. Some new company, or perhaps even a legacy player, could rise up and give even Amazon a run for its money.

4: Infrastructure now extends beyond the four walls of the data center

Back in 2008 Google pushed the idea that the data center was the computer, but with the launch of its Spanner database, which syncs data across five data centers, we're clearly moving into a new realm of infrastructure designed to support our favorite web services. Now the data center alone isn't the computer; the data centers plus the network connecting them are.

It's not just Google thinking this way. Netflix, a heavy user of Amazon's cloud and one of the biggest drivers of broadband traffic, has extended its network as close to the edge as carriers will let it by building out a content delivery network (CDN). Facebook has a similar effort in the works, and it is also making deals with carriers to lease fiber so it can extend its infrastructure closer to the edge. We're going to see more deals like these, and data center operators will have to become more comfortable stretching their infrastructure outside the data center and discovering how to keep things in sync over massively distributed networks.

5: Software-defined everything doesn't get easier

Software-defined networking was the big buzzword of 2012, but we also saw the emergence of software-defined storage and the software-defined data center. Basically, the idea is to bring the same flexibility to networking, storage and the data center that virtualization brought to computing. You can't free your applications from the server without bringing along their networking and storage too.
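To make "networking in software" less abstract, here is a minimal sketch using the open-source Ryu OpenFlow controller: a controller app that tells an OpenFlow 1.0 switch to flood every packet, the hello-world of SDN. It's illustrative only, not a production design.

```python
# Minimal sketch: the simplest possible SDN app with the Ryu controller.
# A Python process, not the switch firmware, decides what the switch
# does with each packet (here: flood it out every port).
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_0

class L2Hub(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_0.OFP_VERSION]  # assume an OpenFlow 1.0 switch

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def _packet_in(self, ev):
        msg = ev.msg            # the packet the switch sent up to us
        dp = msg.datapath       # the switch it came from
        parser = dp.ofproto_parser
        # Tell the switch to flood this packet out all ports.
        actions = [parser.OFPActionOutput(dp.ofproto.OFPP_FLOOD)]
        dp.send_msg(parser.OFPPacketOut(datapath=dp,
                                        buffer_id=msg.buffer_id,
                                        in_port=msg.in_port,
                                        actions=actions))
```

The hard problems the skeptics point to start exactly here: replacing that naive flood with real policy, and doing it against routers and gear that don't yet expose open, standardized hooks.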

But like any new field that could disrupt established vendors, there are a lot of marketers throwing shade, especially around software-defined networking. While we expect to see a lot of production deployments showing off network virtualization, we don't think we'll see much headway when it comes to commoditizing the router or effectively linking applications to networking gear in an open and standardized way.

