From the launch of Amazon Web Services in 2006 to today’s landscape of webscale infrastructure and cloud vendors of all types, corporate IT has taken a wild journey over the past six years. As we put the finishing touches on this year’s Structure conference, I realize I’ve had a front-row seat for most of it.
Looking forward, the growth of software-defined networking, better tools for ensuring security and compliance, and more enterprises putting their faith in the cloud will lead to a second shift. First we cobbled together the cloud as a new delivery model for IT, but in the coming years I think we’re going to completely blow apart the idea of infrastructure and an app hosted in one place, on a single CPU. We’ll explore the beginnings of that second wave of computing at our Structure 2012 show on June 20 and 21, with discussions ranging from tearing our servers apart into more modular components to the creation of programmable data centers.
But first, it’s worth reviewing how we got here. Heading into our annual conference on infrastructure and the cloud, I thought I’d look back at a few moments from previous shows. Taking the long view, it’s clear that virtualization and the demand for mammoth compute have fundamentally changed IT.
2008: Our first summer of cloud
In the summer of 2008, we pulled together the first conference about cloud computing, with a group of companies including Amazon, VMware, Google, Teradata and IBM. The big names are still familiar today, and have even swallowed some of the startups we helped launch. For example, in 2008, we convened a panel on databases for the cloud, with representatives from Aster Data (bought by Teradata in 2011), Greenplum (bought by EMC in 2010) and SQLstream, which is still independent and will present again at this year’s event.
Big data wasn’t the only trend presaged that year. We also had a panel that predicted both the rise of OpenStack and the importance of APIs as a way to link clouds and services in the new world of federated compute the cloud would enable.
But at its core, the first year mixed knowledge from webscale service providers like Meebo and Facebook with vendors fighting to get their vision of the cloud heard. Unsurprisingly, VMware co-founder Mendel Rosenblum showed up to discuss VMware’s role in a software framework for moving compute around and between data centers, something VMware’s current CTO, Steve Herrod, outlined for the first time last month at Interop.
2009: Private clouds galore!
A year later, private clouds were all the rage as Salesforce.com’s Marc Benioff extolled the virtues of software as a service and predicted the importance of social and real-time data to the enterprise. Representatives from HP and IBM were also on hand to discuss their own visions for cloud computing: HP’s looked more like delivering IT as a service, while IBM seemed more inclined to copy Amazon’s infrastructure-as-a-service model.
Also at the 2009 Structure, we saw the beginning of public dissent from the webscale community over the products server makers and chip firms were offering, as Jonathan Heiliger, then the VP of operations at Facebook, stood up and told attendees those vendors weren’t building products for his needs. With three years of hindsight, this was the germinating point for the Open Compute Project, which Facebook announced in April 2011.
Meanwhile, HP now has an Amazon-like public cloud offering and IBM is helping companies build private clouds, so their messages from 2009 have shifted somewhat.
2010: Hybrid clouds and scaling issues
It was only natural that public and private clouds would eventually need some way to connect, and the theme in 2010 was primarily hybrid clouds, although there was plenty of talk and few actual deployments. Several panels dealt with scale, from the mechanical aspects to the practical issues of interoperability between clouds and data stores.
The third Structure also introduced the concept of software-defined networking, with Stanford’s Nick McKeown speaking on the topic for our audience. His words have since begun bearing fruit: SDN is a key ingredient if we want to deconstruct the elements of compute and let our workloads move easily from data center to data center.
2011: Networking bottlenecks and the post-document era
Last year, the idea of software-defined networking had spawned several startups that outlined the problems SDN and the OpenFlow protocol could solve, though the use cases didn’t yet exist. (Check out some of the talks for this year, though.) On the hardware front, dissatisfaction with server vendors had led to the creation of Open Compute. So at Structure 2011, we combined the launch of Open Compute hardware with the newly created OpenStack infrastructure stack and the OpenFlow networking protocol for a discussion of how more companies could build and operate clouds capable of interoperating with one another.
Outside of the infrastructure, VMware CEO Paul Maritz shared a forward-looking discussion with Om on the post-document era, envisioning the future of collaboration within documents and a rethinking of what the finished work product is. In other brain-busting sessions, Teresa Lunt, VP and director of the computing science lab at PARC, shared a vision of an entirely new way to store and send information around the web, dubbed content-centric networking. If we deconstruct the cloud, this type of networking might play a role in storing and delivering content efficiently.
2012: Breaking down barriers and costs to a cloudy world
In two and a half weeks, folks will converge on this year’s conference, where a big theme will be taking the agility offered by the cloud to the next level. That means tearing apart the infrastructure and making everything modular, perhaps even swappable while the data center keeps humming, something Facebook VP of infrastructure Frank Frankovsky will discuss.
Another element of freedom will be those software-defined networks, as vendors and customers discuss how they are building compute infrastructures that span continents while applications behave as if they were in one physical location. The cloud isn’t just a cool IT trend; it’s a fundamental shift in how we compute. Keep watching the journey with us.