For developers, the cloud means having to rethink everything they know about making software

The paradigm hasn’t changed since the advent of software: Applications run, and platforms are what they run on. But the underlying principles of application design and deployment do change every now and then – sometimes drastically, thanks to quantum-leap developments in infrastructure.

For instance, application design principles changed dramatically when the PC, x86 architecture, and client/server paradigm were born in the ’80s. It happened again with the advent of the web and open-source technology in the mid-’90s. Whenever such abrupt changes arise, application developers are forced to rethink how they build and deploy their software.

Today, we’re seeing a huge leap in infrastructure capability, this time pioneered by Amazon Web Services. It’s clear that to take full advantage of the new cloud infrastructure, applications must be built differently from those designed to run on a corporate server – even a virtualized one. And there are a number of other specific ways in which today’s (and tomorrow’s) cloud applications will need to be designed differently than in the past. Here are the most crucial ones, and how the ways of the old world have changed in the new one:

Scaling 

In the old world, scaling was accomplished by scaling up – to accommodate more users or data, you simply bought a bigger server.

In the new world, scaling is typically done by scaling out: you don’t add a bigger machine, you add multiple machines of the same sort. In the cloud world, those machines are virtual machines, and their instantiations in the cloud are called instances.
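
To make the contrast concrete, here is a minimal sketch of scaling out using boto3, the AWS SDK for Python (a tool used here purely for illustration; the region, AMI ID, and instance type are placeholder assumptions): rather than provisioning one larger server, you launch several identical instances from the same image.

```python
# Scaling out, sketched with boto3 (the AWS SDK for Python).
# The region, AMI ID, and instance type are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Old world: replace the server with a bigger one.
# New world: launch more copies of the same modest one.
response = ec2.run_instances(
    ImageId="ami-12345678",   # hypothetical image baked with the application
    InstanceType="m1.small",  # same small size, many copies
    MinCount=4,
    MaxCount=4,
)

print([i["InstanceId"] for i in response["Instances"]])
```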

Resilience 

Before, software was seen as unreliable, and resilience was built into the hardware layer.

Today, the underlying infrastructure – the hardware – is seen as the weak link, and it is up to applications to compensate. There is no guarantee that a virtual machine instance will always be available: it can disappear at any moment, and the application must be prepared for this.

By way of example, Netflix, arguably the most advanced user of the cloud today, has gone the farthest in adopting this new paradigm. It runs a process called Chaos Monkey that randomly kills virtual machine instances out from under the application workloads. Why on earth do this on purpose? To ensure uptime and resilience: by exposing applications to the random loss of instances, Netflix forces its developers to build more resilient apps. Brilliant.
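
To make the idea tangible, here is a minimal Chaos-Monkey-style sketch – not Netflix’s actual tool – written with boto3. It assumes configured AWS credentials and a hypothetical "chaos-opt-in" tag marking instances that are fair game:

```python
# A Chaos-Monkey-style sketch, not Netflix's actual tool. Assumes boto3
# credentials are configured and that candidate instances carry a
# hypothetical "chaos-opt-in" tag.
import random

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def pick_victim():
    """Return one random running, opted-in instance ID (or None)."""
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "instance-state-name", "Values": ["running"]},
            {"Name": "tag-key", "Values": ["chaos-opt-in"]},
        ]
    )
    ids = [
        inst["InstanceId"]
        for reservation in resp["Reservations"]
        for inst in reservation["Instances"]
    ]
    return random.choice(ids) if ids else None

victim = pick_victim()
if victim:
    # The whole point: the application is expected to survive this.
    ec2.terminate_instances(InstanceIds=[victim])
```

The termination call itself is trivial; what matters is what it forces. Any state the instance held must live elsewhere, and the rest of the fleet must absorb the loss without users noticing.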

Bursting

In the old world – think accounting and payroll applications – the application workload was reasonably stable and predictable. You knew how many users a system had, and how many records they were likely to process at any given moment.

In the new world, we see variable and unpredictable workloads. Today’s software systems have to reach farther out into the world, to consumers and devices that demand services at unpredictable moments and under unpredictable loads. Accommodating such unforeseen fluctuations in individual application workloads required a new software architecture. We now have it in the cloud, but clearly it is still in its infancy.
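
One way to picture the architecture this demands is a simple reconciliation loop: measure demand, compute the fleet size it implies, and add or remove identical instances to match. The sketch below is schematic only; current_load() and set_fleet_size() are hypothetical stand-ins for real monitoring and provisioning calls, and the thresholds are made up.

```python
# A toy control loop for bursty workloads: size the fleet to demand.
# current_load() and set_fleet_size() are hypothetical stand-ins for
# real monitoring and provisioning APIs; the numbers are illustrative.
import math
import time

TARGET_LOAD_PER_INSTANCE = 100.0  # e.g. requests/sec one instance handles
MIN_INSTANCES, MAX_INSTANCES = 2, 50

def desired_fleet_size(load: float) -> int:
    """Instances needed so each one stays near its target load."""
    needed = math.ceil(load / TARGET_LOAD_PER_INSTANCE)
    return max(MIN_INSTANCES, min(MAX_INSTANCES, needed))

def autoscale_forever(current_load, set_fleet_size, interval_s=60):
    """Periodically reconcile the fleet size with observed demand."""
    while True:
        set_fleet_size(desired_fleet_size(current_load()))
        time.sleep(interval_s)
```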

Software variety

In the past we didn’t have much software variety. Each application was written in one language and used one database. Companies standardized on a single operating system, or at least very few. The software stack was boringly simple and uniform (at least in retrospect).

In the new world of cloud, the opposite is happening. Within a single application, many different languages, libraries, toolkits, and database products can be used. And because in a cloud you can create and spin up your own image, tailored to your application’s specific needs, applications within one company must be able to operate under a spectrum of configurations.

From VM to cloud 

Even between the relatively new technology of hypervisors and modern cloud thinking, there are differences. VMware, the pioneer and leader in virtualization, built its hypervisors so that virtual machines essentially behave the way physical machines did before.

But in the cloud world, the virtual machine is not a representation of a physical server; it’s a representation of units of compute. (Steve Bradshaw wrote about this topic in depth.)

User patience

In the old world, users were taught to be patient. The system may have needed a long time to respond to simple retrieval or update requests, and new features were added slowly to the application (if at all).

In the new cloud world, users have no patience. They barely tolerate latency or wait times, and they look for improvements in the service every week, if not every day. Evidence of this can be found in self-service IT: rather than file a ticket and wait days for a response, users can self-provision the resources they need.

Do these observations rhyme with what you are experiencing and taking action on in your organization? I look forward to comments and debate on this topic.

Marten Mickos is the CEO of Eucalyptus Systems. He previously served as CEO of MySQL AB, which was acquired by Sun Microsystems. He is a member of the board of directors of Nokia.

