The great challenge—and opportunity—of cloud: interoperability

Interoperability, and the challenge of maintaining control of operations in the face of it, is a central issue for those that operate distributed applications on the internet — or “in the cloud.”

In this case, however, I’m not talking simply about creating and controlling interoperability at the developer level. Tools and services like Dell’s Boomi or IBM’s CastIron have existed for years, and have had some success in making integration between applications and services more flexible. However, these services are focused on solving the developer’s key issues with integration: how to make sure messages move between components based on a process definition and one or more translations, if needed.

The interoperability challenges facing IT operations

But today application operators see a tangential set of problems, and these problems are becoming increasingly difficult to deal with. For operators, the problem of interoperability has several parts:

  • Maintaining interoperability with dependencies. For the developer, the problem of managing dependencies is one of logic: finding the right configuration of code and file dependencies to allow the application to execute successfully. This is largely a static problem, though one that increasingly requires devs to design for resiliency; if one dependency disappears, an alternative method of achieving the task at hand should be attempted instead (a minimal sketch of that pattern follows this list). For operations, however, the problem is ongoing, as operations has to deal with the reality of a failed dependency and its effect on the component or components that depend on it.
  • Maintaining interoperability for dependents. The rapid growth of cloud services and APIs, on the other hand, makes it operations’ job to deliver availability, performance and consistency of the software systems they operate to those that depend on that software. If you plan on earning business via services delivered via APIs, your operations team has to ensure that those services are there when your customers need them, without fail. Even if you simply provide data via batch files to a partner or customer, that mechanism has to run as the customer expects it to, every time.
  • Maintaining interoperability with things operations controls. The other key aspect of operations’ focus on interoperability has to do with control. Running systems that interact with one another carries a variety of inherent responsibilities. The goal of operations, in this case, should be to optimize how these systems work together once deployed. Some of that means going back to developers and asking for changes to the applications or data themselves, but much of that optimization has to do with network and storage configuration, tuning virtualization platforms, ensuring security systems and practices are in place, and so on.
  • Maintaining interoperability with things operations doesn’t control. Perhaps the most interesting aspect of application operations in the cloud computing era is the increased need to maintain control of one’s applications in the face of losing control over key elements on which those applications depend. Dealing with upgrades of third-party services, handling changes to network availability (or billing, for that matter), or even ensuring that data is shipped to the correct location, on the correct media, by the right delivery service, are all tasks in which operations can only affect one side of the equation, and has to trust and/or respond to changes on the other.
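
Here is a minimal sketch of the resiliency pattern mentioned in the first item above: trying an alternative when a primary dependency fails, while still surfacing the failure so operations can investigate why it happened. The service URLs and function name are hypothetical, purely for illustration.

```python
# Hypothetical sketch: a dependency call that degrades gracefully when the
# primary service is unavailable, rather than failing the whole component.
import logging
import urllib.error
import urllib.request
from urllib.parse import quote

PRIMARY_URL = "https://lookup.example.com/v1"          # assumed primary dependency
FALLBACK_URL = "https://lookup-backup.example.com/v1"  # assumed alternative

def lookup_with_fallback(query: str, timeout: float = 2.0) -> bytes:
    """Try the primary dependency first; fall back to the alternative if it fails."""
    for url in (PRIMARY_URL, FALLBACK_URL):
        try:
            with urllib.request.urlopen(f"{url}?q={quote(query)}", timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError) as exc:
            # The developer's code shields the caller from the failure, but
            # operations still needs to know why the dependency failed,
            # so the failure is recorded before moving on.
            logging.warning("dependency %s failed: %s", url, exc)
    raise RuntimeError("all lookup dependencies are unavailable")
```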

Complexity makes interoperability difficult

None of this is a shock to most IT operators, but there is one other element, one I’ve hinted at before, that is driving the rapid expansion of complexity facing operations today: the sheer volume of integrations between software and data elements both within and across organizational boundaries. It’s no longer a good idea to think of individual applications in isolation, or to assume a data element has one customer, or even one set of customers with a common purpose for using that data.

Today we live in a world where almost everything that matters in business is connected by a finite number of degrees of separation from just about everything else in that category. Cloud computing is one driver, but the success of REST APIs is another, as is the explosion of so-called “big data” and analytics across businesses and industries.

We, in business software, exist in large part to automate the economy, in my opinion. The economy is a massive, highly integrated complex adaptive system. Our software is rapidly coming to mimic it.

We need standard operations interoperability

All of this brings me to the opportunity that this interoperability explosion presents to operators and vendors of operations tools alike. If we are going to manage software and data that interoperate as a system at such massive scale, we need tools that interoperate in support of that system. We need to begin to implement much of what my friend, Chris Hoff, called for five years ago from the security software community:

We all know that what we need is robust protocols, strong mutual authentication, encryption, resilient operating systems and applications that don’t suck.

But because we can’t wait until the Sun explodes to get this, we need a way for these individual security components to securely communicate and interoperate using a common protocol based upon open standards.

We need to push for an approach to an ecosystem that allows devices that have visibility to our data and the network that interconnects them to tap this messaging bus and either enact a disposition, describe how to, and communicate appropriately when we do so.

We have the technology, we have the ability, we have the need.  Now all we need is the vendor gene pool to get off their duff and work together to get it done. The only thing preventing this is GREED.

Amen, Chris. That remains as true today as it was then, as far as I can tell. Only now the scope has exploded to include all of application and infrastructure operations, not just security software. While everyone is looking for standards that allow one tool to talk to another, we are missing the bigger picture. We need standards that allow every component in the operations arsenal to exchange events with any other component, within understood guidelines. That may be as simple as setting the expectation that any operations software will have both an execution and a notification API set.
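
As a rough illustration of that minimal expectation (the class names, event fields and method signatures below are hypothetical, not drawn from any existing standard), each operations tool would expose an execution surface that accepts commands from other components and a notification surface that lets them subscribe to its events:

```python
# Hypothetical sketch of the contract described above: every participating
# operations tool offers an execution API (accept commands) and a
# notification API (emit events that peers can subscribe to).
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable


@dataclass
class OpsEvent:
    source: str                      # which tool emitted the event
    kind: str                        # e.g. "deploy.completed" or "alert.raised"
    payload: dict = field(default_factory=dict)
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class OperationsTool(ABC):
    """The two API sets any tool in the operations arsenal would be expected to provide."""

    @abstractmethod
    def execute(self, command: str, arguments: dict) -> OpsEvent:
        """Execution API: carry out a command and report the outcome as an event."""

    @abstractmethod
    def subscribe(self, kind: str, handler: Callable[[OpsEvent], None]) -> None:
        """Notification API: let other components react to this tool's events."""
```

With even that much agreed on, a monitoring tool could react to a deployment tool’s events, and a security tool could trigger actions in either, without the vendors agreeing on anything beyond the envelope.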

Another option is a formal event taxonomy and protocol, but that option doesn’t interest me very much. Those standards tend to become outdated quickly and are far too restrictive.

One last thing: John Palfrey and Urs Gasser have written a book on interoperability, which I am in the middle of reading. So far, the most interesting aspect of the model they describe is a multi-tiered view of interoperability that supplements data and software interoperability with human and institutional interoperability. The latter two concepts are incredibly important in the new cloud-based systems world.

It’s not good enough to focus on software, protocols and APIs. We have to begin to work together as an ecosystem to overcome the human and institutional barriers to better IT interoperability. Unfortunately, lack of interoperability often benefits software vendors, and as Hoff noted above, the only thing preventing this is greed.

Photo credit: Cory Doctorow


