How to protect your company against vanishing cloud services

When your cloud provider closes up shop without warning — like cloud database Xeround did earlier this week — a two-hour outage suddenly doesn’t look so bad. Thankfully, the marketplace for business-focused cloud services has to date been relatively free of such sudden closures (the consumer space not so much), but one has to assume Xeround won’t be the last to fold.

Think about how many other cloud database services, platform-as-a-service offerings and — if you can count that high — software-as-a-service applications have launched in the past few years. If the oft-cited statistic that 75 percent of all startups fail applies to cloud computing as much as it does to other sectors, we’re about to see a lot more sad emails warning users to move their data or find a new provider within the next month.

It sucks to think of adopting a new, presumably useful service as a significant risk, but that’s exactly what it is if your data is trapped in some proprietary format or can’t be easily exported. The tide may be turning, though.
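
One practical hedge is simply to keep your own copies. As a minimal sketch, assuming the provider exposes a standard MySQL-compatible endpoint (as Xeround did), a script like the one below could pull a timestamped dump onto infrastructure you control; the host, credentials and database name are placeholders, not real endpoints.

# Illustrative only: periodic export from a MySQL-compatible cloud database.
# The connection details below are placeholders.
import datetime
import os
import subprocess

def export_snapshot(host, user, password, database, out_dir="backups"):
    """Dump the database to a timestamped SQL file on storage you control."""
    os.makedirs(out_dir, exist_ok=True)
    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    out_file = f"{out_dir}/{database}-{stamp}.sql"
    subprocess.run(
        [
            "mysqldump",
            f"--host={host}",
            f"--user={user}",
            f"--password={password}",
            "--single-transaction",          # consistent snapshot without locking tables
            f"--result-file={out_file}",
            database,
        ],
        check=True,
    )
    return out_file

# Run daily from cron (or any scheduler) so a recent copy always exists, e.g.:
# export_snapshot("db.example-cloud-provider.com", "app_user", "secret", "orders")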

We can’t recreate cloud services, but maybe we can extend their lives

According to Mike Driscoll, founder and CEO of cloud-based analytics service Metamarkets (see disclosure), one of the major problems with cloud services today is that they’re just not designed to be easily replicated. This creates problems when customers — particularly large enterprises — approach cloud providers with contractual conditions that harken back to the era of actual on-premises software. Essentially, they want the cloud version of a software escrow account that would place the service’s source code with a trusted third party and, should the company cease operations, would allow the customer to keep running the service on its own infrastructure.

For now, the response has been to push back on those requests because it wouldn’t really be possible to run the service anywhere other than where it’s currently running. Driscoll said many SaaS applications today — his own included — are “fairly monolithic in the way they’re architected,” which means there’s a strong dependency between the applications and the cloud operating system on which they’re running. He thinks it’s possible that hybrid cloud deployments could help solve the problem (e.g., what OpenStack, Cloud Foundry and Amazon-Eucalyptus theoretically would allow for), but that a feasible hybrid model is probably a few years out.

However, services like Metamarkets also rely on a centralized data model (à la Bloomberg terminals), so much of the value is lost if customers all run their own versions on their own servers. For situations like this, Driscoll has heard it proposed that service providers put cash rather than software into an escrow account; the cash would pay for a skeleton crew to keep the service running for, say, a year, giving customers ample time to find an alternative.
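
To make that idea concrete, here is a purely hypothetical back-of-the-envelope calculation of how such a cash escrow might be sized; the crew size, salaries, infrastructure cost and runway are invented figures, not anything Driscoll or Metamarkets has specified.

# Illustrative arithmetic only; all figures are made up for demonstration.
def escrow_needed(engineers, monthly_cost_per_engineer, infra_cost_per_month, runway_months):
    """Cash required to keep a skeleton crew and the infrastructure running."""
    monthly_burn = engineers * monthly_cost_per_engineer + infra_cost_per_month
    return monthly_burn * runway_months

# e.g. 3 engineers at $15,000/month fully loaded, $20,000/month of infrastructure,
# funded for a 12-month wind-down:
print(escrow_needed(3, 15_000, 20_000, 12))   # -> 780000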

A screenshot of the Metamarkets service

Until some of these mitigation strategies get figured out, it’s probably more of the status quo for cloud adoption. Small businesses will likely assume more risk and rely heavily on cloud services, while larger companies will use them for non-mission-critical applications or when they’ve received adequate assurances of security and stability. “When you’re GE or JPMorgan,” Driscoll said, “you’re never going to create a dependency on any application that can just get unplugged.”

How insurable are you and your cloud provider?

Maybe the answer is to adopt but protect. I used Xeround’s closure as a reason to catch up with CloudInsure, a cloud-ratings firm I first covered when it was just getting started in 2011. The idea behind the company is to serve as an actuary for insurance providers that want to get into the business of insuring cloud computing customers, as they have previously done with managed hosting customers and general purchasers of IT equipment.

The way it works is by analyzing some 140 factors about both the user and the cloud provider(s) in order to assign a risk score. So, a high-risk user (e.g., one with highly regulated, very valuable data) might cost more to insure even though its cloud provider is rated as a very low risk. The inverse could be true, too, where a low-risk user could choose to deploy on a high-risk cloud service. Founder Drew Bartkiewicz said CloudInsure covers IaaS, PaaS and SaaS providers, and the financial stability of the provider is among the variables its models consider.
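
CloudInsure’s actual model isn’t public, but as a rough illustration of the idea, the sketch below blends user-side and provider-side factor scores into a single number; the factor names, weights and scale are assumptions for demonstration only, not CloudInsure’s real methodology.

# Purely hypothetical sketch of combining user-side and provider-side risk
# factors into one score. Factor names, weights and scale are invented;
# CloudInsure's real (~140-factor) model is not public.
USER_WEIGHTS = {"data_sensitivity": 0.5, "regulatory_exposure": 0.3, "recovery_readiness": 0.2}
PROVIDER_WEIGHTS = {"financial_stability": 0.4, "operational_track_record": 0.4, "data_portability": 0.2}

def weighted_score(factors, weights):
    """Weighted average of factor scores, each rated 0 (low risk) to 1 (high risk)."""
    return sum(weights[name] * factors[name] for name in weights)

def combined_risk(user_factors, provider_factors, user_share=0.5):
    """Blend the two sides; a risky user on a safe cloud can still score high."""
    return (user_share * weighted_score(user_factors, USER_WEIGHTS)
            + (1 - user_share) * weighted_score(provider_factors, PROVIDER_WEIGHTS))

# A high-risk user (regulated, valuable data) on a low-risk provider:
print(combined_risk(
    {"data_sensitivity": 0.9, "regulatory_exposure": 0.8, "recovery_readiness": 0.6},
    {"financial_stability": 0.1, "operational_track_record": 0.2, "data_portability": 0.3},
))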

Depending on the policy, insured companies would receive payouts to offset the cost of an outage, breach or closure that forced them to pay penalties to customers or regulators, or to move to another cloud provider. Insurance broker Lockton is already offering a cloud insurance product through the International Association of Cloud and Managed Service Providers, and has a partnership in place with CloudInsure as well.

CloudInsure has solidified quite a bit since we last spoke, established some significant partnerships and, Bartkiewicz told me on Thursday, is about ready to make its service a lot more public.

The insurance model could prove to be a really big deal, especially if it helps smaller cloud providers gain a foothold that allows them to flourish. Right now, a prudent CIO might opt only for services from companies he assumes aren’t going anywhere — Amazon Web Services, Microsoft, IBM and the like — whereas insurance might make it a little easier to take a risk on something that could pay bigger dividends.

Besides, it’s not as if being part of a large vendor is always a sign of stability: VMware bought and then sold an app-development technology called WaveMaker in a two-year timeframe, but it just as easily could have killed the business rather than try to sell it. I have reached out to Amazon Web Services to discuss the circumstances under which it would ever consider terminating a service, but have not received a response.

The internet never* forgets

When you look at web service closures beyond business applications, you see just how perplexing and potentially problematic the issue is. Screenshots might exist of services such as Google Reader and Posterous, but myriad dependencies on other services might make them impossible to recreate even if you had the source code. Unique file formats and other development decisions could present problems for digital archivists trying to preserve the web in a way that’s accessible to future generations.

“This is a case where the internet is more forgetful than the things that came before it,” Driscoll quipped. “The internet never forgets, until it does — and then it forgets everything.”

Disclosure: Metamarkets is a portfolio company of True Ventures, which is also an investor in GigaOM. Om Malik is also a venture partner at True.

Feature image courtesy of Shutterstock user Tom Baker.
