Amazon’s DynamoDB shows hardware as means to an end

Somewhat lost in the greater story of Amazon Web Services' new DynamoDB NoSQL database is that the new service runs atop a solid-state storage system. By abstracting those SSDs underneath a higher-level application service, however, AWS has once again shown that new hardware presents greater opportunities as the foundation of managed services than as infrastructure-as-a-service alone.

AWS doing solid-state drives is a big deal in the world of cloud computing, where users have been wondering for years when the company might start offering SSDs as a service. Other cloud providers already offer bare SSDs as a service, and more certainly are thinking about it with the advent of companies such as SolidFire that are specifically targeting cloud providers with solid-state arrays. The idea is that they’ll be necessary to run I/O-intensive applications such as databases and ERP, which many large organizations consider mission-critical but which many cloud providers aren’t yet equipped to handle.

In that sense, DynamoDB is something of a curveball. It lets AWS users leverage the performance of SSDs, but as the underpinning of a new service rather than as a raw IaaS feature. That's also what makes Amazon's decision so wise. By launching a fully managed service that lets developers experience SSDs without jumping through the hoops of building, deploying and managing an SSD-based cloud application, AWS distinguishes itself from the pack, especially among the web developers that are its bread-and-butter user base.

Web developers use NoSQL databases more frequently than enterprise developers do, and NoSQL workloads demand solid-state performance. As several MongoDB users explained at last month's MongoSV conference, running on AWS's cloud has meant trading hard-disk performance for convenience and scalability; to get acceptable performance, data must be kept in memory rather than on disk, which drives up operational costs. At $1 per gigabyte per month, DynamoDB provides SSD performance at a price point far lower than relying on the native memory of EC2 instances, at least in terms of capacity costs.
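The capacity-cost argument can be made concrete with a back-of-the-envelope sketch. The DynamoDB rate below is the article's quoted $1 per gigabyte per month; the EC2 instance size and price are illustrative assumptions, not published AWS rates.

```python
import math

# Article's quoted DynamoDB storage rate: $1 per GB per month.
DYNAMODB_STORAGE_PER_GB_MONTH = 1.00

# Hypothetical in-memory alternative: a memory-heavy EC2 instance
# assumed to offer 68 GB of RAM for roughly $1,000 per month.
# (Both figures are assumptions for illustration.)
EC2_INSTANCE_RAM_GB = 68
EC2_INSTANCE_COST_PER_MONTH = 1000.00

def dynamodb_storage_cost(dataset_gb):
    """Monthly storage cost on DynamoDB at the quoted rate."""
    return dataset_gb * DYNAMODB_STORAGE_PER_GB_MONTH

def in_memory_cost(dataset_gb):
    """Monthly cost of enough assumed EC2 instances to hold the data in RAM."""
    instances = math.ceil(dataset_gb / EC2_INSTANCE_RAM_GB)
    return instances * EC2_INSTANCE_COST_PER_MONTH

if __name__ == "__main__":
    for gb in (50, 500, 5000):
        print(f"{gb} GB: DynamoDB ${dynamodb_storage_cost(gb):,.0f}/mo "
              f"vs in-memory ${in_memory_cost(gb):,.0f}/mo")
```

Under these assumed prices, a 500 GB working set costs $500 a month to store on DynamoDB versus $8,000 a month to hold across eight in-memory instances, which is the gap the article is pointing at. The comparison covers capacity only, not throughput.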

It’s services, rather than just infrastructure as a service, that most live up to the promise of cloud computing. Going forward, one can envision AWS using SSDs as the basis of numerous other services, including its rumored analytics service and maybe even for a speedier version of its Relational Database Service. Look at Elastic MapReduce. Anyone could use Amazon EC2 instances to build a virtual Hadoop cluster, but Elastic MapReduce eliminates the hassle of even managing that virtual cluster.

It seems likely AWS will eventually offer access to bare SSDs as a service for developers who prefer to build applications from non-Amazon pieces, a group that will probably always be the majority. But that's a relatively low-margin business for AWS, and a relatively low-value proposition for users. Web developers, enterprises, third-party cloud database providers, data scientists and others will always need access to cloud-based incarnations of the latest-and-greatest hardware, but cloud providers that really want to stand out will always have to look higher.

Image courtesy of Flickr user shimelle.

Related research and analysis from GigaOM Pro:

  • Infrastructure Q1: IaaS Comes Down to Earth; Big Data Takes Flight
  • How AT&T can catch Amazon Web Services
  • Quality of the cloud: best practices for ISVs


