Star Trek’s Dr. McCoy and DevOps 2.0

“Crazy way to travel – spreading a man’s molecules all over the universe.” — Dr. McCoy, Star Trek, the original series, “Obsession”

Dr. McCoy was always one of my favorite Star Trek characters. Among his many endearing quirks was a healthy skepticism of transporter technology. Being a doctor, he understood the complexity of the human body and all its constituent systems. Taking someone apart, molecule by molecule, and then reassembling them somewhere else is fraught with peril.

It’s not enough to get the skeleton and muscles right. You need the heart and lungs to be there, too, in the right places. You need the brain, down to every firing neuron and synapse. And it all has to rematerialize just so for the person to walk away from the experience. Otherwise, McCoy knew, you end up with a mass of vaguely humanoid Jell-O on the transporter room floor. Consequently, I’m certain that McCoy would have made a great DevOps manager.

DevOps is the application transporter.

DevOps is a lot like trying to build and operate a Star Trek transporter. The primary DevOps goal is to create a process and tools that can deconstruct a modern enterprise application on the development side of the universe and transport it over into the operations side of the universe. The application has to remain intact and ready to run when it gets there — reliably and on demand, as required.

As with a transporter, what seems simple on the surface is far easier said than done. When moving an application, like when moving a human, you have to make sure that you get all the pieces and that they arrive on the other side in the right configuration. Forget to move anything or reassemble the components in the wrong orientation and the application won’t function as designed.

DevOps is maturing over time and becoming more reliable, thanks in part to cloud computing. The pairing of the two technologies allows us to better capture the total essence of an application and thus transport it into production with greater reliability. Here’s why.

Old applications required manual labor.

[Image: The Star Trek transporter]

In the distant past, enterprise applications ran directly on the hardware. A large application might consist of code running on multiple servers, arranged in a three-tier topology, with load balancers, firewalls, and network switches thrown into the mix to connect things together. In this world, moving an application into production meant writing down the environmental recipe for the hardware, topology, network appliances, and software dependencies used in development and QA and then trying to reconstruct a similar, compatible environment in production.

Hardware would be configured into a topology. Operating systems would be loaded and patched. Middleware would be updated to the right revision level. Firewalls and load balancers would be configured appropriately. Finally, the application-specific code and data would be moved electronically into this environment. There were many steps, and much of the work was manual.

But this was clunky, sort of like trying to transport a human by writing down “the foot bone connected to the leg bone, the leg bone connected to the knee bone…” and then taking that down to the planet surface by shuttlecraft and using it to reassemble the person by hand. And about as successful. The early DevOps movement, DevOps 1.0, sprang from the recognition that many of these steps could be automated, and thereby made far more reliable. The goal was to “turn configuration into code” to improve reproducibility.
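To make “turn configuration into code” concrete, here is a minimal sketch of the idea in Python, using only the standard library. The package names, file path, and contents are illustrative assumptions, not a recipe from any particular product; in practice this job belongs to configuration-management tools such as Chef or Puppet.

```python
#!/usr/bin/env python
"""Minimal sketch of "configuration as code": a server's desired state is
declared as data that lives in version control and applied by a repeatable
script, instead of being typed in by hand. All names here are hypothetical."""
import subprocess

# Desired state for one node, expressed as data rather than a runbook.
NODE_SPEC = {
    "packages": ["openjdk-8-jre-headless", "nginx"],  # assumed package names
    "services": ["nginx"],
    "files": {
        "/etc/nginx/conf.d/app.conf": "proxy_pass http://127.0.0.1:8080;\n",
    },
}

def apply_spec(spec):
    """Bring the local machine to the declared state."""
    for pkg in spec["packages"]:
        subprocess.check_call(["apt-get", "install", "-y", pkg])
    for path, contents in spec["files"].items():
        with open(path, "w") as fh:
            fh.write(contents)
    for svc in spec["services"]:
        subprocess.check_call(["service", svc, "restart"])

if __name__ == "__main__":
    apply_spec(NODE_SPEC)
```

The point is not the particular commands but that the recipe is now an artifact: it can be reviewed, versioned, and replayed identically in QA and production.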

The problem is that, with traditional environments, automation can only get us so far. Standard DevOps tools can’t instantiate a new physical firewall or load balancer. The tools can’t instantiate a new physical server and wire it into a switch port. But now, pairing DevOps ideas with cloud computing changes the game completely.

DevOps turns manual labor into code.

In the cloud, all those physical limitations go away. In DevOps 2.0, the entire application topology can be turned into code, captured in electronic form and ready to be instantiated on demand. The firewall placement and configuration are captured. The load balancers are captured.

The network switches fade into the background, becoming the invisible fabric that holds the cloud together. If the application needs elasticity, new nodes are instantiated on demand. In short, we now have the technology that allows us to capture the entire enterprise application configuration and transport it from cloud to cloud.
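As an illustration of what “the entire application topology turned into code” can look like, here is a minimal sketch using boto3 against AWS. The AMI ID, subnet IDs, and port rules are placeholders and assumptions; a real deployment would more likely hand this description to a templating service than run a hand-written script.

```python
"""Minimal sketch of "topology as code" in a public cloud, using boto3 (AWS).
The AMI, subnets, and ports below are placeholders for illustration only."""
import boto3

ec2 = boto3.client("ec2")
elb = boto3.client("elbv2")

# "Firewall" placement and configuration, captured as code.
sg = ec2.create_security_group(
    GroupName="web-tier", Description="Allow HTTP to the web tier"
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Web-tier nodes, instantiated on demand; raise the counts for elasticity.
ec2.run_instances(
    ImageId="ami-PLACEHOLDER", InstanceType="t2.micro",
    MinCount=2, MaxCount=2, SecurityGroupIds=[sg["GroupId"]],
)

# The load balancer in front of the tier, also captured as code.
elb.create_load_balancer(
    Name="app-front-door",
    Subnets=["subnet-PLACEHOLDER-a", "subnet-PLACEHOLDER-b"],
    SecurityGroups=[sg["GroupId"]],
)
```

Everything that used to require a purchase order and a cable is now a few lines of text that can be run anywhere the cloud provider operates.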

Consequently, we need to update our DevOps mindset. In DevOps 1.0, we worried about building a compatible production environment to mirror our development environment. We worried about making sure server nodes were updated with the correct application logic, database configuration, middleware dependencies, and operating system patches. We then injected the final application code into that environment and crossed our fingers.

In DevOps 2.0, we can capture all that supporting scaffolding and then promote that through the software development lifecycle. We no longer worry about creating a compatible production environment into which to inject the application logic; instead, the environment is a design artifact itself and travels with the application logic from lifecycle stage to lifecycle stage. Once the application is appropriately captured, we can transport it to any cloud we want.
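A hypothetical sketch of that promotion model: one topology definition, parameterized per lifecycle stage, rendered for dev, QA, and production. All of the names, sizes, and counts below are invented for illustration.

```python
"""Minimal sketch of the environment as a design artifact: one topology
definition travels through dev, QA, and production, varying only in its
declared parameters. Names, sizes, and counts are hypothetical."""
import json

TOPOLOGY = {
    "web_tier": {"instance_type": "{web_size}", "count": "{web_count}"},
    "app_tier": {"instance_type": "{app_size}", "count": "{app_count}"},
    "firewall": {"allow": [{"port": 80, "cidr": "0.0.0.0/0"}]},
}

STAGES = {
    "dev":  {"web_size": "small",  "web_count": 1, "app_size": "small",  "app_count": 1},
    "qa":   {"web_size": "medium", "web_count": 2, "app_size": "medium", "app_count": 2},
    "prod": {"web_size": "large",  "web_count": 4, "app_size": "large",  "app_count": 4},
}

def render(topology, params):
    """Resolve the shared topology definition for one lifecycle stage."""
    text = json.dumps(topology)
    for key, value in params.items():
        text = text.replace('"{%s}"' % key, json.dumps(value))
    return json.loads(text)

if __name__ == "__main__":
    for stage, params in STAGES.items():
        print(stage, render(TOPOLOGY, params))
```

The same artifact that described the development environment describes production; only the declared parameters change, so there is no separate, hand-built production recipe to drift out of sync.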

And I know what you’re thinking: “Dude, are you smoking something? Every Trekkie knows Scotty would have been the most awesome DevOps engineer, not Dr. McCoy!” And you’re right. Scotty was the engineer you would hire to run the thing day by day, and fix it when it broke. But Dr. McCoy would have been the better manager. He was the guy who understood the end result you were after; he was the guy you called when somebody ended up as Jell-O on the transporter room floor.

Dave Roberts is SVP of Strategy and Evangelism at ServiceMesh. He blogs here, and tweets as @sandhillstrat.

Image of Dr. McCoy courtesy of CBS Studios.
