Rethinking IT in the cloud computing era

I’ve been thinking a lot lately about IT and its role in the era of cloud computing, API-driven development and increasing interconnectivity. As enterprise computing moves from a server-centric to an application-centric operations model, what happens to the role of IT in a corporation? What is IT to cloud, anyway?

Nick Carr famously wrote about the lack of differentiation that IT brings to the business in his 2004 book Does IT Matter? His argument was essentially that as computing becomes an expected part of doing business, the things IT does for the business provide less and less differentiation, which means that each business owning its own information technology makes less and less sense.

If you believe Carr’s vision, cloud computing is one logical outcome. In fact, Carr himself made the argument in his subsequent 2008 book, The Big Switch, where he argued that the increasingly commoditized nature of computing would drive it toward a utility model, like electricity or water. It’s a compelling vision, and one that helped to ignite the cloud computing market we know today.

So, is it possible IT departments will fade away completely as more IT services become available from third parties online, and as there are fewer legal hurdles to creating, analyzing and taking action on data outside an organization’s own figurative four walls? I don’t think so.

I think there is a critical role for a central IT unit in organizations of any significant size (beyond help desks and device management). To understand what that role is, however, we have to explore the nature of the applications being created, acquired and operated in those organizations. We also have to explore what IT has been doing, and why that’s so disassociated from what they would be asked to do in a cloud-centric organization.

The IT we know and love

One of the problems of generalizing about IT is that any given organization probably operates somewhat differently from any other. However, there are some general trends that evolved from the client-server computing model that I would argue apply to most mature IT organizations. The most critical of these trends is what I would call “server centricity,” which might be better called “infrastructure centricity.”

Think about how computing got its start. Before you could do anything, you needed a computer. Once you had a computer, you needed an operating system, which acted as an interface between the human and the machine. With those things in place, you could now decide how you wanted to apply that computer to some form of problem (or some set of problems). That’s where application software came in.

The computer (or the switch or the storage system) was critical to this model. Without the hardware, nothing else happened.

So, IT evolved to take on running infrastructure (and, almost always, operating systems, middleware and databases) to support the applications that business units required to do their jobs. This function grew in complexity until companies spent significant budgets on data centers, infrastructure availability, inter-networking and so on. This was the most critical role IT could possibly play for the business.

IT and cloud computing

Developers — the ones who ultimately applied computing to business problems — were frustrated with the understandably limited capacity that IT had for addressing software opportunities. Standing up infrastructure is work — often expensive work — and the time and money needed to deliver it could never keep up with the so-called long tail of developer demand.

Now, however, the game has changed significantly. General infrastructure is available on a cashflow-friendly basis to anyone who wants it. Add to that the variety of innovative software tools and services that have evolved thanks to the internet, open source and the new economics of cloud computing, and developers are finding utility services a much more palatable option than internal IT for many classes of application development and deployment.

Provisioning virtual servers on Amazon Web Services.
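To make that self-service concrete, here is a minimal sketch of what launching a server looks like with the AWS SDK for Python (boto3). The AMI ID, key pair and instance type below are placeholders, not recommendations; the point is that a developer with credentials can do this in a few lines, with no ticket filed against a central IT queue.

```python
# Minimal sketch: launching a virtual server on AWS EC2 with boto3.
# The AMI ID, key pair name and instance type are placeholders;
# substitute values that are valid for your own account and region.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t2.micro",
    KeyName="my-key-pair",             # placeholder key pair
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "owner", "Value": "app-team"}],
    }],
)

print("Launched:", instances[0].id)
```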

When developers think about operations, they are most definitely focused on the applications themselves, not the infrastructure the applications are running on.

So is IT getting cut out of the loop in many organizations? Not “officially,” but it is happening, often in very stealthy ways, and increasingly in unexpected industries and companies. And while most of it happens with the two critical software classes that cloud enables — web applications at scale, and data collection and analysis — some of it is just developer frustration with IT in general.

So what is an IT department to do?

I think the answer comes in recognizing what “application-centricity” really means in a complex business. No business runs on one application. No business has only one deployment that it manages, only one executable that must meet the breadth of its computing demand. Every company runs on a system of applications: a collection of highly interconnected, interdependent software components, services and data that must all work as required in order for the company as a whole to survive and thrive.

In the era of cloud computing, what the business requires of a central IT department is coordination of the application system — aiding the various application owners with what has to happen for their software to be a “good citizen” within the computing environment as a whole.

Here are just a few key questions that IT must answer with respect to the new application system:

1. How does the company handle identity, authentication, authorization, data management, and other central security and compliance-related operations functions that must be coordinated across all of its independent operating entities? (A minimal sketch of one approach appears after this list.)

2. How does the company troubleshoot issues that happen when applications interact with each other across operating entity or even development team boundaries?

3. Is there anything that can be done independent of the individual applications to improve the health of the system as a whole?

4. Who knows the system as a whole well enough to give the appropriate advice on how to best integrate new application ideas and components?
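To make the first question a little more concrete, here is a minimal sketch, using Python and the PyJWT library, of the kind of convention a central IT group might coordinate: every application verifies tokens issued by one shared identity provider rather than inventing its own login. The issuer, audience and key handling here are hypothetical.

```python
# Sketch of centralized identity: each application validates tokens issued
# by one shared identity provider instead of rolling its own authentication.
# The issuer URL and audience are hypothetical examples.
import jwt  # PyJWT

CENTRAL_ISSUER = "https://identity.example-corp.internal"  # hypothetical
EXPECTED_AUDIENCE = "orders-service"                       # hypothetical

def authenticate_request(token: str, public_key: str) -> dict:
    """Return the verified claims, or raise if the token is not trusted."""
    return jwt.decode(
        token,
        public_key,
        algorithms=["RS256"],
        audience=EXPECTED_AUDIENCE,
        issuer=CENTRAL_ISSUER,
    )
```

The individual teams still own their applications; what IT owns is the shared provider, the key distribution and the convention itself.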

Thus, the primary role of IT moves from running infrastructure to operating software — or, more accurately, assisting developers in operating their software in a larger software system. It’s a consultative role with a number of tools and services that have to be in place and — this is very important — relevant to the developers that IT is supporting. These could be tools to visualize how applications are interconnected and the resources they’re consuming, or services that add intelligence to operations.
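As a rough sketch of the first kind of tool, the snippet below builds a dependency map from observed (caller, callee) pairs using the networkx library. In a real system the pairs would come from tracing or access logs; the application names here are invented for illustration.

```python
# Sketch: mapping which applications call which services, so IT can answer
# "what breaks if this service degrades?" The call data below is made up.
import networkx as nx

observed_calls = [
    ("storefront", "orders"),
    ("orders", "inventory"),
    ("orders", "payments"),
    ("reporting", "orders"),
]

graph = nx.DiGraph()
graph.add_edges_from(observed_calls)

# Which applications would be affected if "orders" degraded?
print("Depends on orders:", sorted(nx.ancestors(graph, "orders")))

# What does "orders" itself depend on?
print("orders depends on:", sorted(graph.successors("orders")))
```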

The core idea is that IT has to let go of trying to control everything and focus on coordinating and enhancing things that other people control. It can be done. Several online companies, including Netflix and Amazon, do it today. The result is significantly better agility, experimentation and innovation, with the trade-off that cooperation, communication and measurement are increasingly critical to success.

The scary part is that most IT organizations are still infrastructure-centric, or at least “context”-centric if you include so-called enterprise software packages. The move to application-centricity and developer self-service is going to be hard, and require some change of skills and culture. It also means that private cloud is not the most important cloud initiative that IT can take on.

I wonder how long it will be until most IT organizations figure that out.


