IT Management

From a cursory survey of recent technology publications, it seems like everyone is trying to advance their vision of how IT will change and evolve over the next five to 10 years.

Some emphasize the growing role of external service providers, whether traditional outsourcing companies or their contemporary cousins, “public clouds.” In this vision, both eliminate the need for internal infrastructure, applications, and IT staff altogether, recasting IT as a service broker responsible for coordinating the delivery of specific technology resources and services. Others focus more narrowly on infrastructure changes to satisfy the need for faster and more manageable processing of “big data” workloads.

Opinions are like elbows, my grandfather used to say; everybody has a couple of them. Here are a couple of mine.

The appeal of outsourcing seems to cycle with the economy, and we’ve seen waves of interest surge during recessionary economies and ebb quickly once prosperity returns. Vendors, whether service bureaus in the ’80s, Application Service Providers (ASPs)/Shared Service Providers (SSPs) in the ’90s, or Infrastructure as a Service (IaaS) providers today, pose the same questions each time hard times recur: Is IT a core competency of your business? Was building an IT capability what you were really setting out to do when you started your widget-making company?

To senior executives desperately seeking ways to reduce CAPEX spending and trim OPEX costs, the answer is inevitably, “No.” They find appealing the idea that IT can be handed off to someone else and purchased as a service from a market of competing providers. They may also view IT as an expense that places as much of a drag on business initiatives as it contributes to business agility.

The deceptive part of the outsourcer’s claim isn’t the cost savings that accrue to the arrangement: Most firms realize a short-term reduction in OPEX costs (from laying off IT workers) in the first couple of years. It’s the notion that someone else can operate IT more efficiently than your own staff and resolve problems more quickly. Assuming you don’t have an IT department populated by half-wits and idiots, the idea that another provider will deliver substantially better service levels at a significantly reduced cost is far-fetched. Empirical data suggests that outsourcing a problem tends to make it worse. Only routine work can be outsourced successfully, and probably should be.

Industry watchers now say that the crowd of public cloud service providers will probably implode in favor of large vendors that operate their data centers at the edge of what used to be called the core carrier networks (e.g., the public telephony network). The only problem with this idea is that larger outsourcers are often accused by disgruntled clients of being less responsive to customer needs, simply by virtue of their sheer size. The old saw that many vendors leverage in their sales pitch (“We already buy outsourced services in the form of voice and data networks, so why not buy IT that way?”) misses the point: IT is best customized to the business and is extraordinarily difficult to deliver in a cookie-cutter fashion.

Those offering a vision of the future bound to internal technological change, such as server virtualization or the introduction of Flash SSD technology to speed up storage, have a pitch that comes a bit closer to reality. Between the lines of all the hype being peddled about server virtualization, and even storage virtualization, is the notion that we may ultimately get to where mainframes already are today: the creation of “atomic units” of technology that can be rolled out to meet changes in business process workloads.

That would be a fundamental improvement in the way we plan, deploy, allocate, and de-allocate IT resources today: a building-block model. The model already works on mainframes, given their ability to allocate storage and processors to various tasks in an ad hoc way, but the pursuit of it is driving the x86 distributed computing world in a different direction: toward monolithic computing stacks that are owned, much like the mainframe data center of a few decades ago, by a single vendor or cadre of vendors.

In both cases, what’s actually being sought is better management. With management comes the ability to tune infrastructure to meet business needs, manage assets cost-effectively, and apply services and resources selectively to the business processes that require them. In effect, service-oriented management gets us back to what we used to do: build our IT infrastructure in a manner that directly aligns with the business process, its data, and its information processing requirements. 

That’s an “old-school” vision, but it’s usually more successful than either outsourcing or pursuing the latest silver bullet technology.