IT Management

IT Sense: Software Rules

Recently, I was working with a hospital in Texas on its IT infrastructure strategy when one fellow complained that the vendor of his PACS (Picture Archiving and Communication System), the software used to manage medical imaging records, was limiting his choices of server and storage products. He said he couldn’t simply pick what he viewed as best-of-breed platform components; the software vendor would let him use only gear it had “certified” for use with its products.

The remark reminded me of the tension that has always existed between hardware and software in computing–particularly in the distributed world, where we seldom have a homogeneous infrastructure or a single vendor with the power to enforce normative constraints on IT product selection.

In the early days of mainframe-centric computing, just about everything we deployed was dictated by the OS vendor. Like the fellow in this case, some of us bristled at the inflexibility such control imposed on our decision-making.

On the other hand, everything worked reasonably well in the glass house. For the most part, the components all worked and played well together–from the terminal network to the front-end processors to the mainframe, peripherals, and software–mainly because they all came from one source. Though we may have had a few reservations, the balance between anarchy and order imposed by the software vendor was a pretty equitable one.

That is, one could argue, until the dominant vendor started to get greedy. An entrenched incumbent vendor, as we have seen all too often, can leverage its huge installed customer base to prevent innovative newcomers from gaining ground (or customers) in the market. Too often, constructs such as “certification programs” are thinly veiled efforts to maintain market share by locking out competitors.   

In one of my shops in the late ’80s, we secretly purchased plug-and-pin-compatible memory cards from EMC for use on our IBM mainframes. The cards were every bit as good as the IBM cards, worked just as well or better, and were far less costly, but they weren’t certified by IBM and might have cost us a warranty agreement had Big Blue gotten wise to our choice. (Ironically, EMC now plays much the same game with respect to third-party hardware and software add-ins on its storage boxes.)

With the advent of distributed systems, most software and operating system vendors got out of the business of dictating hardware choices. This isn’t to say they forfeited control altogether: You still need drivers for hardware you want to install in a Windows system, for example. But, by and large, software companies have been happy to sell their wares without caring much whose hardware you installed their products on.

The result of this “flexibility” has been chaotic at times, to be sure. Witness “patch management,” which has become a major thorn in the side of server administrators everywhere. But we certainly haven’t seen the anarchy predicted by diehard mainframers of a few decades ago. The software continues to run on whatever kludge of increasingly commoditized components we use to host it. When it doesn’t, we simply buy another kludge of different commodity parts.

Truth be told, hardware vendors–particularly in the storage realm–have created a different kind of control mechanism to fill the void left by the mainframe companies. Vendors of Fibre Channel SANs, for example, resist heterogeneity and cross-platform management. Just about every storage array vendor wants its customers either to forgo competitors’ products altogether or to install only those arrays that can be “managed” or controlled by the “primary” vendor’s gear. The current war in the Big Iron storage world over competing virtualization schemes is a case in point. IBM’s SAN Volume Controller, EMC’s Storage Virtualization Router, and Hitachi Data Systems’ TagmaStore are all vying to virtualize storage in the SAN, with each vendor arguing that its storage can’t be virtualized behind its competitors’ wares.

It almost makes you wish the software guys would step back up to the plate and do what some of the PACS vendors are doing in the healthcare vertical today: defining exactly what hosting their data requires and publishing a list of gear that has been tested and found capable of doing the job. Maybe then we could do some intelligent hardware purchasing again.
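To make the idea concrete, here is a minimal sketch of what purchasing against a published certification list could look like, assuming a vendor exposed its tested-gear list as structured data. Every name in it–the vendors, models, and firmware revisions–is invented for illustration; no actual PACS vendor publishes such a list in this form, so treat it as a thought experiment rather than a real interface.

```python
# Hypothetical sketch: if a software vendor published its certified-hardware
# list as machine-readable data, buyers could vet candidate gear before a
# purchase order ever went out. All entries below are invented.

CERTIFIED = {
    # (hardware vendor, model) -> minimum firmware revision the software
    # vendor claims to have tested
    ("Acme", "ServeRight 4000"): "2.1",
    ("Initech", "StorMax 900"): "5.0",
}

def is_certified(vendor: str, model: str, firmware: str) -> bool:
    """Return True if the platform appears on the certified list at or
    above the tested firmware revision (simple dotted-version compare)."""
    minimum = CERTIFIED.get((vendor, model))
    if minimum is None:
        return False  # not on the list at all
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return as_tuple(firmware) >= as_tuple(minimum)

if __name__ == "__main__":
    print(is_certified("Acme", "ServeRight 4000", "2.3"))   # True: listed, newer firmware
    print(is_certified("BestBreed", "Speedster X", "1.0"))  # False: never certified
```

The point of the exercise: a list like this constrains choice transparently, on technical grounds a buyer can inspect, rather than through opaque “certification” decrees.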

The caveat to this, however, is that the software guys aren’t immune to the incentives (or bribes) of the hardware guys. Sometimes, their certifications are based less on the suitability of the certified platforms than on kickbacks from the vendors to whom they award the certifications.

If, like me, you ever find yourself pining for a return to software vendor hegemony, just never lose sight of the fact that power corrupts. For now, it will simply take more savvy and chutzpah to work around the lock-ins and get the gear your applications need at a price you can afford. I’m looking forward to your thoughts; feel free to e-mail me at jtoigo@it-sense.org.