IT Management

IT Sense: Welcome to 2010

The University of California at San Diego is the source of the latest data points on burgeoning data growth. In December, the school released a new installment of the “How Much Information” study, which has been refreshed several times since its inception in 1999. Back then, UC Berkeley started what has blossomed into an urban legend of technology: scary rates of data growth.

In fact, UC Berkeley’s study of the Digital Revolution was badly misinterpreted from the outset. Researchers discovered that analog information was going digital at a phenomenal rate. They estimated that a total of 11 exabytes of digital information had been created by the end of the millennium. In year one of the next millennium, that quantity had doubled to 22 exabytes.

Backers of the research from the storage industry immediately used this data to market the need for huge investments in capacity to fearful consumers. However, Berkeley threw them a curve: The preponderance of the digital revolution wasn’t occurring in businesses, but in the consumer population at large as VHS video was replaced by DVDs, vinyl record albums by digital CDs and MP3s, paper books by e-books, etc. By their estimate, less than 35 percent of data growth fell into a category that should worry IT managers at commercial organizations.

After a second round of research confirmed the trends, the storage vendors turned to industry analysts to tell the story that Berkeley wouldn’t. Pay an analyst enough money and he’ll say just about anything you want: Mainframes are dead, tape is going away, and data growth is an explosion that will soon see companies confronting a gap between capacity and new bits. The discussion has taken on a kind of Dr. Strangelove feel.

In the latest report, UC San Diego researchers made the point that individual consumers in the U.S. sucked up about 34 GB of data per day in 2008; across the entire U.S. population, that adds up to roughly 3.6 zettabytes of data in a single year. These findings referred to households and individuals enjoying YouTube videos, MP3 downloads, DVDs, Blu-ray discs, Kindle books, etc. If your vendor tries to leverage this data to explain why you need a new DASD refresh in your shop, ignore him.
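For anyone who wants to see how the per-capita figure and the national total line up, here is a quick back-of-the-envelope check. It assumes roughly 300 million U.S. consumers in 2008 and decimal storage units; it’s a sanity check of the arithmetic, not a calculation taken from the UCSD study itself:

```python
# Back-of-the-envelope check of the reported consumption figures.
# Assumptions: ~300 million U.S. consumers in 2008; decimal units (1 ZB = 10**21 bytes).
GB = 10**9
ZB = 10**21

per_person_per_day = 34 * GB      # reported per-capita daily consumption
us_consumers = 300e6              # assumed 2008 U.S. population, rounded
days_per_year = 365

total_bytes = per_person_per_day * days_per_year * us_consumers
print(f"~{total_bytes / ZB:.1f} ZB per year")  # ~3.7 ZB, in the ballpark of the study's 3.6 ZB
```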

What we can’t ignore, however, is the deeper trend revealed by the researchers. They noted that most of the technology used to create, store, and transfer this data to consumers is invisible to the end user. The only perception of cost is whatever the sticker price is on the media itself or, for the bill payer in the household, the monthly Internet and cable TV/satellite service fee.

This perception has a parallel in business today. Even in these challenging economic times, it’s often the case that users haven’t a clue about what IT costs or what the actual cost drivers are. Try implementing a chargeback arrangement across user departments that have never had one before, and you will immediately discover what I’m talking about. Thanks to UC San Diego, we now have a clue about how this misperception develops: We cultivate it in children at an early age.

The dilemma brought about by this situation has two parts. First, it means management doesn’t appreciate the cost of supporting a business with technology until it gets the bad news (the bill). Sticker shock drives cost-reduction plans and, in the worst case, focuses attention on the best-defined and largest cost centers of IT, which often means the mainframe and its associated software. It’s easy to see the mainframe cost center because efficiency has distilled its costs into a convenient line item in the accounting books. Less obvious are the costs of distributed computing, partly because those costs are spread across many line items, both soft and hard, and partly because some distributed systems costs aren’t accurately captured at all. Server virtualization costs are a key example, and the actual labor costs for supporting and administering distributed systems are a close second.

The first dilemma leads to a second: the fostering of bad choices for dealing with cost. Clouds are the latest example. Cloud computing, a simplistic marketing term for a much more complex collaborative computing and data-sharing approach, may well have its merits, but vendors have largely sold it on its ability to eliminate internal IT altogether and to place resources-on-demand safely out of view. Like all the data and games that simply flow into our kids’ greedy hands without a thought for the behind-the-scenes technology and effort that make them possible, cloud woo obscures the actual cost and complexity of enterprise IT. All management sees in vendor ads is big-letter type promoting cheap disk, cheap processing, and cheap application access, with no more need for IT.

As we enter the next decade, the challenge isn’t just to frame the value of mainframe investments, but to keep on-site, at-the-ready technology services available to the organization. Happy new decade! Here’s hoping it won’t take a cataclysmic failure to get everyone on board with the challenges and opportunities of data processing going forward.

Note the use of “data processing” instead of “Information Technology.” Perhaps it’s time to reclaim this older term, since the “IT” moniker might just be contributing to the idea that it’s all about the boxes.