Virtualization continues as a driving force behind data center optimization and the movement to private cloud infrastructure, but in many ways organizations' ability to define and enact best practices for virtual system creation, deployment, costing and deallocation has failed to keep pace.
The problem begins at the design stage; for purposes of example, we'll use a virtual Linux system. When you first define this virtual system, you have to size it. In most cases, this is a routine exercise in IT, with network or system specialists simply taking stock of the size of a Linux operating system on a physical server and applying the same sizing to a virtual image of that system on zLinux. In practice, this can create waste, because the overarching management systems on z have a broader view of that virtual Linux OS than a dedicated x86 server does. With z's ability to share resources across systems, it is likely that a virtual Linux instance can be sized smaller on z than it would be on a dedicated physical server.
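To make the point concrete, here is a minimal sizing sketch in Python. The figures, the headroom factor and the function itself are illustrative assumptions, not the output of any particular capacity-planning tool.

```python
# Illustrative sketch: derive a right-sized memory allocation for a virtual
# Linux guest from observed utilization on the physical server, rather than
# copying the physical box's full configuration. All numbers are hypothetical.

def right_size_memory(physical_memory_gb: float,
                      peak_utilization: float,
                      headroom: float = 1.25) -> float:
    """Size the guest to observed peak usage plus headroom, letting the
    hypervisor's shared resource pool absorb rare spikes."""
    return round(physical_memory_gb * peak_utilization * headroom, 1)

# Example: a physical server configured with 16 GB that peaks at 40% usage
# could reasonably start life as an ~8 GB virtual guest on z.
print(right_size_memory(16.0, 0.40))   # -> 8.0
```

The design choice is simply to size against measured demand plus a margin, and to let the shared platform, rather than the individual guest, carry the reserve capacity.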
The next issue is virtual Linux creation. The prevailing practice is to modify scripts that have already been written and executed for virtual Linux image creation. Organizations have a high degree of trust and comfort with this process, but it also requires a highly manual "assembly line" approach that is labor-intensive and prone to error. These home-grown tools often lack automation that can update virtual images and error-check new image creation to ensure the end product remains compatible with systems already running in production.
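As a sketch of the kind of error-checking such home-grown scripts often omit, the following Python fragment compares a newly built image's attributes against a production baseline before it is released. The metadata fields and version values are hypothetical, chosen only to illustrate the check.

```python
# Hypothetical pre-deployment check: compare a new image's key attributes
# against a baseline already running in production before releasing it.

PRODUCTION_BASELINE = {
    "kernel": "2.6.32",
    "glibc": "2.12",
    "arch": "s390x",
}

def validate_image(image_metadata: dict, baseline: dict = PRODUCTION_BASELINE) -> list:
    """Return a list of mismatches; an empty list means the image is
    compatible with what is already in production."""
    problems = []
    for key, expected in baseline.items():
        actual = image_metadata.get(key)
        if actual != expected:
            problems.append(f"{key}: expected {expected}, found {actual}")
    return problems

new_image = {"kernel": "2.6.32", "glibc": "2.11", "arch": "s390x"}
issues = validate_image(new_image)
if issues:
    print("Image rejected:", "; ".join(issues))
else:
    print("Image approved for deployment")
```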
The refreshing news is that tools from the distributed Linux world (SUSE Studio is one example) now offer a graphical user interface (GUI) that allows staff to clone, customize, track and version images, and to deploy them on any platform.
Once Linux images are deployed, they are consumed. Best practices are still evolving in this area: 1) figuring out how you are going to charge for the usage of these images if you operate on a chargeback basis; 2) developing image deallocation rules for end users so they understand that images not used for a certain amount of time will be deallocated; and 3) having the courage to deallocate the unused images, since there might not be adequate documentation on why they were created in the first place. The last of these is a particularly sticky IT problem, because no one wants to delete an image without knowing its history. If you don't do it, however, you risk virtual server and image sprawl, something an increasing number of sites are facing today.
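As one illustration of point 2, a deallocation rule can be as simple as flagging any image whose last recorded use falls outside an agreed retention window. The Python sketch below assumes a hypothetical inventory record and a 90-day window; both are stand-ins for whatever your site negotiates with end users.

```python
# Hypothetical deallocation rule: flag images not used within the agreed
# retention window so they can be reviewed and reclaimed.

from datetime import datetime, timedelta

RETENTION = timedelta(days=90)   # window agreed with end users in advance

def images_to_reclaim(inventory: list, now: datetime) -> list:
    """Return names of images whose last recorded use is older than RETENTION."""
    return [img["name"] for img in inventory
            if now - img["last_used"] > RETENTION]

inventory = [
    {"name": "linux-dev-01", "last_used": datetime(2011, 6, 1)},
    {"name": "linux-web-02", "last_used": datetime(2011, 12, 15)},
]
print(images_to_reclaim(inventory, datetime(2012, 1, 10)))
# -> ['linux-dev-01']
```

The important part is not the code but the agreement behind it: publishing the window in advance gives IT the standing to reclaim images even when their original purpose is undocumented.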
With 2012 poised to become yet another banner year for cloud and virtualization, CIOs and data center managers who normally don't get into the details of creating, deploying and managing virtual systems should take notice. The risk is losing some of the cost reductions first achieved when physical IT assets were virtualized. A second wave of data center cost and efficiency reviews, this time aimed at virtual assets that are beginning to sprawl, seems to be in order. It couldn't come at a better time.
Mary E. Shacklett is president of Transworld Data, a technology research and marketing/public relations firm. Her technology experience includes positions as vice president of Software Development at Summit Information Systems, a financial systems software company, and vice president of Strategic Planning and Technology at FSI International, a multi-national semiconductor company. She has been actively involved in the publishing industry for more than 20 years as an editor and writer. Voice: 360-956-9536; Email: TWD_Transworld@msn.com