The term “software-defined data centers” has become a fixture in contemporary IT speak, carving out a niche as a specialty practice or research area in the industry analysis space.
Nearly 40 years ago, when I first crossed the transom of the glass house, we were all about software-defined data centers. When building our data center, we started from the software requirements specification. Actually, we started by analyzing a business process to understand how we might automate it. We modeled its information flows and entity relationships, defined operational requirements, and then developed software. This software then needed to be hosted in a way that supported its processing activity—whether on the mainframe, in a midrange box, or on an x86 server—and it needed to be provided with adequate network and storage connections to enable both user access to the app and application access to its data. We implemented a hosting platform design that satisfied application requirements, tested it, deployed it, and trained users on it.
While it has been a few years since I ran a data center, I’ve since supported many consulting clients with their data centers and have generally found their infrastructure to be designed in a similar fashion. Software requirements drive the design and selection of infrastructure components. When they don’t, things get messy and quickly become expensive.
When I recently heard the term software-defined data centers, I shrugged; I supposed it was just an alternative way to teach a time-honored concept to the newbies. Then, when the term was enshrined with its own acronym—SDDC—I started to get mad. One vendor did its best to hype the concept as something “new,” defining the term as a “data center in which all infrastructure is virtualized and delivered as a service, and the control of this data center is entirely automated by software.”
I scratched my head, wondering again how this was new. After all, we had virtualized workloads on mainframes in the early ’80s, and we had plenty of systems-managed functions going on in the box and in the infrastructure connected to it. Heck, we even had an operations calendar and job control language that let us respond in a carefully cataloged, predefined way to any cryptic query from the mainframe operating system or application—using an equally cryptic response.
So, wasn’t I working in an SDDC all those years ago? Apparently not, according to a leading purveyor of the concept—server virtualization software peddler VMware. A data center isn’t a true SDDC unless you run the vendor’s server hypervisor software—a proprietary “cloud suite.” Only by using its wares, which enable you to abstract applications and operating system software away from commodity hardware, and perhaps the wares of some of its partners, which enable you to abstract network and storage I/O constructs away from the physicality of LAN and SAN plumbing, can you realize a true SDDC.
This situation is strikingly reminiscent of what IBM was pulling in the late ’70s and early ’80s, when I entered my first mainframe data center. Then as now, vendors like to play alpha dog. Back then, IBM locked out most third-party vendors and demanded that the few it allowed into “its” data centers conform to IBM’s “de facto” standards.
Using the current SDDC concept, VMware seems to be seeking to build the same sort of lock-ins and lock-outs around its preferred technology stack. There’s no guarantee its SDDC will work with, say, Microsoft’s SDDC, or Oracle’s, or Citrix’s, or that it will be compatible with applications you choose to run on non-virtualized hardware. One technology blogger working for the vendor, after hearing some grumblings along these lines, recently wrote that we shouldn’t judge this SDDC for what it is today, but for what it may become in the future.
IT planners need to ask if now is the right time to go retro. Do we fix the problems with distributed computing by embracing another proprietary software and hardware stack—one similar to what we left in the ’80s—to get back to some sort of coherent management and allocation model?
If history repeats itself, the current generation of SDDC vendors will eventually aggravate enough consumers to cause a lurch toward an inverse technology model. Just as IBM’s mainframe hubris in the ’80s ushered in an era of distributed server-centric computing, might not the ultimate rejection of VMware’s hubris lead to some comparable explosion?
If a software-defined data center, largely automated and well managed in an application-facing manner, is the goal (and I agree it should be), why not “fix” (that is, improve the manageability of) what we have today? Why is a wholesale reinvention of computing as a proprietary stack the preferred solution to the problem?