Recently, I was asked to create a seminar on risk avoidance and business continuity planning for software-defined data centers (SDDCs). I found the request vaguely amusing, since I’d been saying for some time that SDDCs were nothing new; that since I entered “data processing” back in the early ’80s, we had always architected data center infrastructure, services and processes to meet the needs of business applications and databases. In other words, we had always had software defining our data center.
That wasn’t what the SDDC evangelists wanted to hear, however. Such old-school design concepts were exactly what was preventing IT generally, and data centers especially, from being “agile,” “responsive” and “dynamic.” To achieve these highly prized goals, you need to embrace “software-defined everything”—servers, networks and storage all virtualized, pooled and automated.
Of course, this is simply a rehash of what we’ve been hearing from the same folks for a little more than a decade. It started with discussions of server virtualization to correct the problem of server sprawl and inefficiency. That mutated into cloud-speak when the great recession hit: Clouds used virtualization to create multitenant IT outsourcing services of the sort that are always popular during recessions (see service bureau computing during the Reagan recession and application service providers [ASPs]/shared service providers [SSPs] during the post-dotcom meltdown in the early 2000s) but lose their luster when the economy starts to right itself. That might just be happening now, since vendors that were promoting clouds a few years ago have morphed their marketing campaigns to call cloudy stuff software-defined stuff.
Anyway, SDDCs are supposed to be something new and, more important, a sea change from traditional data centers. I’ve been researching this supposed change for a few months and, slow learner that I am, I don’t see it.
Software-defined servers, essentially server kit running a hypervisor to provide a layer of abstraction so that workloads can move from box to box as needed for resource optimization, sort of make sense. However, this virtualization/abstraction is nowhere near as resilient as mainframe LPAR-based virtualization and multitenancy; Big Iron, after all, has had 30 years to work out the bugs in such strategies. Yet, to hear the SDDC advocates talk, Big Iron is old school and not fresh enough to fit with contemporary concepts of server abstraction.
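To make the abstraction concrete, here is a toy sketch of the idea: because the hypervisor decouples a workload from any particular physical box, a scheduler can relocate it to whichever host has the most headroom. Everything here (class names, the `migrate` helper, the capacity units) is invented for illustration; no real hypervisor API looks like this.

```python
# Toy model of hypervisor-enabled workload mobility: workloads are
# decoupled from physical hosts, so a scheduler can move them to
# whichever box has spare capacity. Purely illustrative names/units.

class Host:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.vms = {}  # vm name -> resource demand

    @property
    def headroom(self):
        return self.capacity - sum(self.vms.values())

def migrate(vm, demand, src, hosts):
    """Move a workload off src to the host with the most headroom."""
    target = max((h for h in hosts if h is not src),
                 key=lambda h: h.headroom)
    if target.headroom < demand:
        raise RuntimeError("no host can absorb the workload")
    del src.vms[vm]
    target.vms[vm] = demand
    return target.name

a, b, c = Host("a", 100), Host("b", 100), Host("c", 100)
a.vms["web"] = 80
b.vms["db"] = 60
new_home = migrate("web", 80, a, [a, b, c])  # lands on c, the emptiest box
```

The point of the sketch is the decoupling itself: nothing about the "web" workload changes when it moves; only the mapping of workload to box does.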
Then there’s the software-defined network (SDN). The notion of separating the control plane from the data plane in network hardware, so that a unified controller simply directs generic networking boxes to route packets wherever they need to go, looks interesting on paper. However, Cisco Systems just delivered an overdue spoiler by announcing it wasn’t about to participate in an “open source race to the bottom” as represented by the OpenFlow effort: Network devices should carry value-add features on the device itself, not defer to a generic set of services in an open source controller node, according to the San Jose networking company. The alternative Cisco proposes is already being submitted to the Internet Engineering Task Force for adoption as a competing standard.
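The control-plane/data-plane split can be sketched in a few lines. This is a deliberately simplified model, not OpenFlow or any real controller API: the switches hold only dumb match-action tables, and all the routing intelligence lives in the central controller that programs them.

```python
# Minimal sketch of the SDN split described above: a central controller
# (control plane) computes routes and installs match-action rules into
# otherwise generic switches (data plane), which only forward per-rule.
# All names here are illustrative, not any real OpenFlow API.

class Switch:
    """Data plane: forwards strictly according to installed rules."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # destination address -> output port

    def install_rule(self, dst, port):
        self.flow_table[dst] = port

    def forward(self, dst):
        # A real switch would punt unknown destinations to the controller.
        return self.flow_table.get(dst, "punt-to-controller")

class Controller:
    """Control plane: holds the routing logic, programs every switch."""
    def __init__(self):
        self.switches = []

    def register(self, switch):
        self.switches.append(switch)

    def set_route(self, dst, port_map):
        # port_map: switch name -> output port for this destination
        for sw in self.switches:
            if sw.name in port_map:
                sw.install_rule(dst, port_map[sw.name])

ctrl = Controller()
s1, s2 = Switch("s1"), Switch("s2")
ctrl.register(s1)
ctrl.register(s2)
ctrl.set_route("10.0.0.5", {"s1": 2, "s2": 7})
```

Cisco’s objection, in these terms, is that the interesting features belong inside the `Switch`, not in a commodity `Controller` everyone shares.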
Finally, there’s software-defined storage (SDS). EMC is claiming the notion as its own, even though storage virtualization has been available for more than 18 years from companies ranging from DataCore Software, with its hardware- and hypervisor-agnostic SANsymphony-V, to IBM, with its hardware-centric SAN Volume Controller. In EMC’s reinvention of the idea, we’re told that those other guys are doing it wrong: SDS is about centralizing storage services, not aggregating storage capacity so it can be parceled out as virtual volumes the way DataCore, IBM and a few others do it today. EMC offers no real explanation for why storage virtualization doesn’t qualify as SDS, but clearly the idea doesn’t fit EMC/VMware’s strategy of breaking up SANs in favor of direct-attached storage, or the still-evolving VSAN shared direct-attached architecture. With all the proprietary replication that a VSAN environment will require to facilitate vMotion and HA failover, the strategy should sell a lot of hardware.
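For readers who haven’t met storage virtualization, the aggregate-and-carve approach the column contrasts with EMC’s pitch can be sketched as follows. This is a rough conceptual model of what products like SANsymphony-V or SAN Volume Controller do; the class, device names and capacities are invented for illustration.

```python
# Illustrative sketch of storage virtualization: capacity from
# heterogeneous arrays is aggregated into one pool, then carved out
# ("parceled out") as virtual volumes. Consumers see logical volumes,
# not the physical arrays behind them. Names/sizes are made up.

class StoragePool:
    def __init__(self):
        self.devices = {}  # device name -> raw capacity in GB
        self.volumes = {}  # virtual volume name -> GB allocated

    def add_device(self, name, capacity_gb):
        self.devices[name] = capacity_gb

    @property
    def free_gb(self):
        return sum(self.devices.values()) - sum(self.volumes.values())

    def create_volume(self, name, size_gb):
        # The volume may span several physical arrays; the consumer
        # never knows or cares which ones.
        if size_gb > self.free_gb:
            raise ValueError("insufficient pooled capacity")
        self.volumes[name] = size_gb

pool = StoragePool()
pool.add_device("array_1", 500)   # e.g., one vendor's box
pool.add_device("array_2", 300)   # another vendor's box
pool.create_volume("datastore", 600)  # logically spans both arrays
```

Note that the 600 GB volume is larger than either physical array, which is precisely the aggregation that EMC’s services-centric definition of SDS leaves out.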
Bottom line: The whole SDDC thing looks like a house of cards that couldn’t withstand even the slightest breeze. So, those who claim the architecture is highly available, and thus obviates the need for continuity planning, are pulling our collective leg. At the end of the day, the best assessment of the necessary architectural components of an SDDC was articulated by the National Institute of Standards and Technology (NIST) in its discussion of Infrastructure as a Service (IaaS). Just Google it. You will find that the abstracted, pooled and automated resource cobble must be accompanied by a number of well-defined operational, administrative and business procedures and processes to deliver any real value.
Heck, if you squint real hard when you look at the NIST IaaS model, it looks an awful lot like, well, a traditional data center.