
The architecture of the modern data center, which hosts the computational power, storage, networking, and applications that form the basis of any modern business, has changed profoundly in recent years. Most conventional data center networks were never designed to handle the workloads and applications of highly virtualized cloud data centers. Ethernet was originally conceived to connect dumb terminals through a shared network consisting of repeaters, hubs, and eventually switches. Campus networks evolved into a tree structure; as the network grew, more branches were added until the tree developed access, aggregation, and core layers. To prevent data packets from circulating through infinite loops, Spanning Tree Protocol (STP) was used to divide the network at the access layer and restrict data from taking certain paths. This approach worked so well that the industry adopted it when wiring data center networks.
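
To make the loop problem concrete, here is a brief sketch in Python of how a spanning tree leaves exactly one active path between any pair of switches by logically blocking redundant links. It uses the networkx library and a made-up five-switch topology, neither of which comes from the ODIN material; it illustrates the idea behind STP rather than the protocol itself.

# Illustration only (not STP itself): compute a loop-free spanning tree over a
# small, hypothetical switched topology, the way STP logically blocks redundant
# links so frames cannot circulate forever.
import networkx as nx

# Hypothetical campus-style topology: one core, two aggregation and two access
# switches, wired with redundant uplinks that would otherwise form loops.
g = nx.Graph()
g.add_edges_from([
    ("core", "agg1"), ("core", "agg2"),
    ("agg1", "agg2"),                    # redundant cross-link
    ("agg1", "acc1"), ("agg2", "acc1"),  # dual-homed access switches
    ("agg1", "acc2"), ("agg2", "acc2"),
])

tree = nx.minimum_spanning_tree(g)       # the loop-free active topology
active = {tuple(sorted(e)) for e in tree.edges()}
blocked = {tuple(sorted(e)) for e in g.edges()} - active

print("Active links :", sorted(active))
print("Blocked links:", sorted(blocked))  # capacity STP leaves idle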

Unfortunately, a data center network has different requirements. Most data center traffic flows east-west, between servers on the same tier of the network. This traffic pattern isn’t well-suited to a tree structure, which forces data packets to flow up and down the tree, adding latency and diminishing performance. Further, different workloads place different demands on the network. Storage traffic requires a lossless network, for example, while server clusters require high bandwidth and low latency. Finally, and perhaps most important, modern data centers are highly virtualized, often as part of a cloud computing strategy. Virtualization means more than just placing more server images into the existing data center footprint. It also means the data center network must become dynamically reconfigurable. New virtual machines can be created, modified, or destroyed in real time, and the network must keep pace with these changes. Mission-critical traffic needs to be configured with highly available, redundant network paths. Virtualized data centers also support multi-tenancy, which means different organizations can share the same physical servers and network but still need to be kept isolated from each other.
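
The east-west penalty is easy to see with a little arithmetic. The short Python sketch below (again using networkx, with hypothetical tree and leaf-spine topologies chosen purely for illustration) counts the link hops between two servers attached to different access or leaf switches.

# Hop-count comparison between two illustrative, made-up topologies: a
# three-tier tree and a two-tier leaf-spine fabric.
import networkx as nx

tree = nx.Graph([
    ("srvA", "acc1"), ("srvB", "acc2"),
    ("acc1", "agg1"), ("acc2", "agg2"),
    ("agg1", "core"), ("agg2", "core"),
])

leaf_spine = nx.Graph([
    ("srvA", "leaf1"), ("srvB", "leaf2"),
    ("leaf1", "spine1"), ("leaf1", "spine2"),
    ("leaf2", "spine1"), ("leaf2", "spine2"),
])

for name, g in [("tree", tree), ("leaf-spine", leaf_spine)]:
    hops = nx.shortest_path_length(g, "srvA", "srvB")
    print(f"{name:10s}: {hops} link hops from srvA to srvB")

# tree      : 6 link hops (up through access, aggregation and core, then back down)
# leaf-spine: 4 link hops (leaf -> spine -> leaf)

Every extra hop on the tree adds switching latency and contends for shared uplink bandwidth, which is exactly the east-west problem described above.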

It’s quickly becoming apparent that traditional campus networks can’t meet the demands of modern data centers. Many administrators must use all their resources and budget just to keep the network running, particularly in a challenging economy. This ultimately prevents the business from implementing more innovative features and applications that might create new revenue streams. 

Rethink Your Network

It’s time to rethink the data center network, but many executives find themselves at a loss when deciding how to proceed. There are real opportunities to extract more business value from the network. According to a recent Gartner study, multi-sourcing of network equipment is practical and can reduce Total Cost of Ownership (TCO) by 15 to 25 percent (see www.dell.com/downloads/global/products/pwcnt/en/Gartner-Debunking-the-Myth-of-the-Single-Vendor-Network-20101117-published.pdf). Further, a recent survey of 468 business technology professionals showed that adherence to industry standards was their second-highest requirement, behind only virtualization support. (See C.J. DeCusatis, A. Carranza et al., “Communicating within Clouds,” IEEE Communications Magazine, November 2012.) Unfortunately, realizing this value can be tricky. The networking industry is going through one of the most significant technology discontinuities in its history, with a long list of new technologies (both open standards and vendor proprietary) being introduced to displace more well-established legacy approaches.

To clear up some of this confusion, IBM has created a simple approach to transition your network from its current state into a best-of-breed, virtualized fabric. Because it is grounded in multi-vendor interoperability using existing industry standards, this approach has become known as the Open Datacenter Interoperable Network (ODIN). ODIN was announced at the May 2012 Interop conference, and is available from the IBM System Networking Website as a series of technical briefs. (See C. DeCusatis, “Towards an Open Data Center with an Interoperable Network (ODIN),” Volumes 1-5, http://www-03.ibm.com/systems/networking/solutions/odin.html; see also the IBM data networking blog at https://www-304.ibm.com/connections/blogs/DCN/ and the Twitter feed @Dr_Casimer.)

The initial volumes of ODIN deal with networking topics such as lossless Ethernet, converged networking and storage, software-defined networking and OpenFlow, Layer 2 and Layer 3 equal-cost networks, ultra-low latency, and Wide Area Networks (WANs) interconnecting multiple data centers, as seen in Figure 1. ODIN isn’t a marketing document; you won’t find mention of any IBM products, or a sales pitch urging you to buy IBM services. Instead, ODIN provides a framework for addressing the significant problems faced by today’s data center networks, and describes best practices for dealing with them using open standards. This is a powerful concept, and the market reaction has been overwhelming. In the first two weeks, more than 20,000 people read about ODIN (there’s an ongoing discussion thread at IBM’s data networking blog).
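
One of the topics those volumes cover, software-defined networking with OpenFlow, comes down to a match-action flow table that a central controller programs into the switches. The toy Python sketch below illustrates only that flow-table concept; it is not the OpenFlow protocol or any vendor’s API, and the field names, actions, and priorities are invented for illustration.

# Toy sketch of the match-action model behind OpenFlow-style SDN (not the real
# protocol): a controller installs flow entries, and the "switch" looks up each
# packet's header fields against them in priority order.
from dataclasses import dataclass

@dataclass
class FlowEntry:
    match: dict       # header fields that must match, e.g. {"dst_ip": "10.0.0.2"}
    actions: list     # e.g. ["output:2"] or ["drop"]
    priority: int = 0

class ToySwitch:
    def __init__(self):
        self.table = []

    def install(self, entry):                  # what a controller would push down
        self.table.append(entry)
        self.table.sort(key=lambda e: -e.priority)

    def forward(self, packet):
        for entry in self.table:
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.actions
        return ["send_to_controller"]          # table miss: ask the controller

sw = ToySwitch()
sw.install(FlowEntry(match={"dst_ip": "10.0.0.2"}, actions=["output:2"], priority=10))
sw.install(FlowEntry(match={"tenant": "blue"}, actions=["drop"], priority=100))

print(sw.forward({"dst_ip": "10.0.0.2", "tenant": "red"}))   # ['output:2']
print(sw.forward({"dst_ip": "10.0.0.9", "tenant": "blue"}))  # ['drop']

In a real deployment the controller and switches exchange these entries over the OpenFlow wire protocol, but the lookup logic is conceptually the same.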

Meanwhile, many major networking companies, including Brocade, Juniper, Huawei, Big Switch, NEC, ADVA and Ciena, have endorsed ODIN. Marist College has also endorsed ODIN and is conducting research on new configurations, including software-defined networking and OpenFlow. While we shouldn’t expect all these companies to implement every feature described in ODIN right away, their endorsement of open standards establishes a commitment to transforming the network and extracting more business value from the data center infrastructure.

Shortly after announcing ODIN, IBM and its networking partners began to deliver examples showing how a host of different networking standards can be combined into a single, cohesive business solution. Recent examples that illustrate many ODIN features include the IBM SAN Volume Controller (SVC) with Stretch Clusters and IBM PureSystems. To better understand this approach, let’s consider a few of these examples in more detail.

Stretching Your Storage 

IBM has announced a software bundle featuring SVC, which includes Stretch Cluster support over long distances (see B. Larson and C. DeCusatis, “SVC Stretch Clusters,” Edge 2012, Orlando, FL, June 2012, available at https://www-304.ibm.com/connections/blogs/DCN/entry/the_edge_of_storage12?lang=en_us). This is meant to address problems associated with Virtual Machine (VM) mobility over extended distances and multi-site workload deployment across data centers. VM mobility improves the availability of your applications and is a more efficient way to use limited storage resources. The most common reason for using this approach is some form of business continuity or disaster avoidance/recovery solution, including planned events such as migrating from one data center to another or eliminating downtime due to scheduled maintenance. But given an increasingly global work force, there are other good reasons to explore VM mobility. Many clients are realizing this approach provides load balancing and enhanced user performance across multiple time zones (the so-called “follow the sun” approach). Others are realizing that by moving workloads over distance, it’s possible to optimize the cost of power to run the data center, since the lowest-cost electricity is available at night; this strategy is known as “follow the moon.”
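
As a rough illustration of the “follow the moon” idea (and only that; this is not part of the SVC Stretch Cluster offering), the Python sketch below picks candidate migration targets from a set of hypothetical data center sites based on whether it is currently night there, using the standard-library zoneinfo module (Python 3.9 and later). The site names and the 10 p.m. to 6 a.m. cheap-power window are assumptions made for the example.

# "Follow the moon" placement sketch: prefer sites where it is currently
# night-time (assumed to be the off-peak, lowest-cost power window) before
# triggering a long-distance VM migration. Sites and hours are hypothetical.
from datetime import datetime
from zoneinfo import ZoneInfo

SITES = {                      # hypothetical data center locations
    "us-east": "America/New_York",
    "eu-west": "Europe/Dublin",
    "ap-east": "Asia/Tokyo",
}

def night_sites(now_utc=None):
    """Return the sites whose local hour falls in the assumed off-peak window."""
    now_utc = now_utc or datetime.now(ZoneInfo("UTC"))
    candidates = []
    for site, tz in SITES.items():
        local_hour = now_utc.astimezone(ZoneInfo(tz)).hour
        if local_hour >= 22 or local_hour < 6:   # assumed cheap-power window
            candidates.append(site)
    return candidates

print("Candidate target sites for VM migration:", night_sites())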
