
Solid State Drives (SSDs) based on flash memory are the premier enterprise-class storage devices of the near future. Chances are you will be implementing SSDs in the next few years, even if the technology seems expensive today. This article explains the technology behind SSDs and provides practical implementation strategies, considering both the performance improvements SSDs provide and the reduction in spinning hard disk activity that results from moving the most active data sets to solid state. We’ll examine the pros and cons of migrating individual data sets, logical volumes, or complete storage groups, and show the reductions in component count possible with SSDs based on actual z/OS workload data.

SSDs are an important new storage technology because of their tremendous performance benefits, in particular for random I/O, their high read data rates, and their lack of moving parts. SSD remains an expensive technology, and current enterprise storage system offerings aren’t optimized for SSDs. For example, device adapters aren’t designed for the massive data rates each SSD can sustain, requiring you to configure “short strings” when using SSDs. Here, we’ll discuss strategies you can use to achieve significant performance gains with a limited number of SSDs in your storage hierarchy. Such configurations also may be less expensive because they use fewer components.

SSD Cost Savings 

Enterprise storage systems that use SSDs can use fewer components and less power while providing better performance. Consider a 20TB disk subsystem where 10 percent of the volumes are handling 90 percent of the I/O load: 2TB with 9,000 I/Os per second and 18TB with 1,000 I/Os per second. If we size a storage system with only Hard Disk Drives (HDDs) for this workload, we would mix the active and not-so-active volumes on one set of physical disks using, for example, horizontal storage groups. This would give us a 20TB storage system handling 10,000 I/Os per second, for an overall access density of 0.5 I/Os per second per GB. This access density typically requires 146GB drives. To configure 20TB, 20 array groups with eight 146GB drives each will be needed, for a total of 160 drives.

With an SSD solution, we would instead put all active volumes together in one SSD pool rather than spreading them over all disks. The 2TB with 9,000 I/Os per second now requires only two 146GB SSD array groups. The other volumes have a low access density, so large-capacity HDDs can be used for them; today, that would be six 450GB HDD array groups. So, the storage system that contains SSDs requires only 64 drives total.

Using SSDs eliminated the need for 96 of the 160 drives. Whether fewer disk controllers are required depends on the workload intensity the controllers can handle. In addition to reducing the component count, we have likely also greatly reduced response time: read misses in our heavy workload will now see a response time of around 1ms.
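To make the arithmetic concrete, here is a minimal sizing sketch in Python that reproduces the drive counts above. The capacities, workload split, and array-group size of eight drives come from the example; the assumption that roughly seven of the eight drives in a group hold data (RAID overhead) is ours, used only to round group counts the same way the example does.

```python
# Back-of-the-envelope sizing sketch for the 20TB example above.
import math

TOTAL_TB, TOTAL_IOPS = 20.0, 10_000
ACTIVE_TB, ACTIVE_IOPS = 2.0, 9_000   # 10% of capacity, 90% of the I/O load
QUIET_TB, QUIET_IOPS = 18.0, 1_000    # the remaining volumes

DRIVES_PER_GROUP = 8

def groups_needed(capacity_tb, drive_gb, usable_drives=7):
    """Array groups needed, assuming ~7 of 8 drives hold data (RAID overhead)."""
    usable_tb_per_group = usable_drives * drive_gb / 1000.0
    return math.ceil(capacity_tb / usable_tb_per_group)

# HDD-only configuration: everything on 146GB drives
hdd_only_groups = groups_needed(TOTAL_TB, 146)        # -> 20 array groups
hdd_only_drives = hdd_only_groups * DRIVES_PER_GROUP  # -> 160 drives
print("HDD only:", hdd_only_drives, "drives, access density",
      TOTAL_IOPS / (TOTAL_TB * 1000), "I/Os per second per GB")

# Hybrid configuration: active 2TB on SSD, quiet 18TB on 450GB HDDs
ssd_groups = groups_needed(ACTIVE_TB, 146)            # -> 2 SSD array groups
hdd_groups = groups_needed(QUIET_TB, 450)             # -> 6 HDD array groups
hybrid_drives = (ssd_groups + hdd_groups) * DRIVES_PER_GROUP  # -> 64 drives
print("Hybrid  :", hybrid_drives, "drives, saving",
      hdd_only_drives - hybrid_drives, "drives")
```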

Selecting and Justifying SSDs 

Which workloads are good candidates for SSDs largely depends on price levels. Since SSD technology provides much better performance, any data set is going to be a candidate when the price is right. With current pricing, we’ll need to select the data that’s most important to our business or that saves the most money for our IT organization. There are two different approaches you can use to decide on the best candidates for placement on SSDs:

  • Look at the highest potential response time improvement after the move. This means you move workloads that have a significant disconnect time (caused by the back-end) to the SSD drives; typically, these are workloads with high random read-miss content. Using that selection criterion, highly back-end-intensive workloads such as sequential reads or sequential writes would not be offloaded to the SSD drives, since they typically have zero disconnect time. Those would still place a large load on your spinning drives.
  • Look at overall throughput potential. Here, the goal is to reduce the back-end activity on the spinning disks as much as possible by moving the back-end-intensive workloads to the fast SSD array groups where they belong. There’s an initial response time improvement here as well, but the focus is on maximizing the throughput potential of the complete configuration. With this approach, the workload remaining on HDDs can be placed on high-capacity HDDs, quite likely as large as 450GB.

The first approach will give you more response time improvement per dollar invested in SSDs, while the second approach will improve the throughput of the storage system by offloading the HDDs, allowing you to use larger (and therefore fewer) physical disks. In both cases, the SSDs are likely to contain your favorite data sets; in the second case, they will also hold highly active work or log files.
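As a concrete illustration of the two approaches, the Python sketch below ranks candidate volumes either by the disconnect time SSDs would remove (the first approach) or by the back-end load they would take off the spinning disks (the second approach). The volume names and metric values are invented, and the field names are simply stand-ins for whatever your performance reporting tools provide; this is not the API of any specific product.

```python
# Illustrative sketch only: per-volume metrics (random read-miss rate,
# average disconnect time, total back-end I/O rate) are hypothetical
# stand-ins for data you would pull from your own performance reports.
from dataclasses import dataclass

@dataclass
class Volume:
    name: str
    read_miss_per_sec: float   # random read misses served from the back end
    disconnect_ms: float       # average disconnect time per I/O
    backend_io_per_sec: float  # total back-end I/O, incl. sequential and destage

def rank_by_response_time(volumes):
    """Approach 1: favor volumes where SSDs remove the most disconnect time."""
    return sorted(volumes,
                  key=lambda v: v.read_miss_per_sec * v.disconnect_ms,
                  reverse=True)

def rank_by_backend_offload(volumes):
    """Approach 2: favor volumes that generate the most back-end work,
    so the remaining HDDs can be fewer and larger."""
    return sorted(volumes, key=lambda v: v.backend_io_per_sec, reverse=True)

volumes = [
    Volume("DB2A01", read_miss_per_sec=800, disconnect_ms=6.0, backend_io_per_sec=900),
    Volume("LOGS01", read_miss_per_sec=50,  disconnect_ms=0.5, backend_io_per_sec=1500),
    Volume("BATCH7", read_miss_per_sec=20,  disconnect_ms=1.0, backend_io_per_sec=100),
]

print([v.name for v in rank_by_response_time(volumes)])    # DB2A01 ranks first
print([v.name for v in rank_by_backend_offload(volumes)])  # LOGS01 ranks first
```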

Which method you use will also depend on your reasons for moving to solid state:
