Storage Performance Management (SPM) is emerging as a distinct and important discipline in z/OS operations because of architectural changes in the I/O infrastructure in recent years. The growing importance of SPM stems from the three key benefits it can provide simultaneously: lower costs, improved availability, and simplified storage technology management.
Usually, an improvement in one of these areas brings a trade-off in the others. For example, better availability often comes at a higher cost or with increased complexity. By implementing the SPM best practices discussed here, you can improve all three areas, and this article will show how some of the world's largest data centers have done so.

Definition of Storage Performance Management

SPM is the process of ensuring that applications constantly receive the required performance service levels from storage hardware resources, and that storage assets are used efficiently. Effective SPM must be proactive, identifying and addressing bottlenecks before service-level disruptions occur. This requires analyzing the current and historical utilization levels of the internal components of the storage system. It also requires insight into, and control over, how well workloads are balanced across those resources, plus visibility into which of the many storage hardware options would be the best candidates for handling expected changes in those workloads.
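The proactive analysis described above can be illustrated with a small sketch. The component names, sample data, and 80 percent threshold below are purely hypothetical; a real implementation would draw on actual measurement data (for example, RMF records) rather than hard-coded samples.

```python
# A minimal sketch of proactive bottleneck detection: fit a simple trend
# to hourly utilization samples (percent busy) for each internal component
# and flag components projected to cross a threshold. All names and
# thresholds are illustrative, not taken from any real SPM tool.

def projected_utilization(samples, hours_ahead):
    """Project utilization forward with a least-squares trend line."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(xs, samples)) / var_x
    return mean_y + slope * ((n - 1 + hours_ahead) - mean_x)

def find_future_bottlenecks(component_history, threshold=80.0, hours_ahead=24):
    """Flag components expected to cross the threshold within the horizon."""
    return [name for name, samples in component_history.items()
            if projected_utilization(samples, hours_ahead) >= threshold]

history = {
    "DA-pair-3": [52, 55, 58, 61, 64, 67],   # climbing 3%/hour: flag it
    "port-0A":   [30, 31, 29, 30, 31, 30],   # flat: leave it alone
}
print(find_future_bottlenecks(history))      # → ['DA-pair-3']
```

The point of the sketch is the shift in mindset: the question is not "which component is busy now?" but "which component will be too busy before the next planning cycle?"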
SRM vs. SPM
How does Storage Performance Management (SPM) differ from Storage Resource Management (SRM), which has been around for many years? SRM primarily focuses on the issues storage administrators deal with: space utilization, allocation, policies, etc. SRM provides significant benefits that enable more efficient use of storage space.
SPM provides better utilization of storage resources from a performance perspective, taking into account response times and throughput rather than focusing on space utilization. This includes methodologies to avoid I/O performance problems. Proper SPM enables significant savings on new storage hardware while enabling existing hardware to provide acceptable response times for longer periods. These areas typically concern performance and storage engineering team members, rather than storage administrators.
SPM and SRM fit closely together. SRM is useful for achieving higher space utilization. However, without SPM to validate that the storage system and disk arrays can handle the higher throughput that accompanies higher space utilization, that consolidation can introduce unwanted I/O delays. In that sense, SPM is key to achieving the full benefits of SRM.
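The validation step can be sketched as a simple headroom check. The array figures, field names, and 70 percent safety margin below are invented for illustration; in practice these would come from measured peak capabilities and your site's own service-level targets.

```python
# Illustrative sketch only: before raising space utilization (e.g., by
# consolidating volumes onto fewer arrays), verify that the target array
# has throughput headroom. All figures and names are hypothetical.

def can_absorb(target_array, added_iops, added_mbps, safety=0.7):
    """Accept the move only if the array stays under a safety margin of
    its measured peak capability on both I/O rate and bandwidth."""
    ok_iops = (target_array["iops"] + added_iops
               <= safety * target_array["max_iops"])
    ok_mbps = (target_array["mbps"] + added_mbps
               <= safety * target_array["max_mbps"])
    return ok_iops and ok_mbps

array_7 = {"iops": 9_000, "max_iops": 20_000, "mbps": 450, "max_mbps": 800}
print(can_absorb(array_7, added_iops=4_000, added_mbps=80))   # → True
print(can_absorb(array_7, added_iops=4_000, added_mbps=200))  # → False
```

Checking both I/O rate and bandwidth matters because a consolidation that fits comfortably on one dimension can still saturate the other.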
SPM Lowers the Cost of Storage
Using storage performance management best practices can yield a substantial ROI, because most data centers spend more on storage than they need to. By using these methodologies, one of the largest banks in the U.S. documented 34 percent savings in a single storage hardware technology refresh without negative consequences to performance service levels. Savings in this range are realistic and are being achieved.
Planning hardware upgrades is an area ripe for large savings. SPM best practices will help you choose the most efficient storage configuration that still delivers the performance your workloads require. A right-sized configuration tailored to the measured characteristics of your workloads actually carries less risk of performance disruption than an oversized, more expensive configuration that was not selected with those characteristics in mind.
A common assumption is that implementing tiers of storage requires a business decision about which applications can live with slow response times, but that’s not necessarily true. The new SPM best practices help distinguish workload categories based on how active the workload data is from a hardware perspective rather than which applications are more “loved.” This approach can ensure good performance and eliminate overspending. Using measured workload characteristics to select the storage hardware tier will deliver the performance each workload requires.
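Tiering by measured activity rather than by application "importance" can be sketched as a classification on access density. The tier names, boundary values, and workload figures below are invented for illustration; real boundaries would be set from your own hardware's measured capabilities.

```python
# A hedged sketch of tier selection by access density (I/Os per second
# per allocated GB) rather than by how "loved" an application is.
# Tier boundaries and workload numbers are illustrative only.

def assign_tier(iops, allocated_gb, hot=1.0, warm=0.1):
    """Map a workload's access density to a hypothetical storage tier."""
    density = iops / allocated_gb
    if density >= hot:
        return "flash"
    if density >= warm:
        return "enterprise-disk"
    return "capacity-disk"

workloads = {
    "payroll-db": (1_200, 500),    # 2.4 I/O/s per GB
    "dw-indexes": (300, 1_000),    # 0.3 I/O/s per GB
    "batch-logs": (40, 2_000),     # 0.02 I/O/s per GB
}
for name, (iops, gb) in workloads.items():
    print(name, "->", assign_tier(iops, gb))
```

With this approach, a low-profile application whose data is genuinely hot lands on fast media, while a "critical" application with cold data does not consume premium capacity it cannot benefit from.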