Some 2.5 quintillion bytes of data are created every day, and 80 percent of it is unstructured. According to IBM, the number of servers grew sixfold and storage capacity 69-fold between 2000 and 2010. Against this backdrop, IT budgets remain flat while user demands on the data center are accelerating.

Consider, too, the results of IBM’s 2010 Global CFO Study, in which 78 percent of those surveyed identified reducing computing costs as a priority over the next three years. Meanwhile, 74 percent cited faster decision-making as a goal. This goal implies the need to capture and analyze loads of data, including Big Data (huge amounts of structured and streamed data). Big Data demands more storage. So the question becomes, how can an enterprise continue to expand its storage environment while reducing costs and adding new business analytics applications?

There are several ways to reduce data storage costs without sacrificing performance or reliability. Historically, launching a new application meant provisioning additional storage to support it, usually far more than was actually needed.

Virtualizing IT environments has helped enable servers and storage to be shared among a pool of applications, improving utilization of hardware already deployed. De-duplication and data compression solutions help reduce costs by storing less data. One of the most effective ways to reduce storage costs is storage tiering: placing the most important data on fast (but more costly) drives while moving less important data to less expensive tiers that use mechanical drives, tape, or other media.

Storage Tiering to Reduce Costs

• Maxim #1: The closer data is in proximity to a processor, the faster it can be processed.
• Maxim #2: Faster storage costs more.

Storage tiering assigns different types of data to different media based on requirements and cost considerations, with the objective of reducing total storage cost. Important information that's accessed frequently or is mission-critical (also called Tier 1 data) should be stored on more expensive Solid State Disk (SSD) or flash media. Tier 2 (less frequently accessed or older data) is usually stored on less expensive Hard Disk Drives (HDDs). Tier 3 (archival data) is typically stored on tape. By examining each application's and data type's requirements for access, performance, and reliability, administrators can optimize price/performance by assigning data to the appropriate storage tier.
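The tier assignments described above can be sketched as a simple rule set. This is an illustrative sketch only; the tier numbers, relative costs, and access-frequency thresholds are assumptions, not figures from any vendor's product.

```python
# Hypothetical sketch of tier assignment rules. Thresholds and relative
# costs are illustrative assumptions, not real product parameters.

TIERS = {
    1: {"media": "SSD/flash", "relative_cost": 10.0},  # hot, mission-critical
    2: {"media": "HDD",       "relative_cost": 2.0},   # warm, less frequent
    3: {"media": "tape",      "relative_cost": 0.5},   # cold, archival
}

def assign_tier(accesses_per_day, mission_critical=False):
    """Pick a storage tier from simple access-frequency rules."""
    if mission_critical or accesses_per_day > 100:
        return 1  # keep hot data on SSD/flash
    if accesses_per_day >= 1:
        return 2  # warm data lives on HDD
    return 3      # rarely touched data goes to tape

print(assign_tier(500))                        # hot data -> tier 1
print(assign_tier(5))                          # warm data -> tier 2
print(assign_tier(0))                          # archival data -> tier 3
print(TIERS[assign_tier(0)]["media"])          # -> tape
```

The point of the sketch is the cost trade-off: only the small fraction of data that is genuinely hot earns a place on the expensive tier.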

The first tiering solutions were manual, with storage administrators deciding where data should be stored and when it should be moved. Policy-based tiering solutions codify those decisions as rules, and data is moved automatically when a threshold is reached (e.g., files that haven't been accessed for a certain number of days).
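A threshold policy like the one just described can be expressed in a few lines. The function and field names below are hypothetical, used only to illustrate the idea of demoting files after a period of inactivity:

```python
import time

DAY = 86400  # seconds per day

def select_for_demotion(files, max_idle_days=30, now=None):
    """Return files idle longer than the policy threshold.

    Each file is a dict with a 'last_access' timestamp (illustrative
    schema; a real system would read this from filesystem metadata).
    """
    now = now if now is not None else time.time()
    return [f for f in files
            if (now - f["last_access"]) > max_idle_days * DAY]

files = [
    {"name": "report_q1", "last_access": time.time() - 45 * DAY},  # stale
    {"name": "orders_db", "last_access": time.time() - 2 * DAY},   # active
]
stale = select_for_demotion(files)
print([f["name"] for f in stale])  # -> ['report_q1']
```

A real policy engine would run such a scan on a schedule and hand the resulting list to a data mover; the value of the policy approach is that the rule, not an administrator, decides when data migrates.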

The latest products are “self-learning,” using real-time analytics to determine where data is most cost-effectively stored and moving it there automatically. These solutions can deliver substantial cost savings while improving performance with only small amounts of SSD or flash storage. Taking the human element out of data placement both optimizes placement, which is now driven by real-time analytics and algorithms, and reduces the administrative labor that accounts for much of operational cost.

SSD, the primary high-performance storage medium in use today, uses electronic interfaces compatible with HDDs but stores data persistently in integrated circuit assemblies. Because SSDs have no mechanical moving parts (no spinning disk or movable read/write heads as with HDDs), they offer higher performance and greater reliability. Most SSDs today use NAND-based flash memory, the flash technology best-suited to high-capacity data storage and long familiar in the consumer market. Declining SSD and flash costs, combined with storage tiering solutions, make SSD a much more viable option in today’s enterprise computing market.

IBM’s Easy Tier

Several vendors offer storage tiering solutions, including IBM, EMC, NetApp, and HP. IBM’s solution to storage tiering is called Easy Tier; EMC’s is Fully Automated Storage Tiering (FAST); NetApp’s is Virtual Storage Tier (VST); and HP’s is HP 3PAR Adaptive Optimization Software.
