
SME Stack V0.1, Part 2 – Storage is Key

December 31, 2008

Storage is key to the virtual infrastructure. That’s right, storage. Network products, hypervisors and management tools all exist to connect business applications to what? Storage.

Need proof? First, the derivative product of a hypervisor-based computing platform (a composite of OS, applications and related data) is a group of files. Second, hypervisor-based computing relies on reliable, network-connected storage that meets performance targets to facilitate migration, recovery and duplication (cloning, rapid provisioning, etc.). The network element simply will not be as significant a factor as storage in determining performance or utility when TCO is calculated. Third, storage is where hypervisor and non-hypervisor technologies meet in the middle. For some time, a significant number of businesses will need to live in a hybrid world of hardware and virtual computing. The only surviving common element will be storage.

Why is DAS a dead technology? This is a question better answered with another question: how do you back up direct attached storage (DAS)? The licensing and management requirements for DAS are driving its TCO upward along with storage densities. While there is no universal consensus, both IDC and Enterprise Strategy Group agree that compounded annual storage growth rates are between 30 and 60 percent. Taking the low end as a baseline, that is more than a doubling every three years. At the high end, it is a staggering four-fold increase every three years. Recognizing this trend: how do you realistically keep up with that growth in a manageable way with DAS?
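The compounding is easy to check with a quick sketch in Python. The 30 and 60 percent rates are the analyst figures cited above; everything else is just arithmetic:

```python
# Compounded annual storage growth at the low (30%) and high (60%)
# ends of the analyst estimates, projected over three years.
def projected_capacity(base_tb, annual_rate, years):
    """Capacity needed after compounding annual growth."""
    return base_tb * (1 + annual_rate) ** years

base = 1.0  # a 1TB baseline, purely for illustration
print(projected_capacity(base, 0.30, 3))  # ~2.2x: more than doubling
print(projected_capacity(base, 0.60, 3))  # ~4.1x: roughly four-fold
```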

But isn’t SAN just another way to abstract storage to the network? Yes and no. SAN technologies that work include storage virtualization, which – at a minimum – includes storage over-subscription (sometimes called “thin provisioning”), allowing storage to be allocated before it is physically purchased or available. This enables a “just in time” approach to storage acquisition which, in turn, tends to drive per-unit storage costs down (i.e. a 1GB unit of storage will cost less in six months than it does today). By “pre-allocating” tomorrow’s storage needs without committing to the “buy” for that storage, a deployment based on scalable storage architectures will cost less today and cost less tomorrow as well.
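Here is a back-of-the-envelope sketch of the “just in time” cost argument. The 25% annual per-TB price decline and the dollar figures are illustrative assumptions, not numbers from any analyst:

```python
# Compare buying all capacity up front at today's price against
# staged purchases that benefit from declining per-TB prices.
def upfront_cost(total_tb, price_per_tb):
    """Everything bought today, at today's price."""
    return total_tb * price_per_tb

def jit_cost(increments_tb, price_per_tb, annual_price_decline, months_between):
    """Buy capacity in increments; each later increment costs less per TB."""
    cost = 0.0
    for i, tb in enumerate(increments_tb):
        years_out = i * months_between / 12
        cost += tb * price_per_tb * (1 - annual_price_decline) ** years_out
    return cost

# 2.5TB bought now vs the same 2.5TB staged over 20 months
print(upfront_cost(2.5, 1000))
print(jit_cost([1.0, 1.0, 0.5], 1000, 0.25, 10))  # less than the up-front buy
```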

[Figure: Storage needs versus time]

Let’s take the following “pure storage” deployment model as an example. A graphics design customer has 1.5TB of storage “consolidated” across four file servers with an aggregate capacity of 2TB. Their work product – and storage needs – are growing at a rate of 45% per year. Since they have no scalable storage infrastructure today, a true consolidation would take place on a storage platform that can accommodate their present need plus at least six months of growth and allow for thin provisioning with add-on storage capabilities.

Our example customer needs at least 2TB of SAN storage today, thin provisioned to 4.5TB to accommodate three years of estimated growth. Instead of investing in the 2.5TB of additional storage today, it could be added in 1TB increments every 10 months to keep ahead of growth while partaking in the ever-decreasing cost of storage (accelerated ROI). The customer’s servers can be configured today to “assume” an available storage pool of 4-8TB (assuming a maximum allocation of 2TB per server node based on arbitrary operating system limitations) and have that storage delivered (and paid for) only when it is actually needed. If storage growth tapers off due to a changing business climate, the investment will have been limited to current need.
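The purchase schedule above can be sanity-checked with a quick sketch: 1.5TB in use today, 45% annual growth, 1TB increments bought only when actual use catches up to physical capacity:

```python
# Project the example customer's usage month by month and record
# when each "just in time" 1TB increment would be purchased.
def used_tb(month, base_tb=1.5, annual_rate=0.45):
    """TB in use after `month` months of 45% compounded annual growth."""
    return base_tb * (1 + annual_rate) ** (month / 12)

physical_tb = 2.0       # capacity purchased up front
purchase_months = []    # when each 1TB increment is bought
for month in range(0, 37):
    if used_tb(month) > physical_tb:
        physical_tb += 1.0
        purchase_months.append(month)

print(purchase_months)  # increments land roughly every 10 months
print(physical_tb)      # enough to cover three years of growth
```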

What happens to the tiered storage model? It is alive and well – sort of. In legacy storage architectures, storage tiers of differing performance were not integrated together. Their use was dictated by management and provisioning policies that allowed for only rigid application of the benefits. Today, modular architectures allow mix-and-match performance structures of DRAM-based disks, solid-state disks (SSD), high-speed disks (i.e. SAS 15K), medium-speed disks (i.e. SATA/SAS 10K) and low-speed, high-efficiency disks (i.e. SATA 7.2K) to be allocated based on service policies (dynamic) instead of provisioning policies (static). By pooling resources according to policy, performance, retention, replication and snapshots can be applied to storage and associated workloads without dedicated “storage islands”, improving application-specific performance.
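As a toy illustration of service-policy (dynamic) tier selection, here is a sketch in Python. The tier names, IOPS thresholds and retention cutoffs are invented for illustration; a real array would apply such policies internally:

```python
# Pick a storage tier from service requirements (dynamic policy)
# rather than hard-wiring workloads to dedicated "storage islands".
TIERS = ["ssd", "sas-15k", "sas-10k", "sata-7.2k"]  # fastest to slowest

def select_tier(iops_needed, retention_days):
    """Map a workload's service requirements to a tier in the pool."""
    if iops_needed > 10000:
        return "ssd"            # latency-sensitive workloads
    if iops_needed > 2000:
        return "sas-15k"        # busy transactional workloads
    if retention_days > 365:
        return "sata-7.2k"      # long retention, low access: cheap tier
    return "sas-10k"            # general-purpose default

print(select_tier(15000, 30))   # a latency-sensitive workload
print(select_tier(100, 2555))   # an archival workload
```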
