In-the-Lab: Full ESX/vMotion Test Lab in a Box, Part 2

August 19, 2009

In Part 1 of this series we introduced the basic Lab-in-a-Box platform and outlined how it would be used to provide the three major components of a vMotion lab: (1) shared storage, (2) a high-speed network and (3) multiple ESX hosts. If you have followed along in your lab, you should now have a working VMware ESXi 4 system with at least two drives and a properly configured network stack.

In Part 2 of this series we’re going to deploy a Virtual Storage Appliance (VSA) based on an open storage platform which uses Sun’s Zettabyte File System (ZFS) as its underpinnings. We’ve been working with Nexenta’s NexentaStor SAN operating system for some time now and will use it – with its web-based volume management – instead of deploying OpenSolaris and creating storage manually.

Part 2, Choosing a Virtual Storage Architecture

To get started on the VSA, we want to identify some key features and concepts that caused us to choose NexentaStor over a myriad of other options. These are:

  • NexentaStor is based on open storage concepts and licensing;
  • NexentaStor comes in a “free” developer’s version with 2TB of managed storage;
  • NexentaStor developer’s version includes snapshots, replication, CIFS, NFS and performance monitoring facilities;
  • NexentaStor is available in a fully supported, commercially licensed variant with very affordable $/TB licensing costs;
  • NexentaStor has proven extremely reliable and forgiving in the lab and in the field;
  • Nexenta is a VMware Technology Alliance Partner with VMware-specific plug-ins (commercial product) that facilitate the production use of NexentaStor with little administrative input;
  • Sun’s ZFS (and hence NexentaStor) was designed for commodity hardware and makes good use of additional RAM for cache as well as SSDs for read and write caching;
  • Sun’s ZFS is designed to maximize end-to-end data integrity – a key point when ALL system components live in the storage domain (i.e. virtualized);
  • Sun’s ZFS employs several “simple but advanced” architectural concepts that maximize performance on commodity hardware, increasing IOPS and reducing latency.

While the performance features of NexentaStor/ZFS are well outside the capabilities of an inexpensive “all-in-one-box” lab, the concepts behind them are important enough to touch on briefly. Once understood, the concepts behind ZFS make it a compelling architecture to use with virtualized workloads. Eric Sproul has a short slide deck on ZFS that’s worth reviewing.

ZFS and Cache – DRAM, Disks and SSDs

Legacy SAN architectures are typically split into two elements: cache and disks. While not always monolithic, the cache in legacy storage is typically a single-purpose pool set aside to hold frequently accessed blocks of storage, allowing that information to be read from (or written to) RAM instead of disk. Such caches are generally very expensive to expand (when expansion is possible at all) and may only accommodate one specific cache function (i.e. read or write, not both). Storage vendors employ many strategies to “predict” what information should stay in cache and how to manage it to improve overall storage throughput.

New cache model used by ZFS allows main memory and fast SSDs to be used as read cache and write cache, reducing the need for large DRAM cache facilities.

As in any modern system, DRAM in a ZFS system that the SAN appliance’s operating system is not directly using can be apportioned to cache. The ZFS adaptive replacement cache, or ARC, allows frequently read blocks of data to be served from main memory (at microsecond latency). Normally, an ARC read miss results in a read from disk (at millisecond latency), but an additional cache layer – the second-level ARC, or L2ARC – can be employed using very fast SSDs to increase effective cache size (and drastically reduce ARC miss penalties) without resorting to significantly larger main memory configurations.
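To make the ARC/L2ARC hierarchy a bit more concrete, here is a minimal Python sketch of the read path just described. The class name and the latency constants are illustrative assumptions for this post (not measurements, and not ZFS internals); the point is the promote-on-hit, fall-through-to-disk behavior.

    # Conceptual sketch of a two-tier read cache in the spirit of ARC/L2ARC.
    # Latency figures are rough orders of magnitude, not measurements.
    ARC_LATENCY_US = 1        # DRAM hit: microseconds
    L2ARC_LATENCY_US = 300    # read-optimized SSD hit: hundreds of microseconds
    DISK_LATENCY_US = 8000    # spinning disk: milliseconds

    class TieredReadCache:
        def __init__(self):
            self.arc = {}      # main-memory cache (block id -> data)
            self.l2arc = {}    # SSD-backed second-level cache

        def read(self, block_id, read_from_disk):
            if block_id in self.arc:                    # ARC hit
                return self.arc[block_id], ARC_LATENCY_US
            if block_id in self.l2arc:                  # ARC miss, L2ARC hit
                data = self.l2arc[block_id]
                self.arc[block_id] = data               # promote back into the ARC
                return data, L2ARC_LATENCY_US
            data = read_from_disk(block_id)             # miss in both tiers: go to disk
            self.arc[block_id] = data
            return data, DISK_LATENCY_US

    # Example: a cold read pays disk latency, a repeat read is served from the ARC.
    cache = TieredReadCache()
    print(cache.read(42, read_from_disk=lambda b: f"block-{b}"))  # (..., 8000)
    print(cache.read(42, read_from_disk=lambda b: f"block-{b}"))  # (..., 1)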

The L2ARC in ZFS sits between the ARC and disks, using fast storage to extend main-memory caching. L2ARC uses an evict-ahead policy to aggregate ARC entries and predictively push them out to flash to eliminate latency associated with ARC cache eviction.

In fact, the L2ARC is limited only by the DRAM (main memory) required for bookkeeping, at a ratio of about 50:1 for ZFS with an 8KB record size. This means that only about 10GB of additional DRAM is required to add 512GB of L2ARC (four 128GB read-optimized SSDs in a RAID0 configuration). Together with the ARC, the L2ARC allows a storage pool built from relatively few disks to perform like a much larger array of disks where read operations are concerned.
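For the curious, the arithmetic behind that figure is straightforward; the short sketch below simply applies the ~50:1 ratio quoted above to the example configuration (Python used purely as a calculator here).

    # DRAM bookkeeping required for an L2ARC at roughly 50:1 (ZFS, 8KB record size)
    l2arc_gb = 4 * 128                 # four 128GB read-optimized SSDs, striped (RAID0)
    dram_needed_gb = l2arc_gb / 50     # ~1 byte of DRAM per 50 bytes of L2ARC
    print(f"{l2arc_gb}GB of L2ARC needs roughly {dram_needed_gb:.1f}GB of DRAM")
    # -> 512GB of L2ARC needs roughly 10.2GB of DRAM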

L2ARC's evict-ahead policy aggregates ARC entries and predictively pushes them to L2ARC devices to eliminate ARC eviction latency. The L2ARC also buffers the ARC against processes that may force premature ARC eviction (e.g. a runaway application) or otherwise adversely affect performance.
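To picture what “evict-ahead” means in practice, the toy function below copies the coldest ARC entries out to the L2ARC device before the ARC actually has to evict them, so the eviction itself costs no extra I/O. This is a conceptual illustration only; the function name and data structures are our own and do not reflect the real L2ARC feed thread inside ZFS.

    from collections import OrderedDict

    def l2arc_feed(arc_lru: OrderedDict, l2arc: dict, batch_size: int = 8) -> None:
        """Toy evict-ahead pass: stage the coldest ARC entries into the L2ARC
        ahead of eviction, so a later ARC eviction is just a drop from DRAM."""
        coldest = list(arc_lru.items())[:batch_size]   # assumes coldest-first ordering
        for block_id, data in coldest:
            if block_id not in l2arc:
                l2arc[block_id] = data                 # aggregated write out to flash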

Next, the ZFS Intent Log (ZIL) and write caching…
