Posts Tagged ‘zfs’

In-the-Lab: Full ESX/vMotion Test Lab in a Box, Part 3

August 21, 2009

In Part 2 of this series we introduced the storage architecture that we would use for the foundation of our “shared storage” necessary to allow vMotion to do its magic. As we have chosen NexentaStor for our VSA storage platform, we have the choice of either NFS or iSCSI as the storage backing.

In Part 3 of this series we will install NexentaStor, make some file systems and discuss the advantages and disadvantages of NFS and iSCSI as the storage backing. By the end of this segment, we will have everything in place for the ESX and ESXi virtual machines we’ll build in the next segment.

Part 3, Building the VSA

Our lab system has 24GB of DRAM, which we will apportion as follows: 2GB of overhead for the host, 4GB for NexentaStor, 8GB for ESXi and 8GB for ESX. This leaves 2GB that can be used to support a vCenter installation at the host level.

Our lab mule was configured with 2x250GB SATA II drives, each offering roughly 230GB of VMFS-partitioned storage. After subtracting 10% for overhead, the sum of our virtual disks will be limited to roughly 415GB. Because of these size restrictions, we will try to maximize available storage while limiting our liability in case of disk failure. Therefore, we’ll plan to put the ESXi server on drive “A” and the ESX server on drive “B”, with the virtual disks of the VSA split across both the “A” and “B” disks.
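As a quick sanity check, the arithmetic above can be expressed in a few lines of Python. The figures below are simply the ones quoted in this post, not values probed from the host, so adjust them for your own hardware.

```python
# Memory and storage budget for the lab host, using the figures quoted above.

ram_gb = 24
ram_plan = {"host overhead": 2, "NexentaStor VSA": 4, "ESXi guest": 8, "ESX guest": 8}
print(f"RAM left for vCenter: {ram_gb - sum(ram_plan.values())}GB")   # 2GB

vmfs_per_drive_gb = 230   # usable VMFS on each 250GB SATA II drive
overhead = 0.10           # roughly 10% reserved for overhead
budget_gb = 2 * vmfs_per_drive_gb * (1 - overhead)
print(f"Virtual disk budget across both drives: ~{budget_gb:.0f}GB")  # ~414GB, in line with the ~415GB above
```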

Our VSA Virtual Hardware

For lab use, a VSA with 4GB RAM and 1 vCPU will suffice. Additional vCPUs would only serve to limit CPU scheduling for our virtual ESX/ESXi servers, so we’ll leave it at the minimum. Since we’re splitting storage roughly equally across the disks, and an additional 4GB was taken up on disk “A” during the installation of ESXi, we’ll place the VSA’s definition and “boot” disk on disk “B” – otherwise, we’ll interleave disk slices equally across both disks.

NexentaStor VSA virtual hardware:

  • Datastore – vLocalStor02B, 8GB vdisk size, thin provisioned, SCSI 0:0
  • Guest Operating System – Solaris, Sun Solaris 10 (64-bit)
  • Resource Allocation
    • CPU Shares – Normal, no reservation
    • Memory Shares – Normal, 4096MB reservation
  • No floppy disk
  • CD-ROM disk – mapped to ISO image of NexentaStor 2.1 EVAL, connect at power on enabled
  • Network Adapters – Three total 
    • One to “VLAN1 Mgt NAT” and
    • Two to “VLAN2000 vSAN”
  • Additional Hard Disks – 6 total
    • vLocalStor02A, 80GB vdisk, thick, SCSI 1:0, independent, persistent
    • vLocalStor02B, 80GB vdisk, thick, SCSI 2:0, independent, persistent
    • vLocalStor02A, 65GB vdisk, thick, SCSI 1:1, independent, persistent
    • vLocalStor02B, 65GB vdisk, thick, SCSI 2:1, independent, persistent
    • vLocalStor02A, 65GB vdisk, thick, SCSI 1:2, independent, persistent
    • vLocalStor02B, 65GB vdisk, thick, SCSI 2:2, independent, persistent

NOTE: It is important to realize that the virtual disks above could have been provided by VMDKs on the same disk, VMDKs spread across multiple disks, or by RDMs mapped to raw SCSI drives. If your lab chassis has multiple hot-swap bays or even just generous internal storage, you might want to try providing NexentaStor with RDMs or one-VMDK-per-disk layouts for performance testing or “near” production use. CPU, memory and storage are the basic elements of virtualization, and there is no reason that storage must be the bottleneck. For instance, this environment is GREAT for testing SSD applications on a resource-limited budget. The disk layout we chose is tallied in the short sketch below.
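Here is that tally – a minimal Python sketch using the datastore names, sizes and SCSI IDs from the list above; it simply sums how much of each local datastore the VSA consumes and confirms the near-even interleave across disks “A” and “B”.

```python
# The VSA's virtual disk layout from the list above: tally each local
# datastore to confirm the slices are interleaved roughly equally.

vsa_disks = [
    # (datastore,      size_gb, scsi_id, role)
    ("vLocalStor02B",    8, "0:0", "boot, thin"),
    ("vLocalStor02A",   80, "1:0", "data, thick"),
    ("vLocalStor02B",   80, "2:0", "data, thick"),
    ("vLocalStor02A",   65, "1:1", "data, thick"),
    ("vLocalStor02B",   65, "2:1", "data, thick"),
    ("vLocalStor02A",   65, "1:2", "data, thick"),
    ("vLocalStor02B",   65, "2:2", "data, thick"),
]

totals = {}
for datastore, size_gb, scsi_id, role in vsa_disks:
    totals[datastore] = totals.get(datastore, 0) + size_gb
    print(f"{datastore}  SCSI {scsi_id}  {size_gb:3d}GB  ({role})")

for datastore, total_gb in sorted(totals.items()):
    print(f"{datastore}: {total_gb}GB provisioned for the VSA")
```

The totals – 210GB on “A” versus 218GB on “B”, including the thin boot disk – show the rough balance we were after.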

In-the-Lab: Full ESX/vMotion Test Lab in a Box, Part 2

August 19, 2009

In Part 1 of this series we introduced the basic Lab-in-a-Box platform and outlined how it would be used to provide the three major components of a vMotion lab: (1) shared storage, (2) high speed network and (3) multiple ESX hosts. If you have followed along in your lab, you should now have an operating VMware ESXi 4 system with at least two drives and a properly configured network stack.

In Part 2 of this series we’re going to deploy a Virtual Storage Appliance (VSA) based on an open storage platform which uses Sun’s Zettabyte File System (ZFS) as its underpinnings. We’ve been working with Nexenta’s NexentaStor SAN operating system for some time now and will use it – with its web-based volume management – instead of deploying OpenSolaris and creating storage manually.

Part 2, Choosing a Virtual Storage Architecture

To get started on the VSA, we want to identify some key features and concepts that caused us to choose NexentaStor over a myriad of other options. These are:

  • NexentaStor is based on open storage concepts and licensing;
  • NexentaStor comes in a “free” developer’s version with 2TB of managed storage;
  • NexentaStor developer’s version includes snapshots, replication, CIFS, NFS and performance monitoring facilities;
  • NexentaStor is available in a fully supported, commercially licensed variant with very affordable $/TB licensing costs;
  • NexentaStor has proven extremely reliable and forgiving in the lab and in the field;
  • Nexenta is a VMware Technology Alliance Partner with VMware-specific plug-ins (commercial product) that facilitate the production use of NexentaStor with little administrative input;
  • Sun’s ZFS (and hence NexentaStor) was designed for commodity hardware and makes good use of additional RAM for cache as well as SSDs for read and write caching;
  • Sun’s ZFS is designed to maximize end-to-end data integrity – a key point when ALL system components live in the storage domain (i.e. virtualized);
  • Sun’s ZFS employs several “simple but advanced” architectural concepts that maximize performance capabilities on commodity hardware: increasing IOPS and reducing latency.

While the performance features of NexentaStor/ZFS are well outside the capabilities of an inexpensive “all-in-one-box” lab, the concepts behind them are important enough to touch on briefly. Once understood, the concepts behind ZFS make it a compelling architecture to use with virtualized workloads. Eric Sproul has a short slide deck on ZFS that’s worth reviewing.

ZFS and Cache – DRAM, Disks and SSD’s

Legacy SAN architectures are typically split into two elements: cache and disks. While not always monolithic, the cache in legacy storage is typically a single-purpose pool set aside to hold frequently accessed blocks of storage – allowing this information to be read from or written to RAM instead of disk. Such caches are generally very expensive to expand (when possible) and may only accommodate one specific cache function (i.e. read or write, not both). Storage vendors employ many strategies to “predict” what information should stay in cache and how to manage it to effectively improve overall storage throughput.

The new cache model used by ZFS allows main memory and fast SSDs to be used as read and write cache, reducing the need for large DRAM cache facilities.

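To make the hybrid model concrete, here is a toy Python sketch of the read path it implies: check the ARC in DRAM first, fall back to an SSD-backed L2ARC, and only then go to spinning disk. The latency figures are illustrative assumptions, not measurements from this lab or claims about any particular hardware.

```python
# Toy model of ZFS's hybrid read cache hierarchy: ARC (DRAM), then L2ARC (SSD),
# then spinning disk. Latencies are assumed, round-number values in microseconds.

ARC_LATENCY_US = 0.1      # DRAM hit, ~100ns (assumed)
L2ARC_LATENCY_US = 100    # SSD hit, ~0.1ms (assumed)
DISK_LATENCY_US = 8000    # 7200 RPM seek + rotation, ~8ms (assumed)

class HybridCache:
    """Look for a block in ARC, then L2ARC, then go to disk."""

    def __init__(self):
        self.arc = set()     # hot blocks held in DRAM
        self.l2arc = set()   # warm blocks spilled to SSD

    def read(self, block):
        """Return the assumed service latency for one block read, in microseconds."""
        if block in self.arc:
            return ARC_LATENCY_US
        if block in self.l2arc:
            self.arc.add(block)   # an L2ARC hit is promoted back into DRAM
            return L2ARC_LATENCY_US
        self.arc.add(block)       # cold read: fill ARC from disk
        return DISK_LATENCY_US

cache = HybridCache()
cache.l2arc.update({"blk-2", "blk-3"})
for blk in ("blk-1", "blk-2", "blk-2"):
    print(blk, cache.read(blk), "us")
```

Write acceleration works on a similar principle, with the ZFS intent log (ZIL) landing synchronous writes on fast, non-volatile media before they are flushed to the pool.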

Nexenta Turns 2.0

June 30, 2009

Beth Pariseau, Senior News Writer for SearchStorage.com, has an interview with Nexenta CEO Evan Powell about the release of NexentaStor 2.0 today. The open source storage vendor is making some fundamentally “enterprise focused” changes to its platform in this release by adding active-active high availability features and 24/7 phone support.

“Version 2.0 is Nexenta’s attempt to “cross the chasm” between the open-source community and the traditional enterprise. Chief among these new features is the ability to perform fully automated two-way high availability between ZFS server nodes. Nexenta has already made synchronous replication and manual failover available for ZFS, which doesn’t offer those features natively, Powell said. With the release of Nexenta’s High Availability 1.0 software, failover and failback to the secondary server can happen without human intervention.”

SearchStorage.com

In a related webinar and conference call today, Powell reiterated Nexenta’s support of open storage saying, “we believe that you should own your storage. Legacy vendors want to lock you into their storage platform, but with Nexenta you can take your storage to any platform that speaks ZFS.” Powell sees Nexenta’s anti-lock-in approach as part of their wider value proposition. When asked about de-duplication technology, he referred to Sun’s prototyped de-duplication technology and the promise to introduce it into the main line this summer.

Add SSD to Your ZIL

May 6, 2009

Samsung's new SSD generation using multi-level cell (MLC) flash and a multi-channel flash controller with NCQ and 128MB SDRAM cache.

Tom’s Hardware has a good review of the state of current SSD options. As discussed in previous posts, the ZFS file system offers hybrid storage capabilities out of the box. This game-changing technology allows for “holy grail” levels of price-performance, with SSD caching as the key enabler. That’s the value proposition our friends at Nexenta have been preaching.

To see what this means in a ZFS storage environment, look no further than Sun’s blog: Brendan Gregg has posted a great entry on how ZFS’ L2ARC can be committed to SSD to dramatically increase effective IOPS and drastically reduce latency. The results speak for themselves…
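For a back-of-the-envelope feel of why that works, the short Python sketch below blends SSD and disk service times by cache hit rate. The device numbers are illustrative assumptions – not figures from Brendan’s post or the Tom’s Hardware review – but the trend is the point.

```python
# Rough model of effective read latency and IOPS as an SSD read cache
# (L2ARC) absorbs a growing share of reads. Device latencies are assumed.

DISK_LAT_MS = 8.0   # 7200 RPM SATA random read (assumed)
SSD_LAT_MS = 0.1    # MLC SSD random read (assumed)

def effective(hit_rate):
    """Blend SSD and disk service times for a given cache hit rate."""
    latency_ms = hit_rate * SSD_LAT_MS + (1 - hit_rate) * DISK_LAT_MS
    iops = 1000.0 / latency_ms   # single outstanding I/O
    return latency_ms, iops

for hit_rate in (0.0, 0.5, 0.9, 0.99):
    lat, iops = effective(hit_rate)
    print(f"hit rate {hit_rate:4.0%}: ~{lat:5.2f}ms avg read latency, ~{iops:6.0f} IOPS")
```

Even a modest L2ARC hit rate takes milliseconds off the average read and multiplies effective IOPS, which is the effect Brendan demonstrates with real hardware.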

Sun Finds a Buyer in Oracle

April 21, 2009

Sun and Oracle have come to terms on a $7.4B cash deal. Oracle’s Ellison rejected a similar deal in 2003 due to bad timing and a PeopleSoft acquisition. Says Sun’s post:

“Sun and Oracle today announced a definitive agreement for Oracle to acquire Sun for $9.50 per share in cash. The Sun Board of Directors has unanimously approved the transaction. It is anticipated to close this summer.”

“The acquisition of Sun transforms the IT industry, combining best-in-class enterprise software and mission-critical computing systems,” said Oracle CEO Larry Ellison. “Oracle will be the only company that can engineer an integrated system – applications to disk – where all the pieces fit and work together so customers do not have to do it themselves. Our customers benefit as their systems integration costs go down while system performance, reliability and security go up.”

Oracle’s press release mirrors Sun’s.

SME Stack V0.1, Part 3 – Storage Solutions

January 2, 2009

If storage is the key, then shared storage is the key that opens all locks. In the early days of file servers, shared storage meant a common file store presented to users over the network infrastructure. This storage was commonly found on a DAS array – usually RAID1 or RAID5 – and managed by a general-purpose server operating system (like Windows or NetWare). Eventually, such storage paradigms adopted clustering technologies for high availability, but the underlying principles remained much the same: an extrapolation of a general-purpose implementation.

Today, shared storage means something completely different. The need for the “file servers” of old has not disappeared, but the place where stored data ultimately lives has moved from DAS to the network. Network-attached storage – in the form of filers and network block devices – is replacing DAS as companies retire legacy systems and expand their data storage and business continuity horizons. Commercial and open source software options now abound, providing the stability, performance, scalability, redundancy and feature sets that deliver increased functionality and accelerated ROI to their adopters.