Posts Tagged ‘alternative to freenas’

In-the-Lab: Full ESX/vMotion Test Lab in a Box, Part 3

August 21, 2009

In Part 2 of this series we introduced the storage architecture that we would use for the foundation of our “shared storage” necessary to allow vMotion to do its magic. As we have chosen NexentaStor for our VSA storage platform, we have the choice of either NFS or iSCSI as the storage backing.

In Part 3 of this series we will install NexentaStor, make some file systems and discuss the advantages and disadvantages of NFS and iSCSI as the storage backing. By the end of this segment, we will have everything in place for the ESX and ESXi virtual machines we’ll build in the next segment.

Part 3, Building the VSA

Our lab system has 24GB of RAM, which we will apportion as follows: 2GB of overhead for the host, 4GB for NexentaStor, 8GB for ESXi and 8GB for ESX. This leaves 2GB to support a vCenter installation at the host level.

Our lab mule was configured with two 250GB SATA II drives, each offering roughly 230GB of VMFS-partitioned storage. Subtracting 10% for overhead, the sum of our virtual disks will be limited to about 415GB. Given these size restrictions, we will try to maximize available storage while limiting our liability in case of a disk failure. Therefore, we’ll plan to put the ESXi server on drive “A” and the ESX server on drive “B”, with the virtual disks of the VSA split across both the “A” and “B” disks.
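To keep the arithmetic straight, here is a quick back-of-the-envelope sketch in Python (a throw-away snippet written for this article, not part of any VMware or NexentaStor tooling) that tallies the RAM apportionment and the virtual disk budget described above:

```python
# Back-of-the-envelope budget for the lab host, using the figures quoted above.

ram_total_gb = 24
ram_plan_gb = {"host overhead": 2, "NexentaStor VSA": 4, "ESXi guest": 8, "ESX guest": 8}
ram_left_gb = ram_total_gb - sum(ram_plan_gb.values())
print(f"RAM left for a host-level vCenter installation: {ram_left_gb} GB")  # 2 GB

# Two 250GB SATA II drives yield roughly 230GB of VMFS each; hold back ~10% for overhead.
vmfs_per_disk_gb = 230
disk_budget_gb = 2 * vmfs_per_disk_gb * 0.90
print(f"Budget for the sum of our virtual disks: ~{disk_budget_gb:.0f} GB")  # ~414GB, i.e. the ~415GB ceiling above
```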

Our VSA Virtual Hardware

For lab use, a VSA with 4GB RAM and 1 vCPU will suffice. Additional vCPUs would only compete with our virtual ESX/ESXi servers for CPU scheduling, so we’ll leave it at the minimum. Since we’re splitting storage roughly equally across the disks, and an additional 4GB of disk “A” was taken up during the installation of ESXi, we’ll place the VSA’s definition and “boot” disk on disk “B”; otherwise, we’ll interleave the disk slices equally across both disks.

NexentaStor VSA virtual hardware:

  • Datastore – vLocalStor02B, 8GB vdisk size, thin provisioned, SCSI 0:0
  • Guest Operating System – Solaris, Sun Solaris 10 (64-bit)
  • Resource Allocation
    • CPU Shares – Normal, no reservation
    • Memory Shares – Normal, 4096MB reservation
  • No floppy disk
  • CD-ROM drive – mapped to the NexentaStor 2.1 EVAL ISO image, “connect at power on” enabled
  • Network Adapters – Three total 
    • One to “VLAN1 Mgt NAT” and
    • Two to “VLAN2000 vSAN”
  • Additional Hard Disks – 6 total
    • vLocalStor02A, 80GB vdisk, thick, SCSI 1:0, independent, persistent
    • vLocalStor02B, 80GB vdisk, thick, SCSI 2:0, independent, persistent
    • vLocalStor02A, 65GB vdisk, thick, SCSI 1:1, independent, persistent
    • vLocalStor02B, 65GB vdisk, thick, SCSI 2:1, independent, persistent
    • vLocalStor02A, 65GB vdisk, thick, SCSI 1:2, independent, persistent
    • vLocalStor02B, 65GB vdisk, thick, SCSI 2:2, independent, persistent

NOTE: It is important to realize that the virtual disks above could have been provided by VMDKs on the same disk, by VMDKs spread across multiple disks, or by RDMs mapped to raw SCSI drives. If your lab chassis has multiple hot-swap bays or just generous internal storage, you might want to provide NexentaStor with RDMs or one VMDK per physical disk for performance testing or “near” production use. CPU, memory and storage are the basic elements of virtualization, and there is no reason that storage must be the bottleneck. For instance, this environment is GREAT for testing SSD applications on a resource-limited budget.
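As a quick sanity check on the layout above, the snippet below (again just an illustrative sketch, not a tool that ships with ESX or NexentaStor) tallies the thick-provisioned VSA data disks per local datastore; the 8GB “boot” vdisk on vLocalStor02B is thin provisioned and only consumes what NexentaStor actually writes to it:

```python
# Thick-provisioned VSA data disks from the hardware list above (sizes in GB).
vsa_data_disks = [
    ("vLocalStor02A", 80), ("vLocalStor02B", 80),
    ("vLocalStor02A", 65), ("vLocalStor02B", 65),
    ("vLocalStor02A", 65), ("vLocalStor02B", 65),
]

per_datastore_gb = {}
for datastore, size_gb in vsa_data_disks:
    per_datastore_gb[datastore] = per_datastore_gb.get(datastore, 0) + size_gb

for datastore, total_gb in sorted(per_datastore_gb.items()):
    print(f"{datastore}: {total_gb} GB of thick vdisks")  # 210 GB on each datastore
```

The matching 210GB totals confirm that the VSA’s data slices really are interleaved evenly across disks “A” and “B”.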


In-the-Lab: Full ESX/vMotion Test Lab in a Box, Part 2

August 19, 2009

In Part 1 of this series we introduced the basic Lab-in-a-Box platform and outlined how it would be used to provide the three major components of a vMotion lab: (1) shared storage, (2) high speed network and (3) multiple ESX hosts. If you have followed along in your lab, you should now have an operating VMware ESXi 4 system with at least two drives and a properly configured network stack.

In Part 2 of this series we’re going to deploy a Virtual Storage Appliance (VSA) based on an open storage platform which uses Sun’s Zettabyte File System (ZFS) as its underpinnings. We’ve been working with Nexenta’s NexentaStor SAN operating system for some time now and will use it – with its web-based volume management – instead of deploying OpenSolaris and creating the storage manually.

Part 2, Choosing a Virtual Storage Architecture

To get started on the VSA, we want to identify some key features and concepts that caused us to choose NexentaStor over a myriad of other options. These are:

  • NexentaStor is based on open storage concepts and licensing;
  • NexentaStor comes in a “free” developer’s version with 4TB of managed storage;
  • NexentaStor developer’s version includes snapshots, replication, CIFS, NFS and performance monitoring facilities;
  • NexentaStor is available in a fully supported, commercially licensed variant with very affordable $/TB licensing costs;
  • NexentaStor has proven extremely reliable and forgiving in the lab and in the field;
  • Nexenta is a VMware Technology Alliance Partner with VMware-specific plug-ins (commercial product) that facilitate the production use of NexentaStor with little administrative input;
  • Sun’s ZFS (and hence NexentaStor) was designed for commodity hardware and makes good use of additional RAM for cache as well as SSDs for read and write caching;
  • Sun’s ZFS is designed to maximize end-to-end data integrity – a key point when ALL system components live in the storage domain (i.e. virtualized);
  • Sun’s ZFS employs several “simple but advanced” architectural concepts that maximize performance on commodity hardware by increasing IOPS and reducing latency;

While the performance features of NexentaStor/ZFS are well outside the capabilities of an inexpensive “all-in-one-box” lab, the concepts behind them are important enough to touch on briefly. Once understood, the concepts behind ZFS make it a compelling architecture to use with virtualized workloads. Eric Sproul has a short slide deck on ZFS that’s worth reviewing.

ZFS and Cache – DRAM, Disks and SSD’s

Legacy SAN architectures are typically split into two elements: cache and disks. While not always monolithic, the cache in legacy storage is typically a single-purpose pool set aside to hold frequently accessed blocks of storage, allowing that information to be read from and written to RAM instead of disk. Such caches are generally very expensive to expand (when expansion is possible at all) and may accommodate only one specific cache function (i.e. read or write, not both). Storage vendors employ many strategies to “predict” what information should stay in cache and how to manage it to improve overall storage throughput.

The new cache model used by ZFS allows main memory and fast SSDs to be used as read and write cache, reducing the need for large, dedicated DRAM cache facilities.

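To make the tiered idea concrete, here is a toy sketch of a two-level read cache: a small “DRAM” tier backed by a larger “SSD” tier sitting in front of spinning disk. It is only an illustration of the concept, written for this article; ZFS’s actual ARC and L2ARC use far more sophisticated eviction and prefetch logic than the plain LRU shown here, and ZFS write caching (the intent log) is a separate mechanism not modeled at all.

```python
from collections import OrderedDict

class TieredReadCache:
    """Toy two-tier read cache: a small "DRAM" tier backed by a larger "SSD" tier.

    Illustrative only; ZFS's ARC/L2ARC use far smarter eviction than this plain LRU.
    """

    def __init__(self, dram_blocks, ssd_blocks):
        self.dram = OrderedDict()   # fast but scarce tier (main memory)
        self.ssd = OrderedDict()    # slower but roomier tier (flash)
        self.dram_blocks, self.ssd_blocks = dram_blocks, ssd_blocks

    def read(self, block, read_from_disk):
        if block in self.dram:                # DRAM hit: the cheapest possible read
            self.dram.move_to_end(block)
            return self.dram[block]
        if block in self.ssd:                 # SSD hit: promote the block back to DRAM
            data = self.ssd.pop(block)
        else:                                 # miss in both tiers: pay for a disk read
            data = read_from_disk(block)
        self._insert(block, data)
        return data

    def _insert(self, block, data):
        self.dram[block] = data
        if len(self.dram) > self.dram_blocks:       # DRAM full: demote its oldest block to SSD
            old_block, old_data = self.dram.popitem(last=False)
            self.ssd[old_block] = old_data
            if len(self.ssd) > self.ssd_blocks:     # SSD full: evict its oldest block entirely
                self.ssd.popitem(last=False)

# Example: 4 "DRAM" blocks and 16 "SSD" blocks in front of a fake disk.
cache = TieredReadCache(dram_blocks=4, ssd_blocks=16)
print(cache.read(42, read_from_disk=lambda b: f"block-{b}"))  # first read pays the disk penalty
print(cache.read(42, read_from_disk=lambda b: f"block-{b}"))  # repeat read is served from DRAM
```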


Nexenta Turns 2.0

June 30, 2009

Beth Pariseau, Senior News Writer for SearchStorage.com, has an interview with Nexenta CEO Evan Powell about the release of NexentaStor 2.0 today. The open source storage vendor is making some fundamentally “enterprise focused” changes to its platform in this release by adding active-active high availability features and 24/7 phone support.

“Version 2.0 is Nexenta’s attempt to “cross the chasm” between the open-source community and the traditional enterprise. Chief among these new features is the ability to perform fully automated two-way high availability between ZFS server nodes. Nexenta has already made synchronous replication and manual failover available for ZFS, which doesn’t offer those features natively, Powell said. With the release of Nexenta’s High Availability 1.0 software, failover and failback to the secondary server can happen without human intervention.”

SearchStorage.com

In a related webinar and conference call today, Powell reiterated Nexenta’s support of open storage saying, “we believe that you should own your storage. Legacy vendors want to lock you into their storage platform, but with Nexenta you can take your storage to any platform that speaks ZFS.” Powell sees Nexenta’s anti-lock-in approach as part of their wider value proposition. When asked about de-duplication technology, he referred to Sun’s prototyped de-duplication technology and the promise to introduce it into the main line this summer.


StorMagic offers Free VSA

March 3, 2009

StorMagic (UK) is offering a “free” license for its new $1,000 virtual storage appliance (VSA), targeted specifically at VMware ESX users. This VSA – like all VSAs to date – uses the directly attached storage (DAS) of your ESX server as fodder for shared storage, commandeering and redistributing the DAS as a network share for all ESX servers in your farm.

How is StorMagic’s VSA – they call it the StorMagic SvSAN – different from other VSA offerings?

  • First, it is being offered “free” for the 2TB management license if you “qualify” by getting a “promo code” from a reseller. Fortunately, getting a promo code is as easy as clicking the “help balloon” on the download form.
  • Second, it offers a commercially supported SAN platform – under ESX – that can be managed directly from vCenter, allowing direct management of the underlying RAID controller on the ESX hardware. Currently, all LSI MegaRAID SAS controllers are supported, as well as 3Ware’s 9500S, 9650SE and 9690 series and Intel’s SRCSAS-RB/JV and SRCSATAWB controllers.
  • Third, the VSA supports the basic functions needed in an ESX/HA environment: high availability and mirroring, snapshots and iSCSI. The HA features are available through an additional license (now being offered 2-for-1), and 256 levels of snapshot per VSA work with a VSS provider for Windows.

More importantly, StorMagic is a VMware Technology Alliance Partner, implying a depth of support that open-source “free” products cannot offer. SvSAN requires ESX 3.5+, one vCPU, a 2000MHz CPU reservation, 1GB of memory, Gigabit Ethernet connection(s), 500MB of disk space and a supported RAID controller. Follow this link to try SvSAN.