Posts Tagged ‘virtual storage appliance’


Quick-Take: NexentaStor 4.0.1GA

April 14, 2014

Our open storage partner, Nexenta Systems Inc., hit a milestone this month by releasing NexentaStor 4.0.1 for general availability. This release is significant mainly because it is the first commercial release of NexentaStor based on the open source Illumos kernel rather than Oracle’s OpenSolaris (now closed source). With this move, NexentaStor delivers on the company’s promise of “open source technology” that enables hardware independence and targeted flexibility.

Some highlights in 4.0.1:

  • Faster Install times
  • Better HA Cluster failover times and “easier” cluster manageability
  • Support for large memory host configurations – up to 512GB of DRAM per head/controller
  • Improved handling of intermittently faulty devices (disks with irregular I/O responses under load)
  • New (read: “not backward compatible”) Auto-Sync replication (user configurable zfs+ssh still available for backward compatibility) with support for replication of HA to/from non-HA clusters
    • Includes LZ4 compression (fast) option
    • Better Control of “Force Flags” from NMV
    • Better Control of Buffering and Connections
  • L2ARC Compression now supported
    • Potentially doubles the effective coverage of L2ARC (for compressible data sets)
    • Supports LZ4 compression (fast)
    • Automatically applied if dataset is likewise compressed
  • Server Message Block v2.1 support for Windows (some caveats for IDMAP users)
  • iSCSI support for Microsoft Server 2012 Cluster and Cluster Shared Volume (CSV)
  • Guided storage pool configuration wizards – Performance, Balanced and Capacity modes
  • Enhanced Support Data and Log Gathering
  • High Availability Cluster plug-in (RSF-1) binaries are now part of the installation image
  • VMware: Much better VMXNET3 support
    • no more log spew
    • MTU settings work from NMV
  • VMware: Install to PVSCSI (boot disk) from ISO no longer requires tricks
  • Upgrade from 3.x is currently “disruptive” – promised “non-disruptive” in next maintenance update
  • Improved DTrace capabilities from NMC shell for
    • COMSTAR/iSCSI/FC
    • general IO
  • Snappier, more stable NMV/GUI
    • Service port changes from 2000 to 8457
    • Multi-NMS default
    • Fast refresh for ZFS containers
    • RSF-1 defaults in “Server” settings
    • Improved iSCSI

See Nexenta’s 4.0.1 Release Notes for additional changes and details.

Note, the 18TB Community Edition EULA is still hampered by the “non-commercial” language, restricting its use to home, education and academic (i.e. training, testing, lab, etc.) purposes. However, the “total amount of Storage Space” license for Community is a deviation from the Enterprise licensing (typically a “raw” storage entitlement):

2.2 If You have acquired a Community Edition license, the total amount of Storage Space is limited as specified on the Site and is subject to change without notice. The Community Edition may ONLY be used for educational, academic and other non-commercial purposes expressly excluding any commercial usage. The Trial Edition licenses may ONLY be used for the sole purposes of evaluating the suitability of the Product for licensing of the Enterprise Edition for a fee. If You have obtained the Product under discounted educational pricing, You are only permitted to use the Product for educational and academic purposes only and such license expressly excludes any commercial purposes.

– NexentaStor EULA, Version 4.0; Last updated: March 18, 2014

For those who operate under the Community license, this means your total physical storage is UNLIMITED, provided your space “IN USE” falls short of 18TB (18,432 GB) at all times. Where this matters is in constructing useful arrays with “currently available” disks (SATA, SAS, etc.). Let’s say you needed 16TB of AVAILABLE space using “modern” 3TB disks. Because each spinning disk is well over 600GB, a rebuild can take long enough that a second failure PRIOR to its completion (and therefore data loss) becomes a real possibility, so mirror or raidz2/raidz3 would be your best bet for array configuration.

SOLORI Note: Richard Elling made this concept exceedingly clear back in 2010, and his “ZFS data protection comparison” of 2, 3 and 4-way mirrors to raidz, raidz2 and raidz3 is still a great reference on the topic.

Elling’s MTTDL Comparison by RAID Type

 

Given 16TB in 3-way mirror or raidz2 (roughly equivalent MTTDL predictors), your 3TiB disk count would follow as:

3-way Mirror Disks := RoundUp( 16 * (1024 / 1000)^3 / 70% / ( 3 * (1000 / 1024)^3 )  ) * 3 = 27 disks, or

6-disk Raidz2 Disks := RoundUp( 16 * (1024 / 1000)^3 / 70% / ( 4 * 3 * (1000 / 1024)^3 )  ) * 6 = 18 disks
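As a sanity check, the same arithmetic can be run from any shell with awk. This is just a sketch of the two formulas above (16TB available-space target, 30% free-space reserve, 3TiB disks), not a sizing tool:

    awk 'BEGIN {
      need = 16 * (1024/1000)^3 / 0.70          # available-space target plus 30% reserve
      mirror_vdev = 1 * 3 * (1000/1024)^3       # usable space per 3-way mirror vdev (1 data disk)
      raidz2_vdev = 4 * 3 * (1000/1024)^3       # usable space per 6-disk raidz2 vdev (4 data disks)
      printf "3-way mirror:  %d disks\n", ceil(need / mirror_vdev) * 3
      printf "6-disk raidz2: %d disks\n", ceil(need / raidz2_vdev) * 6
    }
    function ceil(x) { return (x == int(x)) ? x : int(x) + 1 }'

Both counts land on the figures above: 27 disks for the 3-way mirror layout and 18 for the 6-disk raidz2 layout.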

By “raw” licensing standards, the 3-way mirror would require a 76TB license while the raidz2 volume would require a 51TB license – a difference of 25TB in licensing (around $5,300 retail). However, under the Community License, the “cost” is exactly the same, allowing for a considerable amount of flexibility in array loadout and configuration.

Why do I need 54TiB in disk to make 16TB of “AVAILABLE” storage in Raidz2?

The RAID grouping we’ve chosen is 6-disk raidz2 – that’s akin to 4 data and 2 parity disks in RAID6 (without the fixed stripe requirement or the “write hole penalty.”) This means, on average, one third of the space consumed on-disk will be in the form of parity information. Therefore, right off the top, we’re losing 33% of the disk capacity. Likewise, disk manufacturers sell capacity in decimal (TB) while the filesystem reports it in binary (TiB), so we lose about another 7% of “capacity” to the unit conversion. Additionally, we like to have a healthy amount of space reserved for new block allocation and recommend 30% unused space as a target. All combined, a 6-disk raidz2 array is, at best, 43% efficient in terms of capacity (by contrast, a 3-way mirror is only 22% space efficient). For an array based on 3TiB disks, we therefore get only 1.3TB of usable storage – per disk – with 6-disk raidz2 (by contrast, 10-disk raidz nets only 160GB additional “usable” space per disk.)
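Expressed as a quick calculation (again just a sketch using the same assumptions: the ~7% unit-conversion loss and a 30% free-space reserve):

    awk 'BEGIN {
      conv = (1000/1024)^3                # ~0.93: the ~7% lost to the unit conversion
      rz2  = (4.0/6) * conv * 0.70        # 6-disk raidz2: 4 data disks out of 6
      mir  = (1.0/3) * conv * 0.70        # 3-way mirror: 1 data disk out of 3
      printf "6-disk raidz2: %.0f%% efficient, ~%.1fTB usable per 3TiB disk\n", rz2 * 100, rz2 * 3
      printf "3-way mirror:  %.0f%% efficient, ~%.1fTB usable per 3TiB disk\n", mir * 100, mir * 3
    }'

which reproduces the 43% / 1.3TB-per-disk figure for 6-disk raidz2 and the 22% figure for the 3-way mirror.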

SOLORI’s Take: If you’re running 3.x in production, 4.0.1 is not suitable for in-place upgrades (yet), so testing and waiting for the “non-disruptive” maintenance release is your best option. For new installations – especially inside a VM or hypervisor environment as a Virtual Storage Appliance (VSA) – version 4.0.1 is a better option than its 3.x siblings. If you’re familiar with 3.x, there’s not much new on the NMV side beyond better tunables and snappier response.


In-the-Lab: Install VMware Tools on NexentaStor VSA

June 17, 2010

Physical lab resources can be a challenge to “free-up” just to test a potential storage appliance. With NexentaStor, you can download a pre-configured VMware (or Xen) appliance from NexentaStor.Org, but what if you want to build your own? Here’s a little help on the subject:

  1. Download the ISO from NexentaStor.Org (see link above);
  2. Create a VMware virtual machine:
    1. 2 vCPU
    2. 4GB RAM (leaves about 3GB for ARC);
    3. CD-ROM (mapped to the ISO image);
    4. One (optionally two if you want to simulate the OS mirror) 4GB, thin provisioned SCSI disks (LSI Logic Parallel);
    5. Guest Operating System type: Sun Solaris 10 (64-bit)
    6. One E1000 for Management/NAS
    7. (optional) One E1000 for iSCSI
  3. Streamline the guest by disabling unnecessary components:
    1. floppy disk
    2. floppy controller (remove from BIOS)
    3. primary IDE controller (remove from BIOS)
    4. COM ports (remove from BIOS)
    5. Parallel ports (remove from BIOS)
  4. Boot to ISO and install NexentaStor CE
    1. (optionally) choose second disk as OS mirror during install
  5. Register your installation with Nexenta
    1. http://www.nexenta.com/register-eval
    2. (optional) Select “Solori” as the partner
  6. Complete initial WebGUI configuration wizard
    1. If you will join it to a domain, use the domain FQDN (i.e. microsoft.msft)
    2. If you choose “Optimize I/O performance…” remember to re-enable ZFS intent logging under Settings>Preferences>System
      1. Sys_zil_disable = No
  7. Shutdown the VSA
    1. Settings>Appliance>PowerOff
  8. Re-direct the CD-ROM
    1. Connect to Client Device
  9. Power-on the VSA and install VMware Tools (the shell commands are consolidated in a sketch after this list)
    1. login as admin
      1. assume root shell with “su” and root password
    2. From vSphere Client, initiate the VMware Tools install
    3. cd /tmp
      1. untar VMware Tools with “tar zxvf  /media/VMware\ Tools/vmware-solaris-tools.tar.gz”
    4. cd to /tmp/vmware-tools-distrib
      1. install VMware Tools with “./vmware-install.pl”
      2. Answer with defaults during install
    5. Check that VMware Tools shows an OK status
      1. IP address(es) of interfaces should now be registered

        VMware Tools are registered.

  10. Perform a test “Shutdown” of your VSA
    1. From the vSphere Client, issue VM>Power>Shutdown Guest

      System shutting down from VMware Tools request.

    2. Restart the VSA…

      VSA restarting in vSphere
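
For reference, the shell portion of step 9 condenses to roughly the following console session. This is a sketch only; the Tools ISO mount point and installer prompts can vary with the VMware Tools version bundled with your host:

    su                                   # assume the root shell (enter the root password)
    cd /tmp
    # unpack the installer from the mounted VMware Tools ISO
    tar zxvf /media/VMware\ Tools/vmware-solaris-tools.tar.gz
    cd /tmp/vmware-tools-distrib
    ./vmware-install.pl                  # answer with the defaults during install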

Now VMware Tools has been installed and you’re ready to add more virtual disks and build ZFS storage pools. If you get a warning about HGFS not loading properly at boot time:

HGFS module mismatch warning.

it is not usually a big deal, but the VMware Host-Guest File System (HGFS) has been known to cause issues in some installations. Since the NexentaStor appliance is not a general purpose operating system, you should customize the install to not use HGFS at all. To disable it, perform the following (a shell alternative follows the list):

  1. Edit “/kernel/drv/vmhgfs.conf”
    1. Change:     name="vmhgfs" parent="pseudo" instance=0;
    2. To:     #name="vmhgfs" parent="pseudo" instance=0;
  2. Re-boot the VSA
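
If you would rather make the change from the root shell than from an editor, something along these lines performs the same edit. It is a sketch, not an official Nexenta procedure, and the backup copy is kept so the original can be restored after a future VMware Tools update:

    # keep a backup, then comment out the vmhgfs driver definition
    cp /kernel/drv/vmhgfs.conf /kernel/drv/vmhgfs.conf.bak
    sed 's/^name="vmhgfs"/#name="vmhgfs"/' /kernel/drv/vmhgfs.conf.bak > /kernel/drv/vmhgfs.conf
    reboot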

Upon reboot, there will be no complaint about the offending HGFS module. Remember that, after updating VMware Tools at a future date, the HGFS configuration file will need to be adjusted again. By the way, this process works just as well on the NexentaStor Commercial edition; however, you might want to check with technical support prior to making such changes to a licensed/supported deployment.


In-the-Lab: Full ESX/vMotion Test Lab in a Box, Part 4

August 26, 2009

In Part 3 of this series we showed how to install and configure a basic NexentaStor VSA using iSCSI and NFS storage. We also created a CIFS bridge for managing ISO images that are available to our ESX servers using NFS. We now have a fully functional VSA with working iSCSI target (unmounted as of yet) and read-only NFS export mounted to the hardware host.

In this segment, Part 4, we will create an ESXi instance on NFS along with an ESX instance on iSCSI, and, using writable snapshots, turn both of these installations into quick-deploy templates. We’ll then mount our large iSCSI target (created in Part 3) and NFS-based ISO images to all ESX/ESXi hosts (physical and virtual), and get ready to install our vCenter virtual machine.

Part 4, Making an ESX Cluster-in-a-Box

With a lot of things behind us in Parts 1 through 3, we are going to pick up the pace a bit. Although ZFS snapshots are immediately available in a hidden “.zfs” folder for each snapshotted file system, we are going to use cloning and mount the cloned file systems instead.

Cloning allows us to re-use a file system as a template for a copy-on-write variant of the source. By using the clone instead of the original, we can conserve storage because only the differences between the two file systems (the clone and the source) are stored to disk. This process saves time as well, leveraging “clean installations” as starting points (templates) along with their associated storage (much like VMware’s linked-clone technology for VDI). While VMware’s “template” capability also saves time by using a VM as a “starting point,” it does so by copying storage, not cloning it, and therefore conserves no storage.


Using clones in NexentaStor to conserve storage and aid rapid deployment and testing. Only the differences between the source and the clone require additional storage on the NexentaStor appliance.

While the ESX and ESXi use cases might not seem the “perfect candidates” for cloning in a “production” environment, in the lab it allows for an abundance of possibilities in regression and isolation testing. In production you might find that NFS and iSCSI boot capabilities could make cloned hosts just as effective for deployment and backup as they are in the lab (but that’s another blog).
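
Underneath NexentaStor’s folder and Zvol operations, the clone workflow is plain ZFS. The sketch below (with made-up pool and dataset names) shows the idea; on the appliance you would normally drive this from NMC/NMV rather than a raw shell:

    # snapshot the clean "template" file system, then clone it;
    # the clone initially consumes almost no additional space
    zfs snapshot tank/esxi-template@clean
    zfs clone tank/esxi-template@clean tank/esxi-host1

    # the same idea applies to a Zvol backing an iSCSI target
    zfs snapshot tank/esx-template-zvol@clean
    zfs clone tank/esx-template-zvol@clean tank/esx-host1-zvol

    # only blocks that diverge from the snapshot are charged to the clone
    zfs list -o name,used,refer tank/esxi-template tank/esxi-host1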

Here’s the process we will continue with for this part in the lab series:

  1. Create NFS folder in NexentaStor for the ESXi template and share via NFS;
  2. Modify the NFS folder properties in NexentaStor to:
    1. limit access to the hardware ESXi host only;
    2. grant the hardware ESXi host “root” access;
  3. Create a folder in NexentaStor for the ESX template and create a Zvol;
  4. From VI Client’s “Add Storage…” function, we’ll add the new NFS and iSCSI volumes to the Datastore;
  5. Create ESX and ESXi clean installations in these “template” volumes as a cloning source;
  6. Unmount the “template” volumes using the VI Client and unshare them in NexentaStor;
  7. Clone the “template” Zvol and NFS file systems using NexentaStor;
  8. Mount the clones with VI Client and complete the ESX and ESXi installations;
  9. Mount the main Zvol and ISO storage to ESX and ESXi as primary shared storage;

Basic storage architecture for the ESX-on-ESX lab.
