Archive for the ‘Virtual Storage’ Category


In-the-Lab: Install VMware Tools on NexentaStor VSA

June 17, 2010

Physical lab resources can be a challenge to “free-up” just to test a potential storage appliance. With NexentaStor, you can download a pre-configured VMware (or Xen) appliance from NexentaStor.Org, but what if you want to build your own? Here’s a little help on the subject:

  1. Download the ISO from NexentaStor.Org (see link above);
  2. Create a VMware virtual machine:
    1. 2 vCPU
    2. 4GB RAM (leaves about 3GB for ARC);
    3. CD-ROM (mapped to the ISO image);
    4. One 4GB, thin-provisioned SCSI disk (LSI Logic Parallel), or optionally two if you want to simulate the OS mirror;
    5. Guest Operating System type: Sun Solaris 10 (64-bit)
    6. One E1000 for Management/NAS
    7. (optional) One E1000 for iSCSI
  3. Streamline the guest by disabling unnecessary components:
    1. floppy disk
    2. floppy controller (remove from BIOS)
    3. primary IDE controller (remove from BIOS)
    4. COM ports (remove from BIOS)
    5. Parallel ports (remove from BIOS)
  4. Boot to ISO and install NexentaStor CE
    1. (optionally) choose second disk as OS mirror during install
  5. Register your installation with Nexenta
    1. http://www.nexenta.com/register-eval
    2. (optional) Select “Solori” as the partner
  6. Complete initial WebGUI configuration wizard
    1. If you will join it to a domain, use the domain FQDN (e.g. microsoft.msft)
    2. If you choose “Optimize I/O performance…” remember to re-enable ZFS intent logging under Settings>Preferences>System
      1. Sys_zil_disable = No
  7. Shutdown the VSA
    1. Settings>Appliance>PowerOff
  8. Redirect the CD-ROM
    1. Connect to Client Device
  9. Power-on the VSA and install VMware Tools
    1. login as admin
      1. assume root shell with “su” and root password
    2. From vSphere Client, initiate the VMware Tools install
    3. cd /tmp
      1. untar VMware Tools with "tar zxvf /media/VMware\ Tools/vmware-solaris-tools.tar.gz" (the full command sequence is consolidated in the sketch after this list)
    4. cd to /tmp/vmware-tools-distrib
      1. install VMware Tools with “./vmware-install.pl”
      2. Answer with defaults during install
    5. Check that VMware Tools shows an OK status
      1. IP address(es) of interfaces should now be registered

        VMware Tools are registered.

  10. Perform a test “Shutdown” of your VSA
    1. From the vSphere Client, issue VM>Power>Shutdown Guest

      System shutting down from VMware Tools request.

    2. Restart the VSA…

      VSA restarting in vSphere
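
For reference, the in-guest portion of the VMware Tools install (steps 9.3 and 9.4 above) condenses to a few commands at the root shell. This is a sketch of the same sequence; the /media/VMware\ Tools mount point is where the tools CD appears on this appliance and may differ on other builds:

    # from the root shell, after initiating the install in the vSphere Client
    cd /tmp
    tar zxvf /media/VMware\ Tools/vmware-solaris-tools.tar.gz
    cd vmware-tools-distrib
    ./vmware-install.pl    # accept the defaults when prompted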

Now VMware Tools has been installed and you’re ready to add more virtual disks and build ZFS storage pools. If you get a warning about HGFS not loading properly at boot time:

HGFS module mismatch warning.

it is not usually a big deal, but the VMware Host-Guest File System (HGFS) has been known to cause issues in some installations. Since the NexentaStor appliance is not a general purpose operating system, you should customize the install to not use HGFS at all. To disable it, perform the following (a shell equivalent follows the list):

  1. Edit “/kernel/drv/vmhgfs.conf”
    1. Change:     name="vmhgfs" parent="pseudo" instance=0;
    2. To:     #name="vmhgfs" parent="pseudo" instance=0;
  2. Re-boot the VSA
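
If you prefer to make the change from the root shell, the following sketch performs the same edit. It assumes GNU sed is available (as it is in NexentaStor's GNU-based userland); the backup copy is just a precaution:

    # comment out the vmhgfs driver entry, keeping a backup of the original
    cp /kernel/drv/vmhgfs.conf /kernel/drv/vmhgfs.conf.orig
    sed -i 's/^name="vmhgfs"/#name="vmhgfs"/' /kernel/drv/vmhgfs.conf
    reboot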

Upon reboot, there will be no complaint about the offending HGFS module. Remember that, after updating VMware Tools at a future date, the HGFS configuration file will need to be adjusted again. By the way, this process works just as well on the NexentaStor Commercial edition; however, you might want to check with technical support before making such changes to a licensed/supported deployment.


vSphere, Hardware Version 7 and Hot Plug

December 5, 2009

VMware’s vSphere added hot plug features in hardware version 7 (first introduced in VMware Workstation 6.5) that were not available in the earlier version 4 virtual hardware. Virtual hardware version 7 adds the following new features to VMware virtual machines:

  • LSI SAS virtual device – provides support for Windows Server 2008 fail-over cluster configurations
  • Paravirtual SCSI devices – recently updated to allow booting, these can deliver higher performance (greater throughput and lower CPU utilization) than the standard virtual SCSI adapter, especially in SAN environments where I/O-intensive applications are used. Currently supported in Windows Server 2003/2008 and Red Hat Enterprise Linux 5, although any version of Linux could be modified to support PVSCSI.
  • IDE virtual device – useful for older OSes that don’t support SCSI drivers
  • VMXNET 3 – next-generation VMXNET device with enhanced performance and networking features.
  • Hot plug virtual devices, memory and CPU – supports hot add/remove of virtual devices, memory and CPU for supported OSes.

While the “upgrade” process from version 4 to version 7 is well-known, some of the side effects are not well publicised. The most obvious change after the migration from version 4 to version 7 is the effect hot plug has on the PCI bus adapters: some are now hot plug by default, including the network adapters!

Safe to remove network adapters. Really?

Note that the above example also demonstrates that the updated hardware re-enumerates the network adapters (see #3 and #4) because they have moved to a new PCI bus, one that supports hot plug. Removing the “missing” devices requires a trip to Device Manager (set devmgr_show_nonpresent_devices=1 in your shell environment first; see the example below). This hot plug PCI bus also allows an administrator to mistakenly remove the device from service, potentially disconnecting tier 1 services from operations (totally by accident, of course).
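
On a Windows guest, for example, the “missing” adapters can be exposed and removed as follows (a sketch of the standard ghost-device procedure, run from a command prompt):

    set devmgr_show_nonpresent_devices=1
    start devmgmt.msc
    :: in Device Manager: View > Show hidden devices,
    :: then uninstall the grayed-out network adapters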

Devices that can be added while the VM runs with hardware version 4

In virtual hardware version 4, only SCSI devices and hard disks could be added to a running virtual machine. Now with hardware version 7,

Devices that can be added while the VM runs with hardware version 7

additional devices (USB and Ethernet) are available for hot add. You can also change memory and CPU on the fly, if the OS supports that feature and the corresponding options are enabled in the virtual machine properties prior to running the VM (see the sketch after the screenshot):

CPU and Memory Hot Plug Properties
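
These check-boxes map to two entries in the VM’s configuration file, so with the VM powered off you could also set them by hand. This is only a sketch, and the .vmx path shown is hypothetical:

    # enable CPU and memory hot add before power-on (VM must be off)
    echo 'vcpu.hotadd = "TRUE"' >> /vmfs/volumes/datastore1/myVM/myVM.vmx
    echo 'mem.hotadd = "TRUE"' >> /vmfs/volumes/datastore1/myVM/myVM.vmx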

The hot plug NIC issue isn’t discussed in the documentation, but Carlo Costanzo at VMwareInfo.com passes on Chris Hahn’s great tip to disable hot plug behaviour in his blog post, complete with visual aids. The key is to add a new “Advanced Configuration Parameter” to the virtual machine configuration: the parameter is called “devices.hotplug” and its value should be set to “false.” Adding this parameter requires the virtual machine to be powered off, so it is currently an off-line fix.
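
In the same spirit as the sketch above, the hot plug parameter can be appended directly to the powered-off VM’s configuration file (again, the datastore path is hypothetical):

    # disable PCI hot plug for the VM's devices
    echo 'devices.hotplug = "false"' >> /vmfs/volumes/datastore1/myVM/myVM.vmx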


StorMagic offers Free VSA

March 3, 2009

StorMagic (UK) is offering a “free” license for their new $1,000 virtual storage appliance (VSA) targeted specifically at VMware ESX users. This VSA, like all VSAs to date, targets the directly attached storage (DAS) of your ESX server as fodder for shared storage, commandeering and redistributing the DAS as a network share for all ESX servers in your farm.

How is StorMagic’s VSA, which they call the StorMagic SvSAN, different from other VSA offerings?

  • First, it is being offered “free” for the 2TB management license if you “qualify” by getting a “promo code” from a reseller. Fortunately, getting a promo code is as easy as clicking the “help balloon” on the download form.
  • Second, it offers a commercially supported SAN platform – under ESX – that can be managed directly from vCenter. This allows direct management of the underlying RAID controller on the ESX hardware. Currently, all LSI MegaRAID SAS controllers are supported, as well as 3Ware’s 9500S, 9650SE and 9690 Series, plus support for Intel’s SRCSAS-RB/JV and SRCSATAWB controllers.
  • Third, the VSA supports all of the basic functions needed in an ESX/HA environment: high availability and mirroring, snapshots and iSCSI. HA features are available through an additional license (now being offered 2-for-1), along with 256 levels of snapshot per VSA that work with a VSS provider for Windows.

More importantly, StorMagic is a VMware Technology Alliance Partner, implying a depth of support that open-source “free” products cannot offer. SvSAN requires ESX 3.5+, one vCPU, a 2000MHz reservation, 1GB memory, Gigabit Ethernet connection(s), 500MB disk space and a supported RAID controller. Follow this link to try SvSAN.