Archive for the ‘Management’ Category

Quick-Take: Removable Media and Update Manager Host Remediation

January 31, 2013

Thanks to a spate of upgrades to vSphere 5.1, I recently (re)discovered the following inconvenient result when applying an update to a DRS cluster from Update Manager (5.1.0.13071, using vCenter Server Appliance 5.1.0 build 947673):

Remediate entity ‘vm11.solori.labs’  Host has VMs ‘View-PSG’ , vUM5 with connected removable media devices. This prevents putting the host into maintenance mode. Disconnect the removable devices and try again.

Immediately I thought: “Great! I left a host-only ISO connected to these VMs.” However, that assumption was as flawed as Update Manager’s assumption that the workloads could not be vMotion’d without disconnecting the removable media. In fact, the removable media in question was connected to a shared ISO repository available to all hosts in the cluster. The fault was mine, not Update Manager’s: I had forgotten that Update Manager’s default response to connected removable media is to abort the process. Since cluster remediation is a powerful feature enabled by Distributed Resource Scheduler (DRS) in Enterprise (and higher) vSphere editions, and one that may be new to many administrators (especially uplifted “Advanced AK” users), it seemed like something worth reviewing and blogging about.

Why is this a big deal?

More to the point, why does this seem to run contrary to a “common sense” response?

First, the manual process for remediating a host in a DRS cluster would include:

  1. Applying “Maintenance Mode” to the host,
  2. Selecting the appropriate action for “powered-off and suspended” workloads, and
  3. Allowing DRS to choose placement and finally vMotion those workloads to an alternate host.

In the case of VMs with removable media attached, this set of actions will result in the workloads being vMotion’d (without warning or hesitation) so long as the other hosts in the cluster have access to the removable media source (i.e. shared storage, not “Host Device.”) However, in the case of Update Manager remediation, the following are documented roadblocks to a successful remediation (without administrative override):

  1. A CD/DVD drive is attached (any method),
  2. A floppy drive is attached (any method),
  3. HA admission control prevents migration of the virtual machine,
  4. DPM is enabled on the cluster,
  5. EVC is disabled on the cluster,
  6. DRS is disabled on the cluster (preventing migration),
  7. Fault Tolerance (FT) is enabled for a VM on a host in the cluster.

Therefore it is “by design” that a scheduled remediation would have failed, even if the VMs with removable media were otherwise eligible for vMotion. To assist in evaluating the obstacles to a successful deferred remediation, a cluster remediation report is available (see below).
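
Before scheduling a remediation, it can be useful to see which VMs actually have removable media connected. There is no single button for this in the vSphere Client, but a rough, hedged check can be made from the ESXi shell of each host (the VM ID “42” below is hypothetical, and the exact output format varies by build):

# vim-cmd vmsvc/getallvms
# vim-cmd vmsvc/device.getdevices 42 | grep -i -A20 cdrom

The first command lists the VM IDs registered on the host; the second dumps the device list for one VM, where a VirtualCdrom entry whose connectable section reports “connected = true” is exactly the sort of device Update Manager will complain about unless the override is selected.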

Generating a remediation report prior to scheduling an Update Manager remediation.

In fact, the report will list all possible roadblocks to remediation whether or not matching overrides are selected (potentially misleading, and certainly not useful for predicting the outcome of the remediation attempt). While this too is counterintuitive, it serves as a reminder of the show-stoppers to successful remediation. For the offending “removable media” override, the appropriate checkbox can be found on the options page just prior to the remediation report:

Disabling removable media during Update Manager driven remediation.

The inclusion of this override allows Update Manager to push through the remediation regardless of the attached status of removable media. Likewise, the other remediation overrides will enable successful completion of the remediation process; these overrides are:

  1. Maintenance Mode Settings:
    1. VM Power State prior to remediation:  Do not change, Power off, Suspend
    2. Temporarily disable any removable media devices;
    3. Retry maintenance mode in case of failure (delay and attempts);
  2. Cluster Settings:
    1. Temporarily Disable Distributed Power Management (forces “sleeping” hosts to power-on prior to next steps in remediation);
    2. Temporarily Disable High Availability Admission Control (allows for host remediation to violate host-resource reservation margins);
    3. Temporarily Disable Fault Tolerance (FT) (VMware advises remediating all cluster hosts in the same update cycle to maintain FT compatibility);
    4. Enable parallel remediation for hosts in cluster (will not violate DRS anti-affinity constraints);
      1. Automatically determine the maximum number of concurrently remediated hosts, or
      2. Limit the number of concurrent hosts (1-32);
    5. Migrate powered off and suspended virtual machines to other hosts in the cluster (helpful when a remediation leaves a host in an unserviceable condition);
  3.  PXE Booted ESXi Host Settings:
    1. Allow installation of additional software on PXE booted ESXi 5.x hosts (requires the use of an updated PXE boot image – Update Manager will NOT reboot the PXE booted ESXi host.)

These settings are available at the time of remediation scheduling and as host/cluster defaults (Update Manager Admin View.)

SOLORI’s Take: So while the remediation process is NOT as similar to the manual process as one might think, it can still be made to function accordingly (almost.) There IS a big difference between disabling removable media and making vMotion-aware decisions about hosts. Perhaps VMware could take a few cycles to determine whether or not a host is bound to a removable media device (either through Host Device or a local storage resource) and make a more intelligent decision about removable media.

vSphere already has the ability to identify point-resource dependencies, and it would be nice to see this information more intelligently correlated where cluster management is concerned. Currently, instead of “asking” DRS for a dependency list, Update Manager just seems to ask the hosts “do you have removable media plugged into any VMs?” and, if the answer is “yes,” it stops right there… Still, not very intuitive for a feature (DRS) that’s been around since Virtual Infrastructure 3 and vCenter 2.

Short-Take: Clock Ticking on VI 3.5 Updates, June 1 Deadline

May 26, 2011

If you’re still not quite ready to upgrade from VI 3.x to vSphere, time may be running out on your ESX hosts to stay “current” inside of VI3 unless you act before June 1, 2011. If your VMware VI3 hosts have not been patched since November of 2010, you are at risk for losing update/patching capabilities unless you apply ESX350-201012410-BG before the deadline. This patch ONLY addresses the expiring secure key on the ESX host which will otherwise become invalid on June 1, 2011.
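
For reference, applying a single patch bundle like this one is typically done with esxupdate from the ESX service console. The following is only a hedged sketch: exact options vary by esxupdate version, and it assumes the bundle has already been downloaded to the host:

# esxupdate query
# esxupdate --bundle=ESX350-201012410-BG.zip update

The query action lists the bundles already installed, which makes it easy to confirm the patch before and after the update.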

If you need to bring your hosts current (without upgrading to vSphere), the last full patch release from VMware for VI 3.5 addresses the following issues:

Enablement of Intel Xeon Processor 3400 Series – Support for the Intel Xeon processor 3400 series has been added. Support includes Enhanced VMotion capabilities. For additional information on previous processor families supported by Enhanced VMotion, see Enhanced VMotion Compatibility (EVC) processor support (KB 1003212).

Driver Update for Broadcom bnx2 Network Controller – The driver for bnx2 controllers has been upgraded to version 1.6.9. This driver supports bootcode upgrade on bnx2 chipsets and requires bmapilnx and lnxfwnx2 tools upgrade from Broadcom. This driver also adds support for Network Controller – Sideband Interface (NC-SI) for SOL (serial over LAN) applicable to Broadcom NetXtreme 5709 and 5716 chipsets.

Driver Update for LSI SCSI and SAS Controllers – The driver for LSI SCSI and SAS controllers is updated to version 2.06.74. This version of the driver is required to provide a better support for shared SAS environments.

Newly Supported Guest Operating Systems – Support for the following guest operating systems has been added specifically for this release:

For more complete information about supported guests included in this release, see the VMware Compatibility Guide: http://www.vmware.com/resources/compatibility/search.php?deviceCategory=software.

  • Windows 7 Enterprise (32-bit and 64-bit)
  • Windows 7 Ultimate (32-bit and 64-bit)
  • Windows 7 Professional (32-bit and 64-bit)
  • Windows 7 Home Premium (32-bit and 64-bit)
  • Windows 2008 R2 Standard Edition (64-bit)
  • Windows 2008 R2 Enterprise Edition (64-bit)
  • Windows 2008 R2 Datacenter Edition (64-bit)
  • Windows 2008 R2 Web Server (64-bit)
  • Ubuntu Desktop 9.04 (32-bit and 64-bit)
  • Ubuntu Server 9.04 (32-bit and 64-bit)

Newly Supported Management Agents – See VMware ESX Server Supported Hardware Lifecycle Management Agents for current information on supported management agents.

Newly Supported Network Cards – This release of ESX Server supports HP NC375T (NetXen) PCI Express Quad Port Gigabit Server Adapter.

Newly Supported SATA Controllers – This release of ESX Server supports the Intel Ibex Peak SATA AHCI controller.

Note:
  • Some limitations apply in terms of support for SATA controllers. For more information, see SATA Controller Support in ESX 3.5. (KB 1008673)
  • Storing VMFS datastores on native SATA drives is not supported.

This patch follows a roll-up approach, which VMware describes this way:

Note: As part of the end of availability for some VMware Virtual Infrastructure product releases, the ESX 3.5 Update 5 upgrade package ESX350-Update05.zip has been replaced by ESX350-Update05a.zip in order to remove dependencies upon patches that will no longer be available for download. Hosts upgraded using ESX350-Update05a.zip are equivalent to those upgraded using the older package, but patch bundles released before ESX 3.5 Update 5 will not be required during the upgrade process.

Quick-Take: vCMA Updated, SSL now Default

March 17, 2011

vCMA Login Screen (iPhone)

In February, we detailed the installation and first use of the VMware vCenter Mobile Access appliance (version 1.0.41). In that write up, we pointed out that vCMA had some security issues and said the following:

Being HTTP-only, vCMA doesn’t lend itself to secure computing over the public Internet or untrusted intranet. Instead, it is designed to work with security layer(s) in front of it. While it IS possible to add HTTPS to the Apache/Tomcat server delivering its web application, vCMA is meant to be deployed as-is and updated as-is – it’s an appliance.

SOLORI’s blog, 28-Feb-2011

It seems VMware is listening. Yesterday, VMware announced the release and immediate availability of vCMA v1.0.42 with HTTPS/SSL enabled by default. The following comes from the “vSphere MicroClient Functional Specification Guide:”

SSL Connections
By default “https” (or SSL certificate) is enabled in the appliance for the vCMA for enhanced security. You can replace the out-of-the-box certificate with your own, if needed. However, http->https redirection is currently not supported.

Other deployment considerations

  1. The vCMA server comes with a default userid/password. For security reasons, we strongly recommend that you change the root password.
  2. If you prefer, you can set a hostname or IP address for the appliance.
  3. Using standard Linux utilities, you can change the date and time in the appliance.
  4. You can also upgrade the hardware version and VMware Tools in the vCMA appliance following standard procedures.
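
For those items, the corresponding commands on the appliance console are the usual Linux ones: passwd to change the root password, and date to set the clock if NTP is unavailable. From a workstation, the new SSL listener can then be confirmed with a quick probe (curl’s -k option accepts the self-signed certificate; the hostname below is a placeholder and the /vim path may differ by version):

# passwd root
# date -s "2011-03-17 09:00"
# curl -kI https://vcma.example.lab/vim/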

SOLORI’s Take: This welcome change eliminates the additional kludge work previously necessary to secure the appliance. Using an HTTPS proxy was cumbersome and kludgey in its own right, and “hacking” the appliance was tricky and doomed to be reversed by the next appliance update. VMware’s move opens the door for more widespread use of vCMA and (hopefully) more interesting applications of it in the future.

In-the-Lab: NexentaStor vs. Grub

November 16, 2010

In this In-the-Lab segment we’re going to look at how to recover from a failed ZFS version update in case you’ve become ambitious with your NexentaStor installation after the last Short-Take on ZFS/ZPOOL versions. If you used the “root shell” to make those changes, chances are your grub is failing after reboot. If so, this blog can help, but before you read on, observe this necessary disclaimer:

NexentaStor is an appliance operating system, not a general purpose one. The accepted way to manage the system volume is through the NMC shell and NMV web interface. Using a “root shell” to configure the file system(s) is unsupported and may void your support agreement(s) and/or license(s).

That said, let’s assume that you updated the syspool filesystem and zpool to the latest versions using the “root shell” instead of the NMC (i.e. following a system update where zfs and zpool warnings declare that your pool and filesystems are too old, etc.) In such a case, the resulting syspool will not be bootable until you update grub (this happens automagically when you use the NMC commands.) When this happens, you’re greeted with the following boot prompt:

grub>

Grub is now telling you that it has no idea how to boot your NexentaStor OS. Chances are there are two things that will need to happen before your system boots again:

  1. Your boot archive will need updating, pointing to the latest checkpoint;
  2. Your master boot record (MBR) will need to have grub installed again.

We’ll update both in the same recovery session to save time (this assumes you know or have a rough idea about your intended boot checkpoint – it is usually the highest numbered rootfs-nmu-NNN checkpoint, where NNN is a three digit number.) The first step is to load the recovery console. This could have been done from the “Safe Mode” boot menu option if grub was still active. However, since grub is blown-away, we’ll boot from the latest NexentaStor CD and select the recovery option from the menu.

Import the syspool

Then, we login as “root” (empty password.) From this “root shell” we can import the existing (disks connected to active controllers) syspool with the following command:

# zpool import -f syspool

Note the use of the “-f” flag to force the import of the pool. Chances are, the pool will not have been “destroyed” or “exported,” so zpool will “think” the pool belongs to another system (your boot system, not the rescue system). As a precaution, zpool assumes that the pool is still “in use” by the “other system” and the import is rejected to avoid “importing an imported pool,” which would be completely catastrophic.

With the syspool imported, we need to mount the correct (latest) checkpointed filesystem as our boot reference for grub, destroy the local zpool.cache file (in case the pool disks have been moved but are all still present), update the boot archive to correspond to the mounted checkpoint and install grub to the disk(s) in the pool (i.e. each mirror member).

List the Checkpoints

# zfs list -r syspool

From the resulting list, we’ll pick our highest-numbered checkpoint; for the sake of this article let’s say it’s “rootfs-nmu-013” and mount it.

Mount the Checkpoint

# mkdir /tmp/syspool
# mount -F zfs syspool/rootfs-nmu-013 /tmp/syspool

Remove the ZPool Cache File

# cd /tmp/syspool/etc/zfs
# rm -f zpool.cache

Update the Boot Archive

# bootadm update-archive -R /tmp/syspool

Determine the Active Disks

# zpool status syspool

For the sake of this article, let’s say the syspool was a three-way mirror and the zpool status returned the following:

  pool: syspool
 state: ONLINE
  scan: resilvered 8.64M in 0h0m with 0 errors on Tue Nov 16 12:34:40 2010
config:
        NAME           STATE     READ WRITE CKSUM
        syspool        ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            c6t13d0s0  ONLINE       0     0     0
            c6t14d0s0  ONLINE       0     0     0
            c6t15d0s0  ONLINE       0     0     0

errors: No known data errors

This enumerates the three-disk mirror as being composed of disks/slices c6t13d0s0, c6t14d0s0 and c6t15d0s0. We’ll use that information for the grub installation.

Install Grub to Each Mirror Disk

# cd /tmp/syspool/boot/grub
# installgrub -f -m stage[12] /dev/rdsk/c6t13d0s0
# installgrub -f -m stage[12] /dev/rdsk/c6t14d0s0
# installgrub -f -m stage[12] /dev/rdsk/c6t15d0s0
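
(The “stage[12]” argument is simply a shell glob that expands to the stage1 and stage2 files in the current directory.) Before unmounting, it doesn’t hurt to confirm that the grub menu on the mounted checkpoint lists the boot entries you expect; a hedged sketch, assuming the default menu location under the syspool:

# grep -A3 "^title" /tmp/syspool/boot/grub/menu.lst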

Unmount and Reboot

# umount /tmp/syspool
# sync
# reboot

Now, the system should be restored to a bootable configuration based on the selected system checkpoint. A similar procedure can be found on Nexenta’s site when using the “Safe Mode” boot option. If you follow that process, you’ll quickly encounter an error – likely intentional and meant to elicit a call to support for help. See if you can spot the step…

Short-Take: ZFS and ZPOOL Versions

November 15, 2010

As features are added to ZFS, the ZFS (filesystem) code may change and/or the underlying ZFS pool code may change. When features are added, older versions of ZFS/ZPOOL will not be able to take advantage of these new features until the ZFS filesystem and/or pool has been upgraded first.

Since ZFS filesystems exist inside of ZFS pools, the ZFS pool may need to be upgraded before a ZFS filesystem upgrade may take place. For instance, in ZFS pool version 24, support for system attributes was added to ZFS. To allow ZFS filesystems to take advantage of these new attributes, ZFS filesystem version 4 (or higher) is required. The proper order to upgrade would be to bring the ZFS pool up to at least version 24, and then upgrade the ZFS filesystem(s) as needed.
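
As a sketch of that ordering, both upgrade commands accept a -V flag to target a specific version (the pool and filesystem names here are hypothetical):

# zpool upgrade -V 24 mypool
# zfs upgrade -V 4 mypool/vol01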

Systems running a newer version of ZFS (pool or filesystem) may “understand” an earlier version. However, older versions of ZFS will not be able to access ZFS streams from newer versions of ZFS.
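
Before (or after) upgrading, the versions currently in play can be checked from the root shell; a brief sketch, using syspool as the example pool:

# zpool upgrade
# zpool get version syspool
# zfs get -r version syspool

Run with no arguments, “zpool upgrade” simply reports which imported pools are running an older pool version, while the two get commands report the version property of a specific pool and of its filesystems, recursively.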

For NexentaStor users, here are the current versions of the ZFS filesystem (see “zfs upgrade -v”):

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS filesystem version
 2   Enhanced directory entries
 3   Case insensitive and File system unique identifier (FUID)
 4   userquota, groupquota properties
 5   System attributes

For NexentaStor users, here are the current versions of the ZFS pool (see “zpool upgrade -v”):

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  Snapshot user holds
 19  Log device removal
 20  Compression using zle (zero-length encoding)
 21  Deduplication
 22  Received properties
 23  Slim ZIL
 24  System attributes
 25  Improved scrub stats
 26  Improved snapshot deletion performance

As versions change, upgrading the ZFS pool and filesystem is possible using the respective upgrade command. To upgrade all imported ZFS pools, issue the following command as root:

zpool upgrade -a

Likewise, to upgrade the ZFS filesystem(s) inside the pool and all child filesystems, issue the following command as root:

zfs upgrade -r -a

The new ZFS features available to these pool and filesystem version(s) will now be available to the upgraded pools/filesystems.

Quick-Take: ZFS and Early Disk Failure

September 17, 2010

Anyone who’s discussed storage with me knows that I “hate” desktop drives in storage arrays. When using SAS disks as a standard, that’s typically a non-issue because there’s generally no distinction between “desktop” and “server” disks in the SAS world. Therefore, you know I’m talking about the other “S” word – SATA. Here’s a tale of SATA woe that I’ve seen repeatedly cause problems for inexperienced ZFS’ers out there…

When volumes fail in ZFS, the “final” indicator is data corruption. Fortunately, ZFS checksums recognize corrupted data and can take action to correct and report the problem. But that’s like treating cancer only after you’ve experienced the symptoms. In fact, the failing disk will likely begin to “under-perform” well before actual “hard” errors show up as read, write or checksum errors in the ZFS pool. Depending on the reason for the “under-performance,” this can affect the performance of any controller, pool or enclosure that contains the disk.

Wait – did he say enclosure? Sure. Just like a bad NIC chattering on a loaded network, a bad SATA device can occupy enough of the available service time on a controller or SAS bus (i.e. JBOD enclosure) to cause a noticeable performance drop in otherwise “unrelated” ZFS pools. Hence, detecting such events is important. Here’s an example of an old WD SATA disk failing as viewed from the NexentaStor “Data Sets” GUI:

Disk statistics showing the failing drive: something is wrong with device c5t84d0...

Device c5t84d0 is having some serious problems. Its busy time is 7x higher than its counterparts’, and its average service time is 14x higher. As a member of a RAIDz group, the entire group is being held back by this “under-performing” member. From this snapshot, it might appear that the NexentaStor “web GUI” is giving us good information about the disk, but that assumption would not be correct. In fact, the “web GUI” only reports “real time” data while the disk is under load. In the case of a lightly loaded zpool, the statistics may not even be reported.

However, from the command shell, historic and real-time access to per-device performance is available. The output of “iostat -exn” shows the count of all errors for devices since the last time counters were reset, and average I/O loads for each:

Device statistics from 'iostat' show error and I/O history.
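
To watch these counters under load rather than as a one-time snapshot, the same command can be given a sampling interval; a quick sketch using 30-second samples:

# iostat -exn 30

The error columns (s/w, h/w, trn, tot) and the average service time (asvc_t) are the ones to watch for a device like c5t84d0.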

The output of iostat clearly shows this disk has serious hardware problems. It indicates hardware errors as well as transmission errors for the device recognized as ‘c5t84d0’, and the I/O statistics – chiefly read, write and average service time – implicate this disk as a performance problem for the associated RAIDz group. So, if the device is really failing, shouldn’t there be a log report of such an event? Yes, and here’s a snip from the message log showing the error:

SCSI error with ioc_status=0x8048 reported in /var/log/messages for the failing device.

However, in this case, the log is not “full” of messages of this sort. In fact, the error only showed up under the stress of an iozone benchmark (run from the NexentaStor ‘nmc’ console). I can (somewhat safely) conclude this to be a device failure, since at least one other disk in this group is of the same make, model and firmware revision as the culprit (and is not exhibiting the same errors). The interesting aspect of this “failure” is that it does not result in a read, write or checksum error for the associated zpool. Why? Because the device is only loosely coupled to the zpool as a constituent leaf device, and it also implies that the device errors were recoverable by either the drive or the device driver (mapping around a bad/hard error.)

Since these problems are being resolved at the device layer, the ZFS pool is “unaware” of the problem as you can see from the output of ‘zpool status’ for this volume:

zpool status output for the pool: problems with the disk device are as yet undetected at the zpool layer.

This doesn’t mean that the “consumers” of the zpool’s resources are “unaware” of the problem, as the disk error has manifested itself in the zpool as higher delays, lower I/O throughput and, subsequently, less pool bandwidth. In short, if the error is persistent under load, the drive has a correctable but catastrophic (to performance) problem and will need to be replaced. If, however, the error goes away, it is possible that the device driver has suitably corrected for the problem and the drive can stay in place.
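
If the errors do persist and the drive must come out, the swap itself is straightforward from the root shell; a hedged sketch with a hypothetical pool name and replacement device:

# zpool replace datapool c5t84d0 c5t90d0
# zpool status datapool

The second command is just for watching the resilver progress once the replacement disk is in place.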

SOLORI’s Take: How do we know if the drive needs to be replaced? Time will establish an error rate. In short, running the benchmark again and watching the error counters for the device will determine if the problem persists. Eventually, the errors will either go away or they won’t. For me, I’m hoping that the disk fails to give me an excuse to replace the whole pool with a new set of SATA “eco/green” disks for more lab play. Stay tuned…

SOLORI’s Take: In all of its flavors, 1.5Gbps, 3Gbps and 6Gbps, I find SATA drives inferior to “similarly” spec’d SAS for just about everything. In my experience, the worst SAS drives I’ve ever used have been more reliable than most of the SATA drives I’ve used. That doesn’t mean there are “no” good SATA drives, but it means that you really need to work within tighter boundaries when mixing vendors and models in SATA arrays. On top of that, the additional drive port and better typical sustained performance make SAS a clear winner over SATA (IMHO). The big exception to the rule is economy – especially where disk arrays are used for on-line backup – but that’s another discussion…

vSphere Client in Windows7

September 4, 2009

Until there is an updated release of the VMware vSphere Client, running the client on a Windows7 system will require a couple of tricks. While the basic process outlined in these notes accomplishes the task well, the use of additional “helper” batch files is not necessary. By adding the path to the “System.dll” library to your user’s environment, the application can be launched from the standard icon without further modification.

First, add the <runtime> element shown below (with developmentMode enabled) to the end of the “VpxClient.exe.config” file, just before the closing </configuration> tag. The end of your config file will then look something like this:

  <runtime>
    <developmentMode developerInstallation="true"/>
  </runtime>
</configuration>

Once the changes are made, save the “VpxClient.exe.config” file (if your workstation is secured, you may need “Administrator” privileges to save the file.) Next, copy the “System.dll” file from the “%WINDIR%\Microsoft.NET\Framework\v2.0.50727” folder on an XP/Vista machine to a newly created “lib” folder in the VpxClient’s directory. Now, you will need to update the user environment to reflect the path to “System.dll” to complete the “developer” hack.

To do this, right-click on the “Computer” menu item on the “Start Menu” and select “Properties.” In the “Control Panel Home” section, click on “Advanced system settings” to open the “System Properties” control panel. Now, click on the “Environment Variables…” button to open the Environment Variables control panel. If “DEVPATH” is already defined, simply add a semi-colon (“;”) to the existing path and add the path to your copied “System.dll” file (not including “System.dll”) to the existing path. If it does not exist, create a new variable called “DEVPATH” and enter the path string in the “Variable Value” field.

The “System Properties” and “Environment Variables” panels in Windows7.

The path begins with either %ProgramFiles% or %ProgramFiles(x86)%, depending on whether 32-bit or 64-bit Windows7 is installed, respectively. Once the path is entered into the environment and the “System.dll” file is in place, the vSphere Client will launch and run without additional modification. Remember to remove the DEVPATH modification from the environment when a Windows7-compatible vSphere Client is released.
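
For those who would rather script the change than click through the dialogs, the same variable can be set from an elevated command prompt with setx; a hedged sketch that assumes the typical default install location for the client and the newly created “lib” folder (adjust the path to match your system):

C:\> setx DEVPATH "%ProgramFiles(x86)%\VMware\Infrastructure\Virtual Infrastructure Client\Launcher\lib"

Note that setx does not affect the current command prompt session; open a new one (or log off and back on) before launching the client.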

Note that this workaround is not supported by VMware and that the use of the DEVPATH variable could have unforeseen consequences in your specific computing environment. Therefore, appropriate considerations should be made prior to implementing this “hack.” While SOLORI presents this information “AS-IS” without warranty of any kind, we can report that this workaround is effective for our Windows7 workstations, however …