Posts Tagged ‘update manager’

Quick-Take: Removable Media and Update Manager Host Remediation

January 31, 2013

Thanks to a spate of upgrades to vSphere 5.1, I recently (re)discovered the following inconvenient result when applying an update to a DRS cluster from Update Manager (5.1.0.13071, using vCenter Server Appliance 5.1.0 build 947673):

Remediate entity ‘vm11.solori.labs’  Host has VMs ‘View-PSG’ , vUM5 with connected removable media devices. This prevents putting the host into maintenance mode. Disconnect the removable devices and try again.

Immediately I thought: “Great! I left a host-only ISO connected to these VMs.” However, that assumption was as flawed as Update Manager’s assumption that the workloads cannot be vMotion’d without disconnecting the removable media. In fact, the removable media in question was connected to a shared ISO repository available to all hosts in the cluster. Still, the blame was mine and not Update Manager’s: I had forgotten that Update Manager’s default response to removable media is to abort the process. Since cluster remediation is a powerful feature made possible by Distributed Resource Scheduler (DRS) in Enterprise (and above) vSphere editions, and one that may be new to many (especially uplifted “Advanced AK” users), it seemed like something worth reviewing and blogging about.

Why is this a big deal?

More to the point, why does this seem to run contrary to a “common sense” response?

First, the manual process for remediating a host in a DRS cluster would include the following steps (a scripted equivalent is sketched just after the list):

  1. Applying “Maintenance Mode” to the host,
  2. Selecting the appropriate action for “powered-off and suspended” workloads, and
  3. Allowing DRS to choose placement and finally vMotion those workloads to an alternate host.
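
For those who prefer to script this manual flow rather than click through it, the same request can be made through the vSphere API. The following is a minimal sketch using pyVmomi; the vCenter address, credentials, and host name are placeholders, and certificate verification is disabled for brevity:

# Minimal sketch: request maintenance mode for a DRS-cluster host through vCenter.
# pyVmomi is assumed; the vCenter address, credentials, and host name are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skip certificate verification
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Locate the host object by name (e.g. 'vm11.solori.labs').
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "vm11.solori.labs")

# Ask vCenter to put the host into maintenance mode; with DRS in fully
# automated mode the cluster should vMotion running workloads off the host,
# and the flag below also evacuates powered-off/suspended VMs.
task = host.EnterMaintenanceMode_Task(timeout=0, evacuatePoweredOffVms=True)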

In the case of VMs with removable media attached, this set of actions will result in the workloads being vMotion’d (without warning or hesitation) so long as the other hosts in the cluster have access to the removable media source (i.e., shared storage, not “Host Device”). However, in the case of Update Manager remediation, the following are documented road blocks to a successful remediation (without administrative override):

  1. A CD/DVD drive is attached (any method),
  2. A floppy drive is attached (any method),
  3. HA admission control prevents migration of the virtual machine,
  4. DPM is enabled on the cluster,
  5. EVC is disabled on the cluster,
  6. DRS is disabled on the cluster (preventing migration),
  7. Fault Tolerance (FT) is enabled for a VM on a host in the cluster.

Therefore it is “by design” that a scheduled remediation would have failed – even if the VMs with attached removable media were eligible for vMotion. To assist in evaluating “obstacles to successful deferred remediation,” a cluster remediation report is available (see below).

Generating a remediation report prior to scheduling an Update Manager remediation.

In fact, the report will list all possible road blocks to remediation whether or not matching overrides are selected (potentially misleading, and certainly not useful for predicting the outcome of the remediation attempt). While this, too, is counterintuitive, it serves as a reminder of the show-stoppers to successful remediation. For the offending “removable media” override, the appropriate check-box can be found on the options page just prior to the remediation report:

Disabling removable media during Update Manager driven remediation.

The inclusion of this override allows Update Manager to slog through the remediation without regard to the attached status of removable media. Likewise, the other remediation overrides enable successful completion of the remediation process; these overrides are:

  1. Maintenance Mode Settings:
    1. VM Power State prior to remediation:  Do not change, Power off, Suspend
    2. Temporarily disable any removable media devices;
    3. Retry maintenance mode in case of failure (delay and attempts);
  2. Cluster Settings:
    1. Temporarily Disable Distributed Power Management (forces “sleeping” hosts to power-on prior to next steps in remediation);
    2. Temporarily Disable High Availability Admission Control (allows for host remediation to violate host-resource reservation margins);
    3. Temporarily Disable Fault Tolerance (FT) (administrators are admonished to remediate all cluster hosts in the same update cycle to maintain FT compatibility);
    4. Enable parallel remediation for hosts in cluster (will not violate DRS anti-affinity constraints);
      1. Automatically determine the maximum number of concurrently remediated hosts, or
      2. Limit the number of concurrent hosts (1-32);
    5. Migrate powered off and suspended virtual machines to other hosts in the cluster (helpful when a remediation leaves a host in an unserviceable condition);
  3. PXE Booted ESXi Host Settings:
    1. Allow installation of additional software on PXE booted ESXi 5.x hosts (requires the use of an updated PXE boot image – Update Manager will NOT reboot the PXE booted ESXi host.)

These settings are available at the time of remediation scheduling and as host/cluster defaults (Update Manager Admin View.)
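
As a quick pre-flight check, the question Update Manager asks can also be answered ahead of time by walking the inventory for connected CD/DVD or floppy devices. Below is a minimal sketch using pyVmomi; the vCenter address and credentials are placeholders, and it simply reports every VM with a connected removable device along with the host it currently runs on:

# Minimal pre-flight sketch: report VMs with a connected CD/DVD or floppy device
# before scheduling a remediation. pyVmomi is assumed; connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skip certificate verification
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
for vm in view.view:
    if vm.config is None:  # skip inaccessible or unconfigured VMs
        continue
    for dev in vm.config.hardware.device:
        removable = isinstance(dev, (vim.vm.device.VirtualCdrom,
                                     vim.vm.device.VirtualFloppy))
        if removable and dev.connectable and dev.connectable.connected:
            print("%s (on %s): %s is connected" %
                  (vm.name, vm.runtime.host.name, dev.deviceInfo.label))

Any VM this reports will trip Update Manager’s default behavior unless the removable-media override is selected at scheduling time.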

SOLORI’s Take: So while the remediation process is NOT as similar to the manual process as one might think, it can still be made to function accordingly (almost). There IS a big difference between disabling removable media and making vMotion-aware decisions about hosts. Perhaps VMware could take a few cycles to determine whether or not a removable media device is bound to the host (either as a Host Device or a local storage resource) and make a more intelligent decision about removable media.

vSphere already has the ability to identify point-resource dependencies; it would be nice to see this information more intelligently correlated where cluster management is concerned. Currently, instead of “asking” DRS for a dependency list, Update Manager just seems to ask the hosts, “do you have removable media plugged into any VMs?” – and if the answer is “yes,” it stops right there… Still, not very intuitive for a feature (DRS) that’s been around since Virtual Infrastructure 3 and vCenter 2.

Quick Take: VMware ESXi 5.0, Patch ESXi50-Update01

March 16, 2012

VMware releases ESXi 5.0 Complete Update 1 for vSphere 5. An important change for this release is the inclusion of general and security-only image profiles:

Starting with ESXi 5.0 Update 1, VMware patch and update releases contain general and security-only image profiles. Security-only image profiles are applicable to new security fixes only. No new bug fixes are included, but bug fixes from earlier patch/update releases are included.

The general release image profile supersedes the security-only profile. Application of the general release image profile applies to new security and bug fixes.

The security-only image profiles are identified with the additional “s” identifier in the image profile name.

Just a few of the more interesting bugs fixed in this release:

PR 712342: Cannot assign VMware vSphere Hypervisor license key to an ESXi host with pRAM greater than 32GB

PR 719895: Unable to add a USB device to a virtual machine (KB 1039359).

PR 721191: Modifying snapshots using the commands vim-cmd vmsvc/snapshot.remove or vim-cmd vmsvc/snapshot.revert will fail when applied against certain snapshot tree structures.

This issue is resolved in this release. Now a unique identifier, snapshotId, is created for every snapshot associated to a virtual machine. You can get the snapshotId by running the command vim-cmd vmsvc/snapshot.get <vmid>. You can use the following new syntax when working with the same commands:

Revert to snapshot: vim-cmd vmsvc/snapshot.revert <vmid> <snapshotId> [suppressPowerOff/suppressPowerOn]
Remove a snapshot: vim-cmd vmsvc/snapshot.remove <vmid> <snapshotId>
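
If you are working through the vSphere API rather than the ESXi shell, the same kind of unambiguous targeting can be done by walking the VM’s snapshot tree, where each node carries a numeric id alongside its name. Below is a minimal, hedged pyVmomi sketch; the connection details, VM name (“vUM5”), and snapshot name (“pre-update”) are placeholders:

# Minimal sketch: find one specific snapshot in a VM's snapshot tree and remove it
# through the vSphere API, as an alternative to the vim-cmd syntax above.
# pyVmomi is assumed; connection details, VM name, and snapshot name are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skip certificate verification
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "vUM5")

def find_snapshot(nodes, wanted):
    # Depth-first walk of the snapshot tree; each node carries a name, a numeric
    # id, and a reference to the snapshot managed object itself.
    for node in nodes:
        if node.name == wanted:
            return node.snapshot
        found = find_snapshot(node.childSnapshotList, wanted)
        if found:
            return found
    return None

if vm.snapshot:
    snap = find_snapshot(vm.snapshot.rootSnapshotList, "pre-update")
    if snap:
        snap.RemoveSnapshot_Task(removeChildren=False)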

PR 724376: Data corruption might occur if you copy large amounts of data (more than 1GB) from a 64-bit Windows virtual machine to a USB storage device.

PR 725429: Applying a host profile to an in-compliance host causes non-compliance (KB 2003472).

PR 728257: On a pair of HA storage controllers configured for redundancy, if you take over one controller, the datastores that reside on LUNs on the taken over controller might show inactive and remain inactive until you perform a rescan manually.

PR 734366: Purple diagnostic screen with vShield or third-party vSphere integrated firewall products (KB 2004893)

PR 734707: Virtual machines on a vNetwork Distributed Switch (vDS) configured with VLANs might lose network connectivity upon boot if you configure Private VLANs on the vDS. However, disconnecting and reconnecting the uplink solves the problem. This issue has been observed on be2net NICs and ixgbe vNICs.

PR 742242: XCOPY commands that VAAI sends to the source storage device might fail. By default, XCOPY commands should be sent to the destination storage device in accordance with VAAI specification.

PR 750460: Adding and removing a physical NIC might cause an ESXi host to fail with a purple screen. The purple diagnostic screen displays an error message similar to the following:

NDiscVlanCheck (data=0x2d16, timestamp=<value optimized out>) at bora/vmkernel/public/list.h:386

PR 751803: When disks larger than 256GB are protected using vSphere Replication (VR), any operation that causes an internal restart of the virtual disk device causes the disk to complete a full sync. Internal restarts are caused by a number of conditions including any time:

  • A virtual machine is restarted
  • A virtual machine is vMotioned
  • A virtual machine is reconfigured
  • A snapshot is taken of the virtual machine
  • Replication is paused and resumed

PR 754047: When you upgrade VMware Tools, the upgrade might fail because some Linux distributions periodically delete old files and folders in /tmp. The VMware Tools upgrade requires this directory in /tmp for auto upgrades.

PR 766179: ESXi host installed on a server with more than 8 NUMA nodes fails and displays a purple screen.

PR 769677: If you perform a vMotion operation to an ESXi host on which the boot-time option “pageSharing” is disabled, the ESXi host might fail with a purple screen.

Disabling pageSharing severely affects performance of the ESXi host. Because pageSharing should never be disabled, starting with this release, the “pageSharing” configuration option is removed.

PR 773187: On an ESXi host, if you configure the Network I/O Control (NetIOC) to set the Host Limit for Virtual Machine Traffic to a value higher than 2000Mbps, the bandwidth limit is not enforced.

PR 773769: An ESXi host halts and displays a purple diagnostic screen when using Network I/O Control with a Network Adapter that does not support VLAN Offload (KB 2011474).

PR 788962: When an ESXi host encounters a corrupt VMFS volume, VMFS driver might leak memory causing VMFS heap exhaustion. This stops all VMFS operations causing orphaned virtual machines and missing datastores. vMotion operations might not work and attempts to start new virtual machines might fail with errors about missing files and memory exhaustion. This issue might affect all ESXi hosts that share the corrupt LUN and have running virtual machines on that LUN.

PR 789483: After you upgrade to ESXi 5.0 from ESXi 4.x, Windows 2000 Terminal Servers might perform poorly. The consoles of these virtual machines might stop responding and their CPU usage shows a constant 100%.

PR 789789: ESXi host might fail with a purple screen when a virtual machine connected to VMXNET 2 vNIC is powered on. The purple diagnostic screen displays an error message similar to the following:

0x412261b07ef8:[0x41803b730cf4]Vmxnet2VMKDevTxCoalesceTimeout@vmkernel#nover+0x2b stack: 0x412261b0
0x412261b07f48:[0x41803b76669f]Net_HaltCheck@vmkernel#nover+0xf6 stack: 0x412261b07f98

You might also observe an error message similar to the following written to VMkernel.log:

WARNING: Vmxnet2: 5720: failed to enable port 0x2000069 on vSwitch1: Limit exceeded^[[0m

SOLORI’s Take: Lions, tigers and bears – oh my! In all, I count seven (7) unique PSD bugs (listed in the full KB) along with some rather head-scratching gotchas.  Lots of reasons to keep your vSphere hosts current in this release to be sure… Use Update Manager or start your update journey here…