
Short-Take: Clock Ticking on VI 3.5 Updates, June 1 Deadline

May 26, 2011

If you’re still not quite ready to upgrade from VI 3.x to vSphere, time is running out for your ESX hosts to stay “current” inside of VI3 unless you act before June 1, 2011. If your VMware VI3 hosts have not been patched since November 2010, you are at risk of losing update/patching capabilities unless you apply ESX350-201012410-BG before the deadline. This patch ONLY addresses the expiring secure key on the ESX host, which will otherwise become invalid on June 1, 2011.

If you need to bring your hosts current (without upgrading to vSphere), the last full patch release from VMware for VI 3.5 addresses the following issues:

Enablement of Intel Xeon Processor 3400 Series – Support for the Intel Xeon processor 3400 series has been added. Support includes Enhanced VMotion capabilities. For additional information on previous processor families supported by Enhanced VMotion, see Enhanced VMotion Compatibility (EVC) processor support (KB 1003212).

Driver Update for Broadcom bnx2 Network Controller – The driver for bnx2 controllers has been upgraded to version 1.6.9. This driver supports bootcode upgrade on bnx2 chipsets and requires bmapilnx and lnxfwnx2 tools upgrade from Broadcom. This driver also adds support for Network Controller – Sideband Interface (NC-SI) for SOL (serial over LAN) applicable to Broadcom NetXtreme 5709 and 5716 chipsets.

Driver Update for LSI SCSI and SAS Controllers – The driver for LSI SCSI and SAS controllers is updated to version 2.06.74. This version of the driver is required to provide a better support for shared SAS environments.

Newly Supported Guest Operating Systems – Support for the following guest operating systems has been added specifically for this release:

For more complete information about supported guests included in this release, see the VMware Compatibility Guide:

  • Windows 7 Enterprise (32-bit and 64-bit)
  • Windows 7 Ultimate (32-bit and 64-bit)
  • Windows 7 Professional (32-bit and 64-bit)
  • Windows 7 Home Premium (32-bit and 64-bit)
  • Windows 2008 R2 Standard Edition (64-bit)
  • Windows 2008 R2 Enterprise Edition (64-bit)
  • Windows 2008 R2 Datacenter Edition (64-bit)
  • Windows 2008 R2 Web Server (64-bit)
  • Ubuntu Desktop 9.04 (32-bit and 64-bit)
  • Ubuntu Server 9.04 (32-bit and 64-bit)

Newly Supported Management Agents – See VMware ESX Server Supported Hardware Lifecycle Management Agents for current information on supported management agents.

Newly Supported Network Cards – This release of ESX Server supports HP NC375T (NetXen) PCI Express Quad Port Gigabit Server Adapter.

Newly Supported SATA Controllers – This release of ESX Server supports the Intel Ibex Peak SATA AHCI controller.

  • Some limitations apply in terms of support for SATA controllers. For more information, see SATA Controller Support in ESX 3.5. (KB 1008673)
  • Storing VMFS datastores on native SATA drives is not supported.

This patch release takes a roll-up approach that VMware describes this way:

Note: As part of the end of availability for some VMware Virtual Infrastructure product releases, the ESX 3.5 Update 5 upgrade package has been replaced in order to remove dependencies upon patches that will no longer be available for download. Hosts upgraded using the replacement package are equivalent to those upgraded using the older package, but patch bundles released before ESX 3.5 Update 5 will not be required during the upgrade process.


Short-Take: vSphere Multi-core Virtual Machines

November 8, 2010

Virtual machines were once relegated to the second-class status of single-core vCPU configurations. To get multiple processor threads, you had to add one “virtual CPU” for each thread. This approach, while functional, had potentially serious software licensing ramifications. The topic drew some attention on Jason Boche’s blog back in July 2010 with respect to vSphere 4.1.

With vSphere 4.0 Update 2 and vSphere 4.1, you have the option of using an advanced configuration setting to change the “virtual cores per socket,” allowing thread-count needs to have a lesser impact on OS and application licensing. The advanced configuration parameter is “cpuid.coresPerSocket” (default 1); it acts as a divisor for the virtual hardware setting “CPUs,” which must be an integral multiple of the “cpuid.coresPerSocket” value. More on the specifics and limitations of this setting can be found in “Chapter 7, Configuring Virtual Machines” (page 79) of the vSphere Virtual Machine Administrator Guide for vSphere 4.1. [Note: See also VMware KB 1010184.]

The value of “cpuid.coresPerSocket” is effectively ignored when “CPUs” is set to 1. If “cpuid.coresPerSocket” is an imperfect divisor of “CPUs,” the power-on operation will fail with the following message in the VI Client’s task history:


Virtual core count is imperfect divisor of CPUs

If virtual machine logging is enabled, the following messages (only relevant items listed) will appear in the VM’s log (Note: CPUs = 3, cpuid.coresPerSocket = 2):

Nov 08 14:17:43.676: vmx| DICT         virtualHW.version = 7
Nov 08 14:17:43.677: vmx| DICT                  numvcpus = 3
Nov 08 14:17:43.677: vmx| DICT      cpuid.coresPerSocket = 2
Nov 08 14:17:43.727: vmx| VMMon_ConfigMemSched: vmmon.numVCPUs=3
Nov 08 14:17:43.799: vmx| NumVCPUs 3
Nov 08 14:17:44.008: vmx| Msg_Post: Error
Nov 08 14:17:44.008: vmx| [msg.cpuid.asymmetricalCores] The number of VCPUs is not a multiple of the number of cores per socket of your VM, so it cannot be powered on.----------------------------------------
Nov 08 14:17:44.033: vmx| Module CPUID power on failed.

While the configuration guide clearly states (as Jason Boche rightly pointed out in his blog):

The number of virtual CPUs must be divisible by the number of cores per socket. The coresPerSocket setting must be a power of two.

– Virtual Machine Configuration Guide, vSphere 4.1

We’ve found that “cpuid.coresPerSocket” simply needs to be a perfect divisor of the “CPUs” value. This tracks much better with prior versions of vSphere, where “odd-numbered” socket/CPU counts were allowed; odd numbers of cores per socket therefore work as well, provided the division of CPUs by coresPerSocket is integral. Suffice it to say, if the manual says “power of two” (1, 2, 4, 8, etc.), then those are likely the only “supported” configurations available. Any other configuration that “works” (i.e. 3, 5, 6, 7, etc.) will likely be unsupported by VMware in the event of a problem.
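The rules above reduce to two simple checks. As an illustrative sketch (plain Python, not VMware code), here is how the power-on validation and the “supported vs. merely working” distinction play out; the function name and return strings are our own invention:

```python
def power_on_check(cpus: int, cores_per_socket: int) -> str:
    """Approximate the vSphere 4.x power-on validation described above.

    'cpus' is the virtual hardware "CPUs" setting; 'cores_per_socket'
    is the cpuid.coresPerSocket advanced parameter.
    """
    if cpus % cores_per_socket != 0:
        # Matches the observed behavior: an imperfect divisor refuses to power on.
        return "fail: virtual core count is imperfect divisor of CPUs"
    sockets = cpus // cores_per_socket
    # The configuration guide only claims support for power-of-two core counts.
    supported = (cores_per_socket & (cores_per_socket - 1)) == 0
    return f"{sockets} socket(s) x {cores_per_socket} core(s)" + (
        "" if supported else " (works, but likely unsupported)")

print(power_on_check(3, 2))  # fail: imperfect divisor, as in the log above
print(power_on_check(6, 3))  # 2 sockets x 3 cores: works, but not power-of-two
```

The power-of-two test uses the usual bit trick: a positive integer n is a power of two exactly when n & (n - 1) == 0.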

That said, odd values of “cpuid.coresPerSocket” do work just fine. Since SOLORI supports a large number of AMD-only ecosystems, it is useful to test configurations that match the physical core count of the underlying processors (i.e. 2, 3, 4, 6, 8, 12). For instance, we were able to create a single, multi-core virtual CPU with 3 cores (CPUs = 3, cpuid.coresPerSocket = 3) and run Windows Server 2003 without incident:

Virtual Tri-core CPU

Windows Server 2003 with virtual "tri-core" CPU

It follows, then, that we were likewise able to run a 2P virtual machine with a total of 6-cores (3-per CPU) running the same installation of Windows Server 2003 (CPUs = 6, cpuid.coresPerSocket = 3):

2P Virtual Tri-core

Virtual Dual-processor (2P), Tri-core (six cores total)

Here are the relevant vmware log messages associated with this 2P, six total core virtual machine boot-up:

Nov 08 14:54:21.892: vmx| DICT         virtualHW.version = 7
Nov 08 14:54:21.893: vmx| DICT                  numvcpus = 6
Nov 08 14:54:21.893: vmx| DICT      cpuid.coresPerSocket = 3
Nov 08 14:54:21.944: vmx| VMMon_ConfigMemSched: vmmon.numVCPUs=6
Nov 08 14:54:22.009: vmx| NumVCPUs 6
Nov 08 14:54:22.278: vmx| VMX_PowerOn: ModuleTable_PowerOn = 1
Nov 08 14:54:22.279: vcpu-0| VMMon_Start: vcpu-0: worldID=530748
Nov 08 14:54:22.456: vcpu-1| VMMon_Start: vcpu-1: worldID=530749
Nov 08 14:54:22.487: vcpu-2| VMMon_Start: vcpu-2: worldID=530750
Nov 08 14:54:22.489: vcpu-3| VMMon_Start: vcpu-3: worldID=530751
Nov 08 14:54:22.489: vcpu-4| VMMon_Start: vcpu-4: worldID=530752
Nov 08 14:54:22.491: vcpu-5| VMMon_Start: vcpu-5: worldID=530753
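Since each VMMon_Start line corresponds to one monitor thread, the VM’s topology can be recovered from a log excerpt like the one above by simple counting. A small sketch in Python (the function name is ours; the log lines are abbreviated from the excerpt above):

```python
import re

LOG = """\
Nov 08 14:54:21.893: vmx| DICT                  numvcpus = 6
Nov 08 14:54:21.893: vmx| DICT      cpuid.coresPerSocket = 3
Nov 08 14:54:22.279: vcpu-0| VMMon_Start: vcpu-0: worldID=530748
Nov 08 14:54:22.456: vcpu-1| VMMon_Start: vcpu-1: worldID=530749
Nov 08 14:54:22.487: vcpu-2| VMMon_Start: vcpu-2: worldID=530750
Nov 08 14:54:22.489: vcpu-3| VMMon_Start: vcpu-3: worldID=530751
Nov 08 14:54:22.489: vcpu-4| VMMon_Start: vcpu-4: worldID=530752
Nov 08 14:54:22.491: vcpu-5| VMMon_Start: vcpu-5: worldID=530753
"""

def topology_from_log(log: str) -> dict:
    # One VMMon_Start line is emitted per monitor thread, i.e. per virtual core.
    threads = len(re.findall(r"VMMon_Start", log))
    cores = int(re.search(r"cpuid\.coresPerSocket = (\d+)", log).group(1))
    return {"vcpus": threads, "cores_per_socket": cores,
            "sockets": threads // cores}

print(topology_from_log(LOG))  # {'vcpus': 6, 'cores_per_socket': 3, 'sockets': 2}
```

For the 2P tri-core example, this yields 6 vCPUs across 2 virtual sockets, matching the VI Client configuration.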

It’s clear from the log that each virtual core spawns a new virtual machine monitor thread within the VMkernel. Confirming the distribution of cores from the OS perspective is somewhat nebulous due to the mismatch between the CPU’s ID (which follows the physical CPU on the ESX host) and the “arbitrary” configuration set through the VI Client. CPU-z shows how this can be confusing:

CPU-z output, 1 of 2

CPU#1 as described by CPU-z

CPU-z CPU 2 of 2

CPU#2 as described by CPU-z

Note that CPU-z identifies the first 4 cores with what it calls “Processor #1” and the remaining 2 cores with “Processor #2” – this split appears arbitrary and stems from CPU-z’s “knowledge” of the physical CPU layout. In (virtual) reality, this assessment by CPU-z is incorrect in terms of cores per CPU; however, it does properly demonstrate the existence of two (virtual) CPUs. Here’s the same VM with a “cpuid.coresPerSocket” of 6 (again, not 1, 2, 4 or 8 as supported):

Single 6-core (virtual) CPU

CPU-z demonstrating a 1P, six-core virtual CPU

Note again that CPU-z correctly identifies the underlying physical CPU as an Opteron 2376 (2.3GHz quad-core) but shows 6 cores, 6 threads as configured through VMware. Note also that the “grayed-out” selection for “Processor #1” demonstrates that a single processor is enumerated in virtual hardware. [Note: VMware’s KB 1030067 demonstrates accepted ways of verifying cores per socket in a VM.]

How does this help with per-CPU licensing in a virtual world? It effectively evens the playing field between physical and virtual configurations. In the past (VI3 and early vSphere 4), multiple virtual threads were only possible through the use of additional virtual sockets. This paradigm did not track with OS licensing and CPU-socket-aware application licensing, since the OS/applications would recognize the additional threads as CPU sockets in excess of the license count.

With virtual cores, the underlying CPU configuration (2P, 12 total cores, etc.) can be emulated to the virtual machine layer and deliver thread-count parity to the virtual machine. Since most per-CPU licenses speak to the physical hardware layer, this allows for parity between the ESX host CPU count and the virtual machine CPU count, regardless of the number of physical cores.

Also, in NUMA systems where core/socket/memory affinity is a potential performance issue, addressing physical/virtual parity is potentially important. This could have performance implications for AMD 2400/6100 and Intel 5600 systems where 6 and 12 cores/threads are delivered per physical CPU socket.