Archive for the ‘Windows Server 2008’ Category

Microsoft Update Kills vSphere Client

June 11, 2010

Got a problem running vSphere Client today? Seeing the following pop-up when trying to access your VMware stack?

Error parsing the server...

Login doesn’t really continue; instead, it ends with the following error:

The type initializer for...

Your environment has not been hacked! It’s a problem with your most recent Windows Update, which introduces a .NET exception that your “old” version of the VMware vSphere Client can’t handle. While you can uninstall the offending patch(es) to resolve the problem, the best remedy is to log in to VMware’s site and download the latest vSphere Client (VMware KB 1022611).

By the way, if your vSphere Client is old enough to be affected (prior to Update 1), you might need to scan your vSphere environment for updates too. If you have SnS, run over to VMware’s download page for vSphere and get the updated packages, starting with the vSphere Client: you can find the installable Client package with the vCenter Server Update 2 downloads.

ESXi Patches fix VSS, NTP and VMkernel

June 3, 2010

Two patches were made available on May 27, 2010 for ESXi 4.0 to fix certain bugs and security vulnerabilities in the platform. These patches are identified as ESXi400-201005401-SG and ESXi400-201005402-BG.

The first patch is security related and requires a reboot of the ESXi host:

This patch fixes a security issue. The updated NTP daemon fixes a flaw in the way it handled certain malformed NTP packets. The NTP daemon logged information about all such packets and replied with a NTP packet that was treated as malformed when received by another ntpd. A remote attacker could use this flaw to create an NTP packet reply loop between two ntpd servers through a malformed packet with a spoofed source IP address and port, causing ntpd on those servers to use excessive amounts of CPU time and fill disk space with log messages. The Common Vulnerabilities and Exposures Project (cve.mitre.org) has assigned the name CVE-2009-3563 to this issue.

ESXi 4.0 hosts might stop responding when interrupts are shared between VMkernel and service console. You might also observe the following additional symptoms:

  • Network pings to the ESXi hosts might fail.
  • Baseboard management controllers (BMC) such as HP Integrated Lights-Out (iLO) console might appear to be in a non-responsive state.

– VMware KB Article, 1021041

An interesting note is the reference to a service console in ESXi: the sharing of interrupts between drivers and the service console has long been a problem in ESX, but ESXi has no service console…

The second patch does not require a reboot, although it includes an update to VMware Tools that could impact uptime on affected virtual machines (Windows Server 2008 R2 and Windows 7). The KB article says this about the patch:

The VMware Snapshot Provider service is not listed in the Services panel. The quiesced snapshots do not use VMware Tools VSS components in Windows Server 2008 R2 or Windows 7 operating systems. This issue is seen when the user or backup software performs a quiesced snapshot on virtual machines on ESXi 4.0 hosts. This patch fixes the issue.

– VMware KB Article, 1021042

Since VSS quiescence is at issue here, DR snapshots and backups relying on VMware Data Recovery may be unreliable without the new VMware Tools installed. If your systems rely on VMware Data Recovery APIs for backup, this patch should be considered mandatory.
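
For reference, a quick way to check whether these bulletins are already present on a host is the vSphere CLI’s vihostupdate utility. A minimal sketch, assuming vSphere CLI 4.0 and a reachable host (the host name below is a placeholder):

    # list the bulletins currently installed on the host
    vihostupdate --server esxi01.example.com --username root --query

Look for ESXi400-201005401-SG and ESXi400-201005402-BG in the output; if they are absent, the downloaded bundle can be applied with the --install and --bundle options (remember that the -SG bulletin requires a host reboot).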

vSphere, Hardware Version 7 and Hot Plug

December 5, 2009

VMware’s vSphere added hot plug features in hardware version 7 (first introduced in VMware Workstation 6.5) that were not available in the earlier version 4 virtual hardware. Virtual hardware version 7 adds the following new features to VMware virtual machines:

  • LSI SAS virtual device – provides support for Windows Server 2008 fail-over cluster configurations
  • Paravirtual SCSI (PVSCSI) devices – recently updated to allow booting, these can provide higher performance (greater throughput and lower CPU utilization) than the standard virtual SCSI adapter, especially in SAN environments where I/O-intensive applications are used. Currently supported in Windows Server 2003/2008 and Red Hat Enterprise Linux 5 – although any version of Linux could be modified to support PVSCSI.
  • IDE virtual device – useful for older OSes that don’t support SCSI drivers
  • VMXNET 3 – next-generation VMXNET device with enhanced performance and enhanced networking features.
  • Hot plug virtual devices, memory and CPU – supports hot add/remove of virtual devices, memory and CPU for supported OSes.

While the “upgrade” process from version 4 to version 7 is well known, some of the side effects are not well publicised. The most obvious change after the migration from version 4 to version 7 is the effect hot plug has on the PCI bus adapters – some are now hot plug by default, including the network adapters!

Safe to remove network adapters. Really?

Note that the above example also demonstrates that the updated hardware re-enumerates the network adapters (see #3 and #4) because they have moved to a new PCI bus – one that supports hot plug. Removing the “missing” devices requires a trip to Device Manager (set devmgr_show_nonpresent_devices=1 in your shell environment first). This hot plug PCI bus also allows an administrator to mistakenly remove a device from service – potentially disconnecting tier 1 services from operations (totally by accident, of course).
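
For the Device Manager clean-up, the usual approach from an elevated command prompt is sketched below (standard Windows commands; after Device Manager opens, enable View > Show hidden devices and uninstall the greyed-out adapters):

    rem expose non-present ("ghost") devices to Device Manager for this session
    set devmgr_show_nonpresent_devices=1
    rem launch Device Manager from the same shell so it inherits the variable
    start devmgmt.msc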

Devices that can be added while the VM runs with hardware version 4

In virtual hardware version 4, only SCSI devices and hard disks could be added to a running virtual machine. Now, with hardware version 7, additional devices (USB and Ethernet) are available for hot add.

Devices that can be added while the VM runs with hardware version 7

You can change memory and CPU on the fly too, if the guest OS supports that feature and hot add is enabled in the virtual machine properties before the VM is powered on:

CPU and Memory Hot Plug Properties

The hot plug NIC issue isn’t discussed in the documentation, but Carlo Costanzo at VMwareInfo.com passes on Chris Hahn’s great tip for disabling hot plug behaviour in his blog post, complete with visual aids. The key is to add a new “Advanced Configuration Parameter” to the virtual machine configuration: the parameter is called “devices.hotplug” and its value should be set to “false.” Adding this parameter requires the virtual machine to be turned off, so it is currently an off-line fix.
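
As a sketch, the resulting setting is a single line, added either through the vSphere Client’s Configuration Parameters dialog or directly to the powered-off VM’s .vmx file:

    devices.hotplug = "false"

Once the VM is powered back on, the virtual NICs should no longer be offered in the guest’s “Safely Remove Hardware” list.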

vSphere 4, Update 1 and ESXi

November 30, 2009

On November 19, 2009 VMware released Update 1 for vSphere 4.0 which, among other bug fixes and errata, adds the following new features:

  • ESX/ESXi
    • VMware View 4.0 support (previously unsupported)
    • Windows 7 and Windows 2008 R2 support (previously “experimental”) – guest customizations now supported
    • Enhanced Clustering support for Microsoft Windows – adds support for VMware HA and DRS by allowing HA/DRS to be disabled per MSCS VM instead of per ESX host
    • Enhanced VMware Paravirtualized SCSI support (pvSCSI boot disks now supported in Windows 2003/2008)
    • Improved vNetwork Distributed Switch performance
    • Increased vCPU per Core limit (raised from 20 to 25)
    • Intel Xeon 3400 series support (uni-processor variant of the Nehalem)
  • vCenter Server
    • Support for IBM DB2 (Enterprise, Workgroup and Express 9, Express C)
    • Windows 7 and Windows 2008 R2 support (previously “experimental”) – guest customizations now supported
  • vCenter Update Manager
    • Does not support IBM DB2
    • Still no scan or remediate for Windows Server 2003 SP2/R2 64-bit, Windows Server 2008 or Windows 7
  • vSphere Client
  • vSphere Command-Line Interface
    • Allows the use of comma-separated bulletins with the --bundle option in “vihostupdate” (see the sketch below)
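
As a sketch of the vihostupdate enhancement above, assuming two locally downloaded patch bundles (the host name and file names are placeholders):

    # apply two bundles in a single pass using the new comma-separated syntax
    vihostupdate --server esxi01.example.com --username root --install --bundle bundle1.zip,bundle2.zip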

Authorized VMware users can download the necessary updates for vSphere Update 1 directly from VMware. For ESX and ESXi, updates can be managed and installed from vCenter Update Manager within the vSphere Client. In addition to the normal backup procedure and the steps recommended by VMware, the following observations may be helpful to you:

  • DRS/HA cluster members CAN be updated auto-magically; however, we observed very low end-to-end success rates in our testing lab. We recommend the following steps (a command-line sketch follows this list):
    • Manually enter maintenance mode for the ESXi server
    • Manually stage/remediate the patches to avoid conflicts
    • Manually re-boot ESXi servers if they do not reboot on their own
    • Re-scan patches when re-boot is complete, to check/confirm upgrade success
    • Manually recover from maintenance mode and confirm HA host configuration
  • For the vSphere Client on Windows 7, completely remove the “hacked” version and clean-install the latest version (download from the updated ESX/ESXi server(s))
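
Where the automatic process stalls, the manual sequence above can be approximated from the vSphere CLI. A minimal sketch, assuming the vSphere CLI 4.0 vicfg-hostops and vihostupdate utilities (the host name is a placeholder):

    # put the host into maintenance mode
    vicfg-hostops --server esxi01.example.com --operation enter
    # stage/apply the patches (see the vihostupdate example above), then reboot if needed
    vicfg-hostops --server esxi01.example.com --operation reboot
    # after the reboot, confirm the bulletins installed, then leave maintenance mode
    vihostupdate --server esxi01.example.com --query
    vicfg-hostops --server esxi01.example.com --operation exit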

SOLORI’s Notes: When upgrading ESXi “auto-magically” we experienced the following failures and unwanted behavior:

  • Update Manager failed to stage pending updates and upgrades correctly, resulting in a “time-out” failure; however, updates were applied properly after a manual reboot.
  • DRS/DPM conflicts with upgrade process:
    • Inadequate time given for servers to recover from sleep mode
    • Hosts sent to sleep while updates were being evaluated, causing DRS to hold maintenance mode until sleeping hosts awakened, resulting in a failed/timed-out update process
  • Off-line VMs and templates were not automatically migrated during the update (maintenance mode) process, causing extended unavailability of these assets during the update

Additional Notes: As to the question of which updates/patches are “rolled up” into this update, the release notes are very clear. However, for the sake of convenience, we repeat them here:

Patch Release ESX400-Update01 contains the following individual bulletins:

ESXi 4.0 Update 1 also contains all fixes in the following previously released bundles:

Patch Release ESX400-200906001
Patch Release ESX400-200907001
Patch Release ESX400-200909001

Quick Take: Red Hat and Microsoft Virtual Inter-Op

October 9, 2009

This week Red Hat and Microsoft announced support for certain of their OSes as guests on each other’s hypervisor platforms: Kernel-based Virtual Machine (KVM) and Hyper-V, respectively. This comes on the heels of Red Hat’s Enterprise Linux 5.4 announcement last month.

KVM is Red Hat’s new hypervisor, which leverages the Linux kernel to accelerate support for hardware and capabilities. It was Red Hat and AMD that first demonstrated live migration between AMD- and Intel-based hosts using KVM late last year – then somewhat of a “Holy Grail” of hypervisor feats. With nearly a year of improvements and integration into the Red Hat Enterprise Server and Fedora “free and open source” offerings, Red Hat is almost ready to strike out in a commercially viable way.

Microsoft now officially supports the following Red Hat guest operating systems in Hyper-V:

Red Hat Enterprise Linux 5.2, 5.3 and 5.4

Red Hat likewise officially supports the following Microsoft guest operating systems in KVM:

Windows Server 2003, 2008 and 2008 R2

The goal of the announcement and associated agreements between Red Hat and Microsoft was to enable a fully supported virtualization infrastructure for enterprises with Red Hat and Microsoft assets. As such, Microsoft and Red Hat are committed to supporting their respective products whether the hypervisor environment is all Red Hat, all Hyper-V or totally heterogeneous – mixing Red Hat KVM and Microsoft Hyper-V as necessary.

“With this announcement, Red Hat and Microsoft are ensuring their customers can resolve any issues related to Microsoft Windows on Red Hat Enterprise Virtualization, and Red Hat Enterprise Linux operating on Microsoft Hyper-V, regardless of whether the problem is related to the operating system or the virtualization implementation.”

Red Hat press release, October 7, 2009

Many in the industry cite Red Hat’s adoption of KVM as a step backwards [from Xen], requiring the re-development of a significant amount of support code. However, Red Hat’s use of libvirt as a common management API has allowed the change to happen much more rapidly than critics’ assumptions had allowed. At Red Hat Summit 2009, key Red Hat officials were keen to point out just how tasty their “dog food” is:

Tim Burke, Red Hat’s vice president of engineering, said that Red Hat already runs much of its own infrastructure, including mail servers and file servers, on KVM, and is working hard to promote KVM with key original equipment manufacturer partners and vendors.

And Red Hat CTO Brian Stevens pointed out in his Summit keynote that with KVM inside the Linux kernel, Red Hat customers will no longer have to choose which applications to virtualize; virtualization will be everywhere and the tools to manage applications will be the same as those used to manage virtualized guests.

Xen vs. KVM, by Pam Derringer, SearchDataCenter.com

For system integrators and virtual infrastructure practices, Red Hat’s play is creating opportunities for differentiation. With a focus on light-weight, high-performance, I/O-driven virtualization applications and no need to support years-old established processes that are dragging on Xen and VMware, KVM stands to leap-frog the competition in the short term.

SOLORI’s Take: This news is good for Red Hat and Microsoft customers alike. Indeed, it shows that Microsoft realizes that its licenses are being sold into the enterprise whether or not they run on physical hardware. With 20+:1 consolidation ratios now common, that represents a 5:1 license-to-hardware sale for Microsoft, regardless of the hypervisor. With KVM’s demonstrated CPU-agnostic migration capabilities, this opens the door to an even more diverse virtualization infrastructure than ever before.

On the Red Hat side, it demonstrates how rapidly Red Hat has matured its offering following the shift to KVM earlier this year. While KVM is new to Red Hat, it is not new to Linux or to aggressive early adopters, having been part of the Linux kernel since release 2.6.20 in early 2007. With support already in active projects like ConVirt (VM life cycle management), OpenNebula (cloud administration tools), Ganeti, and Enomaly’s Elastic Computing Platform, the game of catch-up for Red Hat and KVM is very likely to be a short one.

Quick Take: Licensing Benefits of Virtualization

March 17, 2009

For some, the real licensing benefit of virtualization remains hidden, obscured by some rather nebulous language in end-user licenses from Microsoft and others.

Steve Kaplan, over at DABCC, has a brief and informative article on how licensing affects the deployment costs of virtualized Microsoft products – sometimes offsetting the cost of the hypervisor in a VMware environment, for instance.

SOLORI’s 1st take: the virtual data center has new ways to increase costs with equal or better offsets to speed ROI – especially where new initiatives are concerned. When in doubt, talk to your software vendor and press for clear information about implementation licensing costs.

SOLORI’s 2nd take: Steve’s report relies on Gartner’s evaluation, which is based on outdated Microsoft policies. For instance, Server 2003 R2 is NOT “the only edition that will allow the customer to run one instance in a physical operating system (OS) environment and up to four instances in virtual OS environments for one license fee.” This also applies to Server 2008… (see Microsoft links).

Check out Steve’s evaluation here. Also, see Microsoft’s updated policy here and their current Server 2003 policies here.

SBS 2008 Panics, Needs IPv6

March 4, 2009

Remember how you were told to disable all unused applications and protocols when securing a compute environment? If you’ve been in networking for years – like I have – it’s almost a reflex action. This is also more recently codified in PCI/DSS Section 2.2.2, right? It also seems like a really basic, logical approach. Apparently Microsoft doesn’t think so. Apparently, there is a “somewhat artificial albeit deeply ingrained” dependency on IPv6 in Windows Server 2008.

2.2.2 Disable all unnecessary and insecure services and protocols (services and protocols not directly needed to perform the device’s specified function).

– PCI Security Standards Council

Considering the lackluster adoption rate of IPv6 on the Internet, it is hard to argue that IPv6 is a necessity on the local network. Given that most system administrators have enough difficulty understanding IPv4 networks, a dependency on IPv6 seems both premature and an unnecessary complexity.

Corollary: Disabling IPv6 Kills SBS 2008

Simply disabling IPv6 at the network card level carries no dire warning. Services continue to function properly with no warnings or klaxon calls. However, a reboot tells a different story: the absence of IPv6 KILLS a myriad of services on reboot.