Archive for February, 2009

Search Term Review: February 2009

February 26, 2009

Looking at the “Top 3” search terms driving traffic to our blog, it’s obvious that ESXi and FreeNAS are getting a lot of attention.

Search Term #1: esxi network drivers

White box and DIY lab explorers are trying to push the envelope on network cards. Outside of the supported-but-PCI-ID-does-not-match cases, it is not worthwhile to hack in a “cheap” NIC for your hypervisor. Since NAS/SAN and application data traverse your “hacked NIC,” do you really want to risk it? For the exception, see the next search term.

Search Term #2: nvidia mcp55 sata controller esxi install

The number of MCP55-based motherboards out there is understandably large. However, not all MCP55 systems are “install” compatible. That’s where a USB flash install makes your ESXi installation an easy – albeit two-step – process. There are blog entries out there that will help you on your way to MCP55/ESX/ESXi nirvana.

The basic problem is that your controller has a different revision – and PCI ID – than the driver expects. Since this “compatible ID list” was cooked into the ISO image, it is likely stale even though your “version” of the hardware may well work. This is common to MCP and ICH SATA controllers. If so, modifying the list of supported PCI IDs and making the correct driver associations is the trick to success.

Here’s a good blog outlining the hard way for ESX.

Here’s another on creating your own roll-up for ESXi. It includes a downloadable oem.tgz replacement with ICH10, ICH8, e1000e, 3Ware, igb and Dvorak support. The discussion gives additional insight.
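
To make the idea concrete, here is a rough sketch of what those guides automate: unpack an oem.tgz overlay, append a PCI-ID-to-driver mapping for the unrecognized MCP55 SATA controller, and repack it. Treat it as an outline only; the paths, the simple.map line format and the 10de:037f example ID are assumptions to verify against your ESXi build and the ID your controller actually reports.

mkdir /tmp/oem && cd /tmp/oem
tar xzf /path/to/original/oem.tgz                # start from the stock or downloaded overlay
mkdir -p etc/vmware
cp /etc/vmware/simple.map etc/vmware/            # copy the existing PCI-ID-to-driver map
echo "10de:037f 0000:0000 storage sata_nv" >> etc/vmware/simple.map    # example MCP55 SATA entry
tar czf /tmp/oem-new.tgz etc                     # repack, then copy back to the install/boot media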

Search Term #3: freenas usb

This is an easy one. We have a decent how-to here, and there are numerous others on-line. SOLORI has not explored the nightly build (AMD64) variant – yet. If you want to explore the “wild side” of FreeNAS, go here; the FreeNAS community is vibrant and informative.
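
For reference, the core of most of those how-tos boils down to writing the FreeNAS “embedded” image straight to the USB stick. The two lines below are a sketch only: the image filename is a placeholder and the device node is the FreeBSD style, so double-check the target before running them, since dd will overwrite whatever it points at.

gunzip FreeNAS-amd64-embedded.img.gz                  # placeholder filename; use the image you downloaded
dd if=FreeNAS-amd64-embedded.img of=/dev/da0 bs=64k   # /dev/da0 = the USB stick (e.g. /dev/sdX on Linux)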

Citrix Waving the White Flag? XenServer now free – as in ESXi…

February 23, 2009

According to a recent announcement from Citrix, XenServer (without advanced features) is now free (as of April 2009). The question now: how does XenServer/Free match up to ESXi/Free, and what does it mean for the enterprise customer?

See the community announcement here… and the official press release here…

According to their approach – which reads more like ESXi plus motion – the “free” server still needs a significant investment in “management” products to be “enterprise worthy.” This still means an “enterprise-class” virtualization product WITH live motion technology can be had for the U-build price, but the revenue shifts from “product license” to “management and service license.” Who does that sound like? VMware (ESXi) and RedHat (CentOS).

Since Citrix will retain the intellectual property rights to its closed-source version of Xen, there is no reason to believe a huge number of open source offerings will immediately crop up. It is more likely this is the first salvo in an ever increasing spiral towards Microsoft’s acquisition of Citrix and wholesale incorporation of XenSource into its product line.

Still, the “free” tag line is compelling. Citrix is claiming that, for “free,” you will get from Citrix and XenSource what you would otherwise pay VMware $5,000 in licensing for. However, the free version of XenServer will NOT have HA, detailed monitoring or cluster management. You will need their $5,000 “Essentials” license for that…

Virtualizing Microsoft Small Business Server 2008

February 23, 2009

Microsoft Small Business Server 2008 appears to be a good option for “Microsoft shops” looking to consolidate Active Directory Domain Controller, Windows Update Services, Security and Virus Protection, SharePoint and Exchange functions on a single server. In practice, however, it takes a really “big” server to support this application – even for a very small office environment.

According to Microsoft’s system requirements page, a target system will need a 2GHz core, 4GB of RAM and 60GB of disk space to be viable (minimum). As with past experiences with Microsoft’s products, this “minimum system specification” is best read as a “use this only if you want to under-perform so horribly as to be unusable” configuration.

In reality, even a clean, single-user installation with two 2.3GHz cores and 4GB of RAM committed to it struggles at that minimum memory. The “real” minimum should have been specified as 6GB instead of the optimistic 4GB. Where does the memory go?

It appears that Microsoft Exchange-related memory utilization weighs in at over 1.2GB (including 600MB for “store.exe”). Another 1.2GB is consumed by IIS, SharePoint and SQL. Forefront Client Security-related memory usage is around 1GB all by itself. That’s 3.4GB of “application space” drag in Windows Small Business Server 2008. Granted, the “killer app” for SBS is Exchange 2007 and some might argue SharePoint integration as well, so the use of the term “drag” may not be appropriate. However, let’s just agree that 4GB is a ridiculously low (optimistic) minimum.
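
For anyone carving out a VM for SBS 2008 on ESX/ESXi, the sizing above translates into a handful of .vmx settings. The snippet below is only a sketch that reflects the numbers in this post, not a vendor recommendation: “memsize” grants 6GB instead of Microsoft’s 4GB minimum, “numvcpus” matches the two-core test configuration, and “sched.mem.min” reserves 4GB so the guest is not immediately ballooned or swapped under host memory pressure.

memsize = "6144"
numvcpus = "2"
sched.mem.min = "4096"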

AMD 6-Core Opteron Demo

February 22, 2009

Chris Tom at AMD Zone let us know about a recent demo of a quad-socket, 6-core Opteron (code name “Istanbul”) running Windows Server 2008 and three virtual machines. The demonstration is a great example of how extendable socket “F” systems are and how, with a simple BIOS update and processor swap, your favorite hypervisor can add 50% more threads and capabilities.

The trick gets even better when you stay in the same power envelope. One of the biggest issues curbing adoption of the Intel 6-core Xeon has been its enormous power consumption. The Opteron “Shanghai” series (45nm) has proved to boost performance per watt considerably, so it is safe to assume a significant gain for “Istanbul” against Xeon in 6-core performance as well. This is already evident in current virtualization benchmarks where 16-core Shanghai systems best 24-core Xeons in head-to-head comparisons with VMware ESX Server 3.5.

That’s right, dual-socket systems will expand to 12 cores, 4-socket systems to 24 cores, and 8-socket systems will top out at 48 cores: that’s tasty virtualization goodness! In addition to scaling cores, systems with the ability to take advantage of HyperTransport 3 will get a boost, according to the TechReport article cited in Chris’ blog.

For companies investing in AMD Eco-Systems, this news is a significant milestone that stretches out your platform investment by another 12-18 months. It again shows the excellent value provided by AMD’s “stable image” and related “validated server” platforms, where system longevity is by design, not by accident.

EMC Celerra VSA

February 18, 2009

Virtual Storage Appliances (VSA) are really picking up these days. Even the legacy hardware vendors are getting into the fray. They’re great in the lab and – assuming performance tailored to the task – can fit some really interesting “embedded” applications.

If you haven’t had a chance to look at the EMC Celerra VSA, then click over to Chad Sakac’s VirtualGeek Blog and have a look at his series on configuration and testing. If you have time, check out some of Chad’s more visionary stuff while you’re there – he has some good information on what’s to come in the next few months from EMC and VMware…

For the Celerra VSA, look at these two links.

Installing: Xtravirt Virtual SAN

February 10, 2009

Today we’re looking at the Xtravirt Virtual SAN Appliance (VSA) solution for use with VMware ESX Server 3. It is designed to be a simple-to-deploy, redundant (DRBD synchronization), high-availability iSCSI SAN spanning two ESX servers. We are installing it on two ESXi servers, each with local storage, running the latest patch update (3.5.0 build 143129).

Initial Installation

XVS requires a virtual machine import, either with Converter or by a manual process; we followed the manual process. We then used the command line to convert the imported disk into an ESXi-compliant format:

vmkfstools -i XVS.vmdk -d thin XVSnew.vmdk    # clone the imported disk to a thin-provisioned VMDK
rm -f XVS.vmdk XVS-*                          # remove the original import and its extent files

After conversion, you have a 2GB virtual machine (times two) ready for configuration. We removed the legacy Ethernet adapter and hard disk that came with the inventory import, then added the “existing” disk and a new Ethernet (flex) controller.

We then added a 120GB virtual disk to each node using the local storage controllers: LSI 1068SAS (RAID1) for node 1 and NVidia MCP55Pro (RAID1) for node 2. Nodes 1 and 2 use the same 250GB Seagate ES.2 (RAID edition) drives.
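
If you prefer the console to the VI client for this step, the data disks can also be created with vmkfstools. The command below is a sketch: the datastore path and file name are placeholders, and the zeroedthick format can be swapped for thin if capacity matters more than first-write performance.

vmkfstools -c 120G -d zeroedthick /vmfs/volumes/datastore1/XVS1/XVS1_data.vmdk    # placeholder datastore/path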

ESXi Firmware Update

February 10, 2009

ESXe350-200901401-I-SG

This release closes an unspecified vulnerability in ESX and ESXi 3.5 that could crash the host when snapshots were taken of corrupt VMDKs. The issue was assigned CVE-2008-4914.

Problems with the following hardware were also addressed:

  • E1000 NICs
  • Broadcom HT-1000 SATA controllers
  • SATA CD/DVD drives
  • SanDisk SD-USB cards
  • 64-bit SLES9 SP3 virtual machines with e1000 drivers
  • Broadcom BCM5700 NICs
  • iSCSI Software Initiator

It also fixes a problem with FC-based N-Port ID Virtualization (NPIV) targets used with RDM. The problem caused VMs with NPIV-based RDMs to fail to power on after the world wide name (WWN) assignment was removed. After the patch, the VM powers on even in the absence of the WWN.

For ESXi-on-USB-flash fans, the patch fixes a problem with some SanDisk USB media by updating the usb-storage driver.

For details on the update, see VMware’s Knowledge Base article.
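
For those applying the patch by hand rather than through VMware Infrastructure Update, a Remote CLI invocation along these lines should do it. Treat it as a hedged sketch: the host name is a placeholder, the bundle filename should match what you actually download from VMware, and the vihostupdate flag spelling should be confirmed against the Remote CLI version you have installed.

vihostupdate.pl --server esxi01.example.com --username root \
    --install --bundle ESXe350-200901401-I-SG.zip    # placeholder host and bundle; confirm flags with --help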