Posts Tagged ‘Virtualization’

Shanghai Economics 101 – Conclusion

May 6, 2009

In the past entries, we’ve looked only at the high-end processors as applied to system prices, and we’ll continue to use those as references through the end of this one. We’ll take a look at other price/performance tiers in a later blog, but we want to finish up on the same footing as we began, again with an eye to how these systems play in a virtualization environment.

We decided to finish this series with an analysis of real-world application instead of just theory. We keep seeing 8-to-1, 16-to-1 and 20-to-1 consolidation ratios (VM-to-host) being offered as “real world” in today’s environment, so we wanted to analyze what that means from an economic standpoint.

The Fallacy of Consolidation Ratios

First, consolidation ratios that speak in terms of VM-to-host are not very informative. For instance, a 16-to-1 consolidation ratio sounds good until you realize it was achieved on a $16,000 4Px4C platform. That ratio results in a $1,000-per-VM cost to the consolidator.

In contrast, take the same 16-to-1 ratio on a $6,000 2Px4C platform and it results in a $375-per-VM cost to the consolidator: a savings of over 60%. The key to the savings is the vCPU-to-Core consolidation ratio (provided sufficient memory exists to support it). In the first example that ratio was 1:1; in the second it is 2:1. Can we find 16:1 vCPU-to-Core ratios out there? Sure, in test labs, but in the enterprise we think the valid range of vCPU-to-Core consolidation ratios is much more conservative, ranging from 1:1 to 8:1, with the average (or sweet spot) falling somewhere between 3:1 and 4:1.
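
For those who want to check the arithmetic, the comparison reduces to a few divisions. Here is a minimal shell sketch using the example prices, core counts and VM count from above (and assuming one vCPU per VM):

# Cost-per-VM and vCPU-to-Core ratio for the two example platforms above.
# Figures are the examples from the text; one vCPU per VM is assumed.
VMS=16                          # VMs consolidated onto each host
COST_4P=16000; CORES_4P=16      # 4P x 4C platform
COST_2P=6000;  CORES_2P=8       # 2P x 4C platform
echo "4Px4C: \$$(( COST_4P / VMS )) per VM, vCPU-to-Core = ${VMS}:${CORES_4P} (1:1)"
echo "2Px4C: \$$(( COST_2P / VMS )) per VM, vCPU-to-Core = ${VMS}:${CORES_2P} (2:1)"

It prints $1,000 and $375 per VM respectively, the same 60%-plus difference described above.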

Second, we must note that memory is a growing part of the virtualization equation. Modern operating systems no longer “sip” memory, and 512MB for a Windows or Linux VM is becoming more the exception than the rule. That puts pressure on both CPU and memory capacity as driving forces behind consolidation costs. As operating system “bloat” increases, administrative pressure to satisfy those demands will mount, pushing the “provisioned” amount of memory per VM ever higher.

Until “hot add” memory is part of DRS planning and the requisite operating systems support it, system admins will be forced either to overcommit memory, to purchase memory for peak needs, or to purchase memory for average needs and trust DRS to handle the balancing act. In any case, memory is a growing factor in systems consolidation and virtualization.

Modeling the Future

Using data from the University of Chicago as a baseline and extrapolating forward through 2010, we’ve developed a simple model to predict vMEM and vCPU allocation trends. This approach establishes three key metrics (already used in previous entries) that determine/predict system capacity: Average Memory/VM (vMVa), Average vCPU/VM (vCVa) and Average vCPU/Core (vCCa).

Average Memory per VM (vMVa)

Average memory per VM is determined by taking the allocated memory of all VMs in a virtualized system – across all hosts – and dividing that by the total number of VMs in the system (not including non-active templates). This number is assumed to grow as virtualization moves from consolidation to standardized deployment. Read the rest of this entry »
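
As a concrete sketch of that calculation, the whole metric is a sum and a division; the per-VM allocations below are made-up illustration values, not measurements:

# vMVa: total allocated memory across all active VMs (templates excluded),
# divided by the number of active VMs. Allocations are in MB and are
# hypothetical examples only.
ALLOC_MB="512 1024 2048 1024 4096 2048"
total=0; count=0
for mb in $ALLOC_MB; do total=$(( total + mb )); count=$(( count + 1 )); done
echo "vMVa = $(( total / count )) MB across ${count} VMs"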

AMD’s New Opteron

April 23, 2009

AMD’s announcement yesterday came with some interesting technical tidbits about its new server platform strategy that will affect its competitiveness in the virtualization marketplace. I want to take a look at the two new server platforms, contrast them with what is available today, and see what that means for our AMD-based ecosystems in the months to come.

Initially, the introduction of more cores to the mix is good for virtualization, allowing us to scale more gracefully and confidently than we can with hyper-threading. While hyper-threading is reported to increase scheduling efficiency in vSphere, it is not effectively a core. Until Nehalem-EX is widely available and we can evaluate the 4P performance of hyper-threading in loaded virtual environments, I’m comfortable awarding hyper-threading a 5% performance bonus – all things being equal.

AMD's Value Shift

What’s Coming?

That said, where is AMD going with Opteron in the near future, and how will that affect Opteron-based ecosystems? At least one thing is clear: compatibility is assured, and performance – at the same thermal footprint – will go up. So let’s look at the ramifications of the new models/sockets and compare them to our well-known 2000/8000 series to glimpse the future.

A fundamental shift away from DDR2 and towards DDR3 for the new sockets is a major difference. Like the Phenom II, Core i7 and Nehalem processors, the new Opteron will be a DDR3 part. Assuming DDR3 pricing continues to trend down and the promise of increased memory bandwidth is realized with HT3/DCA2 in the new Opteron, DDR3 should deliver solid performance in 4000 and 6000 configurations.

Opteron 6000: Socket G34

From the announcement, G34 is analogous to the familiar 8000-series line with one glaring exception: no 8P on the road-map. In the 2010-2011 time frame we’ll see 8-core, 12-core and 16-core variants, with a new platform being introduced in 2012. Meanwhile, the 6000-series will support 4 channels of “unbuffered” or “registered” DDR3 across up to 12 DIMMs per socket (3 banks by 4 channels). Assuming the 6000 supports DDR3-1600, a 4-channel design would yield theoretical memory bandwidth in the 40-50GB/sec range per socket (about twice Istanbul’s).
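
The bandwidth estimate falls out of the DDR3 transfer rate and channel width; a rough check, assuming DDR3-1600 on a 64-bit (8-byte) channel:

# Theoretical peak = transfer rate (MT/s) x bytes per transfer x channels.
MTS=1600; BYTES=8; CHANNELS=4
echo "Per channel: $(( MTS * BYTES )) MB/s"
echo "Per socket:  $(( MTS * BYTES * CHANNELS )) MB/s (~51 GB/s across 4 channels)"

Real-world numbers will land below that peak, hence the 40-50GB/sec range quoted above.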

AMD 2010-2013 Road-map

With a maximum module density of 16GB, a 12-DIMM-per-socket, 4-socket system could theoretically contain 768GB of DDR3 memory. In 2011, that equates to 12GB/core in a 4-way, 64-core server. At a 4:1 consolidation ratio for typical workloads, that’s 256 VM/host at 3GB/VM (4GB/VM with page sharing) and an average of about 780MB/sec of memory bandwidth per VM. I think the math holds up pretty well against today’s computing norms and trends. Read the rest of this entry »
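
Those capacity figures are easy to sanity-check with integer math. The DIMM size, socket count and 4:1 ratio are taken from the paragraph above; the ~50GB/sec-per-socket figure is the estimate from the previous section:

# Capacity and per-VM figures for a hypothetical 4-socket, 64-core G34 system.
DIMM_GB=16; DIMMS_PER_SOCKET=12; SOCKETS=4; CORES=64; RATIO=4
TOTAL_GB=$(( DIMM_GB * DIMMS_PER_SOCKET * SOCKETS ))   # 768 GB total
VMS=$(( CORES * RATIO ))                               # 256 VMs at 4:1
echo "${TOTAL_GB} GB total, $(( TOTAL_GB / CORES )) GB/core, $(( TOTAL_GB / VMS )) GB/VM"
echo "$(( 50 * SOCKETS * 1000 / VMS )) MB/s of memory bandwidth per VM (at ~50 GB/s/socket)"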

Virtualizing Microsoft Small Business Server 2008

February 23, 2009

Microsoft Small Business Server 2008 appears to be a good option for “Microsoft shops” looking to consolidate Active Directory Domain Controller, Windows Update Services, security and virus protection, SharePoint and Exchange functions on a single server. In practice, however, it takes a really “big” server to support this application – even for a very small office environment.

According to Microsoft’s system requirements page, a target system will need a 2GHz core, 4GB of RAM and 60GB of disk space to be viable (minimum). As with past experiences with Microsoft’s products, this “minimum system specification” is offered as a “use this only if you want to under-perform so horribly as to be unusable” configuration.

In reality, even a clean, single-user installation with two 2.3GHz cores and 4GB of RAM committed to it struggles at the minimum amount of memory. The “real” minimum should have been specified as 6GB instead of the optimistic 4GB. Where does the memory go?

It appears that Microsoft Exchange-related memory utilization weighs in at over 1.2GB (including 600MB for “store.exe”). Another 1.2GB is consumed by IIS, SharePoint and SQL. Forefront Client Security-related memory usage is around 1GB all by itself. That’s 3.4GB of “application space” drag in Windows Small Business Server 2008. Granted, the “killer app” for SBS is Exchange 2007 and some might argue SharePoint integration as well, so the use of the term “drag” may not be appropriate. However, let’s just agree that 4GB is a ridiculously low (optimistic) minimum.

Installing VMware ESXi on the Tyan Transport GT28

January 15, 2009

Once the GT28 nodes are BIOS-updated to the AGESA v3.3.0.0+ release, a few adjustments are needed to support my boot-from-flash deployment model. If you are not familiar with boot-from-USB-flash, there are many helpful blog posts on the subject, like this one from vm-help.com. Suffice to say, boot-from-USB-flash for ESXi is a relatively simple process to set up (a condensed script sketch follows the list below):

  1. Make sure your BIOS supports boot-from-USB-flash;
  2. Download the latest release of ESX 3i from VMware;
  3. Mount the ISO image of the 3i installer;
  4. Find the “VMvisor-big” image as a “.dd.bz2” file in the mounted image;
  5. Un-bzip the VMvisor-big image to a temporary directory;
  6. Plug in your “donor” USB flash device (I’m using the Sandisk Cruzer 4GB);
  7. Find the device handle of the mounted USB device and unmount it (e.g. “umount /dev/sdm”);
  8. Use dd to copy the VMvisor image to the flash device (i.e. “dd if=/tmp/VMware-VMvisor-big-3.5.0_Update_3-123629.i386.dd of=/dev/sdm”);
  9. Eject the USB device and label it as ESXi;
  10. Insert the USB flash device into a USB 2.0 port on your equipment and boot;
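
Condensed into a single script, steps 3 through 9 look roughly like the sketch below. The ISO filename is a placeholder and /dev/sdm is my device; verify both on your system before running dd, which overwrites the target device completely:

# Sketch only: adjust ISO and USB for your environment before running.
ISO=VMware-VMvisor-InstallerCD.iso      # the ESX 3i installer ISO from VMware (name will vary)
USB=/dev/sdm                            # the donor USB flash device
mkdir -p /mnt/esxi && mount -o loop,ro "$ISO" /mnt/esxi
IMG=$(find /mnt/esxi -name '*VMvisor-big*.dd.bz2' | head -n 1)
bunzip2 -c "$IMG" > /tmp/vmvisor.dd     # un-bzip the image to a temporary directory
umount "$USB"* 2>/dev/null              # unmount the flash device and any partitions
dd if=/tmp/vmvisor.dd of="$USB" bs=1M   # copy the image onto the flash device
sync && umount /mnt/esxi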

Preparing the BIOS

To prepare my GT28 for ESX 3i and boot-from-USB-flash, I insert the USB “thumb drive” into one of the rear ports and turn on the GT28. Hitting the “delete” key on boot gets me to the BIOS setup. I will start with the BIOS “Optimal Defaults” and make modifications from there; these adjustments are (follow links for screen shots):

S2935 BIOS screen on boot

  1. Reset BIOS to “Optimal Defaults”;
  2. Adjust Northbridge IOMMU window from 128MB to 256MB;
  3. Disable AMD PowerNow in BIOS;
  4. Adjust PCI Latency Timer from 64 to 128 (optional);
  5. Disable the nVidia MCP55 SATA controller (ESXi does have a driver, see the update at the end of this post, but there may be issues with nVRAID);
  6. Adjust USB Mass Storage, setting the USB flash drive to Hard Disk;
  7. Disable the CD/DVD boot devices to avoid boot conflicts;
  8. Select the USB flash drive as the only boot device;
  9. Finally, save the BIOS changes and reboot;
  10. Now, the system should boot into ESXi for initial configuration;

As you can see, boot-from-USB-flash is “wicked simple” to implement (at least on this platform) and opens up all kinds of testing scenarios. In this case, the ESXi image is now running from USB flash, and only the basic configuration tasks remain. However, it is a good idea to know which Ethernet ports are which on the rear panel of the GT28.

S2935 I/O Ports, Rear

If the PCI bus scan order is configured for “Ascent”, the LAN ports will be configured as indicated in the image shown. If you modify the bus scan to “Descent” (i.e. to accommodate a RAID controller), then E2/E3 becomes E0/E1 and E0/E1 becomes E2/E3 due to the new initialization sequence. You may therefore want to be cautious when making such a change, since ESXi will re-enumerate the interfaces (although any used interface will remain pinned to its MAC address).
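
If you have access to the (unsupported) Tech Support Mode console, you can confirm the resulting mapping at a glance, since the NIC listing includes each vmnic’s PCI address and MAC address:

# Lists vmnics with their PCI location, driver, link state and MAC address
esxcfg-nics -l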

Initial Configuration of ESXi

Once your network connections are plugged in, you should already have mapped out some IP assignments and VLAN and/or trunking arrangements. While these steps are not strictly necessary for testing, they are a good practice to maintain even in a lab environment. To make the initial configuration of ESXi, do the following from the console:

S2935 ESXi demo, Initial Configuration

  1. Hit “F2” to enter the configuration screen;
  2. Set the “root” password for the ESXi server;
  3. Skip “Lockdown mode” for now;
  4. Configure the management network of the ESXi server;
    1. Select the network adapter(s) to be used for management;
    2. If not using DHCP:
      1. Fix the management IP address
      2. Fix the management IP Subnet mask
      3. Fix the management IP Default gateway
      4. Fix the management DNS configuration;
      5. Update the DNS suffix(es) for your local network;
    3. Hit “Enter” to save (“Escape” exits without change);
  5. Test the management network and restart if necessary;
  6. Exit the configuration menu and hit “F12” to restart;

S2935 ESXi demo, Restarting ESXi after Configuration

Initial Management with VI Client

Once the ESXi server is up and online, you will need to grab the VMware Infrastructure Client from the web service provided by the ESXi server (http://<ESXi_IP_ADDRESS>/) and install it on your Windows client. If you don’t run Windows (like me), you should have a version running in a VM for just such an occasion. I find VirtualBox to be a better (free) choice for workstation-on-workstation applications, and VMware Server a good choice if the client is to be minimal and accessible from multiple hosts.

S2935 ESXi demo, VMware Infrastructure Client Login

Once the VI Client is installed, run the application and enter the ESXi server’s hostname/IP-address, root username and root password where requested. A VI Client window will eventually open and allow you to complete the setup of the ESXi server as needed.

That’s really all there is to it: we have a reliable, running ESXi platform in mere minutes with minimal effort.

Notes:

Updated January 15, 2009: corrected the statement that ESX 3i Update 3 is not MCP55-aware – support was added in ESX 3i Update 2 and newer. In my test configuration (with the SATA controller enabled), ESX 3i Update 3 does properly identify and configure the MCP55 SATA Controller as an abstracted SCSI controller.

# vmkvsitools lspci
00:00.00 Memory controller: nVidia Corporation
00:01.00 Bridge: nVidia Corporation
00:01.01 Serial bus controller: nVidia Corporation
00:02.00 Serial bus controller: nVidia Corporation
00:02.01 Serial bus controller: nVidia Corporation
00:05.00 Mass storage controller: nVidia Corporation MCP55 SATA Controller [vmhba0]
00:06.00 Bridge: nVidia Corporation
00:10.00 Bridge: nVidia Corporation
00:11.00 Bridge: nVidia Corporation
00:12.00 Bridge: nVidia Corporation
00:13.00 Bridge: nVidia Corporation
00:14.00 Bridge: nVidia Corporation
00:15.00 Bridge: nVidia Corporation
00:24.00 Bridge: Advanced Micro Devices [AMD]
00:24.01 Bridge: Advanced Micro Devices [AMD]
00:24.02 Bridge: Advanced Micro Devices [AMD]
00:24.03 Bridge: Advanced Micro Devices [AMD]
00:24.04 Bridge: Advanced Micro Devices [AMD]
01:05.00 Display controller:
04:00.00 Network controller: Intel Corporation 82571EB Gigabit Ethernet Controller [vmnic0]
04:00.01 Network controller: Intel Corporation 82571EB Gigabit Ethernet Controller [vmnic1]
05:00.00 Network controller: Intel Corporation 82571EB Gigabit Ethernet Controller [vmnic2]
05:00.01 Network controller: Intel Corporation 82571EB Gigabit Ethernet Controller [vmnic3]

This enables the use of the MCP55 SATA controller, at least for flash drives. I will do further tests on this platform to determine the stability of the NVRAID component and its suitability for local storage needs (i.e. an embedded VM, like a virtual SAN/NAS).

Spot Poll

December 13, 2008