Archive for January, 2009

Virtualization Rant: Solid Networking Skills Required

January 24, 2009

<RANT>My local pastor recently completed a series on getting your life “from here to there,” and it made me think: what single administrative skill and knowledge base is necessary to make robust virtualization solutions work? Is that a simple question? Are there simple answers? Read the rest of this entry »

VirtualBox updated: Version 2.1.2

January 22, 2009

A laundry list of bug fixes was released in Wednesday’s update. Of special interest are:

  • VMM: allow running up to 1023 VMs on 64-bit hosts (used to be 127)
  • VMM: several FreeBSD guest related fixes (bugs #2342, #2341, #2761)
  • VMM: clear MSR_K6_EFER_SVME after probing for AMD-V (bug #3058)
  • VMM: fixed hang during OS/2 MCP2 boot (AMD-V and VT-x only)
  • VMM: fixed random crashes related to FPU/XMM with 64-bit guests on 32-bit hosts
  • VMM: fixed occasional XMM state corruption with 64-bit guests
  • VMM: speed improvements for real mode and protected mode without paging (software virtualization only)
  • GUI: raised the RAM limit for new VMs to 75% of the host memory
  • GUI: added Windows 7 as operating system type
  • VBoxSDL: fixed -fixedmode parameter (bug #3067)
  • Clipboard: stability fixes (Linux and Solaris hosts only, bug #2675 and #3003)
  • 3D support: fixed VM crashes for certain guest applications (bugs #2781, #2797, #2972, #3089)
  • LsiLogic: improved support for Windows guests (still experimental)
  • VGA: fixed a 2.1.0 regression where guest screen resize events were not properly handled (bug #2783)
  • VGA: better handling for VRAM offset changes (fixes GRUB2 and DOS DOOM display issues)
  • VGA: custom VESA modes with invalid widths are now rounded up to correct ones (bug #2895)
  • IDE: fixed ATAPI passthrough support (Linux hosts only; bug #2795)
  • Networking: fixed kernel panics due to NULL pointer dereference in Linux kernels < 2.6.20 (Linux hosts only; bug #2827)
  • Networking: fixed the issue with displaying hostif NICs without assigned IP addresses (Linux hosts only; bug #2780)
  • Networking: fixed the issue with sent packets coming back to internal network when using hostif (Linux hosts only; bug #3056).
  • NAT: fixed booting from the builtin TFTP server (bug #1959)
  • NAT: fixed occasional crashes (bug #2709)
  • SATA: vendor product data (VPD) is now configurable
  • SATA: fixed timeouts in the guest when using raw VMDK files (Linux host only, bug #2796)
  • SATA: huge speed up during certain I/O operations like formatting a drive
  • SATA/IDE: fixed possible crash/errors during VM shutdown
  • VRDP: fixed RDP client disconnects
  • VRDP: fixed VRDP server misbehavior after a broken client connection
  • Linux hosts: don’t depend on libcap1 anymore (bug #2859)
  • Linux hosts: compile fixes for 2.6.29-rc1
  • Linux hosts: don’t drop any capability if the VM was started by root (2.1.0 regression)
  • Windows Additions: fixed guest property and logging OS type detection for Windows 2008 and Windows 7 Beta
  • Windows Additions: added support for Windows 7 Beta (bugs #2995, #3015)
  • Windows Additions: fixed Windows 2000 guest freeze when accessing files on shared folders (bug #2764)
  • Windows Additions: fixed Ctrl-Alt-Del handling when using VBoxGINA
  • Windows Additions Installer: Added /extract switch to only extract (not install) the files to a directory (can be specified with /D=path)
  • Linux installer and Additions: added support for the Linux From Scratch distribution (bug #1587) and recent Gentoo versions (bug #2938)
  • Additions: added experimental support for X.Org Server 1.6 RC on Linux guests
  • Linux Additions: fixed a bug which prevented fmode from being set properly on mapped shared folders (bug #1776)
  • Linux Additions: fixed appending of files on shared folders (bug #1612)
  • Linux Additions: ignore noauto option when mounting a shared folder (bug #2498)
  • Linux Additions: fixed a driver issue preventing X11 from compiling keymaps (bug #2793 and #2905)
  • X11 Additions: workaround in the mouse driver for a server crash when the driver is loaded manually (bug #2397)

While my workstation has been Linux-based since the 1990s, I have been using VirtualBox to “contain” my Windows workstation-within-a-workstation since Sun purchased VirtualBox from Innotek in 2008. VirtualBox provides a rich, high-performance workstation experience (fast video, good sound and USB 2.0 support), albeit at the cost of considerable CPU overhead.

Download the updated open source version here or the Sun xVM version here.

Installing FreeNAS to USB Flash: Easy as 1,2,3

January 21, 2009

I don’t want to get too deep into a re-hash of how to install FreeNAS onto USB flash (or “thumb”) drives – there is a wealth of community information in that regard. However, when I come across the same simple question several times in one week, I have to investigate it more thoroughly. This week, that question has been “have you tried FreeNAS?”

Anyone familiar with the fine m0n0wall project (and its offshoot pfSense) will instantly recognize the FreeBSD appliance approach taken for FreeNAS. The look-and-feel is very m0n0wall-ish as well. In short, this is a no-nonsense, easy-to-install, appliance-oriented distribution that covers the basics of network attached storage: CIFS, NFS and iSCSI. Given that m0n0wall and pfSense both virtualize very well, I have no doubt the VMware appliance version performs likewise.

1, 2, 3… NAS

That said, let’s quickly run down a 1, 2, 3 approach for booting FreeNAS to hardware from a USB drive… This is a run-from-RAM-disk appliance, so the required size of the USB storage device is minimal: about 50-60MB. Since I am still testing the Tyan Transport GT28 system, I will catalog my steps for that platform: Read the rest of this entry »

Tyan S2935-SI to be released???

January 15, 2009

While following up on the availability issue with GT28 systems, I stumbled over a question fielded by Tyan’s technical support group about HyperTransport 3 (HT3) interconnects and the S2935. The GT28 is still front-page-listed on Tyan’s home site, making it one of their top three barebones systems. The S2935 motherboard in the GT28 was also noted in a recent press release as one of the AMD 45nm-ready systems. Tyan’s support group indicated that the S2935 has been updated to “S2935-SI” (release TBD) to include support for HT3 at some future date:

The S2935 was designed before HT3 (HyperTransport interconnect) technology was announced. The S2935 does not have HT3 link capability. If you want to have HT3 link capability then you can use all the same components on the S2935-SI product instead. This product was re-designed to support HT3 link technologies – Revised 10/22/2008

This is good news for those of you shopping for Transport GT28 systems today and finding short supply: it looks like a new release is on the horizon. The last GT28 system we got took two weeks to fulfill, and since then NewEgg has removed the part number from active inventory due to the shortage.

Installing VMware ESXi on the Tyan Transport GT28

January 15, 2009

Once the GT28 nodes are BIOS-updated to the AGESA v3.3.0.0+ release, a few adjustments are needed to support my boot-from-flash deployment model. If you are not familiar with boot-from-USB-flash, there are many helpful blog posts on the subject, like this one from vm-help.com. Suffice it to say, boot-from-USB-flash for ESXi is a relatively simple process to set up (a consolidated shell sketch follows the list):

  1. Make sure your BIOS supports boot-from-USB-flash;
  2. Download the latest release of ESX 3i from VMware;
  3. Mount the ISO image of the 3i installer;
  4. Find the “VMvisor-big” image as a “.dd.bz2” file in the mounted image;
  5. Decompress (bunzip2) the VMvisor-big image to a temporary directory;
  6. Plug in your “donor” USB flash device (I’m using the Sandisk Cruzer 4GB);
  7. Find the device handle of the mounted USB device and unmount it (e.g. “umount /dev/sdm”);
  8. Use dd to copy the VMvisor image to the flash device (e.g. “dd if=/tmp/VMware-VMvisor-big-3.5.0_Update_3-123629.i386.dd of=/dev/sdm”);
  9. Eject the USB device and label it as ESXi;
  10. Insert the USB flash device into a USB 2.0 port on your equipment and boot.
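
For the command-line inclined, the list above collapses into a few shell commands on a Linux workstation. This is a minimal sketch rather than a definitive recipe: the ISO file name shown is illustrative, the stick is assumed to be /dev/sdm (verify with “fdisk -l” first; dd will happily destroy whatever device you point it at), and the image file name is the one from step 8:

# mount the 3i installer ISO loopback and pull out the VMvisor-big image
mkdir -p /mnt/esx3i
mount -o loop VMware-VMvisor-InstallerCD.iso /mnt/esx3i
cp /mnt/esx3i/VMware-VMvisor-big-3.5.0_Update_3-123629.i386.dd.bz2 /tmp/
bunzip2 /tmp/VMware-VMvisor-big-3.5.0_Update_3-123629.i386.dd.bz2
# make sure any auto-mounted partition on the stick is out of the way
umount /dev/sdm1 2>/dev/null
# write the raw image to the whole device and flush buffers before ejecting
dd if=/tmp/VMware-VMvisor-big-3.5.0_Update_3-123629.i386.dd of=/dev/sdm bs=1M
sync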

Preparing the BIOS

To prepare my GT28 for ESX 3i and boot-from-USB-flash, I insert the USB “thumb drive” into one of the rear ports and turn on the GT28. Hitting the “delete” key on boot gets me to the BIOS setup. I will start with the BIOS “Optimal Defaults” and make modifications from there; these adjustments are (follow links for screen shots):

S2935 BIOS screen on boot

  1. Reset BIOS to “Optimal Defaults”;
  2. Adjust Northbridge IOMMU window from 128MB to 256MB;
  3. Disable AMD PowerNow in BIOS;
  4. Adjust PCI Latency Timer from 64 to 128 (optional);
  5. Disable the nVidia MCP55 SATA controller (ESXi does have a driver, but there may be issues with nVRAID; see the Notes below);
  6. Adjust USB Mass Storage, setting the USB flash drive to Hard Disk;
  7. Disable the CD/DVD boot devices to avoid boot conflicts;
  8. Select the USB flash drive as the only boot device;
  9. Finally, save the BIOS changes and reboot;
  10. Now, the system should boot into ESXi for initial configuration.

As you can see, boot-from-USB-flash is “wicked simple” to implement (at least on this platform) and opens up all kinds of testing scenarios. In this case, the ESXi image is now running from USB flash, and only the basic configuration tasks remain. However, it is a good idea to know which Ethernet ports are which on the rear panel of the GT28.

S2935 I/O Ports, Rear

If the PCI bus scan order is configured for “Ascent”, the LAN ports will be enumerated as indicated in the image shown. If you modify the bus scan to “Descent” (i.e. to accommodate a RAID controller), then E2/E3 becomes E0/E1 and E0/E1 becomes E2/E3 due to the new initialization sequence. Be cautious when making such a change, therefore, since ESXi will re-enumerate the interfaces (although any interface already in use stays pinned to its MAC address).
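
If you want to confirm how ESXi actually enumerated the ports after such a change, the (unsupported) Tech Support Mode console can show you. A quick sketch, assuming Tech Support Mode is enabled and that this build carries the usual esxcfg-nics tool:

# list each vmnic with its PCI address, driver, link state and MAC address
esxcfg-nics -l
# cross-check against the raw PCI bus scan
vmkvsitools lspci | grep -i network

Matching the PCI addresses (04:00.00 and so on) against the rear-panel diagram removes the guesswork about which physical jack maps to which vmnic.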

Initial Configuration of ESXi

Once your network connections are plugged in, you should already have mapped out some IP assignments and VLAN and/or trunking arrangements. While these steps are not strictly necessary, they are a good practice to maintain even in testing. To make the initial configurations to ESXi, do the following from the console:

S2935 ESXi demo, Initial Configuration

  1. Hit “F2” to enter the configuration screen;
  2. Set the “root” password for the ESXi server;
  3. Skip “Lockdown mode” for now;
  4. Configure the management network of the ESXi server;
    1. Select the network adapter(s) to be used for management;
    2. If not using DHCP:
      1. Fix the management IP address;
      2. Fix the management IP subnet mask;
      3. Fix the management IP default gateway;
      4. Fix the management DNS configuration;
      5. Update the DNS suffix(es) for your local network;
    3. Hit “Enter” to save (“Escape” exits without change);
  5. Test the management network and restart if necessary;
  6. Exit the configuration menu and hit “F12” to restart.

S2935 ESXi demo, Restarting ESXi after Configuration
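
The “Test Management Network” option in step 5 pings the configured gateway and DNS servers and resolves the hostname; the same checks can be run by hand if that test fails. A sketch from the (unsupported) Tech Support Mode console, assuming vmkping and nslookup are present in this busybox build, and using 192.168.1.1 and esxi-host.example.com as placeholder gateway/DNS address and hostname:

# ping the default gateway via the vmkernel management interface
vmkping 192.168.1.1
# verify name resolution against the configured DNS server
nslookup esxi-host.example.com 192.168.1.1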

Initial Management with VI Client

Once the ESXi server is up and online, you will need to grab the VMware Infrastructure Client from the web service provided by the ESXi server (http://<ESXi_IP_ADDRESS>/) and install it on your Windows client. If you don’t run Windows (like me), you should have a version running in a VM for just such an occasion. I find VirtualBox to be a better (free) choice for workstation-on-workstation applications, and VMware Server a good choice if the client is to be minimal and accessible from multiple hosts.

S2935 ESXi demo, VMware Infrastructure Client Login

Once the VI Client is installed, run the application and enter the ESXi server’s hostname or IP address, the root username and the root password where requested. A VI Client window will eventually open and allow you to complete the setup of the ESXi server as needed.

That’s really all there is to it: we have a reliable, running ESXi platform in mere minutes with minimal effort.

Notes:

Updated January 15, 2009. Corrected statement that ESX 3i update 3 is not MCP55 aware – support has been added in ESX 3i update 2 and newer. In my test configuration (with the SATA controller enabled) ESX 3i update 3 does properly identify and configure the MCP55 SATA Controller as an abstracted SCSI controller.

# vmkvsitools lspci
00:00.00 Memory controller: nVidia Corporation
00:01.00 Bridge: nVidia Corporation
00:01.01 Serial bus controller: nVidia Corporation
00:02.00 Serial bus controller: nVidia Corporation
00:02.01 Serial bus controller: nVidia Corporation
00:05.00 Mass storage controller: nVidia Corporation MCP55 SATA Controller [vmhba0]
00:06.00 Bridge: nVidia Corporation
00:10.00 Bridge: nVidia Corporation
00:11.00 Bridge: nVidia Corporation
00:12.00 Bridge: nVidia Corporation
00:13.00 Bridge: nVidia Corporation
00:14.00 Bridge: nVidia Corporation
00:15.00 Bridge: nVidia Corporation
00:24.00 Bridge: Advanced Micro Devices [AMD]
00:24.01 Bridge: Advanced Micro Devices [AMD]
00:24.02 Bridge: Advanced Micro Devices [AMD]
00:24.03 Bridge: Advanced Micro Devices [AMD]
00:24.04 Bridge: Advanced Micro Devices [AMD]
01:05.00 Display controller:
04:00.00 Network controller: Intel Corporation 82571EB Gigabit Ethernet Controller [vmnic0]
04:00.01 Network controller: Intel Corporation 82571EB Gigabit Ethernet Controller [vmnic1]
05:00.00 Network controller: Intel Corporation 82571EB Gigabit Ethernet Controller [vmnic2]
05:00.01 Network controller: Intel Corporation 82571EB Gigabit Ethernet Controller [vmnic3]

This enables the use of the MCP55 SATA controller, at least for flash drives. I will do further tests on this platform to determine the stability of the nVRAID component and its suitability for local storage needs (i.e. an embedded VM, like a virtual SAN/NAS).
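
A quick way to see what the VMkernel makes of the controller is to list the devices behind vmhba0. A hedged sketch from the (unsupported) Tech Support Mode console, assuming this build ships the classic esxcfg-vmhbadevs tool and that a disk is attached to the MCP55 ports:

# list the SCSI devices the VMkernel sees behind the abstracted controller
esxcfg-vmhbadevs
# show any VMFS volumes mapped onto those devices
esxcfg-vmhbadevs -m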

Tyan Transport GT28 Overview, Part 2

January 14, 2009

As I indicated in the earlier posts, the GT28 uses narrow motherboards within a single 1U chassis to create a compact, dual-node HPC platform. Small as it is, the motherboard still packs two Socket F processors, sixteen 240-pin DDR2 DIMM slots, four Gigabit Ethernet ports, space for an embedded Mellanox InfiniBand processor, onboard video, a single low-profile PCIe riser slot (x8 slot/signal) and an SMDC slot.

Tyan S2935 Motherboard

The GT28 is not a quiet machine once powered up, and with eight 15,000 RPM fans (three per node, plus two power supply fans) the system only gets louder under heavy load. Fortunately, the 45nm Opteron processor is easier on the thermal envelope than its predecessors, so the fans stay around 9,000 RPM most of the time. That said, you do not want one of these systems in anything but a full rack enclosure, as the tell-tale whine of the system fans is not conducive to office work. This is no appliance chassis and it was not designed to be one, but compared to an 8U AIC storage chassis, this thing is quiet.

The GT28 does provide more than your typical compact server motherboard in terms of I/O options – especially network. Although we were not testing the InfiniBand variant (about $600 additional), this $1,600 barebones system comes well equipped for a variety of tasks. As I indicated before, the test system is to become part of a four-node storage and hypervisor system based on VMware ESXi and Nexenta NAS (using Solaris’ ZFS storage). One node will run ESXi with Nexenta running in a virtual machine, and three nodes will run ESXi using both NFS and iSCSI as the virtual shared storage medium (provided by Nexenta).

I do want to drift briefly into a discussion of cost. Although rock-bottom prices are not a focus, per-node costs are – especially relative to non-commodity computing models. Read the rest of this entry »

Tyan Transport GT28 Overview, Part 1

January 13, 2009

First, let me say that I have been using Tyan barebones server products for many years, and I have found them both robust in features and reliable. Furthermore, I have focused on AMD Opteron Eco-System products since they became available, due to the I/O benefits of HyperTransport and the overall better price/performance of Opteron vs. Xeon offerings.

While I have considerable experience with Xeon systems and have participated in many side-by-side comparisons, I am convinced – as a result of such testing – that the I/O systems in Opteron-based platforms are far superior to those of comparable Xeon FSB (front-side-bus-dependent) systems. For low-TCO systems, the ability to load I/O elegantly is not just an advantage: it’s a must. This capability weighs considerably into TCO, where per-node utilization and efficiency are large factors.

That said, the GT28 has a Xeon-based counterpart – the Tank GT24 – to satisfy Xeon-based Eco-Systems. The Tank GT24 does not support quad Gigabit Ethernet, nor does it support the built-in Mellanox InfiniBand interface. The lack of support for these advanced I/O capabilities is further testament to the weak I/O story of the FSB paradigm. Read the rest of this entry »