My local pastor recently completed a series on getting your life “from here to there,” and it made me think: what single administrative skill and knowledge base is necessary to make robust virtualization solutions work? Is that a simple question? Are there simple answers?
Archive for January, 2009
A laundry list of bug fixes was released in Wednesday’s update. Of special interest are:
- VMM: allow running up to 1023 VMs on 64-bit hosts (used to be 127)
- VMM: several FreeBSD guest related fixes (bugs #2342, #2341, #2761)
- VMM: clear MSR_K6_EFER_SVME after probing for AMD-V (bug #3058)
- VMM: fixed hang during OS/2 MCP2 boot (AMD-V and VT-x only)
- VMM: fixed random crashes related to FPU/XMM with 64-bit guests on 32-bit hosts
- VMM: fixed occasional XMM state corruption with 64-bit guests
- VMM: speed improvements for real mode and protected mode without paging (software virtualization only)
- GUI: raised the RAM limit for new VMs to 75% of the host memory
- GUI: added Windows 7 as operating system type
- VBoxSDL: fixed --fixedmode parameter (bug #3067)
- Clipboard: stability fixes (Linux and Solaris hosts only, bugs #2675 and #3003)
- 3D support: fixed VM crashes for certain guest applications (bugs #2781, #2797, #2972, #3089)
- LsiLogic: improved support for Windows guests (still experimental)
- VGA: fixed a 2.1.0 regression where guest screen resize events were not properly handled (bug #2783)
- VGA: better handling for VRAM offset changes (fixes GRUB2 and DOS DOOM display issues)
- VGA: custom VESA modes with invalid widths are now rounded up to correct ones (bug #2895)
- IDE: fixed ATAPI passthrough support (Linux hosts only; bug #2795)
- Networking: fixed kernel panics due to NULL pointer dereference in Linux kernels < 2.6.20 (Linux hosts only; bug #2827)
- Networking: fixed the issue with displaying hostif NICs without assigned IP addresses (Linux hosts only; bug #2780)
- Networking: fixed the issue with sent packets coming back to the internal network when using hostif (Linux hosts only; bug #3056)
- NAT: fixed booting from the built-in TFTP server (bug #1959)
- NAT: fixed occasional crashes (bug #2709)
- SATA: vendor product data (VPD) is now configurable
- SATA: fixed timeouts in the guest when using raw VMDK files (Linux hosts only, bug #2796)
- SATA: huge speed-up during certain I/O operations like formatting a drive
- SATA/IDE: fixed possible crash/errors during VM shutdown
- VRDP: fixed RDP client disconnects
- VRDP: fixed VRDP server misbehavior after a broken client connection
- Linux hosts: don’t depend on libcap1 anymore (bug #2859)
- Linux hosts: compile fixes for 2.6.29-rc1
- Linux hosts: don’t drop any capability if the VM was started by root (2.1.0 regression)
- Windows Additions: fixed guest property and logging OS type detection for Windows 2008 and Windows 7 Beta
- Windows Additions: added support for Windows 7 Beta (bugs #2995, #3015)
- Windows Additions: fixed Windows 2000 guest freeze when accessing files on shared folders (bug #2764)
- Windows Additions: fixed Ctrl-Alt-Del handling when using VBoxGINA
- Windows Additions Installer: added /extract switch to only extract (not install) the files to a directory (can be specified with /D=path)
- Linux installer and Additions: added support for the Linux From Scratch distribution (bug #1587) and recent Gentoo versions (bug #2938)
- Additions: added experimental support for X.Org Server 1.6 RC on Linux guests
- Linux Additions: fixed a bug which prevented fmode from being set properly on mapped shared folders (bug #1776)
- Linux Additions: fixed appending of files on shared folders (bug #1612)
- Linux Additions: ignore the noauto option when mounting a shared folder (bug #2498)
- Linux Additions: fixed a driver issue preventing X11 from compiling keymaps (bugs #2793 and #2905)
- X11 Additions: workaround in the mouse driver for a server crash when the driver is loaded manually (bug #2397)
While my workstation has been Linux-based since the 1990s, I have been using VirtualBox to “contain” my Windows workstation-within-a-workstation since Sun purchased VirtualBox from Innotek in 2008. VirtualBox provides a rich, high-performance workstation experience (fast video, good sound and USB 2.0 support), albeit at the cost of considerable CPU overhead.
I don’t want to get too deep into a re-hash of how to install FreeNAS onto USB flash (or “thumb”) drives – there is a wealth of community information in that regard. However, when the same simple question comes up so many times in one week, I have to investigate it more thoroughly. This week, that question has been “have you tried FreeNAS?”
Anyone familiar with the fine m0n0wall project (and its off-shoot pfSense) will instantly recognize the FreeBSD appliance approach taken for FreeNAS. The look-and-feel is very m0n0wall-ish as well. In short, this is a no-nonsense and easy-to-install appliance-oriented distribution that covers the basics of network attached storage: CIFS, NFS and iSCSI. Given that m0n0wall and pfSense both virtualize very well, I have no doubt the VMware appliance version performs likewise.
1, 2, 3… NAS
That said, let’s quickly run down a 1, 2, 3 approach for booting FreeNAS to hardware from a USB drive… This is a run-from-RAM-disk appliance, so the USB storage requirement is minimal: about 50-60MB. Since I am still testing the Tyan Transport GT28 system, I will catalog my steps for that platform:
While following up on the availability issue with GT28 systems, I stumbled over a question fielded by Tyan’s technical support group about HyperTransport 3 (HT3) interconnects and the S2935. The GT28 is still front-page-listed on Tyan’s home site, making it one of their top three barebones systems. The S2935 motherboard in the GT28 was also noted in a recent press release as one of the AMD 45nm-ready systems. Tyan’s support group indicated that the S2935 has been updated to the “S2935-SI” (release TBD) to include support for HT3 at some future date:
The S2935 was designed before HT3 (HyperTransport interconnect) technology was announced. The S2935 does not have HT3 link capability. If you want to have HT3 link capability then you can use all the same components on the S2935-SI product instead. This product was re-designed to support HT3 link technologies – Revised 10/22/2008
This is good news for those of you shopping for Transport GT28 systems today and finding short supply: it looks like a new release is on the horizon. The last GT28 system we got took two weeks to fulfill and since then NewEgg has removed the part number from active inventory due to the shortage.
Once the GT28 nodes are BIOS-updated to the AGESA v188.8.131.52+ release, a few adjustments are needed to support my boot-from-flash deployment model. If you are not familiar with boot-from-USB-flash, there are many helpful blogs on the subject, like this one from vm-help.com. Suffice to say, boot-from-USB-flash for ESXi is a relatively simple process to set up:
- Make sure your BIOS supports boot-from-USB-flash;
- Download the latest release of ESX 3i from VMware;
- Mount the ISO image of the 3i installer;
- Find the “VMvisor-big” image as a “.dd.bz2” file in the mounted image;
- Un-bzip the VMvisor-big image to a temporary directory;
- Plug-in your “donor” USB flash device (I’m using the Sandisk Cruzer 4GB);
- Find the device handle of the attached USB device and unmount it (e.g. “umount /dev/sdm”);
- Use dd to copy the VMvisor image to the flash device (i.e. “dd if=/tmp/VMware-VMvisor-big-3.5.0_Update_3-123629.i386.dd of=/dev/sdm”);
- Eject the USB device and label it as ESXi;
- Insert the USB flash device into a USB 2.0 port on your equipment and boot;
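The decompress-and-copy steps above can be sketched as a small helper script. This is a minimal sketch, not a supported tool: the image filename is the one from the post, but the device path (/dev/sdm here) is an example and will differ on your system — double-check it before running anything destructive like dd.

```python
#!/usr/bin/env python3
"""Sketch of the ESXi write-to-USB steps above. The device path is an
example -- writing dd to the wrong device will destroy its contents."""
import shlex

def build_commands(image_bz2, device):
    """Return the shell commands that decompress the VMvisor image and
    raw-copy it onto the (unmounted) USB flash device."""
    image_dd = image_bz2[:-len(".bz2")]          # strip ".bz2" -> the raw .dd image
    return [
        ["bunzip2", "-k", image_bz2],            # -k keeps the original archive
        ["umount", device],                      # device must not be mounted
        ["dd", f"if={image_dd}", f"of={device}", "bs=1M"],
    ]

if __name__ == "__main__":
    cmds = build_commands(
        "/tmp/VMware-VMvisor-big-3.5.0_Update_3-123629.i386.dd.bz2",
        "/dev/sdm",                              # your device letter will differ
    )
    # Print the commands for review rather than executing them blindly.
    for cmd in cmds:
        print(" ".join(shlex.quote(c) for c in cmd))
```

Printing the commands first (instead of running them) is deliberate: it gives you one last chance to confirm the target device before committing.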
Preparing the BIOS
To prepare my GT28 for ESX 3i and boot-from-USB-flash, I insert the USB “thumb drive” into one of the rear ports and turn on the GT28. Hitting the “delete” key on boot gets me to the BIOS setup. I start with the BIOS “Optimal Defaults” and make modifications from there; the adjustments are (follow the links for screen shots):
- Reset BIOS to “Optimal Defaults”;
- Adjust Northbridge IOMMU window from 128MB to 256MB;
- Disable AMD PowerNow in BIOS;
- Adjust PCI Latency Timer from 64 to 128 (optional);
- Disable the nVidia MCP55 SATA controller (ESXi has a driver, but there may be issues with nVRAID);
- Adjust USB Mass Storage, setting the USB flash drive to Hard Disk;
- Disable the CD/DVD boot devices to avoid boot conflicts;
- Select the USB flash drive as the only boot device;
- Finally, save the BIOS changes and reboot;
- Now, the system should boot into ESXi for initial configuration;
As you can see, boot-from-USB-flash is “wicked simple” to implement (at least on this platform) and opens up all kinds of testing scenarios. In this case, the ESXi image is now running from USB flash, and only the basic configuration tasks remain. However, it is a good idea to know which Ethernet ports are which on the rear panel of the GT28.
If the PCI bus scan order is configured for “Ascent” the LAN ports will be configured as indicated in the image shown. If you modify the bus scan for “Descent” (i.e. to accommodate a RAID controller), then E2/E3 becomes E0/E1 and E0/E1 becomes E2/E3 due to the new initialization sequence. Be cautious when making such a change, therefore, since ESXi will re-enumerate the interfaces (although any interface already in use will remain pinned to its MAC address).
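The relabeling described above can be modeled in a few lines, which is handy when documenting cabling before flipping the scan-order setting. The port names are the GT28 rear-panel labels; the pair-swap behavior is my reading of the text, not anything published by Tyan.

```python
# Toy model of the GT28 port relabeling: under "Descent" scan order the
# E2/E3 and E0/E1 pairs trade places in the enumeration sequence.
ASCENT = ["E0", "E1", "E2", "E3"]

def port_order(scan="Ascent"):
    """Return rear-panel labels in the order the PCI scan discovers them."""
    if scan == "Ascent":
        return list(ASCENT)
    # "Descent": the E2/E3 pair is enumerated before E0/E1
    return ASCENT[2:] + ASCENT[:2]

print(port_order("Descent"))  # ['E2', 'E3', 'E0', 'E1']
```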
Initial Configuration of ESXi
Once your network connections are plugged in, you should have already mapped out some IP assignments and VLAN and/or trunking arrangements. While these steps are not strictly necessary in testing, they are good practice to maintain even in testing. To make the initial configurations to ESXi, from the console do the following:
- Hit “F2” to enter the configuration screen;
- Set the “root” password for the ESXi server;
- Skip “Lockdown mode” for now;
- Configure the management network of the ESXi server;
- Select the network adapter(s) to be used for management;
- If not using DHCP:
- Set a static management IP address
- Set the management subnet mask
- Set the management default gateway
- Set the management DNS configuration;
- Update the DNS suffix(es) for your local network;
- Hit “Enter” to save (“Escape” exits without change);
- Test the management network and restart if necessary;
- Exit the configuration menu and hit “F12” to restart;
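Before typing static settings into the console, it is worth sanity-checking that the planned address and gateway actually live in the same subnet — a mismatch here is a common reason the “test management network” step fails. A quick check using Python’s standard ipaddress module (the address values below are made-up examples, not recommendations):

```python
#!/usr/bin/env python3
"""Sanity-check a planned static management configuration before entering
it at the ESXi console. Example addresses are hypothetical."""
import ipaddress

def check_static_config(ip, netmask, gateway):
    """Return True if the IP and gateway fall within the same subnet."""
    network = ipaddress.ip_network(f"{ip}/{netmask}", strict=False)
    return ipaddress.ip_address(gateway) in network

# Consistent pair: host and gateway share the /24
print(check_static_config("192.168.10.50", "255.255.255.0", "192.168.10.1"))  # True
# Inconsistent pair: gateway is outside the host's subnet
print(check_static_config("192.168.10.50", "255.255.255.0", "192.168.20.1"))  # False
```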
Initial Management with VI Client
Once the ESXi server is up and online, you will need to grab the VMware Infrastructure Client from the web service provided by the ESXi server (http://<ESXi_IP_ADDRESS>/) and install it on your Windows client. If you don’t run Windows (like me), you should have a version running in a VM for just such an occasion. I find VirtualBox to be a better (free) choice for workstation-on-workstation applications, and VMware Server a good choice if the client is to be minimal and accessible from multiple hosts.
Once the VI Client is installed, run the application and enter the ESXi server’s hostname/IP-address, root username and root password where requested. A VI Client window will eventually open and allow you to complete the setup of the ESXi server as needed.
That’s really all there is to it: we have a reliable, running ESXi platform in mere minutes with minimal effort.
Updated January 15, 2009. Corrected statement that ESX 3i update 3 is not MCP55 aware – support has been added in ESX 3i update 2 and newer. In my test configuration (with the SATA controller enabled) ESX 3i update 3 does properly identify and configure the MCP55 SATA Controller as an abstracted SCSI controller.
# vmkvsitools lspci
00:00.00 Memory controller: nVidia Corporation
00:01.00 Bridge: nVidia Corporation
00:01.01 Serial bus controller: nVidia Corporation
00:02.00 Serial bus controller: nVidia Corporation
00:02.01 Serial bus controller: nVidia Corporation
00:05.00 Mass storage controller: nVidia Corporation MCP55 SATA Controller [vmhba0]
00:06.00 Bridge: nVidia Corporation
00:10.00 Bridge: nVidia Corporation
00:11.00 Bridge: nVidia Corporation
00:12.00 Bridge: nVidia Corporation
00:13.00 Bridge: nVidia Corporation
00:14.00 Bridge: nVidia Corporation
00:15.00 Bridge: nVidia Corporation
00:24.00 Bridge: Advanced Micro Devices [AMD]
00:24.01 Bridge: Advanced Micro Devices [AMD]
00:24.02 Bridge: Advanced Micro Devices [AMD]
00:24.03 Bridge: Advanced Micro Devices [AMD]
00:24.04 Bridge: Advanced Micro Devices [AMD]
01:05.00 Display controller:
04:00.00 Network controller: Intel Corporation 82571EB Gigabit Ethernet Controller [vmnic0]
04:00.01 Network controller: Intel Corporation 82571EB Gigabit Ethernet Controller [vmnic1]
05:00.00 Network controller: Intel Corporation 82571EB Gigabit Ethernet Controller [vmnic2]
05:00.01 Network controller: Intel Corporation 82571EB Gigabit Ethernet Controller [vmnic3]
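The interesting rows in that listing are the ones ESXi has claimed as vmhba/vmnic devices. A small parser (my own sketch, fed with a fragment of the output above — vmkvsitools itself is the ESXi command shown) pulls out the alias-to-PCI-address mapping:

```python
"""Sketch: extract the devices ESXi has claimed (vmnicN/vmhbaN aliases)
from a vmkvsitools lspci listing. SAMPLE is a fragment of real output."""
import re

SAMPLE = """\
00:05.00 Mass storage controller: nVidia Corporation MCP55 SATA Controller [vmhba0]
00:06.00 Bridge: nVidia Corporation
04:00.00 Network controller: Intel Corporation 82571EB Gigabit Ethernet Controller [vmnic0]
04:00.01 Network controller: Intel Corporation 82571EB Gigabit Ethernet Controller [vmnic1]
"""

def claimed_devices(lspci_text):
    """Map each ESXi device alias (vmnicN/vmhbaN) to its PCI address."""
    # PCI address at line start; alias in square brackets at line end.
    pattern = re.compile(r"^(\S+)\s.*\[(vm(?:nic|hba)\d+)\]$", re.M)
    return {alias: addr for addr, alias in pattern.findall(lspci_text)}

print(claimed_devices(SAMPLE))
# {'vmhba0': '00:05.00', 'vmnic0': '04:00.00', 'vmnic1': '04:00.01'}
```

Lines without a bracketed alias (the bridges and serial bus controllers) are simply skipped, which matches how ESXi presents unclaimed devices.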
This enables the use of the MCP55 SATA controller for flash drives at least. I will do further tests on this platform to determine the stability of the NVRAID component and its suitability for local storage (i.e. embedded VM, like virtual SAN/NAS) needs.
As I indicated in the earlier posts, the GT28 uses narrow motherboards within a single 1U chassis to create a compact, dual-node HPC platform. Small as it is, the motherboard still packs two Socket F processors, sixteen 240-pin DDR2 DIMM slots, four Gigabit Ethernet ports, space for an embedded Mellanox Infiniband processor, on-board video, a single low-profile PCIe riser slot (x8 slot/signal) and an SMDC slot.
The GT28 is not a quiet machine once powered-up, and with eight 15,000 RPM fans (3-per node plus 2 power supply fans) the system only gets louder under heavy load. Fortunately, the 45nm Opteron processor is easier on the thermal envelope than its predecessors so the fans stay around 9,000 RPM most of the time. That said, you do not want one of these systems in anything but a full rack enclosure as the tell-tale whine of the system fans is not conducive to office work. This is no appliance chassis and it was not designed to be one, but compared to an 8U AIC storage chassis, this thing is quiet.
The GT28 provides more than your typical compact server motherboard in terms of I/O options – especially network. Although we were not testing the Infiniband variant (about $600 additional), this $1,600 barebones system comes well equipped for a variety of tasks. As I indicated before, the test system is to become part of a four-node storage and hypervisor system based on VMware ESXi and Nexenta NAS (using Solaris’ ZFS storage). One node will run ESXi with Nexenta running in a virtual machine, and three nodes will run ESXi using both NFS and iSCSI as the virtual shared storage medium (provided by Nexenta).
I do want to drift briefly into a discussion on cost. Although rock-bottom prices are not a focus, per-node costs are – especially relative to non-commodity computing models.
First, let me say that I have been using Tyan Barebones Server products for many years. I have found them both robust in features and reliable. Furthermore, I have focused on AMD Opteron Eco-System products since the products became available due to the I/O benefits of hypertransport and the overall better price/performance of Opteron vs. Xeon offerings.
While I have considerable experience with Xeon systems and have participated in many side-by-side comparisons, I am convinced – as a result of such testing – that the I/O subsystems in Opteron-based platforms are far superior to those of comparable front-side-bus-dependent (FSB) Xeon systems. For low-TCO systems, the ability to load I/O elegantly is not just an advantage: it’s a must. I/O loading factors considerably into TCO, where per-node utilization and efficiency loom large.
That said, the GT28 has a Xeon-based counterpart – the Tank GT24 – to satisfy Xeon-based Eco-Systems. The Tank GT24 supports neither quad Gigabit Ethernet nor the built-in Mellanox Infiniband interface. The lack of support for these advanced I/O capabilities is further testament to the weak I/O support of the FSB paradigm.
Over the next few posts I’ll be investigating the Tyan GT28, a 1U, 2-socket, dual-node (4 sockets total) barebones system. This cost-effective system comes with some impressive features built in:
- 1U, Dual-node Chassis (shared 1000W supply)
- Dual AMD Opteron 2000-series processor (per node)
- Shanghai processor support (AGESA v184.108.40.206 required, BIOS v1.03+)
- 16 DRAM slots (per node, DDR2/REG/ECC)
- 2 x Dual Gigabit Ethernet (per node, Intel 82571EB)
- 20Gbps Infiniband 4x port (per node, B2935G28VHI only, Mellanox MT25204)
- 2 x Hot-Swap SATA/SAS bays (per node)
- 1 x Low-profile PCI Express x8 slot (per node)
- XGI Volari Z9S Graphics Controller
- nVIDIA nForce Professional 3600 (NFP 3600)
- IPMI/SMDC slot (per node, Tyan M3295-2/M3296)
I am testing mine with the M3296 SMDC card with IP/KVM support (Raritan KIRA100, firmware version 1.00, build 5772, GT28 r01). WARNING: The units I received did not come with the AGESA v220.127.116.11-updated BIOS and would not boot with a Shanghai processor, requiring a BIOS update with an older 2000-series chip.
After the BIOS updates on both the SMDC and the motherboard, the system boots without issue in my test configuration (per node):
- 4 x Kingston 4GB 240-Pin DDR2 SDRAM DDR2 667 (PC2 5300) ECC Registered Server Memory Model KVR667D2D4P5/4G
- 1 x AMD Opteron 2376 Shanghai 2.3GHz 45nm Socket F 75W Quad-Core Server Processor
- 1 x LSI SAS3442E-R (LSI00110) 8-port 3Gb/s SATA/SAS controller, 8-lane PCI Express
- 1 x Tyan M3296 IP/KVM SMDC Card (IPMI 2.0)
I’ll have some screen shots of BIOS settings and configurations (courtesy of the M3296’s IP/KVM) soon. Since the Gigabit Ethernet controllers support both “jumbo frames” and TCP off-load, we’ll try to squeeze in a few performance comparisons of these features as well.
If storage is the key, shared storage is the key that opens all locks. In the early days of file servers, shared storage meant a common file store presented to users over the network infrastructure. This storage was commonly found in a DAS array – usually RAID1 or RAID5 – and managed by a general-purpose server operating system (like Windows or NetWare). Eventually, such storage paradigms adopted clustering technologies for high availability, but the underlying principles remained much the same: an extrapolation of a general-purpose implementation.
Today, shared storage means something completely different. The need for “file servers” of old has not disappeared, but the place where stored data ultimately lives has moved from DAS to the network. Network attached storage – in the form of filers and network block devices – is replacing DAS as companies retire legacy systems and expand their data storage and business continuity horizons. Today, commercial and open source software options abound that provide the stability, performance, scalability, redundancy and feature sets that can deliver increased functionality and accelerated ROI to their adopters.