
In-the-Lab: Full ESX/vMotion Test Lab in a Box, Part 1

August 17, 2009

There are many features in vSphere worth exploring, but doing so requires committing time, effort, testing, training and hardware resources. In this feature, we’ll investigate a way – using your existing VMware facilities – to reduce the time, effort and hardware needed to test and train up on vSphere’s ESXi, ESX and vCenter components. We’ll start with a single hardware server running the free VMware ESXi as the “lab mule” and install everything we need on top of that system.

Part 1, Getting Started

To get started, here are the major hardware and software items you will need to follow along:

Recommended Lab Hardware Components

  • One 2P, 6-core AMD “Istanbul” Opteron system
  • Two 500-1,500GB Hard Drives
  • 24GB DDR2/800 Memory
  • Four 1Gbps Ethernet Ports (4×1, 2×2 or 1×4)
  • One 4GB SanDisk “Cruzer” USB Flash Drive
  • Either of the following:
    • One CD-ROM with VMware-VMvisor-Installer-4.0.0-164009.x86_64.iso burned to it
    • An IP/KVM management card to export ISO images to the lab system from the network

Recommended Lab Software Components

  • One ISO image of NexentaStor 2.x (for the Virtual Storage Appliance, VSA, component)
  • One ISO image of ESX 4.0
  • One ISO image of ESXi 4.0
  • One ISO image of vCenter Server 4
  • One ISO image of Windows Server 2003 STD (for vCenter installation and testing)

For the hardware items to work, you’ll need to check your system components against the VMware HCL and community-supported hardware lists. For best results, always disable (in BIOS) or physically remove all unsupported or unused hardware – this includes communication ports, USB, software RAID, etc. Doing so will reduce potential hardware conflicts from unsupported devices.

The Lab Setup

We’re first going to install VMware ESXi 4.0 on the “test mule” and configure the local storage for maximum use. Next, we’ll create three (3) virtual machines to form our “virtual testing lab” – deploying ESX, ESXi and NexentaStor directly on top of our ESXi “test mule.” All subsequent test VMs will run in either of the virtualized ESX platforms, from shared storage provided by the NexentaStor VSA.

ESX, ESXi and VSA running atop ESXi

Next up, a quick-and-easy install of ESXi to USB flash…

Installing ESXi to Flash

This is actually a very simple part of the lab installation. ESXi 4.0 installs to flash directly from the basic installer provided on the ESXi disc. In our lab, we use the IP/KVM’s “virtual CD” capability to mount the ESXi ISO from network storage and install it over the network. If using an attached CD-ROM drive, just put the disc in, boot and follow the instructions on-screen. We’ve produced a blog post showing how to “Install ESXi 4.0 to Flash” if you need more details – screen shots are provided.
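If you go the CD-ROM route, it’s worth verifying the downloaded ISO against the MD5 sum published on VMware’s download page before burning – it saves chasing phantom install failures later. For example, from any Linux box:

  md5sum VMware-VMvisor-Installer-4.0.0-164009.x86_64.iso   # compare the output against the sum on the download page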

Once ESXi reboots for the first time, you will need to configure the network cards in a manner appropriate for your lab’s networking needs. This represents your first decision point: will the “virtual lab” be isolated from the rest of your network? If the answer is yes, one NIC will be plenty for management, since all other “virtual lab” traffic will be contained within the ESXi host. If the answer is no – say you want two or more “lab mules” working together – then consider the following common needs:

  • One dedicated VMotion/Management NIC
  • One dedicated Storage NIC (iSCSI initiator)
  • One dedicated NIC for Virtual Machine networks

We recommend one of the following interface configurations (a command-line sketch of the two-group layout follows the list):

  • Using one redundancy group
    • Add all NICs to the same group in the configuration console
    • Use NIC Teaming Failover Order to dedicate one NIC to management/VMotion and one NIC to iSCSI traffic within the default vSwitch
    • Load balancing will be based on port ID
  • Using two redundancy groups (2 NIC per group)
    • Add only two NICs to the management group in the configuration console
    • Use NIC Teaming Failover Order to dedicate one NIC to management/VMotion traffic within the default vSwitch (vSwitch0)
    • From the VI Client, create a new vSwitch, vSwitch1, with the remaining two NICs
    • Use either port ID (default) or hash load balancing depending on your SAN needs
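For reference, here is a minimal sketch of the two-group layout from the (unsupported) Tech Support Mode console, assuming your second pair of ports enumerates as vmnic2 and vmnic3 – check yours with esxcfg-nics -l first. The VI Client remains the supported way to do this, and the NIC Teaming Failover Order itself must still be set there (vSwitch Properties, NIC Teaming tab):

  esxcfg-nics -l                              # list physical NICs; confirm names and link state
  esxcfg-vswitch -a vSwitch1                  # create the second vSwitch
  esxcfg-vswitch -L vmnic2 vSwitch1           # link the first uplink
  esxcfg-vswitch -L vmnic3 vSwitch1           # link the second uplink
  esxcfg-vswitch -A "VM Network 2" vSwitch1   # add a port group for virtual machine traffic
  esxcfg-vswitch -l                           # verify the resulting layout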
Port and switch interconnections: our switch ports and redundancy groups – two NICs using port ID load balancing, two NICs using IP hash load balancing.

Test the network configuration by failing each port and making sure that all interfaces provide equal function. If you are new to VMware networking concepts, stick to the single redundancy group until your understanding matures – it will save time and hair… If you are a veteran looking to hone your ESX4 or vSphere skills, then you’ll want to tailor the network to fit your intended lab use.
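A quick way to verify failover from the console (again assuming Tech Support Mode; the 172.16.1.1 address below is only an example – substitute your own VSA or gateway IP) is to pull one cable at a time and run:

  esxcfg-nics -l        # the Link column should show the pulled port as Down
  vmkping 172.16.1.1    # the VMkernel path should still answer over the surviving NIC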

Next, we cover some ESXi topics for first-timers…

