
In-the-Lab: Full ESX/vMotion Test Lab in a Box, Part 4

August 26, 2009

In Part 3 of this series we showed how to install and configure a basic NexentaStor VSA using iSCSI and NFS storage. We also created a CIFS bridge for managing the ISO images that are available to our ESX servers over NFS. We now have a fully functional VSA with a working iSCSI target (not yet mounted) and a read-only NFS export mounted to the hardware host.

In this segment, Part 4, we will create an ESXi instance on NFS along with an ESX instance on iSCSI, and, using writable snapshots, turn both of these installations into quick-deploy templates. We’ll then mount our large iSCSI target (created in Part 3) and NFS-based ISO images to all ESX/ESXi hosts (physical and virtual), and get ready to install our vCenter virtual machine.

Part 4, Making an ESX Cluster-in-a-Box

With a lot of things behind us in Parts 1 through 3, we are going to pick up the pace a bit. Although ZFS snapshots are immediately available in a hidden “.zfs” folder for each snapshotted file system, we are going to use cloning and mount the cloned file systems instead.

Cloning allows us to re-use a file system as a template for a copy-on-write variant of the source. By using the clone instead of the original, we conserve storage because only the differences between the two file systems (the clone and the source) are stored to disk. This saves time as well, letting us leverage “clean installations” as starting points (templates) along with their associated storage (much like VMware’s linked-clone technology for VDI). While VMware’s “template” capability also saves time by using a VM as a “starting point,” it does so by copying storage, not cloning it, and therefore conserves no storage.
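
To make the copy-on-write relationship concrete, here is a minimal sketch of the snapshot-and-clone sequence, driven from Python against the raw zfs command line (NexentaStor wraps the same operations in its web GUI and NMC). The pool, folder and snapshot names are placeholders for illustration, not values taken from the appliance:

    import subprocess

    # Placeholder names -- substitute the pool/folder names used on your own
    # NexentaStor appliance. Run where the 'zfs' command is available (e.g.,
    # the appliance's expert-mode shell).
    POOL = "vol0"
    TEMPLATE = POOL + "/esxi-template"      # folder holding the clean ESXi install
    SNAPSHOT = TEMPLATE + "@clean-install"  # read-only point-in-time copy
    CLONE = POOL + "/esxi-node1"            # writable copy-on-write clone

    def zfs(*args):
        """Run a zfs subcommand, raising an error if it fails."""
        subprocess.run(["zfs"] + list(args), check=True)

    zfs("snapshot", SNAPSHOT)      # freeze the template
    zfs("clone", SNAPSHOT, CLONE)  # the clone initially consumes almost no space

    # 'used' on the clone grows only as its blocks diverge from its 'origin'.
    zfs("list", "-o", "name,used,referenced,origin", CLONE)

A clone created through the NexentaStor GUI behaves the same way: the snapshot is the shared baseline, and each clone stores only its own deltas.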

Using clones in NexentaStor to conserve storage and aid rapid deployment and testing. Only the differences between the source and the clone require additional storage on the NexentaStor appliance.

While the ESX and ESXi use cases might not seem like “perfect candidates” for cloning in a “production” environment, in the lab cloning opens up an abundance of possibilities for regression and isolation testing. In production you might find that NFS and iSCSI boot capabilities make cloned hosts just as effective for deployment and backup as they are in the lab (but that’s another blog).

Here’s the process we will continue with for this part in the lab series (a ZFS-side sketch of the storage steps follows the diagram below):

  1. Create NFS folder in NexentaStor for the ESXi template and share via NFS;
  2. Modify the NFS folder properties in NexentaStor to:
    1. limit access to the hardware ESXi host only;
    2. grant the hardware ESXi host “root” access;
  3. Create a folder in NexentaStor for the ESX template and create a Zvol;
  4. From VI Client’s “Add Storage…” function, we’ll add the new NFS and iSCSI volumes to the Datastore;
  5. Create ESX and ESXi clean installations in these “template” volumes as a cloning source;
  6. Unmount the “template” volumes using the VI Client and unshare them in NexentaStor;
  7. Clone the “template” Zvol and NFS file systems using NexentaStor;
  8. Mount the clones with VI Client and complete the ESX and ESXi installations;
  9. Mount the main Zvol and ISO storage to ESX and ESXi as primary shared storage.
Basic storage architecture for the ESX-on-ESX lab.
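
As a rough guide to what steps 1 through 3 look like underneath the NexentaStor GUI, the sketch below drives the equivalent raw ZFS commands from Python. Every name (pool, folder, Zvol, host) is a placeholder assumption; the appliance’s web GUI or NMC remains the supported way to perform these operations, the iSCSI target mapping for the Zvol is still done in the GUI, and step 7 is the snapshot-and-clone sequence sketched earlier:

    import subprocess

    POOL = "vol0"         # placeholder pool name
    ESXI_HOST = "esxi1"   # hardware ESXi host's DNS name (assumption)

    def zfs(*args):
        subprocess.run(["zfs"] + list(args), check=True)

    # Steps 1-2: an NFS folder for the ESXi template, shared read-write with
    # root access restricted to the hardware ESXi host only.
    zfs("create", POOL + "/esxi-template")
    zfs("set", "sharenfs=rw=%s,root=%s" % (ESXI_HOST, ESXI_HOST),
        POOL + "/esxi-template")

    # Step 3: a folder for the ESX template plus a Zvol to back its iSCSI LUN.
    # The size is arbitrary here; the block size must be chosen at creation.
    zfs("create", POOL + "/esx-templates")
    zfs("create", "-V", "12G", "-o", "volblocksize=64K",
        POOL + "/esx-templates/esx-template")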

When these steps are completed, our virtualization lab will be ready to install vCenter and start experimenting with virtual machines and vMotion. The astute will have figured out by now that the host ESXi server could be added to the same virtual datacenter as the virtualized ESX hosts; however, we want to keep experimentation limited to the virtualized hosts (and not have to juggle start order, volume mounting, etc.). Keep in mind that, for some lab use cases, adding the ESXi host machine to the datacenter might prove valuable.

Review Configurations

Before we move on to the lab implementation, we want to point out some changes we have made to our VMware and NexentaStor environments to bolster lab-in-a-box performance. The following settings were changed in VMware’s “Advanced Settings” panel based on guidance from NetApp’s TR-3428-1 and VMware’s Performance Best Practices Guide (a scripted sketch of these changes follows the lists below):

  • NFS.MaxVolumes = 32
  • NFS.HeartbeatFrequency = 12
  • NFS.HeartbeatMaxFailures = 10
  • Net.TcpipHeapSize = 30
  • Net.TcpipHeapMax = 120
  • VMkernel.Boot.debugLogToSerial = 0
  • VMkernel.Boot.disableC1E = 1

Additional “Advanced Settings” were modified based on further guidance:

  • IRQ.RoutingPolicy = 0
  • BufferCache.SoftMaxDirty = 65
  • Disk.SchedNumReqOutstanding = 64
  • UserVars.CIMEnabled = 0
  • Misc.LogToSerial = 0
  • Misc.LogToFile = 0
  • Syslog.Remote.Hostname = <our_syslog_server>
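
For reference, most of these values can also be set from the classic ESX service console with esxcfg-advcfg instead of clicking through the VI Client. The snippet below is a convenience sketch under the assumption that each dotted option name maps to the usual /Section/Option path; verify every path on your build before setting it, and leave the VMkernel.Boot, IRQ, UserVars and Syslog entries to the VI Client panel:

    import subprocess

    # Advanced settings applied from the ESX service console. "-g" reports the
    # current value, "-s" sets a new one.
    SETTINGS = {
        "/NFS/MaxVolumes": "32",
        "/NFS/HeartbeatFrequency": "12",
        "/NFS/HeartbeatMaxFailures": "10",
        "/Net/TcpipHeapSize": "30",
        "/Net/TcpipHeapMax": "120",
        "/BufferCache/SoftMaxDirty": "65",
        "/Disk/SchedNumReqOutstanding": "64",
        "/Misc/LogToSerial": "0",
        "/Misc/LogToFile": "0",
    }

    for path, value in SETTINGS.items():
        subprocess.run(["esxcfg-advcfg", "-g", path], check=True)         # show current
        subprocess.run(["esxcfg-advcfg", "-s", value, path], check=True)  # set new

Scripting these makes it easier to keep multiple lab hosts consistent with one another.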

We also made some changes to our system’s BIOS per VMware’s Performance Best Practices Guide:

  • Disable memory node interleave, DCT unganged mode
  • Enable AMD-V
  • Disable AMD PowerNow and C1E
  • Disable serial ports (USB enabled for flash boot)
  • Disable unused SATA devices and nvRAID

And for NexentaStor, the following changes were made, corresponding to the above references and our own best practices (a command-line sketch follows this list):

  • NFS Folder Properties
    • default block size = 32K
    • Access Time = off
  • ZVol Folder Properties
    • default block size = 64K
    • Access Time = off
  • NFS share settings
    • Read-Write = <esxi_host_name>:<esxi_ip_address>
    • Root = <esxi_host_name>:<esxi_ip_address>
    • Anonymous Read-Write = disabled
  • NFS Client Version = 3
  • Settings/Preferences/Net_tcp_recv_hiwat set to “64240”
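
The folder properties above correspond to standard ZFS dataset properties. A minimal sketch, again assuming placeholder dataset names and driving the raw zfs command (the NexentaStor GUI is the supported interface; note that a Zvol’s 64K block size can only be supplied when the Zvol is created, as in the earlier sketch):

    import subprocess

    POOL = "vol0"          # placeholder pool name
    ESXI_HOST = "esxi1"    # hardware ESXi host's DNS name (assumption)

    def zfs(*args):
        subprocess.run(["zfs"] + list(args), check=True)

    # NFS template folder: 32K records, no access-time updates, and read-write
    # plus root access restricted to the ESXi host.
    zfs("set", "recordsize=32K", POOL + "/esxi-template")
    zfs("set", "atime=off", POOL + "/esxi-template")
    zfs("set", "sharenfs=rw=%s,root=%s" % (ESXI_HOST, ESXI_HOST),
        POOL + "/esxi-template")

    # Zvol folder: access-time updates off. (volblocksize=64K was passed to
    # 'zfs create -V' when the Zvol was made; it cannot be changed afterward.)
    zfs("set", "atime=off", POOL + "/esx-templates")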

Obviously, the BIOS settings will require a reboot, so the ESX changes – some of which also require a reboot – should be applied before rebooting for the BIOS changes. While nothing we will do in the remaining sections of this lab requires these changes, it is good practice to “standardize” your environment, and this is a good place to start. Realize that any block size change in ZFS only takes effect on write: blocks written prior to the change will not be converted to the new default size just because the default has changed.

MFU blocks are kept in the ARC and move to the L2ARC as more favorable entries force eviction. The L2ARC’s evict-ahead policy aggregates ARC entries and predictively pushes them to L2ARC devices to eliminate ARC eviction latency; the L2ARC also backstops the ARC against processes that may force premature ARC eviction (a runaway application, for example) or otherwise adversely affect performance.

We will get into how the ZIL, ARC and L2ARC are affected by these decisions in the next part of the series. However, you might want to consider how the use of clones might not only improve effective storage density but also improve performance as it increases effective cache density as well. To understand this, refer back to Part 2’s discussion on cache behavior and how ARC+L2ARC improve read performance based on MFU blocks. Since templates create an obvious opportunity to increase MFU consideration as a template is cloned over and over, the proper use of cloning could create a win-win proposition at the cost of a little administrative overhead.
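
One way to see whether cloning really is increasing effective cache density is to sample the ZFS “arcstats” kstat on the appliance before and after booting a handful of clones and compare hit ratios. A minimal sketch, assuming the standard kstat utility and arcstats counter names available on the OpenSolaris-based appliance:

    import subprocess

    FIELDS = {"hits", "misses", "l2_hits", "l2_misses", "size", "l2_size"}

    def arcstats():
        """Return selected ARC/L2ARC counters from the zfs:0:arcstats kstat."""
        out = subprocess.run(["kstat", "-p", "zfs:0:arcstats"],
                             stdout=subprocess.PIPE, universal_newlines=True,
                             check=True).stdout
        stats = {}
        for line in out.splitlines():
            parts = line.split()
            if len(parts) != 2:
                continue
            key = parts[0].split(":")[-1]   # e.g. zfs:0:arcstats:hits -> hits
            if key in FIELDS:
                stats[key] = int(parts[1])
        return stats

    s = arcstats()
    arc_total = (s["hits"] + s["misses"]) or 1
    l2_total = (s["l2_hits"] + s["l2_misses"]) or 1
    print("ARC   hit ratio: %5.1f%%  (size %d bytes)"
          % (100.0 * s["hits"] / arc_total, s["size"]))
    print("L2ARC hit ratio: %5.1f%%  (size %d bytes)"
          % (100.0 * s["l2_hits"] / l2_total, s["l2_size"]))

If the clones share a template, their common blocks should show up as a rising ARC/L2ARC hit ratio rather than as additional cache consumption.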

Next page, Steps 1-4 – Creating NFS Storage for ESXi, iSCSI for ESX



Comments

  1. What about Part 5 of this great tutorial?

    Will you release it soon?


    • We will be releasing Part 5 later this week – we took a week off to follow VMworld and related events.


  2. What do you recommend for a ZFS based VSA when more than 2 TB is necessary and one does not have $1400…?


    • In government finance there’s a saying: “why stop at one when you can have two at twice the price?” With the ability to link NexentaStor appliances (management), it is pretty easy to handle two or more of them within reason. In a VSA, that approach is a bit more resource intensive.

      Theoretically, the Sun Unified Storage Simulator (VSA) can be tweaked to handle as much storage as you can throw at it. It will not allow you to control disk allocations in the way that NexentaStor will as it emulates the hardware build-out and limited functionality of the Sun 7000 Unified Storage array. For instance, you’ll only get one storage pool/volume; allocation of ZIL or L2ARC is not possible through the web interface; etc. However, the analytics are beautiful and it works pretty well overall.

      I reviewed it in March 2009, but a new version has been released since then (May 2009) that supports VirtualBox as well as VMware. This would have been a “better” choice of VSA for this project if there were a hardware migration path. However, I suspect that if $1,400 is keeping you off the application, $10,000 for the rather limited Sun Storage 7110 would be a deal breaker. NexentaStor offers a better bang-for-the-buck migration path to hardware (where your primary storage should be).

      The other option would be OpenSolaris if you’re a Solaris fan, or Nexenta Core if you’re an Ubuntu fan. Either will present you with unrestricted, CLI-based control over your ZFS-based storage. If you don’t mind managing the storage without the front-end provided by Sun or Nexenta, these are your most stable and reliable alternatives. The latest release candidate for FreeNAS includes ZFS support, but – based on my experience – performance is not on par with Nexenta or Solaris and there are ongoing stability concerns about the ZFS port.

      The best part about using a VSA is choice and the ability to “kick the tires” without making a big commitment. Experiment with two or three VSAs before settling on the one that best fits your use case.


