In Part 3 of this series we showed how to install and configure a basic NexentaStor VSA using iSCSI and NFS storage. We also created a CIFS bridge for managing ISO images that are available to our ESX servers using NFS. We now have a fully functional VSA with a working iSCSI target (not yet mounted) and a read-only NFS export mounted to the hardware host.
In this segment, Part 4, we will create an ESXi instance on NFS along with an ESX instance on iSCSI, and, using writable snapshots, turn both of these installations into quick-deploy templates. We’ll then mount our large iSCSI target (created in Part 3) and NFS-based ISO images to all ESX/ESXi hosts (physical and virtual), and get ready to install our vCenter virtual machine.
Part 4, Making an ESX Cluster-in-a-Box
With a lot of things behind us in Parts 1 through 3, we are going to pick up the pace a bit. Although ZFS snapshots are immediately available in a hidden “.zfs” folder for each snapshotted file system, we are going to use cloning and mount the cloned file systems instead.
Cloning allows us to re-use a file system as a template for a copy-on-write variant of the source. By using the clone instead of the original, we conserve storage because only the differences between the two file systems (the clone and the source) are stored on disk. This saves time as well, since we can leverage “clean installations” as starting points (templates) along with their associated storage (much like VMware’s linked-clone technology for VDI). While VMware’s “template” capability also lets us save time by using a VM as a starting point, it does so by copying storage, not cloning it, and therefore conserves no storage.
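Under the hood, a NexentaStor clone is a standard ZFS snapshot-and-clone pair. As a rough sketch of what the appliance does for us (the pool and dataset names here are hypothetical; in NexentaStor you would normally drive this from the web GUI or NMC rather than the raw `zfs` command):

```shell
# Snapshot the clean "template" file system (dataset names are examples).
zfs snapshot vol0/esxi-template@clean

# Create a copy-on-write clone of that snapshot; only blocks that
# diverge from the template will consume new space.
zfs clone vol0/esxi-template@clean vol0/esxi-01

# Verify: the clone reports its origin snapshot and near-zero space used.
zfs get origin,used vol0/esxi-01
```

Because the clone references the template's blocks, a dozen cloned ESX installations can share one template's storage footprint plus only their individual deltas.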
Using clones in NexentaStor to conserve storage and aid rapid deployment and testing. Only the differences between the source and the clone require additional storage on the NexentaStor appliance.
While the ESX and ESXi use cases might not seem the “perfect candidates” for cloning in a “production” environment, in the lab it allows for an abundance of possibilities in regression and isolation testing. In production you might find that NFS and iSCSI boot capabilities could make cloned hosts just as effective for deployment and backup as they are in the lab (but that’s another blog).
Here’s the process we will continue with for this part in the lab series:
- Create NFS folder in NexentaStor for the ESXi template and share via NFS;
- Modify the NFS folder properties in NexentaStor to:
- limit access to the hardware ESXi host only;
- grant the hardware ESXi host “root” access;
- Create a folder in NexentaStor for the ESX template and create a Zvol;
- From VI Client’s “Add Storage…” function, we’ll add the new NFS and iSCSI volumes to the Datastore;
- Create ESX and ESXi clean installations in these “template” volumes as a cloning source;
- Unmount the “template” volumes using the VI Client and unshare them in NexentaStor;
- Clone the “template” Zvol and NFS file systems using NexentaStor;
- Mount the clones with VI Client and complete the ESX and ESXi installations;
- Mount the main Zvol and ISO storage to ESX and ESXi as primary shared storage;
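For the mounting steps, the VI Client's “Add Storage…” wizard is the friendly path, but the same work can be done from the ESX service console. A sketch, assuming a NexentaStor appliance at a hypothetical address with its default `/volumes/<volume>/<folder>` export paths:

```shell
# Enable the software iSCSI initiator on the ESX host (target discovery
# is then configured against the NexentaStor portal via the VI Client).
esxcfg-swiscsi -e

# Mount the NFS exports as datastores:
#   -o  NFS server (example IP), -s  exported path, last arg = datastore label
esxcfg-nas -a -o 192.168.1.10 -s /volumes/vol0/esxi-template esxi-template
esxcfg-nas -a -o 192.168.1.10 -s /volumes/vol0/iso iso-images

# List the mounted NAS datastores to confirm.
esxcfg-nas -l
```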
Basic storage architecture for the ESX-on-ESX lab.
When these steps are completed, our virtualization lab will be ready to install vCenter and start experimenting with virtual machines and vMotion. Some of the astute will have figured out by now that the host ESXi server could be added to the same virtual datacenter as the virtualized ESX hosts; however, we want to keep experimentation limited to the virtualized hosts (and not have to juggle start order, volume mounting, etc.). Keep in mind that, for some lab use cases, adding the ESXi host machine to the datacenter might prove valuable.
Before we move on to the lab implementation, we want to point out some changes we have made to our VMware and NexentaStor environments to bolster the lab-in-a-box performance. The following settings were changed in VMware’s “Advanced Settings” panel based on guidance from NetApp’s TR-3428-1 and VMware’s Performance Best Practices Guide:
- NFS.MaxVolumes = 32
- NFS.HeartbeatFrequency = 12
- NFS.HeartbeatMaxFailures = 10
- Net.TcpipHeapSize = 30
- Net.TcpipHeapMax = 120
- VMkernel.Boot.debugLogToSerial = 0
- VMkernel.Boot.disableC1E = 1
Additional “Advanced Settings” modified based on additional guidance:
- IRQ.RoutingPolicy = 0
- BufferCache.SoftMaxDirty = 65
- Disk.SchedNumReqOutstanding = 64
- UserVars.CIMEnabled = 0
- Misc.LogToSerial = 0
- Misc.LogToFile = 0
- Syslog.Remote.Hostname = <our_syslog_server>
We also made some changes to our system’s BIOS per VMware’s Performance Best Practices Guide:
- Disable memory node interleave, DCT unganged mode
- Enable AMD-V
- Disable AMD PowerNow and C1E
- Disable serial ports (USB enabled for flash boot)
- Disable unused SATA devices and nvRAID
And for NexentaStor, the following changes were made corresponding to the above references and our own best practices:
- NFS Folder Properties
- default block size = 32K
- Access Time = off
- ZVol Folder Properties
- default block size = 64K
- Access Time = off
- NFS share settings
- Read-Write = <esxi_host_name>:<esxi_ip_address>
- Root = <esxi_host_name>:<esxi_ip_address>
- Anonymous Read-Write = disabled
- NFS Client Version = 3
- Settings/Preferences/Net_tcp_recv_hiwat set to “64240”
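The NexentaStor folder and share settings above map onto standard ZFS properties. For reference, a raw-ZFS equivalent (host and dataset names are examples; on the appliance itself you would set these through the GUI or NMC):

```shell
# Restrict read-write and root access to the ESXi host, per the share
# settings above (replace esxi01 with your host name or IP).
zfs set sharenfs="rw=esxi01,root=esxi01" vol0/esxi-template

# Disable access-time updates and set the 32K block size on the NFS folder.
zfs set atime=off vol0/esxi-template
zfs set recordsize=32K vol0/esxi-template

# Confirm the properties.
zfs get sharenfs,atime,recordsize vol0/esxi-template
```

Note that for a Zvol the block size (`volblocksize`) is fixed at creation time, which is why we set the 64K default before creating the Zvol rather than after.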
Obviously, the BIOS settings will require a reboot, so the ESX changes – some of which also require a reboot – should be made before rebooting for the BIOS changes. While nothing we will do in the remaining sections of this lab requires these changes, it is good practice to “standardize” your environment, and this is a good place to start. Realize that any block-size change in ZFS takes effect only on write: blocks written prior to the change will not be converted to the new default size just because the default has been changed.
MFU blocks are kept in ARC and move to the L2ARC as more favorable entries force eviction. L2ARC's evict-ahead policy aggregates ARC entries and predictively pushes them to L2ARC devices to eliminate ARC eviction latency.
We will get into how the ZIL, ARC and L2ARC are affected by these decisions in the next part of the series. However, you might want to consider how the use of clones might not only improve effective storage density but also improve performance as it increases effective cache density as well. To understand this, refer back to Part 2’s discussion on cache behavior and how ARC+L2ARC improve read performance based on MFU blocks. Since templates create an obvious opportunity to increase MFU consideration as a template is cloned over and over, the proper use of cloning could create a win-win proposition at the cost of a little administrative overhead.
Next page, Steps 1-4 – Creating NFS Storage for ESXi, iSCSI for ESX