In Part 2 of this series we introduced the storage architecture that we would use for the foundation of our “shared storage” necessary to allow vMotion to do its magic. As we have chosen NexentaStor for our VSA storage platform, we have the choice of either NFS or iSCSI as the storage backing.
In Part 3 of this series we will install NexentaStor, create some file systems, and discuss the advantages and disadvantages of NFS and iSCSI as the storage backing. By the end of this segment, we will have everything in place for the ESX and ESXi virtual machines we’ll build in the next segment.
Part 3: Building the VSA
For memory, our lab system has 24GB of RAM, which we will apportion as follows: 2GB of overhead for the host, 4GB to NexentaStor, 8GB to ESXi, and 8GB to ESX. This leaves 2GB that can be used to support a vCenter installation at the host level.
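As a quick sanity check, the memory budget above can be tallied in a couple of lines of shell (the variable names are ours, not anything VMware-specific):

```shell
# Hypothetical tally of the lab host's RAM budget described above.
total_ram=24          # GB installed in the lab host
host_overhead=2       # reserved for the host itself
vsa=4                 # NexentaStor VSA
esxi_vm=8             # nested ESXi guest
esx_vm=8              # nested ESX guest
left_for_vcenter=$((total_ram - host_overhead - vsa - esxi_vm - esx_vm))
echo "${left_for_vcenter}GB remaining for vCenter"   # prints: 2GB remaining for vCenter
```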
Our lab mule was configured with 2x250GB SATA II drives, each providing roughly 230GB of VMFS-partitioned storage. Subtracting 10% for overhead, the sum of our virtual disks will be limited to roughly 414GB. Because of our relative size restrictions, we will try to maximize available storage while limiting our liability in case of disk failure. Therefore, we’ll plan to put the ESXi server on drive “A” and the ESX server on drive “B”, with the virtual disks of the VSA split across both “A” and “B” disks.
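The usable-capacity math works out like this (a rough integer sketch; the 230GB per-drive figure is the approximate VMFS capacity quoted above):

```shell
# Rough capacity budget for the two SATA drives described above.
per_drive=230                      # GB of VMFS capacity per drive (approximate)
drives=2
raw=$((per_drive * drives))        # 460GB raw across both drives
usable=$((raw * 90 / 100))         # subtract ~10% for overhead
echo "${usable}GB usable for virtual disks"   # prints: 414GB usable for virtual disks
```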
Our VSA Virtual Hardware
For lab use, a VSA with 4GB RAM and 1 vCPU will suffice. Additional vCPUs would only limit CPU scheduling for our virtual ESX/ESXi servers, so we’ll leave it at the minimum. Since we’re splitting storage roughly equally across the disks, and an additional 4GB was taken up on disk “A” during the installation of ESXi, we’ll place the VSA’s definition and “boot” disk on disk “B”; otherwise, we’ll interleave disk slices equally across both disks.
- Datastore – vLocalStor02B, 8GB vdisk size, thin provisioned, SCSI 0:0
- Guest Operating System – Solaris, Sun Solaris 10 (64-bit)
- Resource Allocation
- CPU Shares – Normal, no reservation
- Memory Shares – Normal, 4096MB reservation
- No floppy disk
- CD-ROM drive – mapped to the NexentaStor 2.1 EVAL ISO image, “connect at power on” enabled
- Network Adapters – Three total
- One to “VLAN1 Mgt NAT” and
- Two to “VLAN2000 vSAN”
- Additional Hard Disks – 6 total
- vLocalStor02A, 80GB vdisk, thick, SCSI 1:0, independent, persistent
- vLocalStor02B, 80GB vdisk, thick, SCSI 2:0, independent, persistent
- vLocalStor02A, 65GB vdisk, thick, SCSI 1:1, independent, persistent
- vLocalStor02B, 65GB vdisk, thick, SCSI 2:1, independent, persistent
- vLocalStor02A, 65GB vdisk, thick, SCSI 1:2, independent, persistent
- vLocalStor02B, 65GB vdisk, thick, SCSI 2:2, independent, persistent
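Since the six data vdisks come in matched pairs interleaved across drives “A” (SCSI bus 1) and “B” (SCSI bus 2), one natural layout for NexentaStor’s ZFS layer is to mirror each pair across the physical drives, so a single drive failure takes out only one side of each mirror. This is just a sketch of that option; the device names below are illustrative placeholders, not the names NexentaStor will actually assign:

```shell
# Illustrative only: one possible ZFS pool layout for the six vdisks,
# mirroring each matched pair across physical drives A and B.
# Pool and device names (labpool, c1t0d0, ...) are hypothetical placeholders.
zpool create labpool \
  mirror c1t0d0 c2t0d0 \
  mirror c1t1d0 c2t1d0 \
  mirror c1t2d0 c2t2d0

# Verify the resulting vdev topology.
zpool status labpool
```

Mirroring trades half the raw capacity for redundancy; a striped (non-mirrored) pool would maximize space at the cost of losing the pool if either drive fails.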
NOTE: It is important to realize here that the virtual disks above could have been provided by VMDKs on the same disk, VMDKs spread out across multiple disks, or by RDMs mapped to raw SCSI drives. If your lab chassis has multiple hot-swap bays or even just generous internal storage, you might want to try providing NexentaStor with RDMs or a one-VMDK-per-disk layout for performance testing or “near” production use. CPU, memory and storage are the basic elements of virtualization, and there is no reason that storage must be the bottleneck. For instance, this environment is GREAT for testing SSD applications on a resource-limited budget.