Posts Tagged ‘smb’

Quick-Take: NexentaStor AD Shares in 100% Virtual SMB

July 19, 2010

Here’s a maintenance note for SMB environments attempting 100% virtualization and relying on SAN-based file shares to simplify backup and storage management: beware the chicken-and-egg scenario on restart before going home to capture much-needed Zzz’s. If your domain controller is virtualized and its VMDK file lives on SAN/NAS, you’ll need to restart SMB services on the NexentaStor appliance before leaving the building.
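
If you would rather script that restart than click through the appliance GUI at the end of a long maintenance window, it can be done remotely. The sketch below is a hedged example, not NexentaStor documentation: it assumes SSH access to the appliance, an expert-mode (bash) shell, and the stock OpenSolaris SMF name for the kernel SMB server; check your NexentaStor release for the supported NMC equivalent.

```python
#!/usr/bin/env python
"""Hedged sketch: restart the NexentaStor SMB/CIFS service once AD is back.

Assumptions (not from the original post): the appliance answers SSH at
SAN_HOST with key-based auth, an expert-mode shell is available, and the
kernel SMB server runs under the usual OpenSolaris SMF name."""
import subprocess

SAN_HOST = "root@nexenta01.lab.local"              # hypothetical address
SMB_SERVICE = "svc:/network/smb/server:default"    # assumed SMF service name

def run_on_san(command):
    """Run one command on the appliance over SSH and return its output."""
    return subprocess.check_output(["ssh", SAN_HOST, command], text=True)

# Restart the SMB service so it re-initializes its AD credentials,
# then print its status to confirm it came back online.
run_on_san("svcadm restart %s" % SMB_SERVICE)
print(run_on_san("svcs -x %s" % SMB_SERVICE))
```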

Here’s the scenario:

  1. An after-hours SAN upgrade in a non-HA environment (maybe Auto-CDP for BC/DR, but no active fail-over);
  2. Shutdown of the SAN requires shutdown of all dependent VMs, including domain controllers (AD);
  3. End-user and/or maintenance plans are dependent on CIFS shares from SAN;
  4. Authentication of CIFS shares on NexentaStor is AD-based;

Here’s the typical maintenance plan (detail omitted; a scripted sketch of the post-upgrade power-on sequence, steps 17-19, follows the list):

  1. Ordered shutdown of non-critical VMs (including UpdateManager, vMA, etc.);
  2. Ordered shutdown of application VMs;
  3. Ordered shutdown of resource VMs;
  4. Ordered shutdown of AD server VMs (minus one, see step 7);
  5. Migrate/vMotion remaining AD server and vCenter to a single ESX host;
  6. Ordered shutdown of ESX hosts (minus one, see step 8);
  7. vSphere Client: Log-out of vCenter;
  8. vSphere Client: Log-in to remaining ESX host;
  9. Ordered shutdown of vCenter;
  10. Ordered shutdown of remaining AD server;
  11. Ordered shutdown of remaining ESX host;
  12. Update SAN;
  13. Reboot SAN to update checkpoint;
  14. Test SAN update – restoring previous checkpoint if necessary;
  15. Power-on ESX host containing vCenter and AD server (see step 8);
  16. vSphere Client: Log-in to remaining ESX host;
  17. Power-on AD server (through to VMware Tools OK);
  18. Restart SMB service on NexentaStor;
  19. Power-on vCenter;
  20. vSphere Client: Log-in to vCenter;
  21. vSphere Client: Log-out of ESX host;
  22. Power-on remaining ESX hosts;
  23. Ordered power-on of remaining VMs;
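
For those who prefer to script the tail end of the plan, here is a hedged sketch of steps 17-19 using pyVmomi (the Python vSphere SDK). The host address, credentials and VM names are placeholders for illustration, and the SMB restart of step 18 is left as a call-out to the earlier sketch.

```python
#!/usr/bin/env python
"""Hedged sketch of steps 17-19: power on AD, wait for Tools, restart SMB,
then power on vCenter.

Assumptions (not from the original post): pyVmomi is installed, the remaining
ESX host answers at HOST, and the VM display names below match your lab."""
import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

HOST, USER, PWD = "esx1.lab.local", "root", "changeme"   # hypothetical
AD_VM, VCENTER_VM = "dc01", "vcenter01"                  # hypothetical names

def find_vm(content, name):
    """Locate a VM by display name on the host we are logged into."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    return next(vm for vm in view.view if vm.name == name)

def power_on_and_wait(vm, wait_for_tools=False, timeout=600):
    """Power on a VM; optionally block until VMware Tools reports running."""
    if vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOn:
        task = vm.PowerOnVM_Task()
        while task.info.state not in ("success", "error"):
            time.sleep(2)
    deadline = time.time() + timeout
    while wait_for_tools and vm.guest.toolsRunningStatus != "guestToolsRunning":
        if time.time() > deadline:
            raise RuntimeError("VMware Tools never came up in %s" % vm.name)
        time.sleep(5)

si = SmartConnect(host=HOST, user=USER, pwd=PWD,
                  sslContext=ssl._create_unverified_context())  # lab only
content = si.RetrieveContent()

power_on_and_wait(find_vm(content, AD_VM), wait_for_tools=True)  # step 17
# ...restart the NexentaStor SMB service here (see the earlier sketch)... step 18
power_on_and_wait(find_vm(content, VCENTER_VM))                  # step 19
Disconnect(si)
```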

A few things to note in an AD environment:

  1. NexentaStor requires the use of AD-based DNS for AD integration;
  2. AD-based DNS will not be available at SAN re-boot if all DNS servers are virtual and only one SAN is involved;
  3. Lack of DNS resolution on re-boot will cause DNS-name-based NTP service synchronization to fail;
  4. NexentaStor SMB service will fail to properly initialize AD credentials;
  5. vSphere 4.1 now pushes AD authentication all the way to the ESX hosts, enabling better credential management and security but creating a potential AD dependency as well;
  6. Using the auto-startup order on the remaining ESX host for AD and vCenter could automate the process (steps 17 & 19; see the sketch after this list); however, I prefer the “manual” approach after a SAN upgrade in case an upgrade failure is detected only after the ESX host is restarted (i.e. storage service interaction over NFS/iSCSI after the upgrade).
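
For reference on note 6, the host-level auto-start order can also be set programmatically. The sketch below is an assumption-laden pyVmomi example (VM names, credentials and delays are placeholders, and auto-start must already be enabled on the host), not a recommendation to automate what the note argues should stay manual after a SAN upgrade.

```python
#!/usr/bin/env python
"""Hedged sketch of note 6: give the AD and vCenter VMs a host-level
auto-start order on the remaining ESX host.

Assumptions (not from the original post): pyVmomi, direct root login to the
host, VM display names "dc01" and "vcenter01", and auto-start already
enabled under Configuration > Virtual Machine Startup/Shutdown."""
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esx1.lab.local", user="root", pwd="changeme",
                  sslContext=ssl._create_unverified_context())  # lab only
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
vms = {vm.name: vm for vm in host.vm}

def start_entry(vm, order, delay):
    """One auto-start entry: position in the start order plus a delay (seconds)."""
    return vim.host.AutoStartManager.AutoPowerInfo(
        key=vm, startOrder=order, startDelay=delay, startAction="powerOn",
        waitForHeartbeat="systemDefault", stopAction="guestShutdown", stopDelay=-1)

spec = vim.host.AutoStartManager.Config(
    powerInfo=[start_entry(vms["dc01"], 1, 120),        # AD first (step 17)
               start_entry(vms["vcenter01"], 2, 120)])  # then vCenter (step 19)
host.configManager.autoStartManager.ReconfigureAutostart(spec=spec)
Disconnect(si)
```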

SOLORI’s Take: This is a great opportunity to re-think storage resources in the SMB as the linchpin to 100% virtualization. Since most SMBs will have a tier-2 or backup NAS/SAN (auto-sync or auto-CDP) for off-rack backup, leveraging a shared LUN/volume from that SAN/NAS for a backup domain controller is a smart move. Since tier-2 SANs may not have the IOPS to run ALL mission-critical applications during the maintenance interval, the presence of at least one valid AD server will promote a quicker RTO, post-maintenance, than coming up cold. [This even works with DAS on the ESX host.] Solution – add the following and you can ignore step 15:

3a. Migrate always-on AD server to LUN/volume on tier-2 SAN/NAS;

24. Migrate always-on AD server from LUN/volume on tier-2 SAN/NAS back to tier-1;

Since even vSphere Essentials Plus has vMotion now (a much-requested and timely addition), collapsing all remaining VMs to a single ESX host is a no-brainer. However, migrating the storage is another issue, one which cannot be resolved without either a shutdown of the VM (off-line storage migration) or the Enterprise/Enterprise Plus version of vSphere. That is why the migration of the AD server from tier-2 is reserved for last (step 24) – it will likely need to be shut down to migrate its storage between SAN/NAS appliances.
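
As a worked example of that final step, the cold migration back to tier-1 can be scripted once vCenter is up again. The sketch below uses pyVmomi with assumed VM and datastore names; it simply shuts the guest down and applies a RelocateSpec, which is the off-line path available without Enterprise/Enterprise Plus licensing.

```python
#!/usr/bin/env python
"""Hedged sketch of step 24: cold-migrate the always-on AD server's storage
from the tier-2 LUN/volume back to tier-1 after the upgrade checks out.

Assumptions (not from the original post): pyVmomi, vCenter is back online,
and the VM and datastore names below match your environment."""
import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter01.lab.local", user="administrator", pwd="changeme",
                  sslContext=ssl._create_unverified_context())  # lab only
content = si.RetrieveContent()

def find(vimtype, name):
    """Look up a managed object of the given type by display name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

vm = find(vim.VirtualMachine, "dc01")          # hypothetical names
tier1 = find(vim.Datastore, "san01-vol0")

# Off-line storage migration: shut the guest down cleanly, wait for power-off,
# then relocate its disks to the tier-1 datastore. (With Enterprise/Enterprise
# Plus, the same RelocateSpec could be applied to a running VM as Storage vMotion.)
if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
    vm.ShutdownGuest()                         # requires VMware Tools in the guest
    while vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOff:
        time.sleep(5)
task = vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=tier1))
while task.info.state not in ("success", "error"):
    time.sleep(10)
Disconnect(si)
```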

In-the-Lab: Full ESX/vMotion Test Lab in a Box, Part 3

August 21, 2009

In Part 2 of this series we introduced the storage architecture that we would use as the foundation of the “shared storage” necessary to allow vMotion to do its magic. Since we have chosen NexentaStor for our VSA storage platform, we have the choice of either NFS or iSCSI as the storage backing.

In Part 3 of this series we will install NexentaStor, make some file systems and discuss the advantages and disadvantages of NFS and iSCSI as the storage backing. By the end of this segment, we will have everything in place for the ESX and ESXi virtual machines we’ll build in the next segment.

Part 3, Building the VSA

Our lab system has 24GB of RAM, which we will apportion as follows: 2GB of overhead for the host, 4GB for NexentaStor, 8GB for ESXi, and 8GB for ESX. This leaves 2GB that can be used to support a vCenter installation at the host level.

Our lab mule was configured with 2x250GB SATA II drives, which provide roughly 230GB each of VMFS-partitioned storage. After subtracting 10% for overhead, the sum of our virtual disks will be limited to about 415GB. Because of these relative size restrictions, we will try to maximize available storage while limiting our liability in case of disk failure. Therefore, we’ll plan to put the ESXi server on drive “A” and the ESX server on drive “B”, with the virtual disks of the VSA split across both the “A” and “B” disks.
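
For anyone checking the math, the 415GB ceiling is simply the two ~230GB VMFS volumes less roughly 10% (a trivial sanity check in Python):

```python
# Usable vdisk budget: two ~230GB VMFS volumes minus ~10% overhead.
vmfs_per_drive_gb = 230
usable_gb = 2 * vmfs_per_drive_gb * 0.90
print(usable_gb)   # 414.0 -- call it ~415GB for planning purposes
```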

Our VSA Virtual Hardware

For lab use, a VSA with 4GB RAM and 1 vCPU will suffice. Additional vCPUs would only serve to limit CPU scheduling for our virtual ESX/ESXi servers, so we’ll leave it at the minimum. Since we’re splitting storage roughly equally across the disks, and an additional 4GB was taken up on disk “A” during the installation of ESXi, we’ll place the VSA’s definition and “boot” disk on disk “B”; otherwise, we’ll interleave disk slices equally across both disks.

[Figure: NexentaStor VSA virtual hardware summary]

  • Datastore – vLocalStor02B, 8GB vdisk size, thin provisioned, SCSI 0:0
  • Guest Operating System – Solaris, Sun Solaris 10 (64-bit)
  • Resource Allocation
    • CPU Shares – Normal, no reservation
    • Memory Shares – Normal, 4096MB reservation
  • No floppy disk
  • CD-ROM disk – mapped to ISO image of NexentaStor 2.1 EVAL, connect at power on enabled
  • Network Adapters – Three total 
    • One to “VLAN1 Mgt NAT” and
    • Two to “VLAN2000 vSAN”
  • Additional Hard Disks – 6 total
    • vLocalStor02A, 80GB vdisk, thick, SCSI 1:0, independent, persistent
    • vLocalStor02B, 80GB vdisk, thick, SCSI 2:0, independent, persistent
    • vLocalStor02A, 65GB vdisk, thick, SCSI 1:1, independent, persistent
    • vLocalStor02B, 65GB vdisk, thick, SCSI 2:1, independent, persistent
    • vLocalStor02A, 65GB vdisk, thick, SCSI 1:2, independent, persistent
    • vLocalStor02B, 65GB vdisk, thick, SCSI 2:2, independent, persistent

NOTE: It is important to realize here that the virtual disks above could have been provided by VMDKs on the same disk, VMDKs spread out across multiple disks, or RDMs mapped to raw SCSI drives. If your lab chassis has multiple hot-swap bays or even just generous internal storage, you might want to try providing NexentaStor with RDMs or one VMDK per physical disk for performance testing or “near” production use. CPU, memory and storage are the basic elements of virtualization, and there is no reason that storage must be the bottleneck. For instance, this environment is GREAT for testing SSD applications on a resource-limited budget.
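
For the scripting-inclined, here is a hedged pyVmomi sketch of how one of the VSA's data disks above (80GB, thick, independent/persistent, SCSI 1:0 on vLocalStor02A) might be added outside the GUI. The VM name, host address and credentials are placeholders, and it assumes a SCSI controller already exists at bus 1.

```python
#!/usr/bin/env python
"""Hedged sketch: add one of the VSA's thick, independent-persistent data
disks (80GB at SCSI 1:0 on vLocalStor02A) programmatically.

Assumptions (not from the original post): pyVmomi, a VSA VM named
"nexentastor-vsa", and an existing SCSI controller on bus 1."""
import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esx-lab.local", user="root", pwd="changeme",
                  sslContext=ssl._create_unverified_context())  # lab only
content = si.RetrieveContent()
vm = next(v for v in content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view
    if v.name == "nexentastor-vsa")

# Find the SCSI controller on bus 1 (the "SCSI 1:x" entries in the list above).
controller = next(dev for dev in vm.config.hardware.device
                  if isinstance(dev, vim.vm.device.VirtualSCSIController)
                  and dev.busNumber == 1)

disk = vim.vm.device.VirtualDisk(
    capacityInKB=80 * 1024 * 1024,                 # 80GB, thick (thin disabled)
    controllerKey=controller.key, unitNumber=0,    # SCSI 1:0
    backing=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
        diskMode="independent_persistent",         # excluded from snapshots
        thinProvisioned=False,
        fileName="[vLocalStor02A]"))               # datastore only; name auto-generated
change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=disk)
task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
while task.info.state not in ("success", "error"):
    time.sleep(2)
Disconnect(si)
```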
