Posts Tagged ‘vmotion’

Quick-Take: NexentaStor AD Shares in 100% Virtual SMB

July 19, 2010

Here’s a maintenance note for SMB environments attempting 100% virtualization and relying on SAN-based file shares to simplify backup and storage management: beware the chicken-and-egg scenario on restart before going home to capture much-needed Zzz’s. If your domain controller is virtualized and its VMDK file lives on the SAN/NAS, you’ll need to restart SMB services on the NexentaStor appliance before leaving the building.

Here’s the scenario:

  1. An after-hours SAN upgrade in a non-HA environment (maybe Auto-CDP for BC/DR, but no active fail-over);
  2. Shutdown of SAN requires shutdown of all dependent VM’s, including domain controllers (AD);
  3. End-user and/or maintenance plans are dependent on CIFS shares from SAN;
  4. Authentication of CIFS shares on NexentaStor is AD-based;

Here’s the typical maintenance plan (detail omitted; a command-line sketch of the host-console power-on steps follows the list):

  1. Ordered shutdown of non-critical VM’s (including Update Manager, vMA, etc.);
  2. Ordered shutdown of application VM’s;
  3. Ordered shutdown of resource VM’s;
  4. Ordered shutdown of AD server VM’s (minus one, see step 7);
  5. Migrate/vMotion remaining AD server and vCenter to a single ESX host;
  6. Ordered shutdown of ESX hosts (minus one, see step 8);
  7. vSphere Client: Log-out of vCenter;
  8. vSphere Client: Log-in to remaining ESX host;
  9. Ordered shutdown of vCenter;
  10. Ordered shutdown of remaining AD server;
  11. Ordered shutdown of remaining ESX host;
  12. Update SAN;
  13. Reboot SAN to update checkpoint;
  14. Test SAN update – restoring previous checkpoint if necessary;
  15. Power-on ESX host containing vCenter and AD server (see step 8);
  16. vSphere Client: Log-in to remaining ESX host;
  17. Power-on AD server (through to VMware Tools OK);
  18. Restart SMB service on NexentaStor;
  19. Power-on vCenter;
  20. vSphere Client: Log-in to vCenter;
  21. vSphere Client: Log-out of ESX host;
  22. Power-on remaining ESX hosts;
  23. Ordered power-on of remaining VM’s;
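If the vSphere Client is not yet available when the remaining host comes back up, steps 17 and 19 can also be driven from that host’s console or an SSH session. A minimal sketch using vim-cmd on ESXi (classic ESX exposes the same calls as vmware-vim-cmd in the service console); the VM IDs shown are hypothetical and must be read from the getallvms listing:

# List registered VMs and note their numeric IDs (the IDs below are made up)
vim-cmd vmsvc/getallvms

# Step 17: power-on the AD server, then poll until VMware Tools reports "toolsOk"
vim-cmd vmsvc/power.on 16
vim-cmd vmsvc/get.guest 16 | grep toolsStatus

# Step 18 is performed on the NexentaStor appliance (see the notes below), then:

# Step 19: power-on vCenter
vim-cmd vmsvc/power.on 21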

A few things to note in an AD environment:

  1. NexentaStor requires the use of AD-based DNS for AD integration;
  2. AD-based DNS will not be available at SAN re-boot if all DNS servers are virtual and only one SAN is involved;
  3. Lack of DNS resolution on re-boot will cause a failure for DNS name based NTP service synchronization;
  4. NexentaStor SMB service will fail to properly initialize AD credentials (see the restart sketch following this list);
  5. VMware 4.1 now pushes AD authentication all the way to ESX hosts, enabling better credential management and security but creating a potential AD dependency as well;
  6. Using auto-startup order on the remaining ESX host for AD and vCenter could automate the process (steps 17 & 19); however, I prefer the “manual” approach after a SAN upgrade in case an upgrade failure is detected only after the ESX host is restarted (i.e. storage service interaction with NFS/iSCSI after the upgrade).
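For step 18, the CIFS/SMB service restart happens on the NexentaStor appliance itself once AD and DNS are reachable again. A minimal sketch – the NMC command below is a best guess and may differ between NexentaStor releases, and the svcadm form assumes expert-mode (bash) access to the underlying OpenSolaris service manager:

# From the NexentaStor Management Console (NMC) - exact syntax may vary by release
setup network service cifs-server restart

# Or, from expert-mode bash, restart the kernel CIFS server directly via SMF
svcadm restart svc:/network/smb/server:default
svcs -x smb/server    # confirm the service came back online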

SOLORI’s Take: This is a great opportunity to re-think storage resources in the SMB as the linchpin to 100% virtualization. Since most SMB’s will have a tier-2 or backup NAS/SAN (auto-sync or auto-CDP) for off-rack backup, leveraging a shared LUN/volume from that SAN/NAS for a backup domain controller is a smart move. Since tier-2 SAN’s may not have the IOPS to run ALL mission-critical applications during the maintenance interval, the presence of at least one valid AD server will promote a quicker RTO, post-maintenance, than coming up cold. [This even works with DAS on the ESX host.] Solution – add the following steps and you can ignore step 15:

3a. Migrate always-on AD server to LUN/volume on tier-2 SAN/NAS;

24. Migrate always-on AD server from LUN/volume on tier-2 SAN/NAS back to tier-1;

Since even vSphere Essentials Plus has vMotion now (a much-requested and timely addition), collapsing all remaining VM’s to a single ESX host is a no-brainer. However, migrating the storage is another issue – one that cannot be resolved without either a shutdown of the VM (off-line storage migration) or an Enterprise/Enterprise Plus edition of vSphere. That is why the migration of the AD server from tier-2 is reserved for last (step 24) – it will likely need to be shut down to migrate its storage between SAN/NAS appliances.
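Without Storage vMotion, that final move back to tier-1 is an off-line copy. One way to do it from the ESX host console is with vmkfstools – a rough sketch, with the datastore and VM names (“tier1-san”, “tier2-nas”, “AD01”) made up for illustration; the VM must be powered off first and the tier-2 copy unregistered once the move is verified:

# With the AD server powered off, clone its disk from the tier-2 to the tier-1 datastore
mkdir /vmfs/volumes/tier1-san/AD01
vmkfstools -i /vmfs/volumes/tier2-nas/AD01/AD01.vmdk /vmfs/volumes/tier1-san/AD01/AD01.vmdk

# Copy the .vmx, register the tier-1 copy, and clean up the tier-2 copy once it checks out
cp /vmfs/volumes/tier2-nas/AD01/AD01.vmx /vmfs/volumes/tier1-san/AD01/
vim-cmd solo/registervm /vmfs/volumes/tier1-san/AD01/AD01.vmx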

Quick Take: VirtualBox adds Live Migra… uh, Teleportation

November 30, 2009

Sun announced the 3.1.0 release of its desktop hypervisor – VirtualBox – with its own version of live virtual machine migration called “Teleporting.” Teleporting, according to the user’s manual, is defined as:

“moving a virtual machine over a network from one VirtualBox host to another, while the virtual machine is running. This works regardless of the host operating system that is running on the hosts: you can teleport virtual machines between Solaris and Mac hosts, for example.”

Teleportation operates like an in-place replacement of a VM’s facilities, requiring that the “target” host have a virtual machine in VirtualBox with exactly the same hardware settings as the “source” VM. The source and target VM’s must also share the same storage – either the same VirtualBox-accessible iSCSI targets or some other network storage (NFS or SMB/CIFS) – and the source VM must have no snapshots.

“The hosts must have fairly similar CPUs. While VirtualBox can simulate some CPU features to a degree, this does not always work. Teleporting between Intel and AMD CPUs will probably fail with an error message.”

The recipe for teleportation begins on the target and is given in an example, leveraging VirtualBox’s VBoxManage command syntax:

VBoxManage modifyvm <targetvmname> --teleporter on --teleporterport <port>

On the source, the running virtual machine is then instructed to teleport with the following:

VBoxManage controlvm <sourcevmname> teleport --host <targethost> --port <port>

For testing, same-host teleportation is allowed (source and target over loopback). Obviously, a preparation and clean-up script would be involved: one that copies the settings to the target location, handles the teleport itself, and cleans up the former VM configuration made obsolete by the teleportation. In the case of an error, the running VM stays running on the source host, and the target VM fails to initialize.
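For a quick loopback trial, the whole exchange can be scripted. A rough sketch – the VM names (“ad-server”, “ad-server-target”) and port are hypothetical, and it assumes a settings-identical target VM sharing the same storage has already been created:

#!/bin/sh
# Same-host (loopback) teleport test
PORT=6000

# 1. Arm the target VM to wait for an incoming teleport on $PORT, then start it
VBoxManage modifyvm "ad-server-target" --teleporter on --teleporterport $PORT
VBoxManage startvm "ad-server-target"

# 2. Teleport the running source VM into the waiting target over loopback
VBoxManage controlvm "ad-server" teleport --host 127.0.0.1 --port $PORT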

SOLORI’s Take: This represents the writing on the wall for VMware and vMotion. Perhaps the shift from VMotion to vMotion telegraphs the reduced value VMware already sees in the “now standard” feature. Adding vMotion to vSphere Essentials and Essentials Plus would garner a lot of adoption from the SMB market that is moving quickly to Hyper-V over Citrix and VMware. With VirtualBox’s obvious play in desktop virtualization – where minimalist live migration features would be less of a burden – VMware’s market could quickly become divided in 2010 with some crafty third-party integration along with open VDI. It’s a ways off, but the potential is there…

In-the-Lab: Full ESX/vMotion Test Lab in a Box, Part 1

August 17, 2009

There are many features in vSphere worth exploring, but doing so requires committing time, effort, testing, training and hardware resources. In this feature, we’ll investigate a way – using your existing VMware facilities – to reduce the time, effort and hardware needed to test and train-up on vSphere’s ESXi, ESX and vCenter components. We’ll start with a single hardware server running the free VMware ESXi as the “lab mule” and install everything we need on top of that system.

Part 1, Getting Started

To get started, here are the major hardware and software items you will need to follow along:

Recommended Lab Hardware Components

  • One 2P, 6-core AMD “Istanbul” Opteron system
  • Two 500-1,500GB Hard Drives
  • 24GB DDR2/800 Memory
  • Four 1Gbps Ethernet Ports (4×1, 2×2 or 1×4)
  • One 4GB SanDisk “Cruzer” USB Flash Drive
  • Either of the following:
    • One CD-ROM with VMware-VMvisor-Installer-4.0.0-164009.x86_64.iso burned to it
    • An IP/KVM management card to export ISO images to the lab system from the network

Recommended Lab Software Components

  • One ISO image of NexentaStor 2.x (for the Virtual Storage Appliance, VSA, component)
  • One ISO image of ESX 4.0
  • One ISO image of ESXi 4.0
  • One ISO image of vCenter Server 4
  • One ISO image of Windows Server 2003 STD (for vCenter installation and testing)

For the hardware items to work, you’ll need to check your system components against the VMware HCL and community-supported hardware lists. For best results, always disable (in BIOS) or physically remove all unsupported or unused hardware – this includes communication ports, USB, software RAID, etc. Doing so will reduce potential hardware conflicts from unsupported devices.

The Lab Setup

We’re first going to install VMware ESXi 4.0 on the “test mule” and configure the local storage for maximum use. Next, we’ll create three (3) virtual machines to build our “virtual testing lab” – deploying ESX, ESXi and NexentaStor running directly on top of our ESXi “test mule.” All subsequent test VMs will run in either of the virtualized ESX platforms from shared storage provided by the NexentaStor VSA.

ESX, ESXi and VSA running atop ESXi
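One practical wrinkle: getting ESX/ESXi 4 to boot as a guest of the ESXi “test mule” generally requires a tweak to each nested hypervisor’s .vmx file. A hedged sketch – the monitor_control.restrict_backdoor option is the setting commonly cited for nested ESX 4 at the time, while the datastore path, VM name and VM ID below are made up; a 64-bit guest OS type and an e1000 virtual NIC are typically needed as well:

# With the nested ESX/ESXi guest powered off, append the option to its .vmx
echo 'monitor_control.restrict_backdoor = "TRUE"' >> /vmfs/volumes/datastore1/esx4-nested/esx4-nested.vmx

# If the VM is already registered, have the host re-read the edited .vmx (VM ID assumed)
vim-cmd vmsvc/reload 32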

Next up, quick-and-easy install of ESXi to USB Flash…