
Quick-Take: NexentaStor 4.0.1GA

April 14, 2014

Our open storage partner, Nexenta Systems Inc., hit a milestone this month by releasing NexentaStor 4.0.1 for general availability. This release is significant mainly because it is the first commercial release of NexentaStor based on the open source Illumos kernel rather than Oracle’s OpenSolaris (now closed source). With this move, NexentaStor makes good on the company’s promise of “open source technology” that enables hardware independence and targeted flexibility.

Some highlights in 4.0.1:

  • Faster Install times
  • Better HA Cluster failover times and “easier” cluster manageability
  • Support for large memory host configurations – up to 512GB of DRAM per head/controller
  • Improved handling of intermittently faulty devices (disks with irregular I/O responses under load)
  • New (read: “not backward compatible”) Auto-Sync replication (user configurable zfs+ssh still available for backward compatibility) with support for replication of HA to/from non-HA clusters
    • Includes LZ4 compression (fast) option
    • Better Control of “Force Flags” from NMV
    • Better Control of Buffering and Connections
  • L2ARC Compression now supported
    • Potentially doubles the effective coverage of L2ARC (for compressible data sets)
    • Supports LZ4 compression (fast)
    • Automatically applied if the dataset is likewise compressed (see the ZFS example after this list)
  • Server Message Block v2.1 support for Windows (some caveats for IDMAP users)
  • iSCSI support for Microsoft Server 2012 Cluster and Cluster Shared Volume (CSV)
  • Guided storage pool configuration wizards – Performance, Balanced and Capacity modes
  • Enhanced Support Data and Log Gathering
  • High Availability Cluster plug-in (RSF-1) binaries are now part of the installation image
  • VMware: Much better VMXNET3 support
    • no more log spew
    • MTU settings work from NMV
  • VMware: Install to PVSCSI (boot disk) from ISO no longer requires tricks
  • Upgrade from 3.x is currently “disruptive” – promised “non-disruptive” in next maintenance update
  • Improved DTrace capabilities from NMC shell for
    • COMSTAR/iSCSI/FC
    • general IO
  • Snappier, more stable NMV/GUI
    • Service port changes from 2000 to 8457
    • Multi-NMS default
    • Fast refresh for ZFS containers
    • RSF-1 defaults in “Server” settings
    • Improved iSCSI
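
As a point of reference on the LZ4 items above, compression is a per-dataset ZFS property under the hood; a minimal example from a ZFS shell follows (pool and folder names are placeholders – the NMV/NMC interfaces should expose the same property):

zfs set compression=lz4 tank/vmstore
zfs get compression,compressratio tank/vmstore

With a dataset compressed this way, blocks cached in L2ARC are stored compressed as well – which is where the “doubled effective coverage” estimate comes from.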

See Nexenta’s 4.0.1 Release Notes for additional changes and details.

Note, the 18TB Community Edition EULA is still hampered by the “non-commercial” language, restricting its use to home, educational and academic (i.e. training, testing, lab, etc.) purposes. However, the “total amount of Storage Space” license for Community is a deviation from Enterprise licensing (typically a “raw” storage entitlement):

2.2 If You have acquired a Community Edition license, the total amount of Storage Space is limited as specified on the Site and is subject to change without notice. The Community Edition may ONLY be used for educational, academic and other non-commercial purposes expressly excluding any commercial usage. The Trial Edition licenses may ONLY be used for the sole purposes of evaluating the suitability of the Product for licensing of the Enterprise Edition for a fee. If You have obtained the Product under discounted educational pricing, You are only permitted to use the Product for educational and academic purposes only and such license expressly excludes any commercial purposes.

- NexentaStor EULA, Version 4.0; Last updated: March 18, 2014

For those who operate under the Community license, this means your total physical storage is UNLIMITED, provided your space “IN USE” stays below 18TB (18,432 GB) at all times. Where this matters is in constructing useful arrays from “currently available” disks (SATA, SAS, etc.). Let’s say you needed 16TB of AVAILABLE space using “modern” 3TB disks. Because your spinning disks are individually larger than 600GB, an array rebuild can run long enough to encounter another failure (and data loss) before it completes, so mirrors or raidz2/raidz3 would be your best bet for array configuration.
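
If you want to keep an eye on where you stand against that 18TB “in use” ceiling, the underlying ZFS accounting reports roughly the figure the license cares about – space consumed, not raw capacity (pool name is a placeholder):

zpool list tank
zfs get used,available tank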

SOLORI Note: Richard Elling made this concept exceedingly clear back in 2010, and his “ZFS data protection comparison” of 2, 3 and 4-way mirrors to raidz, raidz2 and raidz3 is still a great reference on the topic.

Elling’s MTTDL Comparison by RAID Type

Given 16TB in 3-way mirror or raidz2 (roughly equivalent MTTDL predictors), your 3TB disk count would follow as:

3-way Mirror Disks := RoundUp( 16 * (1024 / 1000)^3 / 70% / ( 3 * (1000 / 1024)^3 )  ) * 3 = 27 disks, or

6-disk Raidz2 Disks := RoundUp( 16 * (1024 / 1000)^3 / 70% / ( 4 * 3 * (1000 / 1024)^3 )  ) * 6 = 18 disks

By “raw” licensing standards, the 3-way mirror would require a 76TB license while the raidz2 volume would require a 51TB license – a difference of 25TB in licensing (around $5,300 retail). However, under the Community License, the “cost” is exactly the same, allowing for a considerable amount of flexibility in array loadout and configuration.

Why do I need 54TB of disk to make 16TB of “AVAILABLE” storage in raidz2?

The RAID grouping we’ve chosen is 6-disk raidz2 – akin to 4 data and 2 parity disks in RAID6 (without the fixed stripe requirement or the “write hole” penalty). This means that, on average, one third of the space consumed on-disk will be parity information, so right off the top we’re losing 33% of the disk capacity. Likewise, disk manufacturers make TB, not TiB, disks, so we lose about 7% of “capacity” in the conversion from TB to TiB. Additionally, we like to have a healthy amount of space reserved for new block allocation and recommend 30% unused space as a target. All combined, a 6-disk raidz2 array is, at best, 43% efficient in terms of capacity (by contrast, a 3-way mirror is only 22% space efficient). For an array based on 3TB disks, we therefore get only 1.3TB of usable storage – per disk – with 6-disk raidz2 (by contrast, 10-disk raidz nets only 160GB additional “usable” space per disk.)
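
Putting numbers to those efficiency claims (same assumptions: 30% free-space reserve and the TB-to-TiB conversion):

6-disk raidz2 efficiency := (4 / 6) * (1000 / 1024)^3 * 70% ≈ 43%, or about 1.3TB usable per 3TB disk
3-way mirror efficiency := (1 / 3) * (1000 / 1024)^3 * 70% ≈ 22%, or about 0.65TB usable per 3TB disk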

SOLORI’s Take: If you’re running 3.x in production, 4.0.1 is not suitable for in-place upgrades (yet), so testing and waiting for the “non-disruptive” maintenance release is your best option. For new installations – especially inside a VM or hypervisor environment as a Virtual Storage Appliance (VSA) – version 4.0.1 presents a better option than its 3.x siblings. If you’re familiar with 3.x, there’s not much new on the NMV side beyond better tunables and snappier response.


Quick-Take: Removable Media and Update Manager Host Remediation

January 31, 2013

Thanks to a spate of upgrades to vSphere 5.1, I recently (re)discovered the following inconvenient result when applying an update to a DRS cluster from Update Manager (5.1.0.13071, using vCenter Server Appliance 5.1.0 build 947673):

Remediate entity ‘vm11.solori.labs’  Host has VMs ‘View-PSG’ , vUM5 with connected removable media devices. This prevents putting the host into maintenance mode. Disconnect the removable devices and try again.

Immediately I thought: “Great! I left a host-only ISO connected to these VMs.” However, that assumption was as flawed as Update Manager’s assumption that the workloads could not be vMotion’d without disconnecting the removable media. In fact, the removable media indicated was connected to a shared ISO repository available to all hosts in the cluster. Still, I was to blame and not Update Manager, as I had not remembered that Update Manager’s default response to removable media is to abort the process. Since cluster remediation is a powerful feature made possible by Distributed Resource Scheduler (DRS) in Enterprise (and above) vSphere editions – and one that may be new to many (especially uplifted “Advanced AK” users) – it seemed like something worth reviewing and blogging about.

Why is this a big deal?

More to the point, why does this seem to run contrary to a “common sense” response?

First, the manual process for remediating a host in a DRS cluster would include:

  1. Applying “Maintenance Mode” to the host,
  2. Selecting the appropriate action for “powered-off and suspended” workloads, and
  3. Allowing DRS to choose placement and finally vMotion those workloads to an alternate host.

In the case of VMs with removable media attached, this set of actions will result in the workloads being vMotion’d (without warning or hesitation) so long as the other hosts in the cluster have access to the removable media source (i.e. shared storage, not “Host Device”). However, in the case of Update Manager remediation, the following are documented roadblocks to a successful remediation (without administrative override):

  1. A CD/DVD drive is attached (any method),
  2. A floppy drive is attached (any method),
  3. HA admission control prevents migration of the virtual machine,
  4. DPM is enabled on the cluster,
  5. EVC is disabled on the cluster,
  6. DRS is disabled on the cluster (preventing migration),
  7. Fault Tolerance (FT) is enabled for a VM on the host in the cluster.

Therefore it is “by design” that a scheduled remediation would have failed – even if the VMs with attached removable media would otherwise have been eligible for vMotion. To assist in evaluating obstacles to a successful deferred remediation, a cluster remediation report is available (see below).

Generating a remediation report prior to scheduling an Update Manager remediation.

In fact, the report will list all possible roadblocks to remediation whether or not matching overrides are selected (potentially misleading, and certainly not useful for predicting the outcome of the remediation attempt). While this too is counterintuitive, it serves as a reminder of the show-stoppers to successful remediation. For the offending “removable media” override, the appropriate check-box can be found on the options page just prior to the remediation report:

Disabling removable media during Update Manager-driven remediation.

The inclusion of this override allows Update Manager to slog through the remediation without respect to the attached status of removable media. Likewise, the other remediation overrides will enable successful completion of the remediation process; these overrides are:

  1. Maintenance Mode Settings:
    1. VM Power State prior to remediation:  Do not change, Power off, Suspend
    2. Temporarily disable any removable media devices;
    3. Retry maintenance mode in case of failure (delay and attempts);
  2. Cluster Settings:
    1. Temporarily Disable Distributed Power Management (forces “sleeping” hosts to power-on prior to next steps in remediation);
    2. Temporarily Disable High Availability Admission Control (allows for host remediation to violate host-resource reservation margins);
    3. Temporarily Disable Fault Tolerance (FT) (you are admonished to remediate all cluster hosts in the same update cycle to maintain FT compatibility);
    4. Enable parallel remediation for hosts in cluster (will not violate DRS anti-affinity constraints);
      1. Automatically determine the maximum number of concurrently remediated hosts, or
      2. Limit the number of concurrent hosts (1-32);
    5. Migrate powered off and suspended virtual machines to other hosts in the cluster (helpful when a remediation leaves a host in an unserviceable condition);
  3.  PXE Booted ESXi Host Settings:
    1. Allow installation of additional software on PXE booted ESXi 5.x hosts (requires the use of an updated PXE boot image – Update Manager will NOT reboot the PXE booted ESXi host.)

These settings are available at the time of remediation scheduling and as host/cluster defaults (Update Manager Admin View.)

SOLORI’s Take: So while the remediation process is NOT as similar to the manual process as one might think, it can still be made to function accordingly (almost). There IS a big difference between disabling removable media and making vMotion-aware decisions about hosts. Perhaps VMware could take a few cycles to determine whether or not a host is actually bound to a removable media device (either through a Host Device or a local storage resource) and make a more intelligent decision about removable media.

vSphere already has the ability to identify point-resource dependencies; it would be nice to see this information more intelligently correlated where cluster management is concerned. Currently, instead of “asking” DRS for a dependency list, it seems to just ask the hosts “do you have removable media plugged into any VMs?” – and if the answer is “yes,” it stops right there… Still, not very intuitive for a feature (DRS) that’s been around since Virtual Infrastructure 3 and vCenter 2.


Short-Take: vSphere vCloud Suite – Cheat Sheet

August 27, 2012

VMworld 2012 Announcements

VMware announces a new product package based on vCloud Director and vSphere Enterprise Plus called vCloud Suite. Existing users of vSphere Enterprise Plus (with valid SnS as of 8/27/2012) – including Academic and Federal users – may qualify for a “free” upgrade (actually $1/CPU) to “Standard” edition of vCloud Suite. Likewise, users with valid SnS and vSphere Enterprise (not Plus) qualify for a reduced cost upgrade to vCloud Suite Standard at $682/CPU.

Qualifying users have until December 15, 2012 to complete the transaction. Upgrades to other editions of vCloud Suite from Enterprise and Enterprise Plus are available as well – at additional cost per CPU.

vCloud Suite Cheat Sheet

Summary of new vCloud Suite offering and tiers (including links):

vCloud Suite                                                   Standard      Advanced         Enterprise
Virtualization: VMware vSphere Enterprise Plus Edition            *             *                 *
Cloud Infrastructure: vCloud Director and vCloud Connector        *             *                 *
Standard vCloud Networking and Security                           *             *                 *
Advanced vCloud Networking and Security                                         *                 *
vCenter Site Recovery Manager Enterprise                                                           *
Operations Management: vCenter Operations Management Suite                 vCOps Advanced    vCOps Enterprise
VMware vCenter Chargeback Manager™                                                                 *
VMware vCenter Configuration Manager™                                                              *
VMware vCenter Infrastructure Navigator™                                                           *
vFabric Application Director                                                                       *
Licensing: Per CPU, Enterprise Plus basis                      $4,995.00     $7,495.00        $11,495.00
Support (Basic): Per CPU, Per Year                             $1,049.00     $1,574.00        $2,414.00
Support (Production): Per CPU, Per Year                        $1,249.00     $1,874.00        $2,874.00

Per-VM Pricing All But Gone

The introduction of vCloud Suite side-steps the vCloud Director per-VM licensing model and allows private cloud to scale based on the more predictable per-CPU infrastructure metric. Public cloud service providers will still be interested in per-VM footprints and billing structures, but at least private cloud can be unshackled from the confines of per-VM vCD and vRAM issues; which segues nicely into the next tidbit…

In Other News…

VMware effectively kills vRAM by including “unlimited” vRAM entitlements in all editions of vSphere.

SMBs may be pleased to note that VMware also now includes the vSphere Storage Appliance with all acceleration kits except vSphere Essentials at no additional cost (versus vSphere 5.0 kits). This is especially good for ROBO operations using Essentials Plus. The standalone cost for the vSphere Storage Appliance is now $3,495.


NFS and VMware: Perfect for Small Business? Part 1 – Introduction

August 22, 2012

Nexenta Systems’ “open storage” software made significant inroads into the VMware community over the last year with NFS storage. Even though Nexenta has been a partner with VMware for much longer, the storage vendor really made its debut at last year’s VMworld 2011 Hands-on-Labs by showcasing its NFS-for-VMware solution running on commodity hardware:

And, here’s the kicker, NexentaStor was running on industry standard hardware from Supermicro with STEC drives for write and read cache and 7200 rpm SAS drives for capacity.  Monday some DRAM on one of the four servers (two HA pairs) failed.  And no end users noticed because of our HA cluster performed correctly and failed over.  Meanwhile our load increased from a designed 33% to over 60% of the total load of the Hands on Lab due to unspecified issues with either NetApp or EMC.

- Evan Powell, CEO – Nexenta Systems, VMworld Reviewed

While this was indeed an important inflection point in the VMware/Nexenta relationship, in broader terms Nexenta’s success at VMworld was probably the moment when commodity NFS stepped out of the shadow of block storage. To be fair, there are many enterprise alternatives to Nexenta for NFS storage – like NetApp and EMC – but there are few that can be deployed on commodity hardware, fewer that offer both hardware and virtual storage appliances, and fewer still that have commercially licensed and community licensed distributions of the same platform.

If you’ve ever asked the question, “what’s the best storage solution for my vSphere stack?” I’d be willing to bet that NFS was not high on the list of recommendations. If you’ve looked at the related product marketing materials, as I have, or engaged front-line VMware personnel in a discussion of primary storage solutions, between 2009 and 2011, as I have, you’d be hard pressed to leave the conversation with a recommendation to use NFS. If Nexenta’s appearance can “prove” that open storage solutions based on NFS (and commodity hardware) are “ready” for big cloud infrastructures, can it be true that it’s a perfect fit for a small business’ private cloud? I’d say a resounding YES, but…

Introduction, NFS versus Block Storage

Before you say, “thanks for the tip, Collin, but who needs commercial stuff when NFS services are included in practically every Linux distribution, and ‘no cost’ solutions – like FreeNAS – make NFS cheap and easy?”, consider this: while solutions like these have been very popular with lab and bare-bones users, most enterprises (even small ones) require a “bet the business” level of support and stability that isn’t often found in “community supported” distributions and do-it-yourself implementations. Even though “any NFSv3 server” – properly sized and configured – should work with VMware according to its abilities, it’s up to you to decide if the basket fits your eggs. The commercial NFS vendors really know their stuff, so you’re buying expertise, experience and a well-refined playbook: something you’ll be giving up when you go it alone.

Despite being “block storage’s whipping boy,” to say NFS is “not ready for prime time” in today’s VMware product matrix would be the height of FUD-peddling. On the contrary, a well-known multi-vendor post from noted EMC’er Chad Sakac and NetApp’s Vaughn Stewart made a great case for NFS in the enterprise back in 2009. Since then, many improvements in NFS offerings and vSphere capabilities have increased NFS’ appeal in that space, not diminished it. To quote the Virtual Geek:

“NFS is an absolutely legitimate storage model for VMware – with many advantages.”

- Chad Sakac, aka Virtual Geek, EMC VP VMware Technology Alliance

Certainly there is a lot to like in pairing NFS with vSphere 5.x no matter the scale of the enterprise. Here are some of the high-points:

  • NFS works seamlessly with Storage I/O Control and Network I/O Control to support converged network architectures;
  • NFS exposes VMDKs to 3rd party tools and scripts without VMFS proxies, enabling:
    • Simple Backup/Recovery of VM, VMDK from NAS is a file copy operation
    • Linux, Windows 7, etc. support NFS clients out of the box
    • Replication of VM or VMDK from NAS can be achieved simply with rsync
    • Use of snapshotted NFS volumes does not require ESX/VMFS
  • Reclamation of unused storage is not array dependent (file deletes return to storage immediately without SCSI Unmap support or equivalent)
  • Not subject to LUN locking and related performance issues in block/VMFS
  • It’s simpler to use: in the link above, VMware dedicates 24 pages to block/VMFS and only 3 to NFS
  • Presentation and management of NAS storage is very familiar (it’s a filer)
  • NFS is very forgiving of “imperfect” network configurations – compared to iSCSI, especially where network time-outs and latency are concerned
  • NFS storage does not need to be available at ESXi boot time, enabling VMs to exist on VSA running on-top of the host ESXi server (enabling recursive storage possibilities and reduced/shared hardware costs)
  • Mounting an NFS snapshot to vSphere does not include a signature operation (or risk possible collision)
  • NFS does not require VAAI to resolve SCSI file locking and VM loading limitations consistent with SCSI-based block storage
  • vSphere 5 currently supports 256 NFS mounts per host (see the example after this list)
    • NFS.MaxVolumes (per host) – default 8, max 256
  • Single file size not limited on NFS file systems, however
    • Without 3rd party NAS VAAI, all VMDKs on NAS are always thin provisioned
    • Single file size limited to NAS vendor file system constraints
    • VMDK uses 512-byte sectors, so it suffers from the same limitations as physical disks, hence it will still have a 2TB-512-byte limit (since VMware has no 4K-byte sector VMDK, there will be no way to support 2TB+ VMDKs on NFS until that time)
  • NFS volumes are not limited in size
    • For NetApp WAFL, the limit is up to 100TB (with restrictions)
    • For NexentaStor, the limit is determined by the zpool size
  • On-line expansion of an NFS file system is a one-step operation: expand the file system on the filer

That said, NFS still cannot replace block storage on Tier 1 applications that were designed for block storage. Even iSCSI – arguably the least common denominator in shared block storage for VMware – still has some built-in advantages (and unique disadvantages) as compared to NFS. Likewise, when we’re talking about block storage in VMware we’re usually talking about VMFS too:

  • Writes are almost always asynchronous, making even low-end iSCSI “appear” to be faster than low-end NFS
  • Interface redundancy is straight forward and deterministic with many good options for redundancy
  • Storage latency in iSCSI/block is “more predictable” across common use cases
  • vSphere 5 currently supports 256 LUNs per host (similar to NFS mount limit)
    • Disk.MaxLUN (per target) – default 256, max 256
    • Total VMFS LUNs per host cannot exceed Disk.MaxLUN, regardless of type (FC, SAS, iSCSI, etc.)
  • vSphere VMFS3/5 limits single file size (VMDK and virtual RDM) to 2TB (minus 512 bytes)
  • VMFS3 limits single volume size to 50-64TB depending on block size chosen when formatted
  • VMFS5 limits single volume size to 64TB for VMFS5 (always uses 1MB block size)
  • vSphere’s storage telemetry is still geared toward block rather than filer storage, making troubleshooting of “performance issues” more accessible
  • Pairing storage to interface is much easier to do, even on-the-fly
  • Exchange 2010 expressly forbids the use of NAS storage as VMDK datastores
  • Virtual RDM and Clustering (shared block) require block storage (in some cases, not even iSCSI qualifies for support)
  • Tier 1 application support on block-based storage is generally better (familiarity and testing)
  • VMware VAAI for block storage ships with vSphere; similar acceleration features for NAS must come from the vendor (creating a much less robust out-of-the-box experience for SMB)
  • On-line VMFS expansion usually requires two steps, with some caveats:
    • For VMFS expansions using a single LUN expansions under 2TB: (1) expand the underlying LUN on the SAN, (2) expand VMFS with the new space on the LUN
    • Single LUN expansions over 2TB require VMFS5
    • VMFS3 volume expansion beyond 2TB require multiple extents, each of which may not exceed 2TB-512B – loss of a single extent in a multi-extent volume could mean a loss of the entire volume
    • VMFS5 supports single LUNs (extents) as large as 60TB

Sparse VAAI issues aside, NFS is a great go-to storage protocol for most virtual workloads that do not strictly require block or shared-block storage back-ends (clustering, et al). Where NFS struggles today – in terms of VMware implementations in the SMB space – is in network resiliency. It is not that you cannot make NFS resilient to network failures; it’s more that redundancy is not neatly baked into the service or protocol like it is for iSCSI, SAS and Fiber Channel – these block-based services have mature, multi-session and multi-path capabilities at the service level (multi-path targets and initiators).

Note about 2TB VMDK limitations – given that most modern OSes running as supported virtual machines support some form of LUN concatenation (extents) to bypass 2TB physical disk limitations, the very same facilities can be leveraged to bypass the 2TB VMDK limits for these OSes. While this is not an optimal solution, it is a supported one. Today’s physical disks that exceed 2TB in size do so with 4KB sectors instead of 512B sectors. Currently, there is no 4KB sector VMDK analog.

Next Up, NFS and Path Redundancy

Hopefully by now there’s a compelling argument to look deeper into the NFS/VMware question, but – as with most shared, network storage – the rubber meets the road at the network layer. To me, the secret to making NFS more robust is in the network architecture that underpins it: depending on the complexity of the environment, the network layer will make or break an NFS implementation. In some ways there’s a lot more to making NFS “redundant” (due to its lack of multipath capabilities): it’s not impossible; it’s not difficult; it’s just full of options and caveats.

Unlike block storage, you can’t “throw up two network interfaces, two target ports and two initiator ports” and easily have path redundancy and multipath data. With NFS, the network – not the storage service – does most of the “heavy lifting” and – as you’ll see in the next post – NFS has absolutely no concept of multipath. Therefore, I’m going to spend the next entry reviewing some of the main points driving network and NFS service dependencies that make understanding NFS network resiliency a bit more accessible.


Quick-Take: vCenter Server 5.0 Update 1b, Appliance Replaced DB2 with Postgres

August 17, 2012

VMware announced the availability of vCenter Server 5.0 Update 1b today along with some really good news for the fans of openness:

vCenter Server Appliance Database Support: The DB2 express embedded database provided with the vCenter Server Appliance has been replaced with VMware vPostgres database. This decreases the appliance footprint and reduces the time to deploy vCenter Server further.

- vCenter 5.0U1b Release Notes

Ironically, despite its mention in the release notes, the VMware Product Interoperability Matrix has yet to be updated to include 5.0U1b, so the official impact of an upgrade is as yet unknown.

VMware Product Interoperability Matrix not updated at time of vCenter 5U1b release.

Also, a couple of new test questions are going to be tricky moving forward, as support for Oracle has been expanded:

  • vCenter Server 5.0 Update 1b introduces support for the following vCenter Databases
    • Oracle 11g Enterprise Edition, Standard Edition, Standard ONE Edition Release 2 [11.2.0.3] – 64 bit
    • Oracle 11g Enterprise Edition, Standard Edition, Standard ONE Edition Release 2 [11.2.0.3] – 32 bit

Besides still not supporting IPv6 and continuing the limitation of 5 hosts and 50 VMs, there is some additional legwork needed to upgrade the vCenter Server Appliance from 5.0U1a to U1b, as specified in KB2017801:

  1. Create a new virtual disk with size 20GB and attach it to the vCenter Server Appliance.
  2. Log in to the vCenter Server Appliance’s console and format the new disk as follows:
    1. At the command line, type: echo "- - -" > /sys/class/scsi_host/host0/scan
    2. Type: parted -s /dev/sdc mklabel msdos
    3. Type: parted -s /dev/sdc mkpartfs primary ext2 0 22G
  3. Mount the new partition under /storage/db/export:
    1. Type: mkdir -p /storage/db/export
    2. Type: mount /dev/sdc1 /storage/db/export
  4. Repeat the update process.
  5. You can remove the new disk after the update process finishes successfully and the vCenter Server Appliance is shut down.
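
Collected in one place, the console portion of those steps looks like the following (per KB2017801; the new disk is assumed to appear as /dev/sdc):

# rescan the SCSI bus so the newly added 20GB disk shows up
echo "- - -" > /sys/class/scsi_host/host0/scan
# label the disk and create an ext2 partition spanning it
parted -s /dev/sdc mklabel msdos
parted -s /dev/sdc mkpartfs primary ext2 0 22G
# mount the partition where the update process expects its temporary DB export space
mkdir -p /storage/db/export
mount /dev/sdc1 /storage/db/export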

SOLORI’s Take: Until the interop matrix is updated, it’s hard to know what you’re getting into with the update (Update: as you can see from Joshua Andrews’ post on SOS tech), but the inclusion of vPostgres – VMware’s vFabric deployment of PostgreSQL 9.1.x – makes taking a look at the “crippled” appliance version a bit more tantalizing.  Hopefully, the next release will “unshackle” the vCenter Appliance beyond the 5/50 limitations – certainly vPostgres is up to the task of managing many, many more hosts and VMs (vCD anyone?) Cheers, VMware!


Quick-Take: How Virtual Backup Can Invite Disaster

August 1, 2012

There have always been things about virtualizing the enterprise that have concerned me. Most boil down to Uncle Ben’s admonishment to his nephew, Peter Parker, in Stan Lee’s Spider-Man, “with great power comes great responsibility.” Nothing could be more applicable to the state of modern virtualization today.

Back in “the day” when all this VMware stuff was scary and “complicated,” it carried enough “voodoo mystique” that (often de facto) VMware admins either knew everything there was to know about their infrastructure, or they just left it to the experts. Today, virtualization has reached such high levels of accessibility that I think even my 102-year-old Nana could clone a live VM; now that is scary.

Enter Veeam Backup, et al

Case in point is Veeam Backup & Replication 6 (VBR6). Once an infrastructure exceeds the limits of VMware Data Recovery (VDR), it just doesn’t get much easier to back up your cadre of virtual machines than VBR6. Unlike VDR, VBR6 has three modes of access to virtual machine disks:

  1. Direct SAN Access – the VBR6 backup server/proxy has direct access to the VMFS LUNs containing virtual machine disks – very fast, very low overhead;
  2. Virtual Appliance – the VBR6 backup server/proxy, running as a virtual machine, leverages its relationship to the ESXi host to access virtual machine disks using the ESXi host as a go-between – fast, moderate overhead;
  3. Network – the VBR6 backup server/proxy accesses virtual machine disks from ESXi hosts in a manner similar to the way the vSphere Client grants access to virtual machine disks across the LAN – slower, with more overhead;

For block-based storage, option (1) appears to be the best way to go: it’s fast with very little overhead in the data channel. For those of us with grey hair, think VMware Consolidated Backup proxy server and you’re on the right track; for everyone else, think shared disk environment. And that, boys and girls, is where we come to the point of today’s lesson…

Enter Windows Server, Updates

For all of its warts, my favorite aspect of VMware Data Recovery is the fact that it is a virtual appliance based on a stripped-down Linux distribution. Those two aspects say “do not tamper” better than anything these days, so admins – especially Windows admins – tend to just install and use as directed. At the very least, the appliance factor offers an opportunity for “special case” handling of updates (read: very controlled and tightly scripted).

The other “advantage” to VDR is that it uses a relatively safe method for accessing virtual machine disks: something more akin to VBR6’s “virtual appliance” mode of operation. By allowing the ESXi host(s) to “proxy” access to the datastore(s), a couple of things are accomplished:

  1. Access to VMDKs is protocol agnostic – direct attach, iSCSI, AoE, SAS, Fiber Channel and/or NFS all work the same;
  2. Unlike “Direct SAN Access” mode, no additional initiators need to be added to the target(s)’ ACL;
  3. If the host can access the VMDK, it stands a good chance of being backed-up fairly efficiently.

However, VBR6 installs onto a Windows Server, and Windows Server has no knowledge of what VMFS looks like nor how to handle VMFS disks. This means Windows disk management needs to be “tweaked” to ignore VMFS targets by disabling “automount” on VBR6 servers and VCB proxies (see the diskpart example after the list below). For most, it also means keeping up with patch management and Windows Update (or the appropriate derivative). For active backup servers with a (pre-approved, tested) critical update, this might go something like:

  1. Schedule the update with change management;
  2. Stage the update to the server;
  3. Put server into maintenance mode (services and applications disabled);
  4. Apply patch, reboot;
  5. Mitigate patch issues;
  6. Test application interaction;
  7. Rinse, repeat;
  8. Release server back to production;
  9. Update change management.
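
About that “automount” tweak: on a Direct SAN Access backup server or VCB proxy it is normally disabled from an elevated command prompt with diskpart, ideally before the server is ever zoned to see VMFS LUNs – a minimal example:

diskpart
DISKPART> automount disable
DISKPART> automount scrub
DISKPART> exit

“automount disable” keeps Windows from assigning drive letters to newly discovered volumes, and “automount scrub” cleans up mount points already recorded for previously attached volumes.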

See the problem? If Windows Server 2008 R2 SP1 is involved you just might have one right around step 5…

And the Wheels Came Off…

Service Pack 1 for Windows Server 2008 R2 requires a BCD update, so existing installations of VCB or VBR5/6 (with automount disabled) will fail to update. In an environment with no VCB or VBR5/6 testing platform, this could turn into a resume-writing event for the patching guy or the backup administrator if they follow Microsoft’s advice and “fix” SP1. Why?

Fixing the SP1 installation problem is quite simple:

Quick steps to do this in case you forgot are:

1.  Run DISKPART

2.  automount enable

3.  Restart

4.  Install SP1

Technet Blogs, Windows Servicing Guy, SP1 Fails with 0x800f0a12

Done, right? Possibly, in more ways than one. By GLOBALLY enabling automount, rebooting Windows Server and installing SP1, you’ve opened up the potential for Windows to write a signature to the VMFS volumes holding your critical infrastructure. Fortunately, it doesn’t have to end that way.

Avoiding the Avoidable

Veeam’s been around long enough to have some great forum participants from across the administrative spectrum. Fortunately, a member posted a solution that keeps us well away from VMFS corruption and still solves the SP1 issue in a targeted way: temporarily mounting the “hidden” system partition instead of enabling the global automount feature. Here’s my take on the process (GUI mode):

  1. Inside Server Manager, open Disk Management (or run diskmgt.msc from admin cmd prompt);
  2. Right-click on the partition labeled “System Reserved” and select “Change Drive Letter and Paths…”
  3. On the pop-up, click the “Add…” button and accept the default drive letter offered, click “OK”;
  4. Now “try again” the installation of Service Pack 1 and reboot;
  5. Once SP1 is installed, re-run Disk Management;
  6. Right-click on the “System Reserved” partition and select “Change Drive Letter and Paths..”
  7. Click the “Remove” button to unmap the drive letter;
  8. Click “Yes” at the “Are you sure…” prompt;
  9. Click “Yes” at the “Do you want to continue?” prompt;
  10. Reboot (for good measure).

This process assumes that there are no non-standard deployments of the Server 2008 R2 boot volume. Of course, if there is no separate System Reserved partition, you wouldn’t encounter the SP1 install failure in the first place…
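
For those who prefer the command line, a rough diskpart equivalent of the same targeted fix looks like this (the volume number and drive letter are placeholders – pick the ~100MB “System Reserved” volume from the list):

diskpart
DISKPART> list volume
DISKPART> select volume 1
DISKPART> assign letter=Z
DISKPART> exit

Install SP1 and reboot, then remove the letter the same way:

diskpart
DISKPART> select volume 1
DISKPART> remove letter=Z
DISKPART> exit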

SOLORI’s Take: The takeaway here is “consider your environment” (and the people tasked with maintaining it) before deploying Direct SAN Access mode into a VMware cluster. While it may represent “optimal” backup performance, it is not without its potential pitfalls (as demonstrated herein). Native access to SAN LUNs must come with a heavy dose of respect, caution and understanding of the underlying architecture: otherwise, I recommend Virtual Appliance mode (similar to Data Recovery’s take.)

While no VMFS volumes were harmed in the making of this blog post, the thought of what could have happened in a production environment chilled me into writing this post. Direct access to the SAN layer unlocks tremendous power for modern backup: just be safe and don’t forget to heed Uncle Ben’s advice! If the idea of VMFS corruption scares you beyond your risk tolerance, appliance mode will deliver acceptable results with minimal risk or complexity.


New Security Patches for ESX/ESXi 3.5, 4.0, 4.1 and 5.0 (6/14/2012)

June 14, 2012

New patches are available today for ESX/ESXi 3.5, 4.0, 4.1 and 5.0 to resolve a few known security vulnerabilities. Here’s the run down if you’re running ESXi 5.0 standard image:

VMware Host Checkpoint File Memory Corruption

Certain input data is not properly validated when loading checkpoint files. This might allow an attacker with the ability to load a specially crafted checkpoint file to execute arbitrary code on the host. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2012-3288 to this issue.

The following workarounds and mitigating controls might be available to remove the potential for exploiting the issue and to reduce the exposure that the issue poses.

Workaround: None identified.

Mitigation: Do not import virtual machines from untrusted sources.

VMware Virtual Machine Remote Device Denial of Service

A device (for example CD-ROM or keyboard) that is available to a virtual machine while physically connected to a system that does not run the virtual machine is referred to as a remote device. Traffic coming from remote virtual devices is incorrectly handled. This might allow an attacker who is capable of manipulating the traffic from a remote virtual device to crash the virtual machine.
The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2012-3289 to this issue.

The following workarounds and mitigating controls might be available to remove the potential for exploiting the issue and to reduce the exposure that the issue poses.

Workaround: None identified.

Mitigation: Users need administrative privileges on the virtual machine in order to attach remote devices. Do not attach untrusted remote devices to a virtual machine.

Deployment Considerations

None beyond the required patch bundles and reboot information listed in the patch release notes below.
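
For standalone hosts not managed by Update Manager, the ESXi 5.0 bundle can be applied from the local shell along these lines (the datastore path and bundle file name are placeholders – substitute the actual offline bundle; put the host in maintenance mode first and reboot afterward):

# apply the offline patch bundle staged on a datastore
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi500-201206001.zip
# confirm the updated VIB versions
esxcli software vib list | grep esx-base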

5.0 Patch Release Notes:
ESXi: http://kb.vmware.com/kb/2021031

Patch Release Information for Other Versions

3.5 Patch Release Notes:
ESX: http://kb.vmware.com/kb/2021020
ESXi: http://kb.vmware.com/kb/2021021

4.0 Patch Release Notes:
ESX: http://kb.vmware.com/kb/2021025
ESXi: http://kb.vmware.com/kb/2021027

4.1 Patch Release Notes:
ESX: http://kb.vmware.com/kb/2019065
ESXi: http://kb.vmware.com/kb/2019243
