Archive for the ‘Operating Systems’ Category


Quick-Take: NexentaStor 4.0.1GA

April 14, 2014

Our open storage partner, Nexenta Systems Inc., hit a milestone this month by releasing NexentaStor 4.0.1 for general availability. This release is significant mainly because it is the first commercial release of NexentaStor based on the open source Illumos kernel rather than Oracle’s OpenSolaris (now closed source). With this move, Nexenta makes good on its promise of “open source technology” that enables hardware independence and targeted flexibility.

Some highlights in 4.0.1:

  • Faster Install times
  • Better HA Cluster failover times and “easier” cluster manageability
  • Support for large memory host configurations – up to 512GB of DRAM per head/controller
  • Improved handling of intermittently faulty devices (disks with irregular I/O responses under load)
  • New (read: “not backward compatible”) Auto-Sync replication (user configurable zfs+ssh still available for backward compatibility) with support for replication of HA to/from non-HA clusters
    • Includes LZ4 compression (fast) option
    • Better Control of “Force Flags” from NMV
    • Better Control of Buffering and Connections
  • L2ARC Compression now supported
    • Potentially doubles the effective coverage of L2ARC (for compressible data sets)
    • Supports LZ4 compression (fast)
    • Automatically applied if the dataset is likewise compressed (see the example following the release-notes link below)
  • Server Message Block v2.1 support for Windows (some caveats for IDMAP users)
  • iSCSI support for Microsoft Server 2012 Cluster and Cluster Shared Volume (CSV)
  • Guided storage pool configuration wizards – Performance, Balanced and Capacity modes
  • Enhanced Support Data and Log Gathering
  • High Availability Cluster plug-in (RSF-1) binaries are now part of the installation image
  • VMware: Much better VMXNET3 support
    • no more log spew
    • MTU settings work from NMV
  • VMware: Install to PVSCSI (boot disk) from ISO no longer requires tricks
  • Upgrade from 3.x is currently “disruptive” – promised “non-disruptive” in next maintenance update
  • Improved DTrace capabilities from NMC shell for
    • COMSTAR/iSCSI/FC
    • general IO
  • Snappier, more stable NMV/GUI
    • Service port changes from 2000 to 8457
    • Multi-NMS default
    • Fast refresh for ZFS containers
    • RSF-1 defaults in “Server” settings
    • Improved iSCSI

See Nexenta’s 4.0.1 Release Notes for additional changes and details.
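
Regarding the L2ARC compression item above: it simply keys off the dataset’s own compression property, so enabling LZ4 on a dataset is all that is required. A minimal sketch from a root shell on the appliance – the pool and dataset names are illustrative:

zfs set compression=lz4 tank/vmstore
zfs get compression,compressratio tank/vmstore

With compression=lz4 in place, blocks cached in L2ARC stay compressed, which is where the “potentially doubled” coverage noted above comes from.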

Note, the 18TB Community Edition EULA is still hampered by the “non-commercial” language, restricting its use to home, education and academic (i.e. training, testing, lab, etc.) purposes. However, the “total amount of Storage Space” license for Community is a deviation from the Enterprise licensing (typically a “raw” storage entitlement):

2.2 If You have acquired a Community Edition license, the total amount of Storage Space is limited as specified on the Site and is subject to change without notice. The Community Edition may ONLY be used for educational, academic and other non-commercial purposes expressly excluding any commercial usage. The Trial Edition licenses may ONLY be used for the sole purposes of evaluating the suitability of the Product for licensing of the Enterprise Edition for a fee. If You have obtained the Product under discounted educational pricing, You are only permitted to use the Product for educational and academic purposes only and such license expressly excludes any commercial purposes.

– NexentaStor EULA, Version 4.0; Last updated: March 18, 2014

For those who operate under the Community license, this means your total physical storage is UNLIMITED, provided your space “IN USE” stays below 18TB (18,432 GB) at all times. Where this matters is in constructing useful arrays from “currently available” disks (SATA, SAS, etc.). Let’s say you need 16TB of AVAILABLE space using “modern” 3TB disks. Because each spinning disk is well beyond 600GB, rebuild times are long enough that another failure may well occur before a rebuild completes (i.e. data loss), so mirror or raidz2/raidz3 would be your best bet for array configuration.

SOLORI Note: Richard Elling made this concept exceedingly clear back in 2010, and his “ZFS data protection comparison” of 2, 3 and 4-way mirrors to raidz, raidz2 and raidz3 is still a great reference on the topic.

Elling’s MTTDL Comparison by RAID Type

 

Given 16TB in 3-way mirror or raidz2 (roughly equivalent MTTDL predictors), your 3TB disk count would follow as:

3-way Mirror Disks := RoundUp( 16 * (1024 / 1000)^3 / 70% / ( 3 * (1000 / 1024)^3 )  ) * 3 = 27 disks, or

6-disk Raidz2 Disks := RoundUp( 16 * (1024 / 1000)^3 / 70% / ( 4 * 3 * (1000 / 1024)^3 )  ) * 6 = 18 disks

By “raw” licensing standards, the 3-way mirror would require a 76TB license while the raidz2 volume would require a 51TB license – a difference of 25TB in licensing (around $5,300 retail). However, under the Community License, the “cost” is exactly the same, allowing for a considerable amount of flexibility in array loadout and configuration.

Why do I need 54TB of disk to make 16TB of “AVAILABLE” storage in raidz2?

The RAID grouping we’ve chosen is 6-disk raidz2 – akin to 4 data and 2 parity disks in RAID6 (without the fixed stripe requirement or the “write hole” penalty). This means, on average, one third of the space consumed on disk will be parity information, so right off the top we’re losing 33% of the disk capacity. Likewise, disk manufacturers sell TB, not TiB, so we lose about 7% of the marketed “capacity” in the conversion to the binary units ZFS reports. Additionally, we like to keep a healthy amount of space reserved for new block allocation and recommend 30% unused space as a target. All combined, a 6-disk raidz2 array is, at best, 43% efficient in terms of capacity (by contrast, a 3-way mirror is only 22% space efficient). For an array based on 3TB disks, we therefore get only about 1.3TB of usable storage – per disk – with 6-disk raidz2 (by contrast, 10-disk raidz nets only 160GB additional “usable” space per disk).
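
Running the same rough factors used in the formulas above makes the 43% (and the 1.3TB per disk) easy to see:

6-disk raidz2 efficiency ≈ (4 data / 6 disks) x (1000/1024)^3 x 70% ≈ 0.67 x 0.93 x 0.70 ≈ 43%
3-way mirror efficiency  ≈ (1 data / 3 disks) x (1000/1024)^3 x 70% ≈ 0.33 x 0.93 x 0.70 ≈ 22%
usable space per 3TB disk, 6-disk raidz2 ≈ 3 x 0.43 ≈ 1.3TB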

SOLORI’s Take: If you’re running 3.x in production, 4.0.1 is not suitable for in-place upgrades (yet), so testing and waiting for the “non-disruptive” maintenance release is your best option. For new installations – especially inside a VM or hypervisor environment as a Virtual Storage Appliance (VSA) – version 4.0.1 presents a better option over its 3.x siblings. If you’re familiar with 3.x, there’s not much new on the NMV side beyond better tunables and snappier response.


Quick-Take: How Virtual Backup Can Invite Disaster

August 1, 2012

There have always been things about virtualizing the enterprise that have concerned me. Most boil down to Uncle Ben’s admonishment to his nephew, Peter Parker, in Stan Lee’s Spider-Man, “with great power comes great responsibility.” Nothing could be more applicable to the state of modern virtualization today.

Back in “the day” when all this VMware stuff was scary and “complicated,” it carried enough “voodoo mystique” that (often de facto) VMware admins either knew everything there was to know about their infrastructure, or they just left it to the experts. Today, virtualization has reached such high levels of accessibility that I think even my 102-year-old Nana could clone a live VM; now that is scary.

Enter Veeam Backup, et al

Case in point is Veeam Backup & Replication 6 (VBR6). Once an infrastructure exceeds the limits of VMware Data Recovery (VDR), it just doesn’t get much easier to back up your cadre of virtual machines than VBR6. Unlike VDR, VBR6 has three modes of access to virtual machine disks:

  1. Direct SAN Access – the VBR6 backup server/proxy has direct access to the VMFS LUNs containing virtual machine disks – very fast, very low overhead;
  2. Virtual Appliance – the VBR6 backup server/proxy, running as a virtual machine, leverages its relationship to the ESXi host to access virtual machine disks using the ESXi host as a go-between – fast, moderate overhead;
  3. Network – the VBR6 backup server/proxy accesses virtual machine disks from ESXi hosts in a manner similar to the way the vSphere Client grants access to virtual machine disks across the LAN – slower, with more overhead.

For block-based storage, option (1) appears to be the best way to go: it’s fast with very little overhead in the data channel. For those of us with grey hair, think VMware Consolidated Backup proxy server and you’re on the right track; for everyone else, think shared disk environment. And that, boys and girls, is where we come to the point of today’s lesson…

Enter Windows Server, Updates

For all of its warts, my favorite aspect of VMware Data Recovery is the fact that it is a virtual appliance based on a stripped-down Linux distribution. Those two aspects say “do not tamper” better than anything these days, so admins – especially Windows admins – tend to just install and use as directed. At the very least, the appliance factor offers an opportunity for “special case” handling of updates (read: very controlled and tightly scripted).

The other “advantage” to VDR is that it uses a relatively safe method for accessing virtual machine disks: something more akin to VBR6’s “virtual appliance” mode of operation. By allowing the ESXi host(s) to “proxy” access to the datastore(s), a couple of things are accomplished:

  1. Access to VMDKs is protocol agnostic – direct attach, iSCSI, AoE, SAS, Fibre Channel and/or NFS all work the same;
  2. Unlike “Direct SAN Access” mode, no additional initiators need to be added to the target(s)’ ACL;
  3. If the host can access the VMDK, it stands a good chance of being backed up fairly efficiently.

However, VBR6 installs onto a Windows Server, and Windows Server has no knowledge of what VMFS looks like nor how to handle VMFS disks. This means Windows disk management needs to be “tweaked” to ignore VMFS targets by disabling “automount” on VBR6 servers and VCB proxies (see the DISKPART snippet after the checklist below). For most, it also means keeping up with patch management and Windows Update (or the appropriate derivative). For active backup servers with a (pre-approved, tested) critical update this might go something like:

  1. Schedule the update with change management;
  2. Stage the update to the server;
  3. Put server into maintenance mode (services and applications disabled);
  4. Apply patch, reboot;
  5. Mitigate patch issues;
  6. Test application interaction;
  7. Rinse, repeat;
  8. Release server back to production;
  9. Update change management.
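
As referenced above, the automount tweak itself is a quick DISKPART exercise on the VBR6 server or VCB proxy, run from an elevated command prompt:

C:\> diskpart
DISKPART> automount disable
DISKPART> automount scrub
DISKPART> exit

Here, “automount disable” stops Windows from automatically mounting (and potentially writing a signature to) newly discovered basic volumes, and “automount scrub” removes mount-point entries for volumes that are no longer present.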

See the problem? If Windows Server 2008 R2 SP1 is involved you just might have one right around step 5…

And the Wheels Came Off…

Service Pack 1 for Windows Server 2008 R2 requires a BCD update, so it will fail to install on existing VCB or VBR5/6 servers where automount has been disabled. In an environment where there is no VCB or VBR5/6 testing platform, this could result in a résumé-writing event for the patching guy or the backup administrator if they follow Microsoft’s advice and “fix” SP1. Why?

Fixing the SP1 installation problem is quite simple:

Quick steps to do this in case you forgot are:

1.  Run DISKPART

2.  automount enable

3.  Restart

4.  Install SP1

– Technet Blogs, Windows Servicing Guy, SP1 Fails with 0x800f0a12

Done, right? Possibly in more ways than one. By GLOBALLY enabling automount, rebooting Windows Server and installing SP1, you’ve opened up the potential for Windows to write a signature to the VMFS volumes holding your critical infrastructure. Fortunately, it doesn’t have to end that way.

Avoiding the Avoidable

Veeam’s been around long enough to have some great forum participants from across the administrative spectrum. Fortunately, a member posted a solution method that keeps us well away from VMFS corruption and still solves the SP1 issue in a targeted way: temporarily mounting the “hidden” system partition instead of enabling the global automount feature. Here’s my take on the process (GUI mode):

  1. Inside Server Manager, open Disk Management (or run diskmgt.msc from admin cmd prompt);
  2. Right-click on the partition labeled “System Reserved” and select “Change Drive Letter and Paths…”
  3. On the pop-up, click the “Add…” button and accept the default drive letter offered, click “OK”;
  4. Now “try again” the installation of Service Pack 1 and reboot;
  5. Once SP1 is installed, re-run Disk Management;
  6. Right-click on the “System Reserved” partition and select “Change Drive Letter and Paths…”
  7. Click the “Remove” button to unmap the drive letter;
  8. Click “Yes” at the “Are you sure…” prompt;
  9. Click “Yes” at the “Do you want to continue?” prompt;
  10. Reboot (for good measure).

This process assumes that there are no non-standard deployments of the Server 2008 R2 boot volume. Of course, if there is no separate system reserved partition, you wouldn’t encounter the SP1 failure to install issue…
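
If you prefer the command line, the same temporary mount/unmount can be done with DISKPART. This sketch assumes the System Reserved partition shows up as partition 1 on disk 0 – verify with “list partition” before assigning a letter:

C:\> diskpart
DISKPART> select disk 0
DISKPART> list partition
DISKPART> select partition 1
DISKPART> assign letter=Z
DISKPART> exit

Install SP1 and reboot, then remove the letter the same way:

C:\> diskpart
DISKPART> select disk 0
DISKPART> select partition 1
DISKPART> remove letter=Z
DISKPART> exit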

SOLORI’s Take: The takeaway here is “consider your environment” (and the people tasked with maintaining it) before deploying Direct SAN Access mode into a VMware cluster. While it may represent “optimal” backup performance, it is not without its potential pitfalls (as demonstrated herein). Native access to SAN LUNs must come with a heavy dose of respect, caution and understanding of the underlying architecture: otherwise, I recommend Virtual Appliance mode (similar to Data Recovery’s take.)

While no VMFS volumes were harmed in the making of this blog post, the thought of what could have happened in a production environment chilled me into writing this post. Direct access to the SAN layer unlocks tremendous power for modern backup: just be safe and don’t forget to heed Uncle Ben’s advice! If the idea of VMFS corruption scares you beyond your risk tolerance, appliance mode will deliver acceptable results with minimal risk or complexity.


Quick-Take: NexentaStor 3.1.3 New AD Group Feature, Can Break AD Shares

June 12, 2012

The latest update of NexentaStor may not go too smoothly if you are using Windows Server 2008 AD servers and delegating shares via NexentaStor. While the latest update includes a long-sought-after fix in AD capabilities (see pull quote below), it may require a tweak to the CIFS Server settings to get things back on track.

Domain Group Support

It is now possible to allow Domain groups as members of local groups. When a Windows client authenticates with NexentaStor using a domain account, NexentaStor consults the domain controller for information about that user’s membership in domain groups. NexentaStor also computes group memberships based on its _local_ groups database, adding both local and domain groups based on local group memberships, which are allowed to be indirect. NexentaStor’s computation of group memberships previously did not correctly handle domain groups as members of local groups.

NexentaStor 3.1.3 Release Notes

In the past, some of NexentaStor’s in-place upgrades have reset the “lmauth_level” of the associated SMB share server from its user configured value back to a “default” of four (4). This did not work very well in an AD environment where the servers were Windows Server 2008 and running their native authentication mode. The fix was to change the “lmauth_level” to two (2) via the NMV or NMC (“sharectl set -p lmauth_level=2 smb”) and restart the service. If you have this issue, the giveaway kernel log entries are as follows:

smbd[7501]: [ID 702911 daemon.notice] smbd_dc_update: myad.local: locate failed
smbd[7501]: [ID 702911 daemon.notice] smbd_dc_monitor: domain service not responding

However, the rules have changed in some applications; Nexenta’s new guidance is:

Summary Description CIFS Issue

A recent patch release by Microsoft has necessitated a changed to the CIFS authorization setting. Without changing this setting, customers will see CIFS disconnects or the appliance being unable to join the Active Directory domain. If you experience CIFS disconnects or problems joining your Active Directory domain, please modify the ‘lmauth_level’ setting.

# sharectl set -p lmauth_level=4 smb

– NexentaStor 3.1.3 Release Notes

While this may work for others out there, it does not universally work for any of my tested Windows Server 2008 R2, native AD mode servers. Worse, it appears to work with some shares but not all; this can lead to some confusion about the actual cause (or resolution) of the problem based on the Nexenta release notes. Fortunately (or not, depending on your perspective), the genesis of NexentaStor is clearly heading toward an intersection with Illumos – although the current kernel is still based on OpenSolaris (build 134f) – and a post from OpenIndiana points users to the right solution.

(Jonathan Leafty) I always thought it was weird that lmauth_level had to be set to 2 so I
bumped it back to the default of 3 and restarted smb and it worked...
(Gordon Ross) Glad you found that.  I probably should have sent a "heads-up" when the
"extended security outbound" enhancement went in.  People who have
adjusted down lmauth_level should put it back the the default.

– CIFS in Domain Mode (AD 2008), OpenIndiana Discussion Group (openindiana-discuss@openindiana.org)

Following the advice for OpenIndiana re-enabled all previously configured shares. This mode is also the default for Solaris, although NexentaStor continues to use a different one. According to the man pages for smb on Nexenta (‘man smb(4)’) the difference between ‘lmauth_level=3’ and ‘lmauth_level=4’ is as follows:

lmauth_level

Specifies the LAN Manager (LM) authentication level. The LM compatibility level controls the type of user authentication to use in workgroup mode or domain mode. The default value is 3.

The following describes the behavior at each level.

2 – In Windows workgroup mode, the Solaris CIFS server accepts LM, NTLM, LMv2, and NTLMv2 requests. In domain mode, the SMB redirector on the Solaris CIFS server sends NTLM requests.

3 – In Windows workgroup mode, the Solaris CIFS server accepts LM, NTLM, LMv2, and NTLMv2 requests. In domain mode, the SMB redirector on the Solaris CIFS server sends LMv2 and NTLMv2 requests.

4 – In Windows workgroup mode, the Solaris CIFS server accepts NTLM, LMv2, and NTLMv2 requests. In domain mode, the SMB redirector on the Solaris CIFS server sends LMv2 and NTLMv2 requests.

5 – In Windows workgroup mode, the Solaris CIFS server accepts LMv2 and NTLMv2 requests. In domain mode, the SMB redirector on the Solaris CIFS server sends LMv2 and NTLMv2 requests.

Manpage for SMB(4)

This illustrates either a continued dependency on LAN Manager (absent in ‘lmauth_level=4’) or a bug as indicated in the OpenIndiana thread. Either way, more testing is needed to determine whether this issue is unique to my particular 2008 AD environment or a general issue with the current smb/server facility in NexentaStor…
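
For reference, checking and applying the OpenIndiana advice boils down to a few commands from a root (or NMC) shell; as noted earlier, the same property can also be changed from NMV under the CIFS Server settings:

# sharectl get -p lmauth_level smb
# sharectl set -p lmauth_level=3 smb
# svcadm restart smb/server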

SOLORI’s Take: So while NexentaStor defaults back to ‘lmauth_level=4’ and ‘lmauth_level=2’ is now broken (for my environment), the “default” for OpenIndiana and Solaris (‘lmauth_level=3’) is a winner; as to why – that’s a follow-up question… Meanwhile, proceed with caution when upgrading to NexentaStor 3.1.3 if your appliance is integrated into AD – testing with the latest virtual appliance for the win.


In-the-Lab: NexentaStor and VMware Tools, You Need to Tweak It…

February 24, 2012

While working on an article on complex VSAs (i.e. a virtual storage appliance with PCIe pass-through SAS controllers), an old issue came back up again: NexentaStor virtual machines still have a problem installing VMware Tools since NexentaStor branched from OpenSolaris and began using Illumos. While this isn’t totally Nexenta’s fault – there is no “Nexenta” OS type in VMware to choose from – it would be nice if a dummy package were present to allow a smooth installation of VMware Tools; this is even the case with the latest NexentaStor release: 3.1.2.

I could not find where I had documented the fix in SOLORI’s blog, so here it is… Note, the NexentaStor VM is configured as an Oracle Solaris 11 (64-bit) virtual machine for the purpose of vCenter/ESXi. This establishes the VM’s relationship to a specific VMware Tools load. Installation of VMware Tools in NexentaStor is covered in detail in an earlier blog entry.

VMware Tools bombs-out at SUNWuiu8 package failure. Illumos-based NexentaStor has no such package.

Instead, we need to modify the vmware-config-tools.pl script directly to compensate for the loss of the SUNWuiu8 package that is explicitly required in the installation script.

Commenting out the SUNWuiu8 related section allows the tools to install with no harm to the system or functionality.

Note that the full “if” stanza where the VMware Tools installer checks for ‘tools-for-solaris’ must be commented out. Since the SUNWuiu8 package does not exist – and, more importantly, is not needed for Illumos/NexentaStor – removing the reference to it is a good thing. Now the installation can proceed as normal.
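
Before editing, it is worth keeping a pristine copy of the script and letting grep point out the offending stanza. A minimal sketch – the installed path is an assumption and may differ with your VMware Tools version:

cp /usr/bin/vmware-config-tools.pl /usr/bin/vmware-config-tools.pl.orig
grep -n SUNWuiu8 /usr/bin/vmware-config-tools.pl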

After the changes, installation completes as normal.

That’s all there is to getting the “Oracle Solaris” version of VMware Tools to work in newer NexentaStor virtual machines – now back to really fast VSAs with JBOD-attached storage…

SOLORI’s Note: There is currently a long-standing bug that affects NexentaStor 3.1.x running as a virtual machine: there is no known workaround to keep NexentaStor from running up 50% CPU utilization from ESXi’s perspective. Inside the NexentaStor VM we see very little CPU utilization, but from the ESXi performance tab we see 50% utilization on every vCPU allocated to the VM. Nexenta is reportedly looking into the cause of the problem.

I looked through this and there is nothing that stands out other that a huge number of interrupts while idle. I am not sure where those interrupts are coming from. I see something occasionally called volume-check and nmdtrace which could be causing the interrupts.

Nexenta Support

A bug report was reportedly filed a couple of days ago to investigate the issue further.
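
In the meantime, the interrupt load the support note refers to is easy to observe from the NexentaStor shell with stock illumos tools (no fix implied – just confirmation of the symptom):

vmstat 5        # watch the "in" (interrupts) column while the appliance is idle
intrstat 5      # per-device interrupt rates (run as root)
prstat -mL 5    # per-thread microstate accounting, confirming the guest itself is mostly idle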


In-the-Lab: Tweak 2008R2 post-clone for View Transfer Server

April 4, 2011

View Transfer Server supports Server 2008 R2 but does not support the use of the “default” virtual LSI Logic SAS controller. If you’ve already carved out a cloning template using the LSI Logic SAS controller, it is not necessary to create a new template (or fresh installation) just to spool up a Transfer Server. In fact, it will take you only TWO reboots to go from completed clone to LSI Logic Parallel replacement.

CAUTION: You must configure the virtual machine that hosts View Transfer Server with an LSI Logic Parallel SCSI controller. You cannot use a SAS or VMware paravirtual controller.

On Windows Server 2008 virtual machines, the LSI Logic SAS controller is selected by default. You must change this selection to an LSI Logic Parallel controller before you install the operating system.

– VMware View Upgrades (EN-000526-00), Page 13

Here’s the process to take you from a completed Server 2008/R2 clone with LSI Logic SAS to LSI Logic Parallel – bypassing the Windows blue screen at boot:

  1. Clone your Server 2008/R2 server as normal,
  2. Shutdown clone and edit settings,
    1. Change Options>Advanced>Boot Options to “Force BIOS Setup” on next reboot;
    2. Hardware>Add…>Hard Disk>Create a new virtual disk>4GB, Thin Provisioning>SCSI(1:0)
    3. Hardware>SCSI Controller 1>Change Type…>LSI Logic Parallel
    4. Power-on

      Dropping-in a "dummy" LSI Logic Parallel disk to enable the drive controller for View Transfer Server.

  3. Boot the modified VM and (optionally) confirm new drive and controller
    1. Boot VM
    2. Modify boot order to ensure SAS boot priority

      Modify boot order in BIOS to ensure that the SAS controller is primary.

    3. (optional) Open Server Manager>Diagnostics>Device Manager
      1. View “Storage controllers”

        Confirming the operational status of both LSI controller types: Parallel and SAS.

    4. Shutdown
  4. Edit settings to modify boot and remove additional disk
    1. Hardware>SCSI Controller 0>Change Type…>LSI Logic Parallel
    2. Hard Disk 2>Remove>Remove from virtual machine and delete files from disk
      1. SCSI Controller 1 will automatically be removed
    3. Save and power-on
  5. Boot disk will now be LSI Logic Parallel

NOTE: In this example, the Server 2008/R2 VM is composed onto a single LSI Logic SAS disk (Hard Disk 1, SCSI controller 0). If your VM template is different, substitute your specific disk and/or controller numbers accordingly.

Nice, simple and now ready to install the View Transfer Server. Now on to the PCoIP Secure Gateway…


In-the-Lab: Default Rights on CIFS Shares

December 6, 2010

Following up on the last installment of managing CIFS shares, there have been a considerable number of questions as to how to establish domain user rights on the share. From these questions it is apparent that my explanation of root-level share permissions could have been clearer. To that end, I want to look at default shares from a Windows SBS Server 2008 R2 environment and translate those settings to a working NexentaStor CIFS share deployment.

Evaluating Default Shares

In SBS Server 2008, a number of default shares are promulgated from the SBS Server. Excluding the “hidden” shares, these include:

  • Address
  • ExchangeOAB
  • NETLOGON
  • Public
  • RedirectedFolders
  • SYSVOL
  • UserShares
  • Printers

Therefore, it follows that a useful exercise in rights deployment might be to recreate a couple of these shares on a NexentaStor system and detail the methodology. I have chosen the NETLOGON and SYSVOL shares as these two represent default shares common in all Windows server environments. Here are their relative permissions:

NETLOGON

From the Windows file browser, the NETLOGON share has default permissions that look like this:

NETLOGON Share permissions

Looking at this same permission set from the command line (ICACLS.EXE), the permissions look like this:

NETLOGON permissions as reported from icacls
The key thing to observe here is the use of Windows built-in users and NT Authority accounts. Also, it is noteworthy that some administrative privileges differ depending on inheritance. For instance, the Administrator’s rights are less than “Full” on the share itself, yet “Full” when inherited by sub-directories and files, whereas SYSTEM’s permissions are “Full” in both contexts.
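
To pull the same listing from your own domain controller, icacls can read (and archive) the ACL straight from the UNC path – the server name here is illustrative:

C:\> icacls \\SBS-SERVER\NETLOGON
C:\> icacls \\SBS-SERVER\NETLOGON /save netlogon-acl.txt /t

The saved file comes in handy later for comparing against the NexentaStor share once it is built.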

SYSVOL

From the Windows file browser, the SYSVOL share has default permissions that look like this:

SYSVOL network share permissions

Looking at this same permission set from the command line (ICACLS.EXE), the permissions look like this:

SYSVOL permissions from ICACLS.EXE
Note that the Administrators group’s privileges are truncated (not “Full”) with respect to the inherited rights on sub-directories and files when compared to the NETLOGON share ACL.

Create CIFS Shares in NexentaStor

On a ZFS pool, create a new folder using the Web GUI (NMV) that will represent the SYSVOL share. This will look something like the following:
Creating the SYSVOL share


Short-Take: New Oracle/Sun ZFS Goodies

November 17, 2010

I wanted to pass on some information posted by Joerg Moellenkamp at c0t0d0s0.org – some good news for Sun/ZFS users out there about Solaris Express 2010.11 availability, links to details on ZFS encryption features in Solaris 11 Express and clarification on “production use” guidelines. Here are the pull quotes from his posting:

“Darren (Moffat) wrote three really interesting articles about ZFS encryption: The first one is Introducing ZFS Crypto in Oracle Solaris 11 Express. This blog entry gives you a first overview how to use encryption for ZFS datasets. The second one…”

–  Darren Moffat about ZFS encryption, c0t0d0s0.org, 11-16-2010

“There is a long section in the FAQ about licensing and production use: The OTN license just covers development, demo and testing use (Question 14) . However you can use Solaris 11 Express on your production system as well…”

– Solaris 11 Express for production use, c0t0d0s0.org, 11-16-2010

“A lot of changes found their way into the newest release of Solaris, the first release of Oracle Solaris Express 2010.11. The changes are summarized in a lengthy document, however…”

– What’s new for the Administrator in Oracle Solaris Express 2010.11, c0t0d0s0.org, 11-15-2010

Follow the links to Joerg’s blog for more details and links back to the source articles. Cheers!


Short-Take: ZFS and ZPOOL Versions

November 15, 2010

As features are added to ZFS, the ZFS (filesystem) code may change and/or the underlying ZFS pool code may change. When features are added, older versions of ZFS/ZPOOL will not be able to take advantage of these new features without the ZFS filesystem and/or pool being upgraded first.

Since ZFS filesystems exist inside of ZFS pools, the ZFS pool may need to be upgraded before a ZFS filesystem upgrade may take place. For instance, in ZFS pool version 24, support for system attributes was added to ZFS. To allow ZFS filesystems to take advantage of these new attributes, ZFS filesystem version 5 (or higher) is required. The proper order to upgrade would be to bring the ZFS pool up to at least version 24, and then upgrade the ZFS filesystem(s) as needed.

Systems running a newer version of ZFS (pool or filesystem) may “understand” an earlier version. However, older versions of ZFS will not be able to access ZFS streams from newer versions of ZFS.
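
Checking where a system currently stands is non-destructive; all of the commands below simply report versions without changing anything (the pool name “tank” is illustrative):

zpool upgrade              # lists any pools still at an older on-disk version
zpool get version tank     # reports a specific pool's version
zfs get -r version tank    # reports the filesystem version of tank and its children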

For NexentaStor users, here are the current versions of the ZFS filesystem (see “zfs upgrade -v”):

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS filesystem version
 2   Enhanced directory entries
 3   Case insensitive and File system unique identifier (FUID)
 4   userquota, groupquota properties
 5   System attributes

For NexentaStor users, here are the current versions of the ZFS pool (see “zpool upgrade -v”):

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  Snapshot user holds
 19  Log device removal
 20  Compression using zle (zero-length encoding)
 21  Deduplication
 22  Received properties
 23  Slim ZIL
 24  System attributes
 25  Improved scrub stats
 26  Improved snapshot deletion performance

As versions change, upgrading the ZFS pool and filesystem is possible using the respective upgrade command. To upgrade all imported ZFS pools, issue the following command as root:

zpool upgrade -a

Likewise, to upgrade the ZFS filesystem(s) inside the pool and all child filesystems, issue the following command as root:

zfs upgrade -r -a

The new ZFS features available to these pool and filesystem version(s) will now be available to the upgraded pools/filesystems.


In-the-Lab: Windows Server 2008 R2 Template for VMware

September 30, 2010

As it turns out, the reasonably simple act of cloning a Windows Server 2008 R2 (insert edition here) has been complicated by the number of editions, changes from the 2008 release through 2008 R2, as well as user profile management changes since its release. If you’re like me, you like to tweak your templates to limit customization steps in post-deployment. While most of these customizations can now be set up in group policies from AD, the deployment of non-AD members has become a lot more difficult – especially where custom defaults are needed or required.

Here’s my quick recipe to build a custom image of Windows Server 2008 R2 that has been tested with Standard, Enterprise and Foundation editions.

Create VM, use VMXNET3 as NIC(s), 40GB “thin” disk, using 2008 R2 Wizard

This is a somewhat “mix to taste” step. We use ISO images and encourage their use. The size of the OS volume will end up being somewhere around 8GB of actual space-on-disk after this step, making 40GB sound like overkill. However, the OS volume will bloat to 18-20GB pretty quickly after updates, roles and feature additions. Adding application(s) will quickly chew up the rest.

  • Edit Settings… ->
    • Options -> Advanced -> General -> Uncheck “Enable logging”
    • Hardware -> CD/DVD Drive 1 ->
      • Click “Datastore ISO File”
        • Browse to Windows 2008 R2 ISO image
      • Check “Connect at power on”
    • Options -> Advanced -> Boot Options -> Force BIOS Setup
      • Check “The next time the virtual machine boots, force entry into the BIOS setup screen”
  • Power on VM
  • Install Windows Server 2008 R2

Use Custom VMware Tools installation to disable “Shared Folders” feature:

It is important that VMware Tools be installed next, if for no other reason than to make the rest of the process quicker and easier. The additional step of disabling “Shared Folders” is for ESX/vSphere environments where shared folders are not supported. Since this option is installed by default, it can/should be removed in vSphere installations.

  • VM -> Guest -> Install VMware Tools ->
    • Custom -> VMware Device Drivers -> Disable “Shared Folder” feature
  • Restart

Complete Initial Configuration Tasks:

Once the initial installation is complete, we need to complete the 2008 R2 basic configuration. If you are working in an AD environment, this is not the time to join the template to the domain as GPO conflicts may hinder manual template defaults. We’ve chosen a minimal package installation based on our typical deployment profile. Some features/roles may differ in your organization’s template (mix to taste).

  • Set time zone -> Date and Time ->
    • Internet Time -> Change Settings… -> Set to local time source
    • Date and Time -> Change time zone… -> Set to local time zone
  • Provide computer name and domain -> Computer name ->
    • Enterprise Edition: W2K8R2ENT-TMPL
    • Standard Edition: W2K8R2STD-TMPL
    • Foundation Edition: W2K8R2FND-TMPL
    • Note: Don’t join to a domain just yet…
  • Restart Later
  • Configure Networking
    • Disable QoS Packet Scheduler
  • Enable automatic updating and feedback
    • Manually configure settings
      • Windows automatic updating -> Change Setting… ->
        • Important updates -> “check for updates but let me choose whether to download and install them”
        • Recommended updates -> Check “Give me recommended updates the same way I receive important updates”
        • Who can install updates -> Uncheck “Allow all users to install updates on this computer”
      • Windows Error Reporting -> Change Setting… ->
        • Select “I don’t want to participate, and don’t ask me again”
      • Customer Experience Improvement Program -> Change Setting… ->
        • Select “No, I don’t want to participate”
  • Download and install updates
    • Bring to current (may require several reboots)
  • Add features (to taste)
    • .NET Framework 3.5.1 Features
      • Check WCF Activation, Non-HTTP Activation
        • Pop-up: Click “Add Required Features”
    • SNMP Services
    • Telnet Client
    • TFTP Client
    • Windows PowerShell Integrated Scripting Environment (ISE)
  • Check for updates after new features
    • Install available updates
  • Enable Remote Desktop
    • System Properties -> Remote
      • Windows 2003 AD
        • Select “Allow connections from computers running any version of Remote Desktop”
      • Windows 2008 AD (optional)
        • Select “Allow connections only from computers running Remote Desktop with Network Level Authentication”
  • Windows Firewall
    • Turn Windows Firewall on or off (a netsh equivalent follows this checklist)
      • Home or work location settings
        • Turn off Windows Firewall
      • Public network location settings
        • Turn off Windows Firewall
  • Complete Initial Configuration Tasks
    • Check “Do not show this window at logon” and close
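
For the firewall step above, the same result is a one-liner from an elevated prompt – note that “allprofiles” turns the firewall off for the domain profile as well as the home/work and public profiles:

cmd: netsh advfirewall set allprofiles state off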

Modify and Silence Server Manager

(Optional) Parts of this step may violate your local security policies; however, it’s more than likely that a GPO will ultimately override this configuration. We find it useful to have this disabled for “general purpose” templates – especially in a testing/lab environment where the security measures will be defeated as a matter of practice.

  • Security Information -> Configure IE ESC
    • Select Administrators Off
    • Select Users Off
  • Select “Do not show me this console at logon” and close

Modify Taskbar Properties

Making the taskbar usable for your organization is another matter of taste. We like smaller icons and maximizing desktop utility. We also hate being nagged by the notification area…

  • Right-click Taskbar -> Taskbar and Start Menu Properties ->
    • Taskbar -> Check “Use small icons”
    • Taskbar -> Customize… ->
      • Set all icons to “Only show notifications”
      • Click “Turn system icons on or off”
        • Turn off “Volume”
    • Start Menu -> Customize…
      • Uncheck “Use large icons”

Modify default settings in Control Panel

Some Control Panel changes will help “optimize” the performance of the VM by disabling unnecessary features like screen saver and power management. We like to see our corporate logo on server desktops (regardless of performance implications) so now’s the time to make that change as well.

  • Control Panel -> Power Options -> High Performance
    • Change plan settings -> Turn off the display -> Never
  • Control Panel -> Sound ->
    • Pop-up: “Would you like to enable the Windows Audio Service?” – No
    • Sound -> Sounds -> Sound Scheme: No Sounds
    • Uncheck “Play Windows Startup sound”
  • Control Panel -> VMware Tools -> Uncheck “Show VMware Tools in the taskbar”
  • Control Panel -> Display -> Change screen saver -> Screen Saver -> Blank, Wait 10 minutes
  • Change default desktop image (optional)
    • Copy your desktop logo background to a public folder (i.e. “c:\Users\Public\Public Pictures”)
    • Control Panel -> Display -> Change desktop background -> Browse…
    • Find picture in browser, Picture position stretch

Disable Swap File

Disabling swap will allow the defragment step to be more efficient and will disable VMware’s advanced memory management functions. This is only temporary and we’ll be enabling swap right before committing the VM to template.

  • Computer Properties -> Visual Effects -> Adjust for best performance
  • Computer Properties -> Advanced System Settings ->
    • System Properties -> Advanced -> Performance -> Settings… ->
    • Performance Options -> Advanced -> Change…
      • Uncheck “Automatically manage paging file size for all drives”
      • Select “No paging file”
      • Click “Set” to disable swap file
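
The same change can be scripted via WMIC from an elevated prompt if you prefer; it takes effect after the next reboot, and the pagefile path below assumes the default C:\pagefile.sys:

cmd: wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
cmd: wmic pagefileset where name="C:\\pagefile.sys" delete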

Remove hibernation file and set boot timeout

It has been pointed out that the hibernation and timeout settings will get re-enabled by the sysprep operation. Removing the hibernation file will help the defragment step now; we’ll reinforce these settings in the customization wizard later.

  • cmd: powercfg -h off
  • cmd: bcdedit /timeout 5

Disable indexing on C:

Indexing the OS disk can suck performance and increase disk I/O unnecessarily. Chances are, this template (when cloned) will be heavily cached on your disk array so indexing in the OS will not likely benefit the template. We prefer to disable this feature as a matter of practice.

  • C: -> Properties -> General ->
    • Uncheck “Allow files on this drive to have contents indexed in addition to file properties”
    • Apply -> Apply changes to C:\ only (or files and folders, to taste)

Housekeeping

Time to clean-up and prepare for a streamlined template. The first step is intended to aid the copying of “administrator defaults” to “user defaults.” If this does not apply, just defragment.

Remove “Default” user settings:

  • C:\Users -> Folder Options -> View -> Show hidden files…
  • C:\Users\Default -> Delete “NTUser.*” Delete “Music, Pictures, Saved Games, Videos”

Defragment

  • C: -> Properties -> Tools -> Defragment Now…
    • Select “(C:)”
    • Click “Defragment disk”

Copy Administrator settings to “Default” user

The “formal” way of handling this step requires a third-party utility. We’re giving credit to Jason Samuel for consolidating other bloggers’ methods because he was the first to point out the importance of the “unattend.xml” file, and it really saved us some time. His blog post also includes a link to an example “unattend.xml” file that can be modified for your specific use, as we have done.
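
For reference, the node doing the heavy lifting is CopyProfile in the specialize pass of unattend.xml. A minimal fragment looks roughly like this – a sketch only, since the processorArchitecture and the other components in your file will differ:

<settings pass="specialize">
  <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64"
             publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
    <CopyProfile>true</CopyProfile>
  </component>
</settings>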

  • Jason Samuel points out a way to “easily” copy Administrator settings to defaults, by activating the CopyProfile node in an “unattend.xml” file used by sysprep.
  • Copy your “unattend.xml” file to C:\windows\system32\sysprep
  • Edit unattend.xml for environment and R2 version
    • Update offline image pointer to correspond to your virtual CD
      • E.g. wim:d:… -> wim:f:…
    • Update OS offline image source pointer, valid sources are:
      • Windows Server 2008 R2 SERVERDATACENTER
      • Windows Server 2008 R2 SERVERDATACENTERCORE
      • Windows Server 2008 R2 SERVERENTERPRISE
      • Windows Server 2008 R2 SERVERENTERPRISECORE
      • Windows Server 2008 R2 SERVERSTANDARD
      • Windows Server 2008 R2 SERVERSTANDARDCORE
      • Windows Server 2008 R2 SERVERWEB
      • Windows Server 2008 R2 SERVERWEBCORE
      • Windows Server 2008 R2 SERVERWINFOUNDATION
    • Any additional changes necessary
  • NOTE: now would be a good time to snapshot/backup the VM
  • cmd: cd \windows\system32\sysprep
  • cmd: sysprep /generalize /oobe /reboot /unattend:unattend.xml
    • Check “Generalize”
    • Shutdown Options -> Reboot
  • Login
  • Skip Activation
  • Administrator defaults are now system defaults
  • Reset Template Name
    • Computer Properties -> Advanced System Settings -> Computer name -> Change…
      • Enterprise Edition: W2K8R2ENT-TMPL
      • Standard Edition: W2K8R2STD-TMPL
      • Foundation Edition: W2K8R2FND-TMPL
    • If this will be an AD member clone, join template to the domain now
    • Restart
  • Enable Swap files
    • Computer Properties -> Advanced System Settings ->
      • System Properties -> Advanced -> Performance -> Settings… ->
      • Performance Options -> Advanced -> Change…
        • Check “Automatically manage paging file size for all drives”
  • Release IP
    • cmd: ipconfig /release
  • Shutdown
  • Convert VM to template

Convert VM Template to Clone

Use the VMware Customization Wizard to create a re-usable script for cloning the template. Now’s a good time to test that your template will create a usable clone. If it fails, go check the “red letter” items and make sure your setup is correct. The following hints will help improve your results.

  • Remove hibernation related files and reset boot delay to 5 seconds in Customization Wizard
  • Remember that the ISO is still mounted by default. Once VMs are deployed from the template, the ISO should be disconnected after the customization process is complete and any additional roles/features are added.

That’s the process we have working at SOLORI. It’s not rocket science, but if you miss an important step you’re likely to be visited by an error in “pass [specialize]” that will have you starting over. Note: this also happens when your AD credentials are bad, your license key is incorrect (version/edition mismatch, typo, etc.) or other nondescript issues – too bad the error code is unhelpful…


Short-Take: OpenSolaris mantle assumed by Illumos, OpenIndiana

September 19, 2010

While Oracle has effectively “closed the source” to key Solaris code by making updates available only when “full releases” are distributed, others in the “formerly OpenSolaris” community are stepping up to carry the mantle for the community. In an internal memo – leaked to the OpenSolaris news group last month – Oracle makes the new policy clear:

We will distribute updates to approved CDDL or other open source-licensed code following full releases of our enterprise Solaris operating system. In this manner, new technology innovations will show up in our releases before anywhere else. We will no longer distribute source code for the entirety of the Solaris operating system in real-time while it is developed, on a nightly basis.

Oracle Memo to Solaris Engineering, Aug, 2010

Frankly, Oracle clearly sees continuous availability of code updates as a threat to its control over its “best-of-breed” acquisition in Solaris. It will be interesting to see how long Oracle takes to reverse the decision (and whether or not it will be too late…)

However, at least two initiatives are stepping up to carry the mantle of “freely accessible and open” Solaris code to the community: Illumos and OpenIndiana. Illumos’ goal can be summed up as follows:

Well the first thing is that the project is designed here to solve a key problem, and that is that not all of OpenSolaris is really open source. And there’s a lot of other potential concerns in the community, but this one is really kind of a core one, and from solving this, I think a lot of other issues can be solved.

– Excerpt, Illumos Announcement Transcript

That said, it’s pretty clear that Illumos will be a distinct fork away from “questionable” code (from a licensing perspective.) We already see a lot of chatter/concerns about this in the news/mail groups.

The second announcement comes from the OpenIndiana group (part of the Illumos Foundation) and appears to be to Solaris as CentOS is to Red Hat Enterprise Linux. OpenIndiana’s press release says it like this:

OpenIndiana, an exciting new distribution of OpenSolaris, built by the community, for the community – available for immediate download! OpenIndiana is a continuation of the OpenSolaris legacy and aims to be binary and package compatible with Oracle Solaris 11 and Solaris 11 Express.

OpenIndiana Press Release, September 2010

Does any of this mean that OpenSolaris is going away or being discontinued? Strictly speaking: no – it lives on as Solaris 11 Express, et al. It does mean that control of code changes will be more tightly held by Oracle, and – judging from the reaction of the developer community – this exertion of control may slow or eliminate open source contributions to the Solaris/OpenSolaris corpus. Further, Solaris 11 won’t be “free for production use” as earlier versions of Solaris were. It also means that distributions and appliance derivatives (like NexentaStor and Nexenta Core) will be able to thrive despite Oracle’s tightening.

Illumos has yet to release a distribution; OpenIndiana has distributions available for download today.