Archive for the ‘Windows Server 2003’ Category

Short-Take: SQL Performance Notes

September 15, 2010

Here are some Microsoft SQL performance notes from discussions that inevitably crop up when discussing SQL storage:

  1. Where do I find technical resources for the current version of MS SQL?
  2. I’m new to SQL I/O performance – how can I learn the basics?
  3. The basics talk about SQL 2000, but what about performance considerations due to changes in SQL 2005?
  4. How does using SQL Server 6.x versus SQL Server 7.0 change storage I/O performance assumptions?
  5. How does TEMPDB affect storage (and memory) requirements and architecture?
  6. How does controller and disk caching affect SQL performance and data integrity?
  7. How can I use NAS for storage of SQL database in a test/lab environment?
  8. What additional considerations are necessary to implement database mirroring in SQL Server?
  9. When do SQL dirty cache pages get flushed to disk?
  10. Where can I find Microsoft’s general reference sheet on SQL I/O requirements for more information?

From performance tuning to performance testing and diagnostics:

  1. I’ve heard that SQLIOStress has been replaced by SQLIOSim: where can I find out about SQLIOSim to evaluate my storage I/O system before application testing?
  2. How do I diagnose and detect “unreported” SQL I/O problems?
  3. How do I diagnose stuck/stalled I/O problems in SQL Server?
  4. What are Bufwait and Writelog Timeout messages in SQL Server indicating?
  5. Can I control SQL Server checkpoint behavior to avoid additional I/O during certain operations?
  6. Where can I get the SQLIO benchmark tool to assess the potential of my current configuration?

That should provide a good half-day’s reading for any storage/db admin…

NexentaStor CIFS Shares with Active Directory Authentication

June 15, 2010

Sharing folders in NexentaStor is pretty easy in Workgroup mode, but Active Directory integration takes a few extra steps.  Unfortunately, it’s not (yet) as easy as point-and-click, but it doesn’t have to be too difficult either. (The following assumes/requires that the NexentaStor appliance has been correctly configured for, and joined to, Active Directory.)

Typical user and group permissions for a local hard disk in Windows.

Let’s examine the case where a domain admin group will have “Full Control” of the share, and “Everyone” will have read/execute permissions. This is a typical use case where a single share contains multiple user directories under administrative control. It’s the same configuration as local disks in a Windows environment. For our example, we’re going to mimic this setup using a CIFS share from a NexentaStor CE appliance and create the basic ACL to allow for Windows AD control.

For this process to work, we need to join the NexentaStor appliance to the Active Directory domain. The best practice is to create the machine account in AD first, assign the controlling user/group rights (if possible) and then attempt to join. It is IMPORTANT that the host name and DNS configuration of the NexentaStor appliance match domain norms, or things will come crashing to a halt pretty quickly.

That said, assuming that your DC is 1.1.1.1 and your BDC is 1.1.1.2, with a “short” domain of “SOLORI” and an FQDN of “SOLORI.MSFT”, your NexentaStor’s name server configuration (Settings->Network->Name Servers) would look something like this:

This is important because the AD queries will pull service records from the configured domain name servers. If these point to an “Internet” DNS server, the AD entries may not be reflected in that server’s database and AD authentication (as well as join) will fail.

The other way the NexentaStor appliance knows what AD Domain to look into is by its own host name. For AD authentication to work properly, the NexentaStor host name must reflect the AD domain. For example, if the FQDN of your AD domain is “SOLORI.MSFT” then your domain name on the appliance would be configured like this (Appliance->Basic Settings->Domainname):
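As a quick sanity check that the configured name servers can actually answer AD queries, you can look up the domain-controller SRV record from any host that uses the same name servers. This is a sketch using the example domain above; the nslookup step assumes the 1.1.1.1 name server from this post:

```shell
# Construct the AD domain-controller SRV record name for the example
# domain from this post (solori.msft):
domain="solori.msft"
srv="_ldap._tcp.dc._msdcs.${domain}"
echo "$srv"   # _ldap._tcp.dc._msdcs.solori.msft

# Then verify it resolves via the AD name server (1.1.1.1 in this example);
# an "Internet" DNS server will typically return no answer here:
# nslookup -type=SRV "$srv" 1.1.1.1
```

If the SRV lookup returns nothing, the join (and later authentication) will fail, which is exactly the failure mode described above.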

The next step is to create the machine account in AD using “Active Directory Users and Computers” administrator’s configuration tool. Find your domain folder and right-click “Computers” – select New->Computer from the menu and enter the computer name (no domain). The default user group assigned to administrative control should be Domain Admins. Since this works for our example, no changes are necessary so click “OK” to complete.

Now it’s time to join the AD domain from NexentaStor. Any user with permissions to join a machine to the domain will do. Armed with that information, drill down to Data Management->Shares->CIFS Server->Join AD/DNS Server and enter the AD/DNS server, AD user and user password into the configuration box:

If your permissions and credentials are good, your NexentaStor appliance is now a member of your domain. As such, it can identify AD users and groups by the unique gid and uid data created from AD. This gid and uid information will be used to create our ACLs for the CIFS share.

To uncover the gid for the “Domain Admins” and “Domain Users” groups, we issue the following from the NexentaStor NMC (CLI):

nmc@san01:/$ idmap dump -n | grep "Domain Admins"
wingroup:Domain Admins@solori.msft     ==      gid:3036392745
nmc@san01:/$ idmap dump -n | grep "Domain Users"
wingroup:Domain Users@solori.msft     ==      gid:1238392562
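If you need the numeric gid in a script (for example, to build ACL entries for several groups), it can be extracted from the idmap output with standard shell parameter expansion. A minimal sketch using the sample line above (the gid value is just the example from this post, not a universal constant):

```shell
# Sample idmap output line (example gid from this post):
line='wingroup:Domain Admins@solori.msft     ==      gid:3036392745'

# Strip everything up to and including "gid:" to leave the numeric id:
gid="${line##*gid:}"
echo "$gid"   # 3036392745
```

The same pattern works for uid lines (`uid:`) when mapping individual AD users.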

Now we can construct a CIFS share (with anonymous read/write disabled) and apply the Domain Admin gid to an ACL – just click on the share, and then click “(+) Add Permissions for Group”:

Applying administrative permissions with the AD group ID for Domain Admins.

We do similarly with the Domain User gid:

Applying the Domain User gid to CIFS share ACL.

Note that the “Domain Users” group gets only “execute” and “read” permissions while the “Domain Admins” group gets full control – just like the local disk! Now, with CIFS sharing enabled and the ACL suited to our AD authentication, we can access the share from any domain machine provided our user is in the Domain Users or Admins group.

Administrators can now create “personal” folders and assign detailed user rights just as they would do with any shared storage device. The only trick is in creating the initial ACL for the CIFS share – as above – and you’ve successfully integrated your NexentaStor appliance into your AD domain.

NOTE: If you’re running Windows Server 2008 (or SBS 2008) as your AD controller, you will need to update the share mode prior to joining the domain using the following command (from root CLI):

# sharectl set -p lmauth_level=2 smb

NOTE: I’ve also noticed that, upon reboot of the appliance (i.e. after a major update of the kernel/modules), ephemeral ID mapping takes some time to populate, during which time authentication to CIFS shares can fail. This appears to have something to do with the state of ephemeral-to-SID mapping after reboot.

To enable the mapping of unresolvable SIDs, do the following:

$ svccfg -s idmap setprop config/unresolvable_sid_mapping = boolean: true
$ svcadm refresh idmap

Microsoft Update Kills vSphere Client

June 11, 2010

Got a problem running vSphere Client today? Seeing the following pop-up when trying to access your VMware stack?

Error parsing the server...Login doesn’t actually continue; instead, it ends with the following error:

The type initializer for...

Your environment has not been hacked! It’s a problem with your most recent Windows Update, introducing a .NET exception that your “old” version of VMware vSphere Client can’t handle. While you can uninstall the offending patch(es) to resolve the problem, the best remedy is to login to VMware’s site and download the latest vSphere Client (VMware KB 1022611).

By the way, if your vSphere Client is old enough to be affected (prior to Update 1), you might need to scan your vSphere environment for updates too. If you have SnS, run over to VMware’s download page for vSphere and get the updated packages, starting with the vSphere Client: you can find the installable Client package with the vCenter Server Update 2 downloads.

vSphere, Hardware Version 7 and Hot Plug

December 5, 2009

VMware’s vSphere added hot plug features in hardware version 7 (first introduced in VMware Workstation 6.5) that were not available in the earlier version 4 virtual hardware. Virtual hardware version 7 adds the following new features to VMware virtual machines:

  • LSI SAS virtual device – provides support for Windows Server 2008 fail-over cluster configurations
  • Paravirtual SCSI devices – recently updated to allow booting, these can offer higher performance (greater throughput and lower CPU utilization) than the standard virtual SCSI adapter – especially in SAN environments where I/O-intensive applications are used. Currently supported in Windows Server 2003/2008 and Red Hat Linux 5 – although any version of Linux could be modified to support PVSCSI.
  • IDE virtual device – useful for older OSes that don’t support SCSI drivers
  • VMXNET 3 – next-generation VMXNET device with enhanced performance and enhanced networking features.
  • Hot plug virtual devices, memory and CPU – supports hot add/remove of virtual devices, memory and CPU for supported OSes.

While the “upgrade” process from version 4 to version 7 is well known, some of the side effects are not well publicised. The most obvious change after the migration from version 4 to version 7 is the effect hot plug has on the PCI bus adapters – some are now hot plug by default, including the network adapters!

Safe to remove network adapters. Really?

Note that the above example also demonstrates that the updated hardware re-enumerates the network adapters (see #3 and #4) because they have moved to a new PCI bus – one that supports hot plug. Removing the “missing” devices requires a trip to Device Manager (set devmgr_show_nonpresent_devices=1 in your shell environment first). This hot plug PCI bus also allows an administrator to mistakenly remove the device from service – potentially disconnecting tier 1 services from operations (totally by accident, of course).

Devices that can be added while the VM runs with hardware version 4

In virtual hardware version 4, only SCSI devices and hard disks could be added to a running virtual machine.

Devices that can be added while the VM runs with hardware version 7

Now, with hardware version 7, additional devices (USB and Ethernet) are available for hot add. You can also change memory and CPU on the fly, if the OS supports that feature and it is enabled in the virtual machine properties prior to running the VM:

CPU and Memory Hot Plug Properties

The hot plug NIC issue isn’t discussed in the documentation, but Carlo Costanzo at VMwareInfo.com passes on Chris Hahn’s great tip to disable hot plug behaviour in his blog post, complete with visual aids. The key is to add a new “Advanced Configuration Parameter” to the virtual machine configuration: this new parameter is called “devices.hotplug” and its value should be set to “false.” However, adding this parameter requires the virtual machine to be turned off, so it is currently an off-line fix.
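For reference, once applied, the setting ends up as a single line in the VM’s configuration (.vmx) file – this is the same devices.hotplug parameter named in Chris Hahn’s tip, shown here as it appears on disk; use the vSphere Client’s Advanced Configuration Parameters dialog rather than editing the file by hand where possible:

```
devices.hotplug = "false"
```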

Quick Take: Licensing Benefits of Virtualization

March 17, 2009

For some, the real licensing benefit of virtualization is a hidden entity. This is aided by some rather nebulous language in end user licenses from Microsoft and others.

Steve Kaplan, over at DABCC, has a brief and informative article on how licensing affects the deployment costs of virtualized Microsoft products – sometimes offsetting the cost of the hypervisor in a VMware environment, for instance.

SOLORI’s 1st take: the virtual data center has new ways to increase costs with equal or better offsets to speed ROI – especially where new initiatives are concerned. When in doubt, talk to your software vendor and press for clear information about implementation licensing costs.

SOLORI’s 2nd take: Steve’s report relies on Gartner’s evaluation which is based on Microsoft policies that are outdated. For instance, Server 2003 R2 is NOT “the only edition that will allow the customer to run one instance in a physical operating system (OS) environment and up to four instances in virtual OS environments for one license fee.” This also applies to Server 2008… (see Microsoft links).

Check out Steve’s evaluation here. Also, see Microsoft’s updated policy here and their current Server 2003 policies here.