Archive for the ‘Quick Take’ Category


Quick-Take: VMware View 4.6 and PCoIP Software Gateway

March 1, 2011

VMware View 4.6 has been released. Andre Leibovici has a nice summary of the PCoIP Software Gateway (PSG) functionality – new in 4.6 – that finally allows PCoIP to be negotiated without external VPN tunnels.

VMware View 4.6 has been just released and as everyone expected this release introduces support for external secure remote access with PCoIP, without requirement for a SSL VPN. This feature is also known as View Secure Gateway Server. VMware’s Mark Benson, in his blog article, does a very good job explaining why tunnelling PCoIP traffic through the Security Server using SSL was never a viable solution because VMware didn’t want to interfere with the advanced performance characteristics of the protocol.

Andre Leibovici – myvirtualcloud.net

Other enhancements in the 4.6 release include:

  • Enhanced USB device compatibility – View 4.6 supports USB redirection for syncing and managing iPhones and iPads with View desktops. This release also includes improvements for using USB scanners, and adds to the list of USB printers that you can use with thin clients. For more information, see the list of View Client resolved issues.
  • Keyboard mapping improvements – Many keyboard-related issues have been fixed. For more information, see the list of View Client resolved issues.
  • New timeout setting for SSO users – With the single-sign-on (SSO) feature, after users authenticate to View Connection Server, they are automatically logged in to their View desktop operating systems. This new timeout setting allows administrators to limit the number of minutes that the SSO feature is valid for. For example, if an administrator sets the time limit to 10 minutes, then 10 minutes after the user authenticates to View Connection Server, the automatic login ability expires. If the user then walks away from the desktop and it becomes inactive, when the user returns, the user is prompted for login credentials. For more information, see the VMware View Administration documentation.
  • VMware View 4.6 includes more than 160 bug fixes – For descriptions of selected resolved issues, see Resolved Issues.
  • Support for Microsoft Windows 7 SP1 operating systems

SOLORI’s Take: The addition of WAN-enabled PCoIP functionality takes VMware’s flagship desktop protocol to the next level. However, considerable tuning at the PCoIP desktop agent is necessary for most WAN configurations. The upside is the solution maintains PCoIP’s UDP basis without tunneling inside TCP.

Since PCoIP has always been AES encrypted by default, this is not really an issue of security but one of performance and delivery. Right-sizing the PCoIP payload for the intended WAN application will be challenging for most, so expect to see PSG use in campus-wide applications where security of PCoIP (UDP) has been difficult.
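For those starting that tuning exercise, here is a minimal back-of-the-envelope sketch (Python) for dividing a WAN link among concurrent PCoIP sessions. The 15% headroom allowance and the 256 kbps per-session floor are illustrative assumptions, not Teradici or VMware guidance – substitute numbers from your own pilot testing before touching the PCoIP agent settings.

```python
#!/usr/bin/env python3
# Rough sketch for "right-sizing" PCoIP (UDP) sessions over a WAN link.
# The overhead and floor values are assumptions for illustration only.

def pcoip_budget(wan_kbps, sessions, overhead_pct=15, floor_kbps=256):
    """Split usable WAN capacity across concurrent PCoIP sessions."""
    usable = wan_kbps * (1 - overhead_pct / 100.0)   # leave headroom for VoIP, etc.
    per_session_cap = usable / max(sessions, 1)
    if per_session_cap < floor_kbps:
        raise ValueError(
            f"{sessions} sessions need >= {sessions * floor_kbps:.0f} kbps usable; "
            f"link only provides {usable:.0f} kbps")
    return per_session_cap

if __name__ == "__main__":
    # Example: 10 Mbps WAN link, 20 concurrent View sessions
    cap = pcoip_budget(wan_kbps=10_000, sessions=20)
    print(f"Cap each PCoIP session at or below {cap:.0f} kbps")
```

For a 10 Mbps link and 20 sessions this suggests a per-session ceiling around 425 kbps – a starting point for the agent-side bandwidth limit, nothing more.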

For a twist on PSG using Internet connections with dynamically assigned IP addresses, check out Gabe’s Virtual World post – PowerShell included!

[updated to include links to VMware’s View release notes and a link to Gabe’s post.]


Quick-Take: Merry Christmas and Happy Holidays

December 15, 2010

Merry Christmas and happy holidays from our family to yours! God has truly blessed us this year, and we’ve been privileged to share some of those blessings with you.


Quick-Take: Is Your Marriage a Happy One?

November 12, 2010

I came across a recent post by Chad Sakac (VP, VMware Alliance at EMC) discussing how vendors drive customer specifications down from broader goals to individual features or implementation sets (I’m sure VCE was not in mind at the time). When vendors insist on framing the “client argument” in terms of specific features and proprietary approaches, I have to agree that Chad is spot on. Here’s why:

First, it helps when vendors move beyond the “simple thinking” of infrastructure elements as a grid of point solutions and toward an “organic marriage of tools” – often with overlapping qualities. Some marriages begin with specific goals, some develop them along the way and others change course drastically and without much warning. The rigidness of point approaches rarely accommodates growth beyond the set of assumptions that created it in the first place. Likewise, a “laser focus” on specific features detracts from the overall goal: the present and future value of the solution.

When I married my wife, we both knew we wanted kids. Some of our friends married and “never” wanted kids, only to discover a child on the way and subsequent fulfillment through raising them. Still, others saw a bright future strained with incompatibility and the inevitable divorce. Such is the way with marriages.

Second, it takes vision to solve complex problems. Our church (Church of the Highlands in Birmingham, Alabama) takes a very cautious position on the union between souls: requiring that each new couple seeking a marriage give it the due consideration and compatibility testing necessary to have a real chance at a successful outcome. A lot of “problems” we would encounter were identified before we were married, and when they finally popped up we knew how to identify and deal with them properly.

Couples that see “counseling” as too obtrusive (or unnecessary) have other options. While the initial investment of money is often equivalent, the return on investment is not so certain. Uncovering incompatibilities “after the sale” makes for a difficult and too often doomed outcome (hence the 50% divorce rate).

This same drama plays out in IT infrastructures where equally elaborate plans, goals and unexpected changes abound. You date (prospecting and trials), you marry (close) and you are either fruitful (happy client), disappointed (unfulfilled promises) or divorced. Often, it’s not the plan that failed but the failure to set and manage expectations and address problems that causes the split.

Our pastor could not promise that our marriage would last forever: our success is left to God and the two of us. But he did help us to make decisions that would give us a chance at a fruitful union. Likewise, no vendor can promise a flawless outcome (if they do, get a second opinion), but they can (and should) provide the necessary foundation for a successful marriage of the technology to the business problem.

Third, the value of good advice is not always obvious and never comes without risk. My wife and I were somewhat hesitant on counseling before marriage because we were “in love” and were happy to be blind to the “problems” we might face. Our church made it easy for us: no counseling, no marriage. Businesses can choose to plot a similar course for their clients with respect to their products (especially the complex ones): discuss the potential problems with the solution BEFORE the sale or there is no sale. Sometimes this takes a lot of guts – especially when the competition takes the route of oversimplification. Too often IT sales see identifying initial problems (with their own approach) as too high a risk and too great an obstacle to the sale.

Ultimately, when you give due consideration to the needs of the marriage, you have more options and are better equipped to handle the inevitable trials you will face. Whether it’s an unexpected child on the way, or an unexpected up-tick in storage growth, having the tools in-hand to deal with the problem lessens its severity. The point is, being prepared is better than the assumption of perfection.

Finally, the focus has to be what YOUR SOLUTION can bring to the table: not how you think your competition will come up short. In Chad’s story, he’s identified vendors disqualifying one another’s solutions based on their (institutional) belief (or disbelief) in a particular feature or value proposition. That’s all hollow marketing and puffery to me, and I agree completely with his conclusion: vendors need to concentrate on how their solution(s) provide present and future value to the customer and refrain from the “art” of narrowly framing their competitors.

Features don’t solve problems: the people using them do. The presence (or absence) of a feature simply changes the approach (i.e. the fallacy of feature parity). As Chad said, it’s the TOTALITY of the approach that derives value – and that goes way beyond individual features and products. It’s clear to me that a lot of counseling takes place between Sakac’s EMC team and their clients to reach those results. Great job, Chad, you’ve set a great example for your team!


Quick-Take: ZFS and Early Disk Failure

September 17, 2010

Anyone who’s discussed storage with me knows that I “hate” desktop drives in storage arrays. When using SAS disks as a standard, that’s typically a non-issue because there’s rarely a distinction between “desktop” and “server” disks in the SAS world. Therefore, you know I’m talking about the other “S” word – SATA. Here’s a tale of SATA woe that I’ve seen repeatedly cause problems for inexperienced ZFS’ers out there…

When volumes fail in ZFS, the “final” indicator is data corruption. Fortunately, ZFS checksums recognize corrupted data and can take action to correct and report the problem. But that’s like treating cancer only after you’ve experienced the symptoms. In fact, the failing disk will likely begin to “under-perform” well before actual “hard” errors show up as read, write or checksum errors in the ZFS pool. Depending on the reason for “under-performing”, this can affect the performance of any controller, pool or enclosure that contains the disk.

Wait – did he say enclosure? Sure. Just like a bad NIC chattering on a loaded network, a bad SATA device can occupy enough of the available service time for a controller or SAS bus (i.e. JBOD enclosure) to make a noticeable performance drop in otherwise “unrelated” ZFS pools. Hence, detection of such events is an important thing. Here’s an example of an old WD SATA disk failing as viewed from the NexentaStor “Data Sets” GUI:

Disk statistics showing the failing drive – something is wrong with device c5t84d0...

Device c5t84d0 is having some serious problems. Its busy time is 7x higher than its counterparts, and its average service time is 14x higher. Because this disk is a member of a RAIDz group, the entire group is being held back by the “under-performing” member. From this snapshot, it would appear that NexentaStor’s “web GUI” is giving us good information about the disk, but that assumption would not be correct. In fact, the “web GUI” only reports “real time” data while the disk is under load. In the case of a lightly loaded zpool, the statistics may not even be reported.

However, from the command shell, historic and real-time access to per-device performance is available. The output of “iostat -exn” shows the count of all errors for devices since the last time counters were reset, and average I/O loads for each:

Device statistics from 'iostat' show error and I/O history.
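If you would rather not stare at the GUI waiting for a load window, the same comparison can be scripted. Here is a rough Python sketch that shells out to `iostat -exn` and flags any device whose average service time sits well above the median of its peers. The column order (r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b s/w h/w trn tot device) is assumed from a stock illumos/NexentaStor iostat and may need adjusting for your build.

```python
#!/usr/bin/env python3
# Sketch: flag "under-performing" ZFS leaf devices from `iostat -exn` output.
# Assumed column order (stock illumos/NexentaStor iostat):
#   r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b s/w h/w trn tot device
import statistics
import subprocess

ASVC_T = 7   # average service time (ms)
BUSY = 9     # %b (percent of time busy)

def sample():
    """Return {device: (asvc_t, %busy)} parsed from one `iostat -exn` pass."""
    out = subprocess.check_output(["iostat", "-exn"], text=True)
    stats = {}
    for line in out.splitlines():
        fields = line.split()
        if len(fields) != 15:
            continue
        try:
            values = [float(x) for x in fields[:14]]
        except ValueError:
            continue  # skip the header row
        stats[fields[14]] = (values[ASVC_T], values[BUSY])
    return stats

def flag_outliers(stats, factor=3.0):
    """Report devices whose service time is far above the median of the set."""
    if not stats:
        return
    median_svc = statistics.median(svc for svc, _ in stats.values())
    for dev, (svc, busy) in sorted(stats.items()):
        if median_svc > 0 and svc > factor * median_svc:
            print(f"{dev}: asvc_t {svc:.1f} ms vs median {median_svc:.1f} ms "
                  f"({busy:.0f}% busy) -- possible failing/slow disk")

if __name__ == "__main__":
    flag_outliers(sample())
```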

The output of iostat clearly shows this disk has serious hardware problems. It indicates hardware errors as well as transmission errors for the device recognized as ‘c5t84d0’ and the I/O statistics – chiefly read, write and average service time – implicate this disk as a performance problem for the associated RAIDz group. So, if the device is really failing, shouldn’t there be a log report of such an event? Yes, and here’s a snip from the message log showing the error:

SCSI error with ioc_status=0x8048 reported in /var/log/messages for the failing device.

However, in this case, the log is not “full” of messages of this sort. In fact, the error only showed up under the stress of an iozone benchmark (run from the NexentaStor ‘nmc’ console). I can (somewhat safely) conclude this to be a device failure since at least one other disk in this group is of the same make, model and firmware revision as the culprit. The interesting aspect of this “failure” is that it does not result in a read, write or checksum error for the associated zpool. Why? Because the device is only loosely coupled to the zpool as a constituent leaf device, which also implies that the device errors were recoverable by either the drive or the device driver (mapping around a bad/hard error).

Since these problems are being resolved at the device layer, the ZFS pool is “unaware” of the problem as you can see from the output of ‘zpool status’ for this volume:

Output of 'zpool status' – problems with the disk device are as yet undetected at the zpool layer.

This doesn’t mean that the “consumers” of the zpool’s resources are “unaware” of the problem, as the disk error has manifested itself in the zpool as higher delays, lower I/O throughput and subsequently less pool bandwidth. In short, if the error is persistent under load, the drive has a correctable but catastrophic (to performance) problem and will need to be replaced. If, however, the error goes away, it is possible that the device driver has suitably corrected for the problem and the drive can stay in place.

SOLORI’s Take: How do we know if the drive needs to be replaced? Time will establish an error rate. In short, running the benchmark again and watching the error counters for the device will determine if the problem persists. Eventually, the errors will either go away or they won’t. For me, I’m hoping that the disk fails to give me an excuse to replace the whole pool with a new set of SATA “eco/green” disks for more lab play. Stay tuned…
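A hedged sketch of that “watch the counters” approach: sample the `iostat -exn` error columns before and after the benchmark run and report any device whose soft, hard or transport error counts moved. The same column-order assumption as the earlier sketch applies, and the ten-minute sleep is just a stand-in for however long your benchmark actually takes.

```python
#!/usr/bin/env python3
# Sketch: establish an error *rate* by diffing the `iostat -exn` error columns
# (s/w, h/w, trn) before and after a benchmark run. Same column-order
# assumption as the service-time sketch above.
import subprocess
import time

def error_counters():
    out = subprocess.check_output(["iostat", "-exn"], text=True)
    counters = {}
    for line in out.splitlines():
        fields = line.split()
        if len(fields) != 15:
            continue
        try:
            soft, hard, trn = (int(float(x)) for x in fields[10:13])
        except ValueError:
            continue  # skip the header row
        counters[fields[14]] = (soft, hard, trn)
    return counters

if __name__ == "__main__":
    before = error_counters()
    print("Start the iozone/benchmark load now; sampling again in 10 minutes...")
    time.sleep(600)  # stand-in for "wait until the benchmark finishes"
    after = error_counters()
    for dev, b in sorted(before.items()):
        delta = tuple(a - x for a, x in zip(after.get(dev, b), b))
        if any(delta):
            print(f"{dev}: +{delta[0]} soft, +{delta[1]} hard, +{delta[2]} transport "
                  f"errors during the run -- watch or replace this device")
```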

SOLORI’s Take: In all of its flavors, 1.5Gbps, 3Gbps and 6Gbps, I find SATA drives inferior to “similarly” spec’d SAS for just about everything. In my experience, the worst SAS drives I’ve ever used have been more reliable than most of the SATA drives I’ve used. That doesn’t mean there are “no” good SATA drives, but it means that you really need to work within tighter boundaries when mixing vendors and models in SATA arrays. On top of that, the additional drive port and better typical sustained performance make SAS a clear winner over SATA (IMHO). The big exception to the rule is economy – especially where disk arrays are used for on-line backup – but that’s another discussion…


Quick-Take: NexentaStor AD Shares in 100% Virtual SMB

July 19, 2010

Here’s a maintenance note for SMB environments attempting 100% virtualization and relying on SAN-based file shares to simplify backup and storage management: beware the chicken-and-egg scenario on restart before going home to capture much-needed Zzz’s. If your domain controller is virtualized and its VMDK file lives on the SAN/NAS, you’ll need to restart SMB services on the NexentaStor appliance before leaving the building.

Here’s the scenario:

  1. An after-hours SAN upgrade in a non-HA environment (maybe Auto-CDP for BC/DR, but no active fail-over);
  2. Shutdown of SAN requires shutdown of all dependent VM’s, including domain controllers (AD);
  3. End-user and/or maintenance plans are dependent on CIFS shares from SAN;
  4. Authentication of CIFS shares on NexentaStor is AD-based;

Here’s the typical maintenance plan (detail omitted):

  1. Ordered shutdown of non-critical VM’s (including UpdateManager, vMA, etc.);
  2. Ordered shutdown of application VM’s;
  3. Ordered shutdown of resource VM’s;
  4. Ordered shutdown of AD server VM’s (minus one, see step 7);
  5. Migrate/vMotion remaining AD server and vCenter to a single ESX host;
  6. Ordered shutdown of ESX hosts (minus one, see step 8);
  7. vSphere Client: Log-out of vCenter;
  8. vSphere Client: Log-in to remaining ESX host;
  9. Ordered shutdown of vCenter;
  10. Ordered shutdown of remaining AD server;
  11. Ordered shutdown of remaining ESX host;
  12. Update SAN;
  13. Reboot SAN to update checkpoint;
  14. Test SAN update – restoring previous checkpoint if necessary;
  15. Power-on ESX host containing vCenter and AD server (see step 8);
  16. vSphere Client: Log-in to remaining ESX host;
  17. Power-on AD server (through to VMware Tools OK);
  18. Restart SMB service on NexentaStor (see the sketch after this list);
  19. Power-on vCenter;
  20. vSphere Client: Log-in to vCenter;
  21. vSphere Client: Log-out of ESX host;
  22. Power-on remaining ESX hosts;
  23. Ordered power-on of remaining VM’s;
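To illustrate steps 17 and 18, here is a rough Python sketch that waits for the virtualized domain controller to answer DNS again and then restarts the appliance’s SMB service. The host names, the root ssh access and the SMF service name (svc:/network/smb/server:default on the underlying illumos) are assumptions – sites restricted to the NMC should use its equivalent “setup network service” command instead.

```python
#!/usr/bin/env python3
# Sketch of steps 17-18: wait for the virtualized AD/DNS server to come back,
# then restart the NexentaStor SMB/CIFS service so it can re-bind to AD.
# DC_HOST, SAN_HOST, root ssh access and the SMF service name are assumptions.
import socket
import subprocess
import time

DC_HOST = "dc01.example.local"        # hypothetical AD/DNS server name
SAN_HOST = "nexenta01.example.local"  # hypothetical NexentaStor appliance

def wait_for_dns(host, timeout=900, interval=15):
    """Poll until the domain controller resolves (AD-integrated DNS is back)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            socket.gethostbyname(host)
            return True
        except OSError:
            time.sleep(interval)
    return False

if __name__ == "__main__":
    if not wait_for_dns(DC_HOST):
        raise SystemExit("AD/DNS never came back -- do not restart SMB yet")
    # Restart the appliance's SMB service so it re-acquires AD credentials.
    subprocess.check_call(["ssh", "root@" + SAN_HOST,
                           "svcadm restart svc:/network/smb/server:default"])
    print("SMB restarted; verify the AD shares before powering on vCenter.")
```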

A couple of things to note in an AD environment:

  1. NexentaStor requires the use of AD-based DNS for AD integration;
  2. AD-based DNS will not be available at SAN re-boot if all DNS servers are virtual and only one SAN is involved;
  3. Lack of DNS resolution on re-boot will cause a failure for DNS name based NTP service synchronization;
  4. NexentaStor SMB service will fail to properly initialize AD credentials;
  5. VMware 4.1 now pushes AD authentication all the way to ESX hosts, enabling better credential management and security but creating a potential AD dependency as well;
  6. Using auto-startup order on the remaining ESX host for AD and vCenter could automate the process (steps 17 & 19); however, I prefer the “manual” approach after a SAN upgrade in case an upgrade failure is detected only after the ESX host is restarted (i.e. storage service interaction in NFS/iSCSI after upgrade).

SOLORI’s Take: This is a great opportunity to re-think storage resources in the SMB as the linchpin to 100% virtualization.  Since most SMB’s will have a tier-2 or backup NAS/SAN (auto-sync or auto-CDP) for off-rack backup, leveraging a shared LUN/volume from that SAN/NAS for a backup domain controller is a smart move. Since tier-2 SAN’s may not have the IOPs to run ALL mission critical applications during the maintenance interval, the presence of at least one valid AD server will promote a quicker RTO, post-maintenance, than coming up cold. [This even works with DAS on the ESX host]. Solution – add the following and you can ignore step 15:

3a. Migrate always-on AD server to LUN/volume on tier-2 SAN/NAS;

24. Migrate always-on AD server from LUN/volume on tier-2 SAN/NAS back to tier-1;

Since even vSphere Essentials Plus has vMotion now (a much requested and timely addition), collapsing all remaining VM’s to a single ESX host is a no-brainer. However, migrating the storage is another issue which cannot be resolved without either a shutdown of the VM (off-line storage migration) or the Enterprise/Enterprise Plus version of vSphere. That is why the migration of the AD server from tier-2 is reserved for last (step 17) – it will likely need to be shut down to migrate the storage between SAN/NAS appliances.


Quick Take: Q1 DRAM Price Follow-up, 8GB DDR3 Below Target

March 3, 2010

In September 2009 we predicted that average 8GB DIMM prices (DDR2 and DDR3) would reach $565/stick by year end (with DDR3 being higher than DDR2), and now we’re seeing a reversal of fortunes for DDR2. At year end, the average price for benchmark DDR2/DDR3 was $591 retail, with promotional pricing pushing that below $550 as predicted. Today, we’re seeing DDR3 begin to overtake DDR2 in the 8GB ECC category, dropping below $510/stick, while DDR2 climbs to $550/stick (promotional, on $625/stick retail).

In 4GB ECC configurations, DDR2 enjoys only a slight retail advantage (13%) while promotional pricing (likely due to inventory reduction initiatives) is providing a bit better value short term. However, the price gap is only half the power gap, with DDR3 delivering a greater than 35% reduction in power over its DDR2 equivalent (about $1.25/year/stick at $0.10/kWh). The honeymoon is almost over for DDR2.
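The arithmetic behind the per-stick figure is simple enough to show. The sketch below pairs the 800MHz DDR2 and 1333MHz DDR3 4GB parts (and the two 8GB parts) from the tables that follow; which parts you choose to pair is, of course, a judgment call.

```python
#!/usr/bin/env python3
# Worked example behind the "~$1.25/year/stick" figure: annual energy-cost
# delta between a DDR2 and a DDR3 registered DIMM at $0.10/kWh, using the
# operating wattages listed in the pricing tables below.

HOURS_PER_YEAR = 8760
RATE_PER_KWH = 0.10   # $/kWh, as quoted in the article

def annual_savings(ddr2_watts, ddr3_watts, rate=RATE_PER_KWH):
    delta_kwh = (ddr2_watts - ddr3_watts) * HOURS_PER_YEAR / 1000.0
    return delta_kwh * rate

if __name__ == "__main__":
    # 4GB DDR2-800 (5.400W) vs 4GB DDR3-1333 (3.960W)
    print(f"4GB sticks: ~${annual_savings(5.400, 3.960):.2f}/year/stick")
    # 8GB DDR2-667 (7.236W) vs 8GB DDR3-1066 (4.110W)
    print(f"8GB sticks: ~${annual_savings(7.236, 4.110):.2f}/year/stick")
```

The 4GB pairing lands at roughly $1.26/year/stick; the 8GB pairing saves more than double that.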

Benchmark Server (Spot) Memory Pricing – Dual Rank DDR2 Only (DDR2 Reg. ECC, 1.8V)

KVR800D2D4P6/4G – 4GB 800MHz DDR2 ECC Reg with Parity CL6 DIMM Dual Rank, x4 (5.400W operating)
Jun ’09: $100.00 | Sep ’09: $117.00 (up 17%) | Dec ’09: $140.70 (up 23%, promo) | Mar ’10: $128.90 ($151 retail)

KVR667D2D4P5/4G – 4GB 667MHz DDR2 ECC Reg with Parity CL5 DIMM Dual Rank, x4 (5.940W operating)
Jun ’09: $80.00 | Sep ’09: $103.00 (up 29%) | Dec ’09: $97.99 (down 5%) | Mar ’10: $128.74 ($149 retail)

KVR667D2D4P5/8G – 8GB 667MHz DDR2 ECC Reg with Parity CL5 DIMM Dual Rank, x4 (7.236W operating)
Jun ’09: $396.00 | Sep ’09: $433.00 | Dec ’09: $433.00 (promo) | Mar ’10: $550.00 (promo; $625 retail)
Benchmark Server (Spot) Memory Pricing – Dual Rank DDR3 Only (DDR3 Reg. ECC, 1.5V)

KVR1333D3D4R9S/4G – 4GB 1333MHz DDR3 ECC Reg w/Parity CL9 DIMM Dual Rank, x4 w/Therm Sen (3.960W operating)
Jun ’09: $138.00 | Sep ’09: $151.00 (up 10%) | Dec ’09: $135.99 (down 10%) | Mar ’10: $150.74 ($170 retail)

KVR1066D3D4R7S/4G – 4GB 1066MHz DDR3 ECC Reg w/Parity CL7 DIMM Dual Rank, x4 w/Therm Sen (5.085W operating)
Jun ’09: $132.00 | Sep ’09: $151.00 (up 15%) | Dec ’09: $137.59 (down 9%, promo) | Mar ’10: $150.74 ($170 retail)

KVR1066D3D4R7S/8G – 8GB 1066MHz DDR3 ECC Reg w/Parity CL7 DIMM Dual Rank, x4 w/Therm Sen (4.110W operating)
Jun ’09: $1035.00 | Sep ’09: $917.00 (down 11.5%) | Dec ’09: $667.00 (down 28%) | Mar ’10: $506.59 ($584 retail, avail. 3/15)

KVR1333D3D4R9S/8GHA – 8GB 1333MHz DDR3 ECC Reg CL9 DIMM 2R x4 w/TS Server Hynix A (4.635W operating)
Mar ’10: $584.00

SOLORI’s Take: With strong DDR3 demand and shortfalls in DDR2 supply (according to DRAMeXchange), the only things keeping DDR3 prices above DDR2 at this point are demand and inventory. As Q2/2010 introduces a rush of new workstation and server products based on DDR3 systems, the DRAM production ramp will eventually stabilize demand somewhere towards the end of Q3/2010. Meanwhile, technology companies like VMware, Microsoft, Intel and AMD are betting on new infrastructure spending on operating systems, virtualization and hardware refresh to drive up economic market factors. If the global economic crisis deepens, this anticipated spending spree could be short-lived and its impact shallow.


Quick-Take: AMD Dodeca-core Opteron, Real Soon Now

March 3, 2010

In a recent blog, John Fruehe recounted a few highlights from the recent server analyst event at AMD/Austin concerning the upcoming release of AMD’s new 12-core (dodeca) Opteron 6100 series processor – previously known as Magny-Cours. While not much “new” was officially said outside of NDA privilege, here’s what we’re reading from his post:

1. Unlike previous launches, AMD is planning to have “boots on the ground” this time, with vendors and supply alignments in place to ship product against anticipated demand. While it is now well known that Magny-Cours has been shipping to certain OEM and institutional customers for some time, our guess is that the 2000/8000-series six-core HE parts have been hard to come by for a reason – and that reason has 12 cores, not 6;

Obviously the big topic was the new AMD Opteron™ 6000 Series platforms that will be launching very soon.  We had plenty of party favors – everyone walked home with a new 12-core AMD Opteron 6100 Series processor, code name “Magny-Cours”.

– Fruehe on AMD’s pending launch

2. Timing is right! With Intel’s 8-core Nehalem-EX and 6-core Core i7/Nehalem-EP being demoed, there is more pressure than ever for AMD to step up with a competitive player. Likewise, DDR3 is neck-and-neck with DDR2 in affordability and way ahead with low-power variants that more than compensate for power-hungry CPU profiles. AMD needs to deliver mainstream performance with 24 cores and 96GB DRAM within the power envelope of 12 cores and 64GB to be a player. With 1.35V DDR3 parts paired to better power efficiency in the 6100, this could be a possibility;

We demonstrated a benchmark running on two servers, one based on the Six-Core AMD Opteron processor codenamed “Istanbul,” and one 12-core “Magny-Cours”-based platform.  You would have seen that the power consumption for the two is about the same at each utilization level.  However, there is one area where there was a big difference – at idle.  The “Magny-Cours”-based platform was actually lower!

– AMD’s Fruehe on Opteron 6100’s power consumption
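The memory half of that power argument can be sanity-checked with the operating wattages published in the DRAM pricing tables earlier in this archive. The DIMM counts below (12 x 8GB DDR3 versus 8 x 8GB DDR2) are illustrative assumptions, not a specific OEM configuration:

```python
#!/usr/bin/env python3
# Rough check of the "more memory in the same power envelope" argument, using
# the 8GB DIMM operating wattages from the pricing tables in this archive.
# The DIMM counts are illustrative, not a specific OEM configuration.

DDR3_8GB_W = 4.110   # KVR1066D3D4R7S/8G operating power
DDR2_8GB_W = 7.236   # KVR667D2D4P5/8G operating power

ddr3_96gb = 12 * DDR3_8GB_W   # 12 x 8GB DDR3 = 96GB
ddr2_64gb = 8 * DDR2_8GB_W    # 8 x 8GB DDR2 = 64GB

print(f"96GB DDR3: {ddr3_96gb:.1f}W vs 64GB DDR2: {ddr2_64gb:.1f}W")
print(f"50% more capacity using {ddr2_64gb - ddr3_96gb:.1f}W less DIMM power")
```

On those assumptions, 50% more DDR3 capacity still draws several watts less than the smaller DDR2 complement – which is the headroom AMD needs for the extra cores.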

3. Performance in scaled virtualization matters – raw single-threaded performance is secondary. In virtual architectures, clusters of systems must perform as one in an orchestrated ballet of performance and efficiency seeking. For some clusters, dynamic load migration to favour power consumption is a priority – relying on solid power efficiency under high load conditions. For other clusters, workload is spread to maximize performance available to key workloads – relying on solid power efficiency under generally light loads. For many environments, multi-generational hardware will be commonplace and AMD is counting on its wider range of migration compatibility to hold-on to customers that have not yet jumped ship for Intel’s Nehalem-EP/EX.

“We demonstrated Microsoft Hyper-V running on two different servers, one based on a Quad-Core AMD Opteron processor codenamed “Barcelona” (circa 2007) and a brand new “Magny-Cours”-based system. …companies might have problems moving a 2010 VM to a 2007 server without limiting the VM features. (For example, in order to move a virtual machine from an Intel  “Nehalem”-based system to a “Harpertown” [or earlier] platform, the customer must not enable nested paging in the “Nehalem” virtual machine, which can reduce the overall performance of the VM.)”

– AMD’s Fruehe, extolling the virtues of Opteron generational compatibility

SOLORI’s Take: It would appear that Magny-Cours has more under the MCM hood than a pair of Istanbul processors (as previously charged). To manage better idle performance and constant power performance in spite of a two-to-one core ratio and a similar 45nm process, AMD must have added much better power management to the process and feature set; core speed, however, is not among the improvements. With the standard “Maranello” 6100 series coming in at 1.9, 2.1 and 2.2 GHz, with an HE variant at 1.7GHz and an SE version running at 2.3GHz, finding parity in an existing cluster of 2.4, 2.6 and 2.8 GHz six-core servers may be difficult. Still, Maranello/G34 CPUs will come in at 85, 115 and 140W TDP.

That said, Fruehe has a point on virtualization platform deployment and processor speed: it is not necessary to trim out an entire farm with top-bin parts – only a small portion of the cluster needs to operate with top-band performance marks. The rest of the market is looking for predictable performance, scalability and power efficiency per thread. While SMT makes a good run at efficiency per thread, it does so at the expense of predictable performance. Here’s hoping that AMD’s C1E (or whatever their power-sipping special sauce will be called) does nothing to interfere with predictable performance…

As we’ve said before, memory capacity and bandwidth (as a function of system power and core/thread capacity) are key factors in a CPU’s viability in a virtualization stack. With 12 DIMM slots per CPU (3-DPC, 4-channel), AMD inherits an enviable position over Intel’s current line-up of 2P solutions by being able to offer 50% more memory per cluster node without resorting to 8GB DIMMs. That said, it’s up to OEM’s to deliver rack server designs that feature 12 DIMM per CPU and not hold back with only 8 DIMM variants. In the blade and 1/2-size market, cramming 8 DIMM per board (effectively 1-DPC for 2P Magny-Cours) can be a challenge let alone 24 DIMMs! Perhaps we’ll see single-socket blades with 12 DIMMs (12-cores, 48/96GB DDR3) or 2P blades with only one 12 DIMM memory bank (one-hop, NUMA) in the short term.

SOLORI’s 2nd Take: It makes sense that AMD would showcase their leading OEM partners because their success will be determined by what those OEM’s bring to market. With VDI finally poised to make a big market impact, we’d expect to see the first systems delivered with 2-DPC configurations (8 DIMM per CPU, economically 2.5GB/core) which could meet both VDI and HPC segments equally. However, with Windows 7 gaining momentum, what’s good for HPC might not cut it for long in the VDI segment where expectations of 4-6 VM’s per core at 1-2GB/VM are mounting.
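As a rough check on that concern, the sketch below works out the memory a 2P Magny-Cours node would need at 4-6 VMs per core and 1-2GB per VM, versus what 4GB DIMMs deliver at 2-DPC and 3-DPC populations; the DIMM size and counts are illustrative assumptions.

```python
#!/usr/bin/env python3
# Quick VDI memory sizing sanity check for a 2P "Magny-Cours" node, using the
# article's assumptions of 4-6 VMs per core at 1-2GB per VM. The 4GB DIMM size
# and the 8/12 DIMM-per-CPU populations are illustrative assumptions.

CORES = 2 * 12            # two 12-core Opteron 6100s

def memory_needed(vms_per_core, gb_per_vm):
    return CORES * vms_per_core * gb_per_vm

if __name__ == "__main__":
    low = memory_needed(4, 1)     # light-desktop assumption
    high = memory_needed(6, 2)    # heavy-desktop assumption
    print(f"VDI demand: {low}-{high} GB per 2P node")
    # Compare against typical build-outs with 4GB DIMMs:
    print(f"2-DPC / 8 DIMMs per CPU : {2 * 8 * 4} GB")
    print(f"3-DPC / 12 DIMMs per CPU: {2 * 12 * 4} GB")
```

Even a fully populated 3-DPC node with 4GB DIMMs only covers the low end of that range, which is why 2-DPC builds may not cut it for long as VDI density expectations climb.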

Besides the launch date, what wasn’t said was who these OEM’s are and how many systems they’ll be delivering at launch. Whoever they are, they need to be (1) financially stronger than AMD, (2) in an aggressive marketing position with respect to today’s key growth market (server and desktop virtualization), and (3) willing to put AMD-based products “above the fold” on their marketing and e-commerce initiatives. AMD needs to “represent” in a big way before a tide of new technologies makes them yesterday’s news. We have high hopes that AMD’s recent “perfect” execution streak will continue.