Archive for the ‘Ethics and Technology’ Category


Quick-Take: Buying an iPad2 on Friday

March 10, 2011

New iPad2, launching in White and Black “on day one”

If you’re chomping at the bit to buy an iPad2 on launch day, the question remains: which one to buy? There seem to be many options and ways to go, but ultimately this will end up being a personal decision. However, there is an economic and functional rationale you should consider before coughing up nearly $1K on an arguably cool device.

Given the choices of models and network options, this should definitely NOT be an impulse buy, and I hope you look at it in a reasoned – if not somewhat giddy – way. Here are my thoughts for personal acquisition (not for businesses – you guys need to run a POC for at least 3-6 months!):

Basically, there are three models: WiFi, WiFi+3G/ATT & WiFi+3G/Verizon, with three flash variants of each – 16GB, 32GB & 64GB – and two color variants of those – black & white. That’s a total of 18 different SKUs for the iPad2. So here’s how the process breaks down to me:

  1. Since WiFi/Bluetooth is the same on all models, choose first between 3G (includes aGPS) and WiFi-only (no aGPS);
    1. Choose a carrier: ATT or Verizon (both are month-to-month):
      1. ATT offers two options for iPad2:
        1. $15/month for 250MB/month; with automatic charge of $15/250MB overage;
        2. $25/month for 2GB/month; with automatic charge of $10/1GB overage;
      2. Verizon offers four options for iPad2 WiFi-only + MiFi:
        1. $20/month for 1GB/month; with automatic charge of $20/GB overage;
        2. $35/month for 3GB/month; with automatic charge of $10/GB overage;
        3. $50/month for 5GB/month; with automatic charge of $10/GB overage;
        4. $80/month for 10GB/month; with automatic charge of $10/GB overage;
        5. Note: MiFi device is free only with a 2-year contract.
      3. Verizon offers four plans for iPad2 WiFi+3G:
        1. $20/month for 1GB/month; with automatic charge of $20/GB overage;
        2. $35/month for 3GB/month; with automatic charge of $10/GB overage;
        3. $50/month for 5GB/month; with automatic charge of $10/GB overage;
        4. $80/month for 10GB/month; with automatic charge of $10/GB overage;
    2. Choose memory size; 3G Models will cost according to their memory size regardless of carrier:
      1. $630 for 16GB, black or white;
      2. $730 for 32GB, black or white;
      3. $839 for 64GB, black or white;
    3. Choose color;
      1. Black;
      2. White;
  2. For WiFi-only models, you’ll give up accurate location (no aGPS) but save money (see MiFi above for mobile access):
    1. Choose memory size; WiFi-only models cost less than their 3G counterparts at each capacity:
      1. $499 for 16GB, black or white;
      2. $599 for 32GB, black or white;
      3. $699 for 64GB, black or white;
    2. Choose color;
      1. Black;
      2. White;
  3. Choose how you want to purchase:
    1. In-store (5PM local time):
    2. On-line (1AM PST):
  4. Enjoy iPad2 nirvana!

SOLORI’s Take: Steve Jobs really wants to see you on-camera and in line. Apple made a point of requiring retailers to coordinate sale starts at 5PM local time to maximize “free” advertising from local, mobile news coverage of “high demand queues” at retailers. There’s no discount for purchasing after standing in a retail store line, so why queue up without compensation just to be part of the iPad2 marketing push? Buy from an on-line retailer (or wait) and avoid the lines.

As for the model and plan, economically the 16GB iPad2 makes the most sense. If you need 3G but have no interest in using your iPad as a navigation unit while you drive, get a MiFi and gain the benefit of being able to use it with up to 5 other devices (laptop, iPad1, Android tablet, etc.) If you’re replacing your 32GB+ iPad and laptop (good luck) with this purchase, you may go all out, but don’t be surprised when buyer’s remorse sets in a month or so hence. Then it comes down to the 3G variant: ATT has more global reach (see link above), but beware of “data roaming” charges, while Verizon has somewhat better $/GB rates (see above).
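To put the plan economics in concrete terms, here is a minimal cost-comparison sketch in Python. The plan rates come from the list above; the monthly usage figure is a hypothetical input, and hardware cost is deliberately left out:

```python
import math

def monthly_cost(base_price, included_gb, overage_price, overage_gb, usage_gb):
    """Base plan price plus automatic overage charges for one month."""
    if usage_gb <= included_gb:
        return base_price
    blocks = math.ceil((usage_gb - included_gb) / overage_gb)
    return base_price + blocks * overage_price

# Plan rates as listed above (MiFi hardware/contract terms not modeled).
plans = {
    "ATT 250MB":   dict(base_price=15, included_gb=0.25, overage_price=15, overage_gb=0.25),
    "ATT 2GB":     dict(base_price=25, included_gb=2,    overage_price=10, overage_gb=1),
    "Verizon 1GB": dict(base_price=20, included_gb=1,    overage_price=20, overage_gb=1),
    "Verizon 3GB": dict(base_price=35, included_gb=3,    overage_price=10, overage_gb=1),
}

usage = 1.5  # hypothetical GB per month; plug in your own estimate
for name, plan in plans.items():
    yearly = 12 * monthly_cost(usage_gb=usage, **plan)
    print(f"{name:12s} ~${yearly:.0f}/year at {usage}GB/month")
```

At 1.5GB/month the ATT 2GB plan comes out cheapest in this sketch; the broader point is that overage charges dominate the bill once you outgrow a plan’s bucket, so estimate your usage before committing.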

If you choose to queue up and volunteer for Steve Jobs’ unpaid marketing army, good luck and stay safe. According to BestBuy’s playbook, you’ll get a “ticket” in line for the model you want. There will only be enough tickets for the models they actually have in stock, and they’ll likely only know what that list is one to two hours before 5PM local time. When all tickets are gone, they’ll issue standby tickets for the next day, etc. At BestBuy at least, you’ll need to leave a $100 deposit with your standby ticket, issued in the form of a $100 gift card usable towards your iPad purchase.

[Update: SOLORI’s iPad2 ordered for the lab at 2:53AM CST from ATT on-line – black, 16GB WiFi+3G, 2GB/mo. data plan. Verizon, Walmart, Target all show iPad2 as unavailable on-line and in stores at 5PM.]

[Update: On-line supplies of iPad2 started at 2-3 business day promised delivery and had gone to 2-3 week delivery by 9:30AM PST.]

[Update: 15-Mar-2011 – USPS delivered iPad2 – 2 business days achieved.]


Quick-Take: Is Your Marriage a Happy One?

November 12, 2010

I came across a recent post by Chad Sakac (VP, VMware Alliance at EMC) discussing the issue of how vendors drive customer specifications down from broader goals to individual features or implementation sets (I’m sure VCE was not in mind at the time.) When vendors insist on framing the “client argument” in terms of specific features and proprietary approaches, I have to agree that Chad is spot on. Here’s why:

First, it helps when vendors move beyond the “simple thinking” of infrastructure elements as a grid of point solutions and toward an “organic marriage of tools” – often with overlapping qualities. Some marriages begin with specific goals, some develop them along the way and others change course drastically and without much warning. The rigidity of point approaches rarely accommodates growth beyond the set of assumptions that created them in the first place. Likewise, the “laser focus” on specific features detracts from the overall goal: the present and future value of the solution.

When I married my wife, we both knew we wanted kids. Some of our friends married and “never” wanted kids, only to discover a child on the way and subsequent fulfillment through raising them. Still, others saw a bright future strained with incompatibility and the inevitable divorce. Such is the way with marriages.

Second, it takes vision to solve complex problems. Our church (Church of the Highlands in Birmingham, Alabama) takes a very cautious position on the union between souls: requiring that each new couple seeking marriage give it the due consideration and compatibility testing necessary to have a real chance at a successful outcome. A lot of “problems” we would encounter were identified before we were married, and when they finally popped up we knew how to identify and deal with them properly.

Couples that see “counseling” as too obtrusive (or unnecessary) have other options. While the initial investment of money is often equivalent, the return on investment is not so certain. Uncovering incompatibilities “after the sale” makes for a difficult and too often doomed outcome (hence the 50% divorce rate.)

This same drama plays out in IT infrastructures where equally elaborate plans, goals and unexpected changes abound. You date (prospecting and trials), you marry (close) and are either fruitful (happy client), disappointed (unfulfilled promises) or divorced. Often, it’s not the plan that failed but the failure to set/manage expectations and address problems that causes the split.

Our pastor could not promise that our marriage would last forever: our success is left to God and the two of us. But he did help us to make decisions that would give us a chance at a fruitful union. Likewise, no vendor can promise a flawless outcome (if they do, get a second opinion), but they can (and should) provide the necessary foundation for a successful marriage of the technology to the business problem.

Third, the value of good advice is not always obvious and never comes without risk. My wife and I were somewhat hesitant about counseling before marriage because we were “in love” and were happy to be blind to the “problems” we might face. Our church made it easy for us: no counseling, no marriage. Businesses can choose to plot a similar course for their clients with respect to their products (especially the complex ones): discuss the potential problems with the solution BEFORE the sale or there is no sale. Sometimes this takes a lot of guts – especially when the competition takes the route of oversimplification. Too often, IT sales see identifying initial problems (with their own approach) as too high a risk and too great an obstacle to the sale.

Ultimately, when you give due consideration to the needs of the marriage, you have more options and are better equipped to handle the inevitable trials you will face. Whether it’s an unexpected child on the way, or an unexpected up-tick in storage growth, having the tools in-hand to deal with the problem lessens its severity. The point is, being prepared is better than the assumption of perfection.

Finally, the focus has to be on what YOUR SOLUTION can bring to the table: not how you think your competition will come up short. In Chad’s story, he’s identified vendors disqualifying one another’s solutions based on their (institutional) belief (or disbelief) in a particular feature or value proposition. That’s all hollow marketing and puffery to me, and I agree completely with his conclusion: vendors need to concentrate on how their solution(s) provide present and future value to the customer and refrain from the “art” of narrowly framing their competitors.

Features don’t solve problems: the people using them do. The presence (or absence) of a feature simply changes the approach (i.e. the fallacy of feature parity). As Chad said, it’s the TOTALITY of the approach that derives value – and that goes way beyond individual features and products. It’s clear to me that a lot of counseling takes place between Sakac’s EMC team and their clients to reach those results. Great job, Chad, you’ve set a great example for your team!


Quick Take: IBM Tops VMmark, Crushes Record with 4P Nehalem-EX

April 7, 2010

It was merely a matter of time before one of the new core-rich titans – Intel’s 8-core “Beckton” Nehalem-EX (Xeon 7500) or AMD’s 12-core “Magny-Cours” (Opteron 6100) – made a name for itself on VMware’s VMmark benchmark. Today, Intel draws first blood in the form of a 4-processor, 32-core, 64-thread monster from IBM: the x3850 X5 running four Xeon X7560 processors (2.266GHz – 2.67GHz with turbo, 130W TDP each) and 384GB of DDR3-1066 low-power registered DIMMs. Weighing in at 70.78@48 tiles, the 4P IBM System x3850 X5 handily beats the next highest system – the 48-core DL785 G6, which set the record of 53.73@35 tiles back in August 2009 – by over 30%.

At $3,800+ per socket for the tested Beckton chip, this is no real 2P alternative. In fact, a pair of Cisco UCS B250 M2 blades will get 52 tiles running for much less money. Looking at processor and memory configurations alone, this is a $67K+ enterprise server, resulting in a moderately high $232/VM price point for the IBM x3850 X5.

SOLORI’s Take: The most interesting aspect of the EX benchmark is its clock-adjusted scaling factor: between 70% and 91% versus a 2P/8-core Nehalem-EP reference (Cisco UCS B200 M1, 25.06@17 tiles). The unpredictable nature of Intel’s “turbo” feature – varying with thermal loads and per-core conditions – makes an exact clock-for-clock comparison difficult. However, if the scaling factor really is 90%, the EX blows away our previous expectations about the platform’s scalability. Where did we go wrong when we predicted a conservative 44@39 tiles? We’re looking at three things: (1) a bad assumption about the effectiveness of “turbo” in the EP VMmark case (setting Ref_EP_Clock to 3.33 GHz), (2) underestimating EX’s scaling efficiency (we assumed 75%), and (3) assuming a 2.26GHz clock for EX.

Choosing our minimum QPI/HT3 scalability factor of 75%, the predicted performance was derived this way, using the HP ProLiant BL490c G6 as a baseline:

Est. Tiles = EP_Tiles_per_core( 2.13 ) * 32 cores * Scaling_Efficiency( 75% ) * EX_Clock( 2.26 ) / EP_Clock( 2.93 ) = 39 tiles

Est. Score = Est_Tiles( 40 ) * EP_Score_per_Tile( 1.43 ) * Est_EX_Clock( 2.26 ) / Ref_EP_Clock( 2.93 ) = 44.12

Est. Nehalem-EX VMmark -> 44.12@39 tiles

Correcting for the as-tested clock/turbo numbers, using AMD’s 2P-to-4P VMmark scaling efficiency of 83%, and shifting to the new UCS baseline (with a newer ESX version), the Nehalem-EX prediction becomes:

Est. Tiles = EP_Tiles_per_core( 2.13 ) * 32 cores * Scaling_Efficiency( 83% ) * EX_Clock( 2.67 ) / EP_Clock( 2.93 ) = 51 tiles

Est. Score = Est_Tiles( 51 ) * EP_Score_per_Tile( 1.47 ) * Est_EX_Clock( 2.67 ) / Ref_EP_Clock( 2.93 ) = 68.32

Est. Nehalem-EX VMmark -> 68.3@51 tiles

Clearly, this approach either overestimates the scaling efficiency or underestimates the “turbo” mode. IBM claims that a 2.93 GHz “turbo” setting is viable where Intel suggests 2.67 GHz is the maximum, so there is a source of potential bias. Looking at the tiles-per-core ratio of the VMmark result, the Nehalem-EX drops from 2.13 tiles per core on EP/2P platforms to 1.5 tiles per core on EX/4P platforms – about a 30% drop in per-core loading efficiency. That indicator matches well with our initial 75% scaling efficiency moving from 2P to 4P – something that AMD demonstrated with Istanbul last August. Given the high TDP of EX and IBM’s 2.93 GHz “turbo” specification, it’s possible that “turbo” is adding clock cycles (and power consumption) and compensating for a “lower” scaling efficiency than we’ve assumed. Looking at the same estimation with 2.93GHz “clock” and 71% efficiency (1.5/2.13), the numbers fall in line with VMmark:

Est. Tiles = EP_Tiles_per_core( 2.13 ) * 32 cores * Scaling_Efficiency( 71% ) * EX_Clock( 2.93 ) / EP_Clock( 2.93 ) = 48 tiles

Est. Score = Est_Tiles( 48 ) * EP_Score_per_Tile( 1.47 ) * Est_EX_Clock( 2.93 ) / Ref_EP_Clock( 2.93 ) = 70.56

Est. Nehalem-EX VMmark -> 70.56@48 tiles

This gives us a good basis for evaluating 2P vs. 4P Nehalem systems: a scaling factor of 71%, with the clock pushed towards the 3GHz mark within its thermal envelope. Both of these conclusions fit typical 2P-to-4P norms and Intel’s process history.
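The estimation arithmetic above is simple enough to script. The sketch below merely restates those formulas in Python (the parameter names mirror the ones used above and are our own shorthand, not anything from VMware’s tools), reproducing the corrected 70.56@48 figure:

```python
def estimate_vmmark(ep_tiles_per_core, ep_score_per_tile, cores,
                    scaling_efficiency, ex_clock, ep_clock):
    """Scale a 2P Nehalem-EP baseline to a larger system, per the formulas above."""
    tiles = ep_tiles_per_core * cores * scaling_efficiency * ex_clock / ep_clock
    score = round(tiles) * ep_score_per_tile * ex_clock / ep_clock
    return round(score, 2), round(tiles)

# Nehalem-EX at 71% scaling efficiency and a 2.93GHz effective ("turbo") clock
print(estimate_vmmark(2.13, 1.47, 32, 0.71, 2.93, 2.93))  # -> (70.56, 48)
```

The same function can be re-run with the other efficiency and clock assumptions discussed above.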

SOLORI’s 2nd Take: So where does that leave AMD’s newest 12-core chip? To date, no VMmark exists for AMD’s Magny-Cours, and AMD chips tend not to do as well in VMmark as their Intel peers due to the benchmark’s SMT-friendly loads. However, we can’t resist using the same analysis against AMD/MC’s 2.4GHz Opteron 6174SE (theoretical), using the 2P HP DL385 G6 as a baseline for core loading and the HP DL785 G6 for tile performance (best of the best cases):

Est. Tiles = HP_Tiles_per_core( 0.92 ) * 48 cores * Scaling_Efficiency( 83% ) * MC_Clock( 2.3 ) / HP_Clock( 2.6 ) = 33 tiles

Est. Score = Est_Tiles( 33 ) * HP_Score_per_Tile( 1.54 ) * Est_MC_Clock( 2.3 ) / Ref_HP_Clock( 2.8 ) = 41.8

Est. 4P Magny-Cours VMmark -> 41.8@33 tiles

That’s nowhere near good enough to top the current 8P, 48-core Istanbul VMmark at 53.73@35 tiles, so we’ll likely have to wait for faster 6100 parts to see any new AMD records. However, assuming AMD’s proposition is still “value 4P,” about 200 VMs at under $18K per server works out to roughly $90/VM or less.


UCS and VCE vBlock Type 1 Challenge Top VMmark

January 13, 2010

VMmark "Tile" Loads

Last November we reported on the Fujitsu RX300 S5 rack server taking the top VMmark spot for 8-core systems. Yesterday (January 11, 2010) Cisco’s UCS B200-M1 using VMware ESX 4.0 (build 164009) came within 0.5% of the top spot with a score of 25.06@17 tiles. While falling only slightly short of the mark set by the brute-force RX300/DX80 combo, the UCS system did so with a very different solution, unsurprisingly similar to the vBlock Type 1 architecture described by Chad Sakac in his blog post about the VMware, Cisco and EMC alliance.

Given that VMmark is a single-node test harness, the difference between rack server and blade server architectures is a non-issue. However, more than just rack vs. blade is going on in this comparison. The Cisco UCS system is being fed by a pair of 10GE converged network adapters – used both for host network access and Fibre Channel bus access – and a monolithic storage array in the guise of a CLARiiON CX4-240, complete with a complement of 20 73GB STEC SSDs – just to sweeten the pot.

VMmark Network Configuration for the UCS B200-M1

While it is clear from past VMmark posts that the network speed (beyond 1Gbps) has little to do with the results, it is nice to see the confidence Cisco has in the CNA approach (Cisco UCS M71KR-Q) to go with the “eggs in a basket” solution. Given the storage demands on the CNA, the VMmark result should remove any doubt about the viability (performance) of high-capacity tandems (we’ll leave the physical link security concerns for another day.)

However, where the “rubber meets the road” in this contest is storage I/O, and this solution – in our opinion – is just plain showing off. With just 41 disks to build from, the CX4-240 has been configured to deliver 37 LUNs – nearly one LUN per unit disk. Before any awards are given out for storage of the year, we need to consider that 36 of those LUNs are RAID0 – yielding a testing platform with no real-world analog (hence “showing off”.)

CLARiiON CX4-240 Storage Build-out for UCS B200-M1 VMmark

Given the ease with which RAID0 can be replaced by RAID1+0, it may be safe to assume that the same results could have been obtained by using 77 disks instead of 41 – at which point the CX4-240 would still be less than half the size of the top VMmark’s 172-disk solution. The reason is clear: SSDs accelerate I/O loads incredibly well in architectures that support them. If anything, this “runner-up” proves that SSD adoption is on the verge of becoming mainstream.
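The 77-disk figure is just the mirroring arithmetic; a quick sketch of that assumption (we assume the 36 RAID0 data spindles get mirrored and the remaining 5 disks of the 41-disk build stay as they are):

```python
# Back-of-the-envelope check on the 77-disk RAID1+0 figure quoted above.
raid0_disks = 36                  # RAID0 LUN spindles in the tested CX4-240
other_disks = 41 - raid0_disks    # remaining disks, assumed unchanged
raid10_disks = raid0_disks * 2 + other_disks
print(raid10_disks)               # 77 -- still well under the 172-disk record config
```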

But what does this test show about UCS? Firstly, it shows that Cisco’s platform can compete with the best solutions out there on CPU and I/O performance (what’s half a percentage point across 102 virtual machines?) It’s not really a surprise given that the UCS platform was designed to do just that – and within a neatly managed framework. Secondly, it shows that the choice of EMC as a partner was an excellent one. As Martin Glassborow commented on his Storagebod blog, EMC’s involvement in VMware has energized the storage vendor to take bold and innovative steps towards Cloud Computing solutions that it might not have taken otherwise (like the RAID0 SSD array). Thirdly and most importantly, it underscores the importance of predictable performance in a virtualization solution. Given the UCS/vBlock approach to systems organization, it is easy to draw solid parallels between these benchmark results and expectations for net-new builds based on the same components.


NEC Offers “Dunnington” Liposuction, Tops 64-Core VMmark

November 19, 2009

NEC’s venerable Express5800/A1160 is back at the top of the VMmark chart, this time establishing the brand-new 64-core category with a score of 48.23@32 tiles – surpassing its 48-core 3rd-place posting by over 30%. NEC’s new 16-socket, 64-core, 256GB “Dunnington” X7460 Xeon-based score represents a big jump in performance over its predecessor, with a per-tile ratio of 1.507 – up 6% from the 48-core ratio of 1.419.

To put this into perspective, the highest VMmark achieved to date is the score of 53.73@35 tiles (tile ratio 1.535) from the 48-core HP DL785 G6 in August, 2009. If you are familiar with the “Dunnington” X7460, you know that it’s a 6-core, 130W giant with 16MB of L3 cache and a 1,000-unit price just south of $3,000 per socket. So that raises the question: how does 6 cores X 16 sockets = 64? Well, it’s not pro-rationing from the Obama administration’s “IT fairness” czar. NEC chose to disable the 4th and 6th core of each socket to reduce the working cores from 96 to 64.

At $500/core, NEC’s gambit may represent an expensive form of “core liposuction” but it was a necessary one to meet VMware’s “logical processor per host” limitation of 64. That’s right, currently VMware’s vSphere places a limit on logical processors based on the following formula:

CPU_Sockets x Cores_Per_Socket x Threads_Per_Core <= 64

According to VMware, the other 32 cores would have been “ignored” by vSphere had they been enabled. Since “ignored” is a nebulous term (aka “undefined”), NEC did the “scientific” thing by disabling 32 cores and calling the system a 64-core server. The win here: a net 6% improvement in performance per tile over the all-cores-enabled, 6-core-per-socket configuration – ostensibly from the reduced core loading on the 16MB of L3 cache per socket and reduced memory bus contention.
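The arithmetic behind that ceiling is trivial, but it is worth seeing how the configurations discussed here and below bump into it. A minimal sketch (the 64 logical-processor limit is the vSphere figure cited above; the configurations are illustrative):

```python
def logical_cpus(sockets, cores_per_socket, threads_per_core=1):
    """Apply the rule cited above: sockets x cores x threads must not exceed 64."""
    lcpus = sockets * cores_per_socket * threads_per_core
    return lcpus, lcpus <= 64

print(logical_cpus(16, 6))     # (96, False) -- all six Dunnington cores enabled
print(logical_cpus(16, 4))     # (64, True)  -- NEC's 4-of-6-cores workaround
print(logical_cpus(4, 8, 2))   # (64, True)  -- 4-socket Nehalem-EX with HT
print(logical_cpus(8, 8, 1))   # (64, True)  -- 8-socket Nehalem-EX, HT disabled
```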

Moving forward to 2010, what does this mean for vSphere hardware configurations in the wake of 8-core, 16-thread Intel Nehalem-EX and 12-core, 12-thread AMD Magny-Cours processors? With a 4-socket Magny-Cours system limitation, we won’t be seeing any VMmarks from the boys in green beyond 48 cores. Likewise, the boys in blue will be trapped by a VMware limitation (albeit a somewhat arbitrary and artificial one) into a 4-socket, 64-thread (HT) configuration or an 8-socket, 64-core (HT-disabled) configuration for their Nehalem-EX platform – even if using the six-core variant of EX. It looks like VMware will need to lift the 64-LCPU embargo by Q2/2010 just to keep up.


Fujitsu RX300 S5 Rack Server Takes 8-core VMmark Lead

November 11, 2009

Fujitsu’s RX300 S5 rack server takes the top spot in VMware’s VMmark for 8-core systems today with a score of 25.16@17 tiles. Loaded with two of Intel’s top-bin 3.33GHz, 130W Nehalem-EP processors (W5590, turbo to 3.6GHz per core) and 96GB of DDR3-1333 R-ECC memory, the RX300 bested the former champ – the HP ProLiant BL490c G6 blade – by only about 2.5%.

With 17 tiles and 102 virtual machines on a single 2U box, the RX300 S5 demonstrates precisely how well vSphere scales on today’s x86 commodity platforms. It also appears to demonstrate both the value and the limits of Intel’s “turbo mode” in its top-bin Nehalem-EP processors – especially in the virtualization use case – but we’ll get to that later. In any case, the resulting equation is:

More * (Threads + Memory + I/O) = Dense Virtualization

We could have added “higher execution rates” to that equation; however, virtualization is a scale-out application where threads, memory pool and I/O capabilities dominate the capacity equation – not clock speed. Adding 50% more clock provides smaller virtualization gains than adding 50% more cores, and reducing memory and context latency likewise provides better gains than simply upping the clock speed. That’s why a dual-socket, quad-core 2.6GHz Nehalem will crush a quad-socket, dual-core 3.5GHz (ill-fated) Tulsa.

Speaking of Tulsa, unlike Tulsa’s rather anaemic first-generation hyper-threading, Intel’s improved SMT in Nehalem “virtually” adds more core “power” to the Xeon by contributing up to 100% more thread capacity. This is demonstrated by Nehalem-EP’s 2-tiles-per-core contribution to VMmark, where AMD’s six-core Istanbul provides only 1 tile per core. But exactly what is a VMmark tile, and how does core versus thread play into the result?


The Illustrated VMmark "Tile" Load

As you can see, a “VMmark Tile” – or just “tile” for short – is composed of 6 virtual machines, half running Windows, half running SUSE Linux. Likewise, half of the virtual machines run in 64-bit mode while the other half run in 32-bit mode. As a whole, the tile is composed of 10 virtual CPUs, 5GB of RAM and 62GB of storage. Looking at how the parts contribute to the whole, the tile is relatively balanced:

Operating System / Mode               32-bit   64-bit   Memory   vCPU   Disk
Windows Server 2003 R2                 67%      33%      45%     50%    58%
SUSE Linux Enterprise Server 10 SP2    33%      67%      55%     50%    42%
32-bit                                 50%      N/A      30%     40%    58%
64-bit                                 N/A      50%      70%     60%    42%

If we stop here and accept that today’s best x86 processors from AMD and Intel are capable of providing 1 tile for each hardware thread, we can look at the thread count and calculate the number of tiles and the resulting memory requirement. While that sounds like a good “rule of thumb” approach, it ignores use cases where synthetic threads (HT/SMT) do not scale linearly the way physical cores do – clock-for-clock, SMT accounts for only about 12% gains over a single-threaded core in such cases. For this reason, processors from AMD and Intel in 2010 will feature more cores – 12 for AMD and 8 for Intel in their Magny-Cours and Nehalem-EX (aka “Beckton”), respectively.
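As a rough illustration of that thread-counting rule of thumb (our shorthand, not a VMware sizing tool), the sketch below counts one tile per hardware thread and applies the roughly 5GB-per-tile memory figure from the tile description above:

```python
def rough_tile_capacity(sockets, cores_per_socket, threads_per_core):
    """Rule of thumb from the text: ~1 tile (6 VMs, ~5GB RAM) per hardware thread."""
    threads = sockets * cores_per_socket * threads_per_core
    return dict(tiles=threads, vms=threads * 6, min_ram_gb=threads * 5)

print(rough_tile_capacity(2, 4, 2))  # 2P Nehalem-EP: 16 tiles, 96 VMs, ~80GB RAM
print(rough_tile_capacity(2, 6, 1))  # 2P Istanbul:   12 tiles, 72 VMs, ~60GB RAM
```

The 2P Nehalem-EP line lands close to the 17 tiles and 96GB actually used in the Fujitsu result, which is about as much as a rule of thumb can be asked to do.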

Learning from the Master

If we want to gather some information about a specific field, we consult an expert, right? Judging from the results, Fujitsu’s latest dual-processor entry has definitely earned the title “Master of VMmark” in 2P systems – at least for now. So instead of the usual VMmark $/VM analysis (which is well established for recent VMmark entries), let’s look at the solution profile and try to glean some nuggets to take back to our data centers.

It’s Not About Raw Speed

First, we’ve noted that the processor used is not Intel’s standard “rack server” fare, but the more workstation-oriented W-series Nehalem at 130W TDP. With “turbo mode” active, this CPU is capable of driving the 3.33GHz core – on a per-core basis – up to 3.6GHz. Since we’re seeing only a 2.5% improvement in overall score versus the ProLiant blade at 2.93GHz, we can extrapolate that the 2.93GHz X5570 Xeon is spending a lot of time at 3.33GHz – its “turbo” speed – while the power-hungry W5590 spends little time at 3.6GHz. How can we say this? By looking at the tile ratio as a function of clock speed.

We know that the X5570 can run up to 3.33GHz, per core, according to thermal conditions on the chip. With proper cooling, this could mean up to 100% of the time (sorry, Google). Assuming for a moment that this is the case in the HP test environment (and there is sufficient cause to think so), then the ratio of the tile score to tile count and CPU frequency is 0.433 (24.54/17/3.33). If we examine the same ratio for the W5590, assuming a clock speed of 3.33GHz, we get 0.444 – a difference of 2.5%, or the contribution of “turbo” in the W5590. Likewise, if you back-figure the “apparent speed” of the X5570 using the ratio of the clock-locked W5590, you arrive at 3.25GHz for the X5570 (an 11% gain over base clock). In either case, it is clear that “turbo” is a better value at the low-end of the Nehalem spectrum, as there isn’t enough thermal headroom for it to work well for the W-series.
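That back-calculation can be restated in a few lines; the scores, tile counts and clocks are the published figures, and the “apparent clock” is our derived estimate:

```python
# Score-per-tile-per-GHz ratios for the two 17-tile results discussed above.
w5590_ratio = 25.16 / 17 / 3.33   # ~0.444 (RX300 S5, assumed pinned at 3.33GHz)
x5570_ratio = 24.54 / 17 / 3.33   # ~0.433 (BL490c G6, if it also held 3.33GHz)
print(round(w5590_ratio / x5570_ratio - 1, 3))  # ~0.025 -> the ~2.5% "turbo" contribution

# Back-figure the X5570's apparent clock using the W5590's ratio as the yardstick.
apparent_x5570_clock = 24.54 / 17 / w5590_ratio
print(round(apparent_x5570_clock, 2))  # ~3.25GHz, an ~11% lift over the 2.93GHz base clock
```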

VMmark Equals Meager Network Use

Second, we’re not seeing “fancy” networking tricks out of VMmark submissions. In the past, we’ve commented on the use of “consumer grade” switches in VMmark tests. For this reason, we can consider VMmark’s I/O dependency as related almost exclusively to storage. With respect to networking, the Fujitsu team simply interfaced three 1Gbps network adapter ports to the internal switch of the blade enclosure used to run the client-side load suite and ran the test. Here’s what that looks like:


Networking Simplified: The “leader’s” simple virtual networking topology.

Note that the network interfaces used for the VMmark trial are not from the on-board i82575EB network controller but from the PCI-Express quad-port adapter using its older cousin – the i82571EB. What is key here is that VMmark is not tied to network performance, and additional network ports are more likely to increase IRQ sharing (and reduce performance) than to meaningfully “optimize” network flows.

Keeping Storage “Simple”

Third, Fujitsu’s approach to storage is elegantly simple: several “inexpensive” arrays with intelligent LUN allocation. For this, Fujitsu employed eight of its ETERNUS DX80 Disk Storage Systems with 7 additional storage shelves for a total of 172 working disks and 23 LUNs. For simplicity, Fujitsu used a pair of 8Gbps FC ports to feed ESX and at least one port per DX80 – all connected through a Brocade 5100 fabric switch. The result looked something like this:


Fujitsu's VMmark Storage Topology: 8 Controllers, 7 Shelves, 172 Disks and 23 LUNs.

And yes, the ESX server is configured to boot from SAN, using no locally attached storage. Note that the virtual machine configuration files, VM swap and ESX boot/swap are contained in a separate DX80 system. This “non-default” approach allows the working VMDKs of the virtual machines to be isolated – from a storage perspective – from the swap file overhead, about 5GB per tile. Again, this is a benchmark scenario, not an enterprise deployment, so trade-offs are in favour of performance, not CAPEX or OPEX.

Even if the DX80 solution falls into the $1K/TB range, to say that this approach to storage is “economic” requires a deeper look. At 33 rack units for the solution – including the FC switch but not including the blade chassis – this configuration has a hefty datacenter footprint. In contrast to the old-school physical server approach, a single rack is a huge savings over the two racks of blades or three racks of 1U rack servers it replaces. Had each of those servers or blades carried a mirrored disk pair, we’d be talking about 200+ disks spinning in those racks versus the 172 disks in the ETERNUS arrays – still a savings of about 15.7% in storage-related power and space.

When will storage catch up?

Compared to a 98% reduction in network ports, a 30-80% reduction in server/storage CAPEX (based on a $1K/TB SAN) and a 50-75% reduction in overall datacenter footprint, why is a 15% reduction in datacenter storage footprint acceptable? After all, storage – in the Fujitsu VMmark case – now represents 94% of the datacenter footprint. Even if the load were less aggressively spread across five ESX servers (a conservative 20:1 loading), the amount of space taken by storage only falls to 75%.

How can storage catch up to virtualization densities? First, with 2.5″ SAS drives, a bank of 172 disks can be made to occupy only 16U with very strong performance. This drops storage to only about 60% of the datacenter footprint – 10U for hypervisors, 16U for storage, 26U total for this example. Moving from 3.5″ drives to 2.5″ drives takes care of the physical scaling issue with acceptable returns, but results in only minimal gains in terms of power savings.
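The footprint percentages above reduce to simple rack-unit arithmetic. A sketch using the rack-unit counts quoted in the text (the storage-side figure here includes the FC switch):

```python
def storage_share(storage_ru, compute_ru):
    """Fraction of the combined rack footprint consumed by storage."""
    return storage_ru / (storage_ru + compute_ru)

print(round(storage_share(31, 2), 2))   # ~0.94 -- one 2U host, 3.5" shelves (the ~94% case)
print(round(storage_share(31, 10), 2))  # ~0.76 -- five 2U hosts (the ~75% case)
print(round(storage_share(16, 10), 2))  # ~0.62 -- same spindle count in 2.5" shelves (~60%)
```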

Saving power in storage platforms is not going to be achieved by simply shrinking disk drives – shrinking the NUMBER of disks required per “effective” LUN is what’s necessary to overcome the power demands of modern, high-performance storage. This is where non-traditional technology like FLASH/SSD is being applied to improve performance while utilizing fewer disks and proportionately less power. For example, instead of dedicating disks on a per LUN basis, carving LUNs out of disk pools accelerated by FLASH (a hybrid storage pool) can result in a 30-40% reduction in disk count – when applied properly – and that means 30-40% reduction in datacenter space and power utilization.

Lessons Learned

Here are our takeaways from the Fujitsu VMmark case:

1) Top-bin performance is at the losing end of diminishing returns. Unless your budget can accommodate this fact, purchasing decisions about virtualization compute platforms need to be aligned with $/VM within an acceptable performance envelope. When shopping CPU, make sure the top-bin’s “little brother” has the same architecture and feature set and go with the unit priced for the mainstream. (Don’t forget to factor memory density into the equation…) Regardless, try to stick within a $190-280/VM equipment budget for your hypervisor hardware and shoot for a 20-to-1 consolidation ratio (that’s at least $3,800-5,600 per server/blade).

2) While networking is not important to VMmark, this is likely not the case for most enterprise applications. Therefore, VMmark is not a good comparison case for your network-heavy applications. Also, adding more network ports increases capacity and redundancy but does so at the risk of IRQ-sharing (ESX, not ESXi) problems, not to mention the additional cost/number of network switching ports. This is where we think 10GE will significantly change the equation in 2010. Remember to add up the total number of in-use ports – including out-of-band management – when factoring in switch density. For net-new installations, look for a switch that provides 10GE/SR or 10GE/CX4 options and go with 10GE/SR if power savings are driving your solution.

3) Storage should be simple, easy to manage, cheap (relatively speaking), dense and low-power. To meet these goals, look for storage technologies that utilize FLASH memory, tiered spindle types, smart block caching and other approaches to limit spindle count without sacrificing performance. Remember to factor in at least the cost of DAS when approximating your storage budget – about $150/VM in simple consolidation cases and $750/VM for more mission critical applications (that’s a range of $9,000-45,000 for a 3-server virtualization stack). The economies in managed storage come chiefly from the administration of the storage, but try to identify storage solutions that reduce datacenter footprint including both rack space and power consumption. Here’s where offerings from Sun and NexentaStor are showing real gains.
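Those guidelines translate directly into per-host and per-stack budget ranges; a small sketch using the figures from points 1 and 3 (the 20:1 consolidation ratio is the target named above):

```python
def budget_range(per_vm_low, per_vm_high, vms):
    """Return the low/high budget implied by a per-VM range and a VM count."""
    return per_vm_low * vms, per_vm_high * vms

vms_per_host = 20                                # 20:1 consolidation target
print(budget_range(190, 280, vms_per_host))      # (3800, 5600)  per server/blade
print(budget_range(150, 750, 3 * vms_per_host))  # (9000, 45000) storage, 3-host stack
```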

We’d like to see VMware update VMmark to include system power specifications so we can better gauge – from the sidelines – which solution stack(s) perform according to our needs. VMmark served its purpose by giving the community a standard from which different platforms could be compared in terms of resultant performance. With the world’s eyes on power consumption and the ecological impact of datacenter choices, adding a “power utilization component” to the “server-side” of the VMmark test would not be that significant a “tweak.” Here’s how we think it can be done:

  1. Require power consumption of the server/VMmark related components be recorded, including:
    1. the ESX platform (rack server, blade & blade chassis, etc.)
    2. the storage platform providing ESX and test LUN(s) (all heads, shelves, switches, etc.)
    3. the switching fabric (i.e. Ethernet, 10GE, FC, etc.)
  2. Power delivered to the test harness platforms, client load machines, etc. can be ignored;
  3. Power measurements should be recorded at the following times:
    1. All equipment off (validation check);
    2. Start-up;
    3. Single tile load;
    4. 100% tile capacity;
    5. 75% tile capacity;
    6. 50% tile capacity;
  4. Power measurements should be recorded using a time-power data-logger, with readings recorded as 5-minute averages (a small data-reduction sketch follows this list);
  5. Notations should be made concerning “cache warm-up” intervals, if applicable, where “cache optimized” storage is used.
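If such a requirement were ever adopted, the data reduction itself would be trivial. Below is a minimal sketch that turns a hypothetical log of timestamped watt readings into 5-minute averages; the input format is our own assumption, not part of any VMmark harness:

```python
from collections import defaultdict
from datetime import datetime

def five_minute_averages(samples):
    """samples: iterable of (ISO-8601 timestamp, watts). Returns {bucket_start: avg_watts}."""
    buckets = defaultdict(list)
    for stamp, watts in samples:
        t = datetime.fromisoformat(stamp)
        bucket = t.replace(minute=t.minute - t.minute % 5, second=0, microsecond=0)
        buckets[bucket].append(watts)
    return {start: sum(w) / len(w) for start, w in sorted(buckets.items())}

log = [("2010-01-11T17:00:30", 412.0), ("2010-01-11T17:02:10", 425.5),
       ("2010-01-11T17:06:45", 401.2)]
print(five_minute_averages(log))
```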

Why is this important? In the wake of the VCE announcement, solution stacks like VCE need to be measured against each other in an easy to “consume” way. Is VCE the best platform versus a component solution provided by your local VMware integrator? Given that the differentiated VCE components are chiefly UCS, Cisco switching and EMC storage, it will be helpful to have a testing platform that can better differentiate “packaged solutions” instead of uncorrelated vendor “propaganda.”

Let us know what your thoughts are on the subject, either on Twitter or on our blog…


Quick Take: Nehalem/Istanbul Comparison at AnandTech

October 7, 2009

Johan De Gelas and crew present an interesting comparison of Dunnington, Shanghai, Istanbul and Nehalem in a new post at AnandTech this week. In the test line-up are the “top bin” parts from Intel and AMD in 4-core and 6-core incarnations:

  • Intel Nehalem-EP Xeon, X5570 2.93GHz, 4-core, 8-thread
  • Intel “Dunnington” Xeon, X7460, 2.66GHz, 6-core, 6-thread
  • AMD “Shanghai” Opteron 2389/8389, 2.9GHz, 4-core, 4-thread
  • AMD “Istanbul” Opteron 2435/8435, 2.6GHz, 6-core, 6-thread

Most important for virtualization systems architects is how vCPU scheduling affects “measured” performance. The telling piece comes from the difference in comparison results where vCPU scheduling is equalized:


AnandTech's Quad Sockets v. Dual Sockets Comparison. Oct 6, 2009.

When comparing the results, De Gelas hits on the I/O factor which chiefly separates VMmark from vAPUS:

The result is that VMmark with its huge number of VMs per server (up to 102 VMs!) places a lot of stress on the I/O systems. The reason for the Intel Xeon X5570’s crushing VMmark results cannot be explained by the processor architecture alone. One possible explanation may be that the VMDq (multiple queues and offloading of the virtual switch to the hardware) implementation of the Intel NICs is better than the Broadcom NICs that are typically found in the AMD based servers.

Johan De Gelas, AnandTech, Oct 2009

This is yet another issue that VMware architects struggle with in complex deployments. The latency in “Dunnington” is a huge contributor to its downfall and why the Penryn architecture was a dead-end. Combined with 8 additional threads in the 2P form factor, Nehalem delivers twice as many hardware execution contexts as Shanghai, resulting in significant efficiencies for Nehalem where small working data sets are involved.

When larger sets are used – as in vAPUS – Istanbul’s additional cores allow it to close the gap to within the clock speed difference of Nehalem (about 12%). In contrast to VMmark, which implies a 3:2 advantage for Nehalem, the vAPUS results suggest a closer performance gap in more aggressive virtualization use cases.

SOLORI’s Take: We differ with De Gelas on the reduction in vAPUS’ data set to accommodate the “cheaper” memory build of the Nehalem system. While this offers some advantages in testing, it also diminishes one of Opteron’s greatest strengths: access to cheap and abundant memory. Here we have the testing conundrum: fit the test around the competitors or the competitors around the test. The former approach presents a bias on the “pure performance” aspect of the competitors, while the latter is more typical of use-case testing.

We do not construe this issue as intentional bias on AnandTech’s part; however, it is another vector to consider in the evaluation of the results. De Gelas delivers a report worth reading in its entirety, and we view this as a primer on the issues that will define the first half of 2010.