Posts Tagged ‘nehalem’


Quick Take: Year-end DRAM Price Follow-up, Thoughts on 2010

December 30, 2009

Looking at memory prices one last time before the year is out, prices of our “benchmark” Kingston DDR3 server DIMMs are on the decline. While the quad-rank 8GB DDR3/1066 DIMMs are below the $565 target price (at $514) we predicted back in August, the dual-rank equivalents (on our benchmark list) are still hovering around $670 each. Likewise, while the retail price of the 8GB DDR2/667 parts continues to rise, inventory and promotional pricing have managed to keep them flat at $433 each, giving large-footprint DDR2 systems a $2,000 price advantage (based on 64GB systems).

Benchmark Server (Spot) Memory Pricing – Dual Rank DDR2 Only

| DDR2 Reg. ECC Series (1.8V) | Price Jun ’09 | Price Sep ’09 | Price Dec ’09 |
| --- | --- | --- | --- |
| KVR800D2D4P6/4G – 4GB 800MHz DDR2 ECC Reg with Parity CL6 DIMM Dual Rank, x4 (5.400W operating) | $100.00 | $117.00 (up 17%) | $140.70 (up 23%; promo price, retail $162) |
| KVR667D2D4P5/4G – 4GB 667MHz DDR2 ECC Reg with Parity CL5 DIMM Dual Rank, x4 (5.940W operating) | $80.00 | $103.00 (up 29%) | $97.99 (down 5%; retail $160) |
| KVR667D2D4P5/8G – 8GB 667MHz DDR2 ECC Reg with Parity CL5 DIMM Dual Rank, x4 (7.236W operating) | $396.00 | $433.00 | $433.00 (flat; promo price, retail $515) |
Benchmark Server (Spot) Memory Pricing – Dual Rank DDR3 Only

| DDR3 Reg. ECC Series (1.5V) | Price Jun ’09 | Price Sep ’09 | Price Dec ’09 |
| --- | --- | --- | --- |
| KVR1333D3D4R9S/4G – 4GB 1333MHz DDR3 ECC Reg w/Parity CL9 DIMM Dual Rank, x4 w/Therm Sen (3.960W operating) | $138.00 | $151.00 (up 10%) | $135.99 (down 10%) |
| KVR1066D3D4R7S/4G – 4GB 1066MHz DDR3 ECC Reg w/Parity CL7 DIMM Dual Rank, x4 w/Therm Sen (5.085W operating) | $132.00 | $151.00 (up 15%) | $137.59 (down 9%; retail $162) |
| KVR1066D3D4R7S/8G – 8GB 1066MHz DDR3 ECC Reg w/Parity CL7 DIMM Dual Rank, x4 w/Therm Sen (4.110W operating) | $1,035.00 | $917.00 (down 11.5%) | $667.00 (down 28%; avail. 1/10) |
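For readers checking our math, the quarter-over-quarter percentages in the tables reduce to simple arithmetic – a quick sketch (prices taken from the tables above; small mismatches are rounding):

```python
# Quarter-over-quarter price changes for the benchmark DIMMs tabulated above.
def pct_change(old, new):
    return (new - old) / old * 100.0

print(f"KVR1066D3D4R7S/8G, Sep -> Dec: {pct_change(917.00, 667.00):+.1f}%")  # -27.3% (table: down 28%)
print(f"KVR1333D3D4R9S/4G, Sep -> Dec: {pct_change(151.00, 135.99):+.1f}%")  # -9.9%  (table: down 10%)
print(f"KVR667D2D4P5/4G,   Jun -> Sep: {pct_change(80.00, 103.00):+.1f}%")   # +28.7% (table: up 29%)
```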

As the year ends, OEMs are expected to “pull up inventory,” according to DRAMeXchange, in advance of a predicted market shortfall somewhere in Q2/2010. Demand for greater memory capacity is being driven by Windows 7 and 64-bit processors, with 4GB the well-established minimum system footprint ending 2009. With Server 2008 systems demanding 6GB+ and an increasing shift toward large-memory-footprint virtualization servers and blades, the market price for DDR3 – just turning the corner against DDR2 in Q1/2010 – will likely flatten based on growing demand.

SOLORI’s Take: With Samsung and Hynix doubling CAPEX spending in 2010, we’d be surprised to see anything more than a 30% drop in retail 4GB and 8GB server memory by Q3/2010 given the anticipated demand. That puts 8GB DDR3/1066 at $470/stick versus $330 for 2x 4GB – on track with our August 2009 estimates. The increase in compute, I/O and memory densities in 2010 will be market changing, and memory demand will play a small (but significant) role in that development.

In the battle to “feed” the virtualization servers of 2H/2010, the 4-channel “behemoth” Magny-Cours system could have a serious memory/price advantage: 8-DIMM (2-DPC) or 12-DIMM (3-DPC) per-socket configurations deliver 64GB (2.6GB/thread) or 96GB (3.9GB/thread) of DDR3/1066 using only 4GB sticks (assuming a 2P configuration). Similar GB/thread loads on Nehalem-EP6 “Gulftown” (6-core/12-thread) could be had with 72GB DDR3/800 (18x 4GB, 3-DPC) or 96GB DDR3/1066 (12x 8GB, 2-DPC), presenting the solution architect with a choice between a performance crunch (memory bandwidth) and a price crunch (about $2,900 more). This means Magny-Cours could show a $2-3K price advantage (per system) versus Nehalem-EP6 in $/VM-optimized VDI implementations.
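To make the GB/thread and memory-cost arithmetic explicit, here is a minimal sketch using the December spot prices from the tables at the top of this post (our in-text dollar figures reflect street pricing at the time, so the totals below will not match to the penny):

```python
# GB/thread and memory cost for the 2P configurations discussed above.
# Thread counts: Magny-Cours = 2 x 12 cores (no SMT); "Gulftown" = 2 x 6
# cores x 2 threads (SMT). Prices: Dec '09 spot, from the tables above.
PRICE_4GB = 137.59  # KVR1066D3D4R7S/4G
PRICE_8GB = 667.00  # KVR1066D3D4R7S/8G

configs = [
    # (label, threads, DIMM count, DIMM size GB, DIMM price)
    ("Magny-Cours, 16x 4GB (2-DPC)", 24, 16, 4, PRICE_4GB),
    ("Magny-Cours, 24x 4GB (3-DPC)", 24, 24, 4, PRICE_4GB),  # post rounds to 3.9GB/thread
    ("Gulftown,    18x 4GB (3-DPC)", 24, 18, 4, PRICE_4GB),
    ("Gulftown,    12x 8GB (2-DPC)", 24, 12, 8, PRICE_8GB),
]

for label, threads, count, size, price in configs:
    total = count * size
    print(f"{label}: {total}GB, {total / threads:.1f} GB/thread, "
          f"memory ${count * price:,.0f}")
```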

Where the rubber starts to meet the road, from a virtualization context, is with the (unannounced) Nehalem-EP8 (8-core/16-thread), which would need 96GB (12x 8GB, 2-DPC) to maintain 2.6GB/thread parity with Magny-Cours. This creates a memory-based price differential – in Magny-Cours’ favor – of about $3K per system/blade in the 2P space. At the high end (3.9GB/thread), the EP8 system would need a full 144GB (running at DDR3/800 timing) to maintain GB/thread parity with 2P Magny-Cours – creating a $5,700 system price differential and possibly a good reason why we’ll not actually see an 8-core/16-thread variant of Nehalem-EP in 2010.

Assuming that EP8 has 30% greater thread capacity than Magny-Cours (32 threads versus 24 threads, 2P system), a $5,700 difference in system price would require a 2P Magny-Cours system to cost about $19,000 just to make it an even value proposition. We’d be shocked to see an MC processor priced above $2,600/socket, putting the target system price in the $8-9K range (24-core, 2P, 96GB DDR3/1066). That said, with VDI growth on the move, a 4GB/thread baseline is not unrealistic (4 VMs/thread at 1GB per virtual desktop) given current best practices. If our numbers are conservative, that’s a $100 equipment cost per virtual desktop – about 20% less than today’s 2P equivalents in the VDI space. In retrospect, this realization makes VMware’s decision to license VDI per concurrent user and NOT per socket a very forward-thinking one!
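The break-even figure falls straight out of the thread-capacity assumption; a quick sanity check in code:

```python
premium = 5700           # EP8's memory-driven system price premium ($)
thread_advantage = 0.30  # 32 threads vs 24 threads, per the paragraph above

# Even value proposition: P * (1 + thread_advantage) == P + premium
break_even = premium / thread_advantage
print(f"Break-even 2P Magny-Cours system price: ${break_even:,.0f}")  # $19,000

# Equipment cost per virtual desktop at the expected $8-9K system price,
# assuming 4 VMs/thread on 24 threads (96 desktops per box):
for system_price in (8000, 9000):
    print(f"${system_price / (24 * 4):,.0f} per desktop")  # $83-$94, call it ~$100
```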

Of course, we’re talking about rack servers and double-size, non-standard blades here: after all, where can we put 24 DIMM slots (2P, 3-DPC, 4-channel memory) on a SFF blade? Vendors will have a hard enough time with 8-DIMM-per-processor (2P, 2-DPC, 4-channel memory) configurations today. Plus, all that dense compute and I/O will need to get out of the box somehow (10GE, IB, etc.). It’s easy to see that HPC and virtualization platform demands are converging, and we think that’s good for both markets.

SOLORI’s 2nd Take: Why does the 8GB stick require less power than the 4GB stick at the same speed and voltage? The 4GB stick is based on 36x 256M x 4-bit DDR3-1066 FBGAs (60nm) and the 8GB stick on 36x 512M x 4-bit DDR3-1066 FBGAs (likely 50nm). According to Samsung, the smaller feature size offers nearly 40% improvement in power consumption (per FBGA). Since the sticks use the same number of FBGA components (1Gb vs. 2Gb), the roughly 20% stick-level power savings seems reasonable.
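Using the operating-power figures from the tables above, the stick-level savings works out like so:

```python
p_4gb = 5.085  # W operating, KVR1066D3D4R7S/4G (from the table above)
p_8gb = 4.110  # W operating, KVR1066D3D4R7S/8G
print(f"{(p_4gb - p_8gb) / p_4gb:.0%} lower power at double the density")  # ~19%
```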

The prospect of lower power at higher memory densities will drive additional market share to modules based on 2Gb DRAM chips. The gulf between DDR2 and DDR3 will continue to expand as tooling shifts to majority-DDR3 production. While minority leader Hynix announced a 50nm 2Gb DDR2 part earlier this year (2009), chip giant Samsung continues to use 60nm for its 2Gb DDR2. Recently, Hynix announced successful validation of its 40nm-class 2Gb DDR3 part operating at 1333MHz and saving up to 40% power over the 50nm design. Similarly, Samsung is leading the DRAM arms race with 30nm, 4Gb DDR3 production that will show up in 1.35V, 16GB UDIMMs and RDIMMs in 2010, offering additional power savings over 40-50nm designs. Meanwhile, Samsung has all but abandoned advances on DDR2 feature sizes.

The writing is on the wall for DDR2 systems: unit costs are rising, demand is shrinking, research is stagnant and a new wave of DDR3-based hardware is just over the horizon (1H/2010). As demand and DDR3’s advantages heat up in late 2010, these factors will show the door to DDR2-based systems (which enjoyed a brief resurgence in 2009 due to DDR3 supply problems and marginal power differences) – and kudos to AMD for calling the adoption curve spot-on!


Quick Take: Nehalem/Istanbul Comparison at AnandTech

October 7, 2009

Johan De Gelas and crew present an interesting comparison of Dunnington, Shanghai, Istanbul and Nehalem in a new post at AnandTech this week. In the test line-up are the “top bin” parts from Intel and AMD in 4-core and 6-core incarnations:

  • Intel Nehalem-EP Xeon, X5570 2.93GHz, 4-core, 8-thread
  • Intel “Dunnington” Xeon, X7460, 2.66GHz, 6-core, 6-thread
  • AMD “Shanghai” Opteron 2389/8389, 2.9GHz, 4-core, 4-thread
  • AMD “Istanbul” Opteron 2435/8435, 2.6GHz, 6-core, 6-thread

Most important for virtualization systems architects is how vCPU scheduling affects “measured” performance. The telling piece comes from the difference in comparison results when vCPU scheduling is equalized:

[Figure: AnandTech's Quad Sockets v. Dual Sockets Comparison, Oct 6, 2009.]

When comparing the results, De Gelas hits on the I/O factor which chiefly separates VMmark from vAPUS:

The result is that VMmark with its huge number of VMs per server (up to 102 VMs!) places a lot of stress on the I/O systems. The reason for the Intel Xeon X5570’s crushing VMmark results cannot be explained by the processor architecture alone. One possible explanation may be that the VMDq (multiple queues and offloading of the virtual switch to the hardware) implementation of the Intel NICs is better than the Broadcom NICs that are typically found in the AMD based servers.

Johan De Gelas, AnandTech, Oct 2009

This is yet another issue that VMware architects struggle with in complex deployments. Latency in “Dunnington” is a huge contributor to its downfall and a reason the Penryn architecture was a dead end. Combined with 8 additional threads in the 2P form factor, Nehalem delivers twice as many hardware execution contexts as Shanghai, resulting in significant efficiencies for Nehalem where small working data sets are involved.

When larger sets are used – as in vAPUS – Istanbul’s additional cores allow it to close the gap to within the clock speed difference of Nehalem (about 12%). In contrast to VMmark, which implies a 3:2 advantage for Nehalem, the vAPUS results suggest a closer performance gap in more aggressive virtualization use cases.

SOLORI’s Take: We differ with De Gelas on the reduction of vAPUS’ data set to accommodate the “cheaper” memory build of the Nehalem system. While this offers some advantages in testing, it also diminishes one of Opteron’s greatest strengths: access to cheap and abundant memory. Here we have the testing conundrum: fit the test around the competitors, or the competitors around the test. The former approach biases toward the “pure performance” aspect of the competitors, while the latter is more typical of use-case testing.

We do not construe this issue as intentional bias on AnandTech’s part; however, it is another vector to consider in the evaluation of the results. De Gelas delivers a report worth reading in its entirety, and we view it as a primer on the issues that will define the first half of 2010.


Quick Take: Dell/Nehalem Take #2, 2P VMmark Spot

September 9, 2009

The new first-runner-up spot for VMmark in the “8 core” category was taken yesterday by Dell’s R710 – just edging out the previous second-place HP ProLiant BL490 G6 by 0.1% – a virtual dead heat. Equipped with a pair of Xeon X5570 processors ($1,386/ea, bulk list) and 96GB registered DDR3/1066 (12x 8GB), the 2U rack-mount R710 weighs in with a tile ratio of 1.43 over 102 VMs:

  • Dell R710 w/redundant high-output power supply ($18,209)
  • 2 x Intel Xeon X5570 Processors (included)
  • 96GB ECC DDR3/1066 (12×8GB) (included)
  • 2 x Broadcom NetXtreme II 5709 dual-port Gigabit Ethernet w/TOE (included)
  • 1 x Intel PRO/1000 VT quad-port Gigabit Ethernet (1x PCIe-x4 slot, $529)
  • 3 x QLogic QLE2462 FC HBA (1x PCIe slot, $1,219/ea)
  • 1 x LSI1078 SAS Controller (on-board)
  • 8 x 15K SAS OS drive, RAID10 (included)
  • Required ProSupport package ($2,164)
  • Total as Configured: $24,559 ($241/VM, not including storage)

Three Dell/EMC CX3-40f arrays were used as the storage backing for the test. The storage system included 8GB cache, 2 enclosures and fifteen 15K disks per array, delivering 19 LUNs at about 300GB each. Intel’s Hyper-Threading and “Turbo Boost” were enabled – for 8 threads per socket and 3.33GHz core clocking – as was VT; however, embedded SATA and USB were disabled, as is common practice.

At about $1,445/tile ($241/VM), the new “second dog” delivers its best at a 20% price premium over Lenovo’s “top dog” – although the non-standard OS drive configuration makes up half of the difference, with Dell’s mandatory support package making up the remainder. Using a simple RAID1 SAS pair and eliminating the support package would have dropped the cost to $20,421 – a dead heat with Lenovo at $182/VM.

Comparing the Dell R710 to the 2P, 12-core benchmark HP DL385 G6 Istanbul system at 15.54@11 tiles:

  • HP DL385 G6 ($5,840)
  • 2 x AMD 2435 Istanbul Processors (included)
  • 64GB ECC DDR2/667 (8×8GB) ($433/DIMM)
  • 2 x Broadcom 5709 dual-port Gigabit Ethernet (on-board)
  • 1 x Intel 82571EB dual-port Gigabit Ethernet (1x PCIe slot, $150/ea)
  • 1 x QLogic QLE2462 FC HBA (1x PCIe slot, $1,219/ea)
  • 1 x HP SAS Controller (on-board)
  • 2 x SAS OS drive (included)
  • $10,673/system total (versus $14,696 complete from HP)

Direct pricing shows Istanbul’s numbers at $1,336/tile ($223/VM), which is a 7.5% savings per VM over the Dell R710. Going to the street – for memory only – changes the Istanbul picture to $970/tile ($162/VM), representing a 33% savings over the R710.
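Since VMmark runs six VMs per tile, the $/tile and $/VM figures quoted throughout this post reduce to a one-liner; here is a sketch over the builds discussed so far:

```python
# $/tile and $/VM for a VMmark result: 17 tiles = 102 VMs, 11 tiles = 66 VMs.
def cost_breakdown(system_price, tiles, vms_per_tile=6):
    return system_price / tiles, system_price / (tiles * vms_per_tile)

for name, price, tiles in [
    ("Dell R710 (Nehalem, as configured)", 24559, 17),
    ("HP DL385 G6 (Istanbul, HP pricing)", 14696, 11),
    ("HP DL385 G6 (Istanbul, street memory)", 10673, 11),
]:
    per_tile, per_vm = cost_breakdown(price, tiles)
    print(f"{name}: ${per_tile:,.0f}/tile, ${per_vm:,.0f}/VM")
```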

SOLORI’s Take: Istanbul continues to offer a 20-30% CAPEX value proposition against Nehalem in the virtualization use case – even without the IOMMU and higher memory bandwidth promised in the upcoming Magny-Cours. With the HE parts running around $500 per processor, the OPEX benefits are there for Istanbul too. It is difficult to understand why HP wants to charge $900/DIMM for 8GB PC2-5300 sticks when they are available on the street for 50% less – that’s a 100% markup. Looking at what HP charges for 8GB DDR3/1066 – $1,700/DIMM – they are at least consistent. HP’s memory pricing practice makes one thing clear: customers are not buying large memory configurations from their system vendors…

On the contrary, Dell appears happy to offer decent prices on 8GB DDR3/1066 with its R710, at approximately $837/DIMM – almost on par with street prices. Looking to see if this parity held up with Dell’s AMD offerings, we examined the prices offered with Dell’s R805: while – at $680/DIMM – Dell’s prices were significantly better than HP’s, they still exceeded the market by 50%. Still, we were able to configure a Dell R805 with AMD 2435s for much less than the equivalent HP system:

  • Dell R805 w/redundant power ($7,214)
  • 2 x AMD 2435 Istanbul Processors (included)
  • 64GB ECC DDR2/667 (8×8GB) ($433/ea, street)
  • 4 x Broadcom 5708 Gigabit Ethernet (on-board)
  • 1 x Intel PRO/1000 PT dual-port Gigabit Ethernet (1x PCIe slot, included)
  • 1 x QLogic QLE2462 FC HBA (1x PCIe slot, included)
  • 1 x Dell PERC SAS Controller (on-board)
  • 2 x SAS OS drive (included)
  • $10,678/system total (versus $12,702 complete from Dell)

This offering from Dell should be able to deliver performance equivalent to HP’s DL385 G6 and similar savings per VM compared to the Nehalem-based R710. Even at the $12,702 price as delivered from Dell, the R805 represents a potential $192/VM price point – about $50/VM savings over the R710 (which costs roughly 25% more per VM).


Shanghai Economics 101 – Conclusion

May 6, 2009

In past entries, we’ve looked only at the high-end processors as applied to system prices, and we’ll continue to use those as references through the end of this one. We’ll take a look at other price/performance tiers in a later blog, but we want to finish up on the same footing as we began: with an eye to how these systems play in a virtualization environment.

We decided to finish this series with an analysis of real-world application instead of just theory. We keep seeing 8-to-1, 16-to-1 and 20-to-1 consolidation ratios (VM-to-host) being offered as “real world” in today’s environment, so we wanted to analyze what that means from an economic standpoint.

The Fallacy of Consolidation Ratios

First, consolidation ratios that speak in terms of VM-to-host are not very informative. For instance, a 16-to-1 consolidation ratio sounds good until you realize it was achieved on a $16,000 4Px4C platform. That ratio results in a $1,000-per-VM cost to the consolidator.

In contrast, take the same 16-to-1 ratio on a $6,000 2Px4C platform and it results in a $375-per-VM cost to the consolidator: a savings of over 60%. The key to the savings is the vCPU-to-core consolidation ratio (provided sufficient memory exists to support it). In the first example that ratio was 1:1; in the second it is 2:1. Can we find 16:1 vCPU-to-core ratios out there? Sure, in test labs – but in the enterprise we think the valid range of vCPU-to-core consolidation ratios is much more conservative, ranging from 1:1 to 8:1, with the average (or sweet spot) falling somewhere between 3:1 and 4:1.
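The arithmetic is trivial, but worth making explicit:

```python
# Cost-per-VM for the two examples above. Same 16 VMs; only the platform
# price (and thus the vCPU-to-core ratio) changes.
def cost_per_vm(platform_price, vm_count):
    return platform_price / vm_count

print(cost_per_vm(16_000, 16))  # $1,000/VM on the 4Px4C box (1:1 vCPU-to-core)
print(cost_per_vm(6_000, 16))   # $375/VM on the 2Px4C box (2:1 vCPU-to-core)
```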

Second, we must note that memory is a growing part of the virtualization equation. Modern operating systems no longer “sip” memory, and 512MB for a Windows or Linux VM is becoming more the exception than the rule. That puts pressure on both CPU and memory capacity as driving forces for consolidation costs. As operating system “bloat” increases, administrative pressure to satisfy its needs will mount, pushing the “provisioned” amount of memory per VM ever higher.

Until “hot add” memory is part of DRS planning and the requisite operating systems support it, system admins will be forced to either overcommit memory, purchase memory based on peak needs, or purchase memory based on average needs and trust DRS to handle the balancing act. In any case, memory is a growing factor in systems consolidation and virtualization.

Modeling the Future

Using data from the University of Chicago as a baseline and extrapolating forward through 2010, we’ve developed a simple model to predict vMEM and vCPU allocation trends. This approach establishes three key metrics (already used in previous entries) that determine/predict system capacity: Average Memory/VM (vMVa), Average vCPU/VM (vCVa) and Average vCPU/Core (vCCa).

Average Memory per VM (vMVa)

Average memory per VM is determined by taking the allocated memory of all VMs in a virtualized system – across all hosts – and dividing it by the total number of VMs in the system (not including non-active templates). This number is assumed to grow as virtualization moves from consolidation to standardized deployment.
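In code form, the metric is as simple as it sounds (the input here is a hypothetical list of per-VM memory allocations in MB):

```python
def vmva(allocated_mb):
    """Average Memory/VM (vMVa): mean allocated memory across active VMs."""
    return sum(allocated_mb) / len(allocated_mb)

# Four active VMs, allocations in MB; templates excluded by construction.
print(vmva([512, 1024, 2048, 4096]))  # 1920.0 MB
```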


Clarification: Nehalem-EP and DDR3

April 29, 2009

I have seen a lot of contrasting comments about Nehalem-EP and memory speed in the community groups – especially in the area of supported speed ratings, often in the context of comparisons to Opteron’s need to reduce supported DIMM speed ratings based on slot population. While it is true that Nehalem’s 3-channel design allows for either performance (800/1066/1333) or capacity, it does not allow for both at the same time.

Here are the rules (from Intel’s “Intel Xeon Processor 5500 Series Datasheet, Volume 2”) based on DIMMs per Channel (DPC):

  • 1-DPC = Supports DDR3-1333 (if the DIMM supports DDR3-1333)
    • KVR1333D3D4R9S/4G – $169/ea
    • 12GB/CPU max. @ $507/CPU (24GB/system max.)
  • 2-DPC = Supports DDR3-1066 (if all DIMMs are rated DDR3-1066 or higher)
    • KVR1066D3D4R7S/4G – $138/ea
    • 24GB/CPU max. @ $828/CPU (48GB/system max.)
    • KVR1066D3Q4R7S/8G – $1,168/ea
    • 48GB/CPU max. @ $7,008/CPU (96GB/system max.)
    • “96GB Memory (12x8GB), 1066MHz Dual Ranked RDIMMs for 2 Processors,Optimized [add $15,400]” – Dell
  • 3-DPC = Supports DDR3-800 only (if all DIMMs are rated DDR3-800 or higher)
    • KVR1066D3D4R7S/4G – $138/ea
    • 36GB/CPU max. @ $1,242/CPU (72GB/system max.)
    • “144GB Memory (18x8GB), 800MHz Dual Ranked RDIMMs for 2 Processors,Optimized [add $22,900]” – Dell

When the IMC detects the presence of 1, 2 or 3 DIMMs per channel, the corresponding speed limit is imposed regardless of the capabilities of the DIMMs. A couple of other notable exceptions exist:

  • When a single quad-rank DIMM is used in a channel, it must be populated in DIMM slot 0 of that channel (farthest from the CPU);
  • Mixing quad-rank DIMMs in one channel with a 3-DPC population in another channel on the same CPU socket is not allowed – forcing the BIOS to disable the quad-rank channel;
  • RDIMM
    • Single-rank DIMM: 1-DPC, 2-DPC or 3-DPC
    • Dual-rank DIMM: 1-DPC, 2-DPC or 3-DPC
    • Quad-rank DIMM: 1-DPC or 2-DPC
  • UDIMM
    • Single-rank DIMM: 1-DPC or 2-DPC
    • Dual-rank DIMM: 1-DPC or 2-DPC
    • Quad-rank DIMM: n/a

Speed freaks be warned!
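For the configuration-minded, the core rule reduces to a simple lookup. A toy sketch – assuming registered DIMMs, all rated at or above the resulting speed, and ignoring the quad-rank exceptions noted above:

```python
def nehalem_ep_speed(dpc, dimm_rating=1333):
    """Imposed DDR3 speed (MHz) for a given DIMMs-per-channel population."""
    imc_limit = {1: 1333, 2: 1066, 3: 800}[dpc]  # per Intel's 5500-series datasheet
    return min(imc_limit, dimm_rating)

for dpc in (1, 2, 3):
    print(f"{dpc}-DPC -> DDR3-{nehalem_ep_speed(dpc)}")
# 1-DPC -> DDR3-1333, 2-DPC -> DDR3-1066, 3-DPC -> DDR3-800
```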