Posts Tagged ‘x7460’


NEC Offers “Dunnington” Liposuction, Tops 64-Core VMmark

November 19, 2009

NEC’s venerable Express5800/A1160 is back at the top of the VMmark chart, this time establishing the brand-new 64-core category with a score of 48.23@32 tiles – surpassing its 48-core 3rd-place posting by over 30%. NEC’s new 16-socket, 64-core, 256GB “Dunnington” X7460 Xeon-based score represents a big jump in performance over its predecessor, with a per-tile ratio of 1.507 – up 6% from the 48-core ratio of 1.419.
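For readers keeping score, the per-tile figures are simple division: the VMmark score divided by the number of tiles. Here is a quick Python sketch of that arithmetic, using only the published results cited in this post (the helper function is illustrative, not part of the benchmark):

def tile_ratio(score, tiles):
    # VMmark results are reported as "score@tiles"; the per-tile ratio is score / tiles
    return score / tiles

ratio_64core = tile_ratio(48.23, 32)            # ~1.507 for the new 64-core run
ratio_48core = tile_ratio(34.05, 24)            # ~1.419 for the earlier 48-core run
gain = (ratio_64core / ratio_48core - 1) * 100  # ~6% improvement per tile

print(round(ratio_64core, 3), round(ratio_48core, 3), round(gain, 1))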

To put this into perspective, the highest VMmark achieved to date is the score of 53.73@35 tiles (tile ratio 1.535) from the 48-core HP DL785 G6 in August 2009. If you are familiar with the “Dunnington” X7460, you know that it’s a 6-core, 130W giant with 16MB of L3 cache and a 1,000-unit price just south of $3,000 per socket. So that raises the question: how does 6 cores × 16 sockets = 64? Well, it’s not pro-rationing from the Obama administration’s “IT fairness” czar. NEC chose to disable the 4th and 6th core of each socket, reducing the working cores from 96 to 64.

At $500/core (roughly the $3,000 socket price spread across its six cores), NEC’s gambit may represent an expensive form of “core liposuction,” but it was a necessary one to meet VMware’s “logical processor per host” limitation of 64. That’s right: VMware’s vSphere currently places a limit on logical processors based on the following formula:

CPU_Sockets × Cores_Per_Socket × Threads_Per_Core <= 64

According to VMware, the other 32 cores would have been “ignored” by vSphere had they been enabled. Since “ignored” is a nebulous term (aka “undefined”), NEC did the “scientific” thing by disabling 32 cores and calling the system a 64-core server. The win here: a net 6% improvement in performance per tile over the six-cores-per-socket configuration – ostensibly from the reduced core loading on the 16MB of L3 cache per socket and reduced memory bus contention.
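To make that constraint concrete, here is a minimal Python sketch of the logical-processor math that forced NEC’s hand (the function and names are mine, not VMware’s):

VSPHERE_LCPU_LIMIT = 64   # vSphere's logical-processor-per-host cap discussed above

def logical_cpus(sockets, cores_per_socket, threads_per_core=1):
    # Logical CPUs = sockets x cores per socket x hardware threads per core
    return sockets * cores_per_socket * threads_per_core

full_dunnington = logical_cpus(16, 6)   # all 96 cores enabled -> over the cap
trimmed_config  = logical_cpus(16, 4)   # two cores disabled per socket -> exactly 64

print(full_dunnington, full_dunnington <= VSPHERE_LCPU_LIMIT)   # 96 False
print(trimmed_config, trimmed_config <= VSPHERE_LCPU_LIMIT)     # 64 True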

Moving forward to 2010, what does this mean for vSphere hardware configurations in the wake of 8-core, 16-thread Intel Nehalem-EX and 12-core, 12-thread AMD Magny-Cours processors? With Magny-Cours limited to 4-socket systems, we won’t be seeing any VMmarks from the boys in green beyond 48 cores. Likewise, the boys in blue will be trapped by a VMware limitation (albeit a somewhat arbitrary and artificial one) into a 4-socket, 64-thread (HT) configuration or an 8-socket, 64-core (HT-disabled) configuration for their Nehalem-EX platform – even the six-core variant of EX exceeds the cap with Hyper-Threading enabled. Looks like VMware will need to lift the 64-LCPU embargo by Q2/2010 just to keep up.
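Running the same arithmetic over the 2010 parts mentioned above shows why the cap starts to pinch; the configurations below are the ones discussed in this post, and the snippet is only a back-of-the-envelope check:

# (sockets, cores per socket, threads per core) for the configurations discussed above
configs = {
    "Magny-Cours, 4-socket":        (4, 12, 1),   # 48 logical CPUs - fits
    "Nehalem-EX, 4-socket, HT on":  (4, 8, 2),    # 64 - right at the cap
    "Nehalem-EX, 8-socket, HT off": (8, 8, 1),    # 64 - right at the cap
    "Nehalem-EX, 8-socket, HT on":  (8, 8, 2),    # 128 - over the cap
    "Six-core EX, 8-socket, HT on": (8, 6, 2),    # 96 - still over the cap
}

for name, (sockets, cores, threads) in configs.items():
    lcpus = sockets * cores * threads
    verdict = "fits" if lcpus <= 64 else "exceeds the 64-LCPU limit"
    print(f"{name}: {lcpus} logical CPUs ({verdict})")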


NEC Adds Top 48-Core, Dell Challenges 24-Core in VMmark Race

July 29, 2009

NEC’s venerable Express5800/A1160 tops the 48-core VMmark category today with a score of 34.05@24 tiles, wresting the title away from IBM, which established the category back in June 2009. NEC’s new “Dunnington” X7460 Xeon-based score represents a performance-per-tile ratio of 1.41 and a tile-to-core efficiency of 50% using 128GB of ECC DDR2 RAM.

Compared to the leading 24-core “Dunnington” result – held by IBM’s x3850 M2 at 20.41@14 tiles – the NEC benchmark sets a scalability factor of 85.7% (24 tiles versus a doubled 28) when moving from 4-socket to 8-socket systems. Both the NEC and IBM servers are scalable systems that allow multiple chassis to be interconnected for greater CPU-per-system counts – each scaling in 4-CPU increments – ostensibly for OLTP advantages. The NEC starts at around $70K for 128GB and 48 cores, resulting in a $486/VM cost for the VMmark configuration.
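The scalability and cost figures fall out of simple arithmetic; here is a sketch, assuming the quoted ~$70K street price and the standard six workload VMs per VMmark tile:

TILES_8SOCKET, TILES_4SOCKET = 24, 14   # NEC 48-core vs IBM 24-core results above
VMS_PER_TILE = 6                        # each VMmark tile comprises six workload VMs

scalability = TILES_8SOCKET / (2 * TILES_4SOCKET)        # 24 / 28 ~= 85.7%
cost_per_vm = 70_000 / (TILES_8SOCKET * VMS_PER_TILE)    # 70,000 / 144 ~= $486

print(f"{scalability:.1%}", f"${cost_per_vm:.0f}/VM")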

Also released today, Dell’s PowerEdge R905 – with 24 2.8GHz Istanbul cores (Opteron 8439 SE) and 128GB of ECC DDR2 RAM – secures the number-two slot in the 24-core category with a posting of 29.51@20 tiles. This represents a tile ratio of 1.475 and a tile efficiency of 83.3% for the $29K rack server from Dell, at about $240/VM. Compared to its 12-core counterpart, this represents a 91% scalability factor.
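The same math for the Dell posting, again treating the ~$29K list price as a round number:

score, tiles, cores = 29.51, 20, 24

tile_ratio      = score / tiles          # ~1.475
tile_efficiency = tiles / cores          # ~83.3%
cost_per_vm     = 29_000 / (tiles * 6)   # six VMs per tile -> ~$242/VM

print(round(tile_ratio, 3), f"{tile_efficiency:.1%}", f"${cost_per_vm:.0f}/VM")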

If AMD’s Istanbul scales to 8 sockets at least as efficiently as Dunnington, we should be seeing 48-core results in the 43.8@30-tile range in the next month or so from HP’s DL785 G6 with eight AMD Opteron 8439 SE processors. You might ask: what virtualization applications scale to 48 cores when $/VM doubles at the same time? We don’t have that answer, and judging by Intel’s and AMD’s scale-by-hub designs coming in 2010, that market will need to be created at the OEM level.

Based on the performance we’re seeing in 8-socket systems relative to 4-socket systems, and with “massively multi-core” processors arriving in 2010, the law of diminishing returns seems to favor the 4-socket system as the limit for anything but massive OLTP workloads. Even then, we expect 48 cores in a “4-way” box to be more efficient than the same number of cores in an 8-way box. The choice in virtualization will continue to be workload-biased, with 2P systems offering the best “small footprint” $/VM solution and 4P systems offering the best “large footprint” $/VM solution.