Archive for the ‘Grid Computing’ Category


Red Hat Enterprise Virtualization Strategy

June 26, 2009

Red Hat’s recently updated virtualization strategy has resulted in an “oversubscribed” beta program. The world leader in open source solutions swings a big stick with its kernel-based virtualization products. Some believe one of the keys to successful large scale cloud initiatives is an open source hypervisor, and with Xen going commercial, turning to the open source veteran Red Hat seems a logical move. You may recall that Red Hat – using KVM – was the first to demonstrate live migration between AMD and Intel hosts.

“We are very pleased by the welcome we have received from enterprise companies all over the world who are looking to adopt virtualization pervasively and value the benefits of our open source solutions. Our Beta program is oversubscribed. We are excited to be in a position to deliver a flexible, comprehensive and cost-effective virtualization portfolio in which products will share a consistent hardware and software certification portfolio. We are in a unique position to deliver a comprehensive portfolio of virtualization solutions, ranging from a standalone hypervisor to a virtualized operating system to a comprehensive virtualization management product suite.”

Scott Crenshaw, vice president, Platform Business Unit at Red Hat

Red Hat sees itself as an “agent of change” in the virtualization landscape and wants to deliver a cost-effective “boxed” approach to virtualization and virtualization management. All of this hinges on Red Hat’s new KVM-based approach – enabled through its acquisition of Qumranet in September 2008 – which delivers the virtualization and management layers to Red Hat Enterprise Linux and its kernel.

Along with Qumranet came Solid ICE and SPICE. Solid ICE is the VDI component running on KVM, consisting of a virtual desktop server and a controller front end. Solid ICE allows Red Hat to rapidly enter the VDI space without disrupting its ecosystem. Additionally, the SPICE protocol (Simple Protocol for Independent Computing Environments) provides a standardized connection-protocol alternative to RDP, with enhancements for the VDI user experience.

Red Hat’s SPICE claims to offer the following features in the enterprise:

  • Superior graphics performance (e.g., Flash)
  • Video quality (30+ frames per second)
  • Bi-directional audio (for soft-phones/IP phones)
  • Bi-directional video (for video telephony/video conferencing)
  • No specialized hardware: a software-only client that can be automatically installed via ActiveX and a browser on the client machine

Red Hat’s virtualization strategy reveals more of its capabilities and depth in accompanying blogs and white papers. Adding to the vendor-agnostic migration capabilities, Red Hat’s KVM is slated to support VM hosts with up to 96 cores and 1TB of memory, with guests scaling to 16 vCPUs and 64GB of memory. Additional features include high availability, live migration, a global system scheduler, global power saving (through migration and power-down), memory page sharing, thin storage provisioning and SELinux security.
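
For readers new to KVM, here is a minimal sketch of how a live migration is typically driven from the command line with libvirt’s virsh tool on a RHEL host. The guest name “vm01” and the destination host are placeholders, and this illustrates the underlying plumbing rather than Red Hat’s management products:

  # Confirm the KVM kernel modules are loaded on the host
  lsmod | grep kvm

  # Show host CPU and memory resources as libvirt sees them
  virsh nodeinfo

  # Live-migrate a running guest ("vm01") to another KVM host over SSH
  virsh migrate --live vm01 qemu+ssh://dest-host.example.com/system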


Quick Take: Vyatta Takes Virtual Networking to Cloud

June 22, 2009

Earlier this month, Vyatta announced the completion of its Series C round of financing, resulting in US$10M in new capital led primarily by new partner Citrix. Vyatta provides an open source alternative to traditional networking vendors like Cisco, offering software and hardware solutions targeted at the same routing, firewall and VPN market otherwise served by Cisco’s 2800, 7200 and ASA lines of devices. Its software is certified to run in Xen and VMware environments.

In a related announcement, Citrix has certified Vyatta’s products for use with its Citrix Cloud Center (C3) product family to “make it as easy as possible for service providers and enterprises to use Vyatta with Citrix products such as XenDesktop, XenApp, XenServer and NetScaler.” With the addition of Citrix Delivery Systems Division GM Gordon Payne to the Vyatta board of directors, the now “closer coupling” of Citrix with Vyatta could accelerate the adoption of Vyatta in virtual infrastructures.

SOLORI’s Take: We’ve been using Vyatta’s software in lab and production applications for some time – primarily in HA routing applications where dynamic routing protocols like OSPF or BGP are needed. Virtualizing Vyatta provides additional HA capabilities to cloud environments by extending infrastructure migration from the application layer all the way down to layer 3. In applications where it is a good fit, Vyatta provides an excellent solution component for the 100% virtualized environment.
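
For context, basic dynamic routing on a Vyatta instance is a short exercise at its CLI. The sketch below enables OSPF on a single interface; the interface name and addresses are illustrative only and should be checked against Vyatta’s documentation for the release in use:

  configure
  set interfaces ethernet eth0 address 192.168.10.1/24
  set protocols ospf parameters router-id 192.168.10.1
  set protocols ospf area 0 network 192.168.10.0/24
  commit
  save
  exit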


Quick Take: PC Pro Recommends 4-node-in-2U Platform

June 17, 2009

Boston Limited UK has recently received a “recommended” rating from PC Pro UK for its 4-node-in-2U platform with AMD’s Istanbul processor on board. Dubbed the “Boston Quattro 6000GP” and following up on the 2-node-in-1U “Boston 3000GP” platform, this system allows for 4 nodes with 2x AMD Istanbul processors per node. This formula yields 8 processors (48 cores) in 2U, resulting in a core density of over 1,000 cores per standard 42U rack.
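
The density math is easy to verify (a back-of-the-envelope check that assumes every 2U slot of a 42U rack is filled with one of these chassis):

  # 4 nodes x 2 sockets x 6 cores = 48 cores per 2U chassis; 21 chassis per 42U rack
  echo $(( 4 * 2 * 6 * (42 / 2) ))   # => 1008 cores per rack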

Computational density like this is bound for virtualization and HPC clusters. Judging from the recent reports on Istanbul’s virtualization potential and HPL performance, this combination offers a compelling platform alternative to blade computing. In its review, PC Pro UK touched on the platform’s power consumption, saying:

“In idle we saw one, two, three and four nodes draw a total of 234W, 349W, 497W and 630W. Under pressure these figures rose to 345W, 541W, 802W and 1026W respectively. Even if you could find an application that pushed the cores this hard you’ll find each server node draws a maximum of 256W – not bad for a 12-core system. Dell’s PowerEdge R900, reviewed in our sister title IT Pro, has four 130W X7450 six-core processors and that consumes 778W under heavy load.”

PC Pro, UK



Advanced Clustering HPL Comparison: Istanbul vs Nehalem

June 17, 2009

Advanced Clustering Technologies, based in Kansas City, KS and specializing in HPC solutions, has just released a High-Performance Linpack (HPL) performance report comparing “equivalent” Xeon X5550 and Opteron 2435 systems. According to Advanced Clustering, their goal was “to show the peak performance in terms of GFLOPS (billion floating point operations per second)” of the comparison systems.

In their tests, Advanced Clustering attempted to keep platform specifications as uniform as possible (OS, power supply, hard drive). Due to Nehalem’s triple-channel memory, differing amounts of memory were used in the comparison; Advanced Clustering compensated by adjusting the problem size in an attempt to utilize 100% of each system’s available RAM.

The results showed that AMD’s Istanbul delivers 15% more GFLOPS at a 30% savings in effective system cost ($/GFLOP). Advanced Clustering comments that while Istanbul delivered a higher GFLOP rating than Nehalem, it did so at only 79% of its theoretical potential due to the weaker memory bandwidth of the Socket F system. From our conversation with AMD’s Mike Goddard, we are told that a lot of Istanbul’s potential – including much higher memory bandwidth – will be realizable only in its Socket G34 incarnation. By that time, the comparison will likely be between Intel’s 8-core Nehalem-EX and AMD’s 12-core Magny-Cours product.
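
To put the 79% figure in perspective, theoretical HPL peak is normally computed as sockets x cores x clock x floating-point operations per cycle. A rough sketch for the 2P Opteron 2435 system, assuming a 2.6GHz clock and 4 double-precision FLOPs per core per cycle for this processor generation (these figures are our assumptions, not taken from the report):

  # sockets x cores x GHz x DP FLOPs/cycle = theoretical peak GFLOPS
  awk 'BEGIN { peak = 2 * 6 * 2.6 * 4; print peak, peak * 0.79 }'
  # => 124.8 98.592 (theoretical peak, and delivered GFLOPS at 79% efficiency)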


Quick Take: Oracle to Buy Virtual Iron

May 14, 2009

Oracle extended its spring buying spree by announcing the purchase of Virtual Iron Software, Inc (Virtual Iron) on May 13, 2009. Citing Virtual Iron’s “dynamic resource and capacity management” capabilities as the reason in their press release, Oracle intends to fill gaps in its Xen-based Oracle VM product (available as a free download).

Ironically, Virtual Iron’s product focus is SMB. According to a Butler Group technology audit, Virtual Iron “has one limitation that [they] believe will impact potential customers: the management console currently can only manage 120 nodes.” However, Virtual Iron’s “VI-Center” – the management piece cited as the main value proposition by Butler and Oracle – is based on a client-server Java application, making it a “good fit” with the recent Oracle acquisition of Sun Microsystems.

Oracle has not announced plans for Virtual Iron, pending the conclusion of the deal. Oracle’s leading comment:

“Industry trends are driving demand for virtualization as a way to reduce operating expenses and support green IT strategies without sacrificing quality of service,” said Wim Coekaerts, Oracle Vice President of Linux and Virtualization Engineering. “With the addition of Virtual Iron, Oracle expects to enable customers to more dynamically manage their server capacity and optimize their power consumption. The acquisition is consistent with Oracle’s strategy to provide comprehensive enterprise software management and will facilitate more efficient management of application service levels.”

SOLORI’s take: If the deal goes through, Oracle has found an immediate job for its newly acquired Sun Java engineers – getting VI-Center ready for enterprise computing. Currently, Oracle VM is a “barebones” product with very little value beyond its intrinsic functionality. With the acquisition of Virtual Iron and its management piece, Oracle/Sun could produce a self-sufficient virtualization ecosystem with Oracle VM augmented by Virtual Iron, Sun Storage, a choice of Oracle or MySQL databases, and commodity (or Sun) hardware – all vetted for Oracle’s application stack.

Virtual Iron was supposedly working on Hyper-V and KVM (Red Hat’s choice of virtualization) management features. Though we doubt that Oracle VM will evolve into a truly “virtualization agnostic” product, the promise of such a capability is the stuff of “cloud computing.” Sun’s VDI and xVM server group will have a lot of work to do this summer…


Discover IOV and VMware NetQueue on a Budget

April 28, 2009

While researching advancements in I/O virtualization (VMware), we uncovered a “low cost” way to explore the advantages of IOV without investing in 10GbE equipment: the Intel 82576 Gigabit Network Controller, which supports 8 receive queues per port. This little gem comes in a 2-port by 1Gbps PCI-express package (E1G142ET) for around $170/ea on-line. It also comes in a 4-port by 1Gbps package (full or half-height, E1G144ET) for around $450/ea on-line.

Enabling VMDq/NetQueue is straightforward:

  1. Enable NetQueue in VMkernel using VMware Infrastructure 3 Client:
    1. Choose Configuration > Advanced Settings > VMkernel.
    2. Select VMkernel.Boot.netNetqueueEnabled.
  2. Enable the igb module in the service console of the ESX Server host:
    # esxcfg-module -e igb
  3. Set the required load option for igb to turn on VMDq (see the consolidated sketch after this list):
    The option IntMode=3 must exist to indicate loading in VMDq mode. A value of 3 for the IntMode parameter specifies using MSI-X and automatically sets the number of receive queues to the maximum supported (devices based on the 82575 Controller enable 4 receive queues per port; devices based on the 82576 Controller enable 8 receive queues per port). The number of receive queues used by the igb driver in VMDq mode cannot be changed.
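
Pulling the service-console steps together, the sequence looks roughly like the following sketch. The exact IntMode option string is an assumption (one value per port is shown for a 2-port card) and should be verified against the Intel/VMware documentation for the driver version in use:

  # Enable the igb module on the ESX host
  esxcfg-module -e igb

  # Load igb in VMDq mode using MSI-X (one IntMode value per port)
  esxcfg-module -s "IntMode=3,3" igb

  # Confirm the module and its configured options
  esxcfg-module -l | grep igb
  esxcfg-module -g igb

  # Reboot the host so the new load options take effect
  reboot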

Quick Take: VMware – Shanghai vs. Nehalem-EP

April 26, 2009

Johan De Gelas at AnandTech has an interesting article comparing a 2P Shanghai (2384, 2.7GHz) vs. a 2P Nehalem-EP (X5570, 2.93GHz), and the comparison in VMmark is stunning… until you do your homework and reference the results. Johan is comparing the VMmark of a 64GB-configured 2P Opteron running ESX 3.5 Update 3 against a 72GB-configured 2P Nehalem-EP running vSphere (ESX v4.0).

When I see benchmarks like these quoted by AnandTech, I start to wonder why they consider the results “analytical…” In any case, larger memory pools and higher clock speeds have significant ramifications in VMmark, and these results bear that out. Additionally, the results also seem to indicate:

  • VMware vSphere (ESX v4.0) takes serious advantage of the new hyperthreading in Nehalem-EP
  • Nehalem-EP’s TurboBoost appears to tip the value proposition in favor of the X5570 over the W5580, all things considered

Judging from the Supermicro VMmark score, the Nehalem-EP (adjusted for differences in processor speed) turns in about a 6% performance advantage over the Shanghai with a comparable memory footprint. Had the Opteron been given additional memory, perhaps the tile and benchmark scores would have better illustrated this conclusion. It is unclear whether vSphere is significantly more efficient at resource scheduling, but the results seem to indicate that it is – at least with Nehalem’s new hyperthreading.
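
The clock adjustment behind that 6% figure is simple arithmetic. A quick sketch using the raw scores and TurboBoost clocks from the table below (which clock to normalize against is a judgment call):

  # Raw VMmark ratio, then normalized by the clock ratio (3.2GHz Turbo vs. 2.7GHz)
  awk 'BEGIN { raw = 14.22 / 11.28; adj = raw / (3.2 / 2.7);
               printf "raw: %.0f%%  clock-adjusted: %.0f%%\n", raw*100, adj*100 }'
  # => raw: 126%  clock-adjusted: 106%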

Platform                                                      | Memory         | VMware Version           | VMmark Score     | Rating (Raw/Clock Adj.) | Per Tile
HP ProLiant 385G5p (2x Opteron 2384, 2.7GHz)                  | 64GB DDR2/533  | ESX v3.5.0 Update 3      | 11.28 @ 8 tiles  | 100% / 100%             | 100%
Supermicro 6026-NTR+ (2x X5570, 2.93GHz w/3.2GHz TurboBoost)  | 72GB DDR3/1066 | ESX v3.5.0 Update 4 BETA | 14.22 @ 10 tiles | 126% / 106%             | 101%
Dell PowerEdge M610 (2x X5570, 2.93GHz w/3.3GHz TurboBoost)   | 96GB DDR3/1066 | ESX v4.0                 | 23.90 @ 17 tiles | 212% / 174%             | 100%
HP ProLiant DL370 G6 (2x W5580, 3.2GHz w/3.3GHz TurboBoost)   | 96GB DDR3/1066 | ESX v4.0                 | 23.96 @ 16 tiles | 213% / 172%             | 106%
HP ProLiant DL585 G5 (4x 8386SE, 2.8GHz)                      | 128GB DDR2/667 | ESX v3.5.0 Update 3      | 20.43 @ 14 tiles | 181% / 174%             | 104%
HP ProLiant DL585 G5 (4x 8393SE, 3.1GHz)                      | 128GB DDR2/667 | ESX v4.0                 | 22.11 @ 15 tiles | 196% / 171%             | 105%

One thing is clear from these VMmark examples: Nehalem-EP is a huge step in the right direction for Intel, and it potentially blurs the line between 2P and 4P systems. AMD will not have much breathing room with Istanbul in the 2P space against Nehalem-EP for system refreshes unless it can show similar gains and scalability. Where Istanbul will shine is in its drop-in capability in existing 2P, 4P and 8P platforms.

SOLORI’s take: These are exciting times for those just getting into virtualization. VMmark would seem to indicate that the consolidation factors unlocked by Nehalem-EP come close to rivaling 4P platforms at about 75% of the cost. If I were buying a new system today, I would be hard-pressed to ignore Nehalem as the basis for my ecosystem. However, the Socket F Opteron platform still has about 8-12 months of competitive life in it, at which point it becomes just another workhorse. Nehalem-EP still does not provide enough incentive to shatter an established ecosystem.

SOLORI’s 2nd take: AMD has a lot of ground to cover with Istanbul and Magny-Cours in the few short months that remain in 2009. The “hearts and minds” of system-refresh customers and new entrants into virtualization are at stake, and Nehalem-EP offers compelling value to those entering the market.

With entrenched customers, AMD needs to avoid making them feel “left behind” before the market shifts definitively. AMD could do worse than getting some SR5690-based Istanbul platforms out on the VMmark circuit – especially with its HP and Supermicro partners. We’d also like to see some Magny-Cours VMmarks prior to the general availability of the G34 systems.