Posts Tagged ‘Cloud’

Red Hat Enterprise Virtualization Strategy

June 26, 2009

Red Hat’s recently updated virtualization strategy has resulted in an “oversubscribed” beta program. The world leader in open source solutions swings a big stick with its kernel-based virtualization products. Some believe one of the keys to successful large-scale cloud initiatives is an open source hypervisor, and with Xen going commercial, turning to the open source veteran Red Hat seems a logical move. You may recall that Red Hat – using KVM – was the first to demonstrate live migration between AMD and Intel hosts.

“We are very pleased by the welcome we have received from enterprise companies all over the world who are looking to adopt virtualization pervasively and value the benefits of our open source solutions. Our Beta program is oversubscribed. We are excited to be in a position to deliver a flexible, comprehensive and cost-effective virtualization portfolio in which products will share a consistent hardware and software certification portfolio. We are in a unique position to deliver a comprehensive portfolio of virtualization solutions, ranging from a standalone hypervisor to a virtualized operating system to a comprehensive virtualization management product suite.”

Scott Crenshaw, vice president, Platform Business Unit at Red Hat

Red Hat sees itself as an “agent of change” in the virtualization landscape and wants to deliver a cost-effective “boxed” approach to virtualization and virtualization management. All of this hinges on Red Hat’s new KVM-based approach – enabled through its acquisition of Qumranet in September 2008 – which delivers the virtualization and management layers to Red Hat Enterprise Linux and its kernel.

Along with Qumranet came Solid ICE and SPICE. Solid ICE is the VDI component running on KVM, consisting of a virtual desktop server and a controller front end. Solid ICE allows Red Hat to enter the VDI space rapidly without disrupting its eco-system. Additionally, the SPICE protocol (Simple Protocol for Independent Computing Environments) provides a standardized connection-protocol alternative to RDP with enhancements for the VDI user experience.

Red Hat’s SPICE claims to offer the following features in the enterprise:

  • Superior graphics performance (e.g. Flash)
  • Video quality (30+ frames per second)
  • Bi-directional audio (for soft phones/IP phones)
  • Bi-directional video (for video telephony/video conferencing)
  • No specialized hardware: a software-only client that can be installed automatically via ActiveX and a browser on the client machine

Red Hat’s virtualization strategy reveals more of its capabilities and depth in accompanying blogs and white papers. Adding to the vendor-agnostic migration capabilities, Red Hat’s KVM is slated to support hosts with up to 96 cores and 1TB of memory, with guests scaling to 16 vCPUs and 64GB of memory. Additional features include high availability, live migration, a global system scheduler, global power saving (through migration and power-down), memory page sharing, thin storage provisioning and SELinux security.

AMD’s New Opteron

April 23, 2009

AMD’s announcement yesterday came with some interesting technical tidbits about its new server platform strategy that will affect its competitiveness in the virtualization marketplace. I want to take a look at the two new server platforms, contrast them with what is available today, and see what that means for our AMD-based eco-systems in the months to come.

Initially, the introduction of more cores to the mix is good for virtualization, allowing us to scale more gracefully and confidently than with hyper-threading. While hyper-threading is reported to increase scheduling efficiency in vSphere, it is not effectively a core. Until Nehalem-EX is widely available and we can evaluate 4P hyper-threading performance in loaded virtual environments, I’m comfortable awarding hyper-threading a 5% performance bonus, all things being equal.

AMD's Value Shift

What’s Coming?

That said, where is AMD going with Opteron in the near future and how will that affect Opteron-based eco-systems? At least one thing is clear: compatibility is assured and performance – at the same thermal footprint – will go up. So let’s look at the ramifications of the new models/sockets and compare them to our well-known 2000/8000 series to glimpse the future.

A fundamental shift away from DDR2 and towards DDR3 in the new sockets is a major difference. Like the Phenom II, Core i7 and Nehalem processors, the new Opteron will be a DDR3 specimen. Assuming DDR3 pricing continues to trend down and the promised increase in memory bandwidth is realized with HT3/DCA2 on the new Opterons, DDR3 will deliver solid performance in 4000 and 6000 configurations.

Opteron 6000: Socket G34

From the announcement, G34 is analogous to the familiar 8000-series line with one glaring exception: no 8P on the road-map. In the 2010-2011 time frame, we’ll see 8-core, 12-core and 16-core variants, with a new platform being introduced in 2012. Meanwhile, the 6000-series will support 4 channels of “unbuffered” or “registered” DDR3 across up to 12 DIMMs per socket (3 banks by 4 channels). Assuming the 6000 supports DDR3-1600, a 4-channel design would yield theoretical memory bandwidth in the 40-50GB/sec range per link (about twice Istanbul’s).
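As a sanity check on that range, here is a rough back-of-the-envelope sketch; the DDR3-1600 transfer rate and 64-bit channel width are assumptions drawn from the paragraph above, not figures from AMD’s announcement.

    # Rough DDR3 bandwidth estimate for a 4-channel G34 socket (assumed figures)
    transfer_rate_mt_s = 1600           # DDR3-1600: 1600 mega-transfers per second (assumption)
    bytes_per_transfer = 8              # 64-bit memory channel = 8 bytes per transfer
    channels = 4                        # 4 memory channels per socket

    per_channel_gb_s = transfer_rate_mt_s * bytes_per_transfer / 1000.0  # ~12.8 GB/s
    per_socket_gb_s = per_channel_gb_s * channels                        # ~51.2 GB/s

    print(f"{per_channel_gb_s:.1f} GB/s per channel, {per_socket_gb_s:.1f} GB/s per socket")
    # 12.8 GB/s per channel, 51.2 GB/s per socket -- roughly in line with the 40-50GB/sec estimate above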

AMD 2010-2013 Road-map

With a maximum module density of 16GB, a 12-DIMM by 4-socket system could theoretically contain 768GB of DDR3 memory. In 2011, that equates to 12GB/core in a 4-way, 64-core server. At 4:1 consolidation ratios for typical workloads, that’s 256 VMs/host at 3GB/VM (4GB/VM with page sharing) and an average of 780MB/sec of memory bandwidth per VM. I think the math holds up pretty well against today’s computing norms and trends.
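For readers who want to check the math, a small sketch follows; the 16-core-per-socket count, the 4:1 consolidation ratio and the ~50GB/sec-per-socket bandwidth figure are assumptions carried over from the discussion above, not vendor numbers.

    # Sketch of the consolidation arithmetic for a hypothetical 4-socket G34 server
    dimm_size_gb = 16        # maximum module density cited above
    dimms_per_socket = 12
    sockets = 4
    cores_per_socket = 16    # 2011-era 16-core variant (assumption)

    total_memory_gb = dimm_size_gb * dimms_per_socket * sockets   # 768 GB
    total_cores = cores_per_socket * sockets                      # 64 cores
    gb_per_core = total_memory_gb / total_cores                   # 12 GB/core

    vms_per_core = 4                                               # assumed 4:1 consolidation ratio
    total_vms = vms_per_core * total_cores                         # 256 VMs per host
    gb_per_vm = total_memory_gb / total_vms                        # 3 GB/VM

    socket_bandwidth_gb_s = 50                                     # from the DDR3 estimate above
    mb_s_per_vm = socket_bandwidth_gb_s * sockets * 1000 / total_vms  # ~781 MB/sec per VM

    print(total_memory_gb, gb_per_core, total_vms, gb_per_vm, round(mb_s_per_vm))
    # 768 12.0 256 3.0 781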