Posts Tagged ‘Virtualization’


Quick Take – VMware PartnerExchange 2010: Day 3

February 10, 2010

With about 2.5 hours of sleep and the VCP410 test looming, Tuesday took on a different tone than the previous two days. My calendar was full:

  • 5:30am – Wake up and check e-mail/blog/systems back in CST
  • 7:00am – Breakfast at the VMware Experience Hall
  • 8:30am – Keynote with Carl Eschenbach, EVP Worldwide Sales & Field Ops, VMware
  • 10:00am – ESXi Convergence Roadmap Session
  • 11:15am – View4 Workload Sizing and Testing
  • 12:15pm – Lunch in the VMware Alumni Lounge
  • 1:30pm – vSphere4 Advanced Configuration Topics
  • 3:45pm – VCP410 Test
  • 5:15pm – View Composer Tips, Tricks and Best Practices
  • 6:45pm – Check-in at home
  • 7:00pm – Update blog and check/respond to e-mail

First, the keynote with Carl and the gang was awesome! VMware took a really aggressive attitude towards the competition, including Citrix (virtual desktop) and Microsoft (virtual data center). To sum up the conversation: VMware intends to carry the Q4/09 momentum through 2011, extending its lead in virtual data center and cloud computing capabilities. But Carl’s not happy with just the traditional server market; VMware wants to own the virtual desktop space too – virtually putting the squeeze on Citrix as the walls close in around them.

With 60-70% of net new Citrix VDI builds being deployed on VMware’s ESX servers, it makes me wonder why Citrix would drive its XenApp customers to VDI – in the form of XenDesktop4 – by offering a 2-for-1 trade-in program. Isn’t this like asking their own clients to reconsider the value proposition of XenApp – in essence turning vendor-locked accounts into a new battleground with VMware? If the momentum shifts towards VMware View4.x and VMware accelerates the pace on product features (including management and integration) as suggested by Carl’s aggressive tone today, where does that leave XenDesktop and Citrix?

The VMware Express Trailer

The VMware Express Mobile Data Center

The VMware Express: Coming to a City Near You!

VMware introduced its “data center on wheels” to the PartnerExchange 2010 audience today and I got a chance to get on-board and take a look. The build-out was clean and functional with 60+ Gbps of external interconnects waiting for a venue. Inside the Express was a data center, a conference room and several demonstration stations showing VMware vSphere and View4 demos.

6 ton Mobile A/C for Express' Data Center

VMware rolled out the red carpet for PartnerExchange 2010 attendees. To the right – above the fifth wheel – is the conference room. All the way in the back (to the left) is the data center portion. In between the data center and the conference room lies the demonstration area, with external plasma screens in kick-panels displaying slide decks and demonstration materials.

The Express' Diesel Generator - Capable of Powering Things for Nearly 2-days

Up front – behind the cab – rest six tons of air conditioning mounted to the front of the trailer. This keeps the living area inside habitable and the data center (about 70-80 sq. ft.) cool enough to run some serious virtualization equipment. Mounted directly behind the driver’s cabin is the diesel generator, capable of powering the entire operation for better than 40 hours when external power is unavailable. Today, however, the VMware Express was taking advantage of “house power” provided by Mandalay Bay’s conference center.

Where the rubber met the road was inside the data center, currently occupied by an EMC/Cisco rack and a rack powered by MDS/NetApp/Xsigo. Both featured 12TB of raw storage and high-density Nehalem-EP solutions. In the right corner, the heavyweight EMC/Cisco bundle was powered by Cisco’s UCS B-series platform featuring eight Nehalem 2P blades per 6U chassis fed by a pair of Cisco 4900-series converged switches. In the left corner, the super middleweight MDS Micro QUADv-series “mini-blade” solution featured eight Nehalem 2P blades across two 2U chassis fed by a pair of Xsigo I/O directors delivering converged network and SAN traffic tunneled over InfiniBand interconnects.

Still More Capacity for Additional Hardware Sponsors

Two-of-Three Racks are Currently Occupied by EMC/Cisco and MDS/NetApp/Xsigo

It will be interesting to see how the drive arrays survive the journey as the VMware Express travels across the country over the next year. Meanwhile, this tractor trailer is packing 60 blades’ worth of serious virtualization hardware destined for a town near you. VMware is currently looking for additional sponsors from the partner community to expand its tour; access to the VMware Express will be prioritized based on partner status and/or sponsorship.

VCP410 Test Passed – Waiting for Official Notification

With the VCP410 test in the books, I’m now waiting for official notification from VMware of my VCP4 status. According to my “Examination Score Report,” I should receive notice from VMware within 30 days, having met all of the requirements for “VMware Certified Professional on vSphere 4 Certification” and tested above the Certified Instructor minimums.

As a systems and network architect, I found the “interface-related” questions somewhat more challenging than the “design and configure” fare. However, the test was pretty well balanced and left me with well over 25 minutes to go back over the questions I’d marked for review and finalize those answers. I logged out of the exam with 18 minutes left on the clock. My recommendations for those looking to pass the VCP410:

  1. Work with vSphere in a hands-on capacity for several days before taking the test, making good mental notes related to interface operations inside and outside of vCenter
  2. Know the minimums and maximums for ESX and vCenter configurations
  3. Understand storage zoning, masking and configuration
  4. Go over the VCP blueprint on your own before seeking additional assistance
  5. Remember the test is on the GA release and not the “current” release, so “correct” answers may differ slightly from “reality”
  6. Get more than 2.5 hours sleep the night before you take the exam
  7. Schedule the exam in the morning – while you’re fresh – not the afternoon following meetings, etc.
  8. Dig into topics on the VCP Forum online

That about does it for day number three in Las Vegas, Nevada. It’s time to shuffle up and deal!


Red Hat Enterprise Virtualization Strategy

June 26, 2009

Red Hat’s recently updated virtualization strategy has resulted in an “oversubscribed” beta program. The world leader in open source solutions swings a big stick with its kernel-based virtualization products. Some believe one of the keys to successful large scale cloud initiatives is an open source hypervisor, and with Xen going commercial, turning to the open source veteran Red Hat seems a logical move. You may recall that Red Hat – using KVM – was the first to demonstrate live migration between AMD and Intel hosts.

“We are very pleased by the welcome we have received from enterprise companies all over the world who are looking to adopt virtualization pervasively and value the benefits of our open source solutions. Our Beta program is oversubscribed. We are excited to be in a position to deliver a flexible, comprehensive and cost-effective virtualization portfolio in which products will share a consistent hardware and software certification portfolio. We are in a unique position to deliver a comprehensive portfolio of virtualization solutions, ranging from a standalone hypervisor to a virtualized operating system to a comprehensive virtualization management product suite.”

Scott Crenshaw, vice president, Platform Business Unit at Red Hat

Red Hat sees itself as an “agent of change” in the virtualization landscape and wants to deliver a cost-effective “boxed” approach to virtualization and virtualization management. All of this hinges on Red Hat’s new KVM-based approach – enabled through its acquisition of Qumranet in September 2008 – which delivers the virtualization and management layers to Red Hat’s Enterprise Linux and its kernel.

Along with Qumranet came Solid ICE and SPICE. Solid ICE is the VDI component running on KVM, consisting of a virtual desktop server and a controller front end. Solid ICE allows Red Hat to rapidly enter the VDI space without disrupting its ecosystem. Additionally, the SPICE protocol (Simple Protocol for Independent Computing Environments) enables a standardized connection protocol alternative to RDP with enhancements for the VDI user experience.

Red Hat’s SPICE claims to offer the following features in the enterprise:

  • Superior graphics performance (e.g. Flash)
  • Video quality (30+ frames per second)
  • Bi-directional audio (for soft-phones/IP phones)
  • Bi-directional video (for video telephony/video conferencing)
  • No specialized hardware – a software-only client that can be automatically installed via ActiveX and a browser on the client machine

Red Hat’s virtualization strategy reveals more of its capabilities and depth in accompanying blogs and white papers. Adding to the vendor-agnostic migration capabilities, Red Hat’s KVM is slated to support VM hosts with up to 96 cores and 1TB of memory, with guests scaling to 16 vCPUs and 64GB of memory. Additional features include high availability, live migration, a global system scheduler, global power saving (through migration and power-down), memory page sharing, thin storage provisioning and SELinux security.


First 12-core VMmark for Istanbul Appears

June 10, 2009

VMware has posted the VMmark score for the first Istanbul-based system and it’s from HP: the ProLiant DL385 G6. While it’s not at the top of the VMmark chart at 15.54@11 tiles (technically it is at the top of the 12-core benchmark list), it still shows a compelling price-performance picture.

Comparing Istanbul’s VMmark Scores

For comparison’s sake, we’ve chosen the HP DL385 G5 and HP DL380 G6 as they were configured for their VMmark tests. In the case of the ProLiant DL380 G6, we could only configure the X5560 and not the X5570 as tested, so the price is actually LOWER on the DL380 G6 than the “as tested” configuration. Likewise, we chose PC-6400 (DDR2/667, 8x8GB) memory for the DL385 G5 versus the more expensive PC-5300 (533) memory as configured in 2008.

As configured for pricing, each system comes with processor, memory, two SATA drives and VMware Infrastructure Standard for two processors. Note that in testing, additional NICs, HBAs and storage were configured; such additions are not included herein. We have omitted these additional equipment features as they would be common to a deployment set and have no real influence on relative pricing.

Systems as Configured for Pricing Comparison

System                Processor     Speed (GHz)  Cores  Threads  Memory (GB)  Memory Speed (MHz)  Street Price
HP ProLiant DL385 G5  Opteron 2384  2.7          8      8        64           667                 $10,877.00
HP ProLiant DL385 G6  Opteron 2435  2.6          12     12       64           667                 $11,378.00
HP ProLiant DL380 G6  Xeon X5560*   2.93         8      16       96           1066                $30,741.00

Here’s some good news: 50% more cores for only 5% more money (sounds like an economic stimulus?). The comparison Nehalem-EP system is nearly 3x the price of the Istanbul system.
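For readers who want to check the arithmetic, here is a minimal sketch (prices and core counts copied from the table above) that reproduces the price-per-core comparison; the script is just an illustration, not part of the original analysis:

```python
# Price-per-core comparison for the three systems in the table above.
systems = {
    "HP ProLiant DL385 G5 (Opteron 2384)": (8, 10877.00),
    "HP ProLiant DL385 G6 (Opteron 2435)": (12, 11378.00),
    "HP ProLiant DL380 G6 (Xeon X5560)": (8, 30741.00),
}

base_cores, base_price = systems["HP ProLiant DL385 G5 (Opteron 2384)"]
for name, (cores, price) in systems.items():
    print(f"{name}: ${price / cores:,.2f}/core, "
          f"{(cores / base_cores - 1) * 100:+.0f}% cores and "
          f"{(price / base_price - 1) * 100:+.1f}% price vs. the G5")
```

Against the DL385 G5 baseline this prints +50% cores at +4.6% price for the Istanbul-based G6, and roughly 2.8x the price for the Nehalem-EP system.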



AMD G34 Motherboards Spotted

June 3, 2009

Charlie Demerjian was first to post a couple of “spy photos” of two G34 motherboards on his site “SemiAccurate.” Both of the boards sighted are dual-socket G34 and appear to carry on-board 10GE ports.

The G34 from Quanta appears to be 2-DPC (16 DIMM slots) with what could be Broadcom’s BCM84812 100/1G/10G chipset, while the other looks to be 3-DPC (24 DIMM slots). The larger of the two, from Inventec, looks to be designed for low-profile/HPC applications and appears to support 10GE with a pair of on-board SFP+ connectors – possibly using Intel’s 82599ES for 100/1G/10G compatibility.

As we know from our briefings with AMD on the subject of G34, these motherboards should be able to run quad-channel DDR3/1333 at 2-DPC. This allows AMD/G34 to offer more than twice the DDR3/1333 capacity of Nehalem-EP (8 DIMMs/CPU on G34 versus 3 DIMMs/CPU on Nehalem). With 2 sockets yielding 24 cores, 64GB RAM (16 x 4GB DDR3/1333), on-board redundant 10GE, AMD’s IOMMU and these G34 boards, as Charlie puts it, they “scream HPC and heavy-load virtualization.”

Based on what we can see, expect these boards in the $375 to $450 price range in Q1/2010.


AMD Istanbul Launch: Shipping Today

June 1, 2009
AMD Opteron "Istanbul" 6-core processor die

June 1, 2009 – Today, AMD is announcing the general availability of its new single-die, 6-core Opteron processor code-named “Istanbul.” We have weighed in on the promised benefits of Istanbul based on pre-release material that was not under non-disclosure protections. Now, we’re able to disclose the rest of the story.

First, we got a chance to talk to Mike Goddard, AMD Server Products CTO, to discuss Istanbul and how G34/C32 platforms are shaping up. According to Goddard, “things went really well with Istanbul; it’s no big secret that the silicon we’re using in Istanbul is the same silicon we’re using in Magny-Cours.” Needless to say, there are many more forward-thinking capabilities in Istanbul than can be supported in Socket-F’s legacy chipsets.

“We had always been planning a refresh to Socket-F with 5690,” says Goddard, “but Istanbul got pulled-in beyond our ability to pull-in the chipset.” Consequently, while there could be Socket-F platforms based on the next-generation 5690/5100 chipset, Goddard suggests that “most OEM’s will realign their platform development around [G34/C32, Q1/2010].”

In common parlance, Istanbul is a “genie in a bottle,” and we won’t see its true potential until it resurfaces in its Magny-Cours/G34 configuration. However, a few of these next-generation tweaks will trickle down to Socket-F systems:

  • AMD PowerCap Manager (via BIOS extensions)
  • Enhanced AMD PowerNow! Technology
  • AMD CoolCore Technology extended to L3 cache
  • HT Assist (aka probe filter) for increased memory bandwidth
  • HT 3.0 with an increase to 4.8GT/sec and IMC improvements
  • 5 new part SKUs
  • Better 2P Performance Parity with Nehalem-EP

That’s in addition to 50% more cores in the same power envelope: not an insignificant improvement. In side-by-side comparisons to the “Shanghai” quad-core at the same clock frequency, Istanbul delivers 2W lower idle power and 34% better SPECpower_ssj2008 results (1,297 overall) using identical systems with just a processor swap. In fact, the only time Istanbul exceeded Shanghai’s average power envelope was at 80% actual load and beyond – remaining within 5% of Shanghai even at 100% load.


Quick Take: Oracle to Buy Virtual Iron

May 14, 2009

Oracle extended its spring buying spree by announcing the purchase of Virtual Iron Software, Inc. (Virtual Iron) on May 13, 2009. Citing Virtual Iron’s “dynamic resource and capacity management” capabilities as the reason in its press release, Oracle intends to fill gaps in its Xen-based Oracle VM product (available as a free download).

Ironically, Virtual Iron’s product focus is SMB. According to a Butler Group technology audit, Virtual Iron “has one limitation that [they] believe will impact potential customers: the management console currently can only manage 120 nodes.” However, Virtual Iron’s “VI-Center” – the management piece cited as the main value proposition by Butler and Oracle – is based on a client-server Java application, making it a “good fit” with the recent Oracle acquisition of Sun Microsystems.

Oracle has not announced plans for Virtual Iron, pending the conclusion of the deal. Oracle’s leading comment:

“Industry trends are driving demand for virtualization as a way to reduce operating expenses and support green IT strategies without sacrificing quality of service,” said Wim Coekaerts, Oracle Vice President of Linux and Virtualization Engineering. “With the addition of Virtual Iron, Oracle expects to enable customers to more dynamically manage their server capacity and optimize their power consumption. The acquisition is consistent with Oracle’s strategy to provide comprehensive enterprise software management and will facilitate more efficient management of application service levels.”

SOLORI’s take: If the deal goes through, Oracle has found an immediate job for its newly acquired Sun Java engineers – getting VI-Center ready for enterprise computing. Currently, Oracle VM is a “barebones” product with very little value beyond its intrinsic functionality. With the acquisition of Virtual Iron and its management piece, Oracle/Sun could produce a self-sufficient virtualization ecosystem with Oracle VM augmented by Virtual Iron, Sun Storage, a choice of Oracle or MySQL databases, and commodity (or Sun) hardware – all vetted for Oracle’s application stack.

Virtual Iron was supposedly working on Hyper-V and KVM (Red Hat’s virtualization of choice) management features. Though we doubt that Oracle VM will evolve into a truly “virtualization agnostic” product, the promise of such a capability is the stuff of “cloud computing.” Sun’s VDI and xVM server group will have a lot of work to do this summer…


The Cost of Benchmarks

May 8, 2009

We’ve been challenged to back up our comparison of Nehalem-EP systems to Opteron Shanghai in price-performance based on prevailing VMmark scores available on VMware’s site. In earlier posts, our analysis predicted “comparable” price-performance results between Shanghai and Nehalem-EP systems based on the economics of today’s memory and processor availability:

So what we’ve done here is take the on-line configurations of some of the benchmark competitors. To keep things very simple, we’ve configured just memory and CPU as tested – no HBA or 10GE cards to skew the results. The only exception – as pointed out by our challenger – is that we’ve taken the option of using “street price” memory where the street price is better than the server manufacturer’s memory price.

Here’s our line-up:

System                Processor     Qty.  Speed (GHz)  Speed (GHz, Opt)  Memory Configuration     Street Price
Inspur NF5280         X5570         2     2.93         3.2               96GB (12x8GB) DDR3/1066  $18,668.58
Dell PowerEdge R710   X5570         2     2.93         3.2               96GB (12x8GB) DDR3/1066  $16,893.00
IBM System x3650 M2   X5570         2     2.93         3.2               96GB (12x8GB) DDR3/1066  $21,546.00
Dell PowerEdge M610   X5570         2     2.93         3.2               96GB (12x8GB) DDR3/1066  $21,561.00
HP ProLiant DL370 G6  W5580         2     3.2          3.2               96GB (12x8GB) DDR3/1066  $18,636.00
Dell PowerEdge R710   X5570         2     2.93         3.2               96GB (12x8GB) DDR3/1066  $16,893.00
Dell PowerEdge R805   Opteron 2384  2     2.7          2.7               64GB (8x8GB) DDR2/533    $6,955.00
Dell PowerEdge R905   Opteron 8384  4     2.7          2.7               128GB (16x8GB) DDR2/667  $11,385.00

Here we see Dell offering very aggressive DDR3/1066 pricing [for the R710], allowing us to go with on-line configurations, and HP offering overly expensive DDR2/667 memory prices (by a factor of 2), forcing us to go with 3rd-party memory. In fact, IBM did not allow us to configure their memory configuration – as tested [with the x3650 M2] – with their on-line configuration tool [neither did Dell with the M610], so we had to apply street memory prices. So here’s how they rank with respect to VMmark:

System                VMware Version                  VMmark Score  VMmark Tiles  Score/Tile  Cost/Tile
Inspur NF5280         ESX Server 4.0 build 148592     23.45         17            1.38        $1,098.15
Dell PowerEdge R710   ESX Server 4.0 build 150817     23.55         16            1.47        $1,055.81
IBM System x3650 M2   ESX Server 4.0 build 148592     23.89         17            1.41        $1,267.41
Dell PowerEdge M610   ESX Server 4.0                  23.90         17            1.41        $1,273.59
HP ProLiant DL370 G6  ESX Server 4.0 build 148783     23.96         16            1.50        $1,164.75
Dell PowerEdge R710   ESX Server 4.0                  24.00         17            1.41        $993.71
Dell PowerEdge R805   ESX Server 3.5 U4 build 120079  11.22         8             1.40        $869.38
Dell PowerEdge R905   ESX Server 3.5 U3 build 120079  20.35         14            1.45        $813.21

As you can easily see, the cost-per-tile (analogous to $/VM) favors the Shanghai systems. In fact, the one system that we’ve taken criticism for including in our previous comparisons – the Supermicro 6026T-NTR+ with 72GB of DDR3/1066 (running at DDR3/800) – actually leads the pack in Nehalem-EP $/tile, but we’ve excluded it from our tables since it has been argued to be a “sub-optimal” configuration and outlier. Again, the sweet spot for price-performance for Nehalem, Shanghai and Istanbul is in the 48GB to 80GB range with inexpensive memory: simple economics.
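Since cost-per-tile is simply the street price divided by the tile count, the last column above is easy to reproduce. A minimal sketch, using a few rows from the tables above:

```python
# Cost-per-tile = street price / VMmark tiles (analogous to $/VM).
results = [
    # (system, street price in $, VMmark tiles)
    ("Inspur NF5280", 18668.58, 17),
    ("Dell PowerEdge R710", 16893.00, 16),
    ("Dell PowerEdge R805", 6955.00, 8),
    ("Dell PowerEdge R905", 11385.00, 14),
]

for name, street_price, tiles in results:
    print(f"{name}: ${street_price / tiles:,.2f}/tile")
```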

Please note that not one of the 2P VMmark scores listed on VMware’s official VMmark results tally carries the Opteron 2393SE version of the processor (3.1GHz) or HT3-enabled motherboards. It is likely that we’ll see neither HT3-enabled scores nor 2P ESX 4.0 scores until Istanbul’s release in the coming month. Again, if Shanghai’s $/tile is competitive with Nehalem’s today (again, in the 48GB to 80GB configurations), Istanbul – with the same memory and system costs – will be even more so.

Update: AMD’s Margaret Lewis has a similar take, with comparison prices for AMD using DDR2/533 configurations. Her numbers – like our previous posts – resolve to $/VM; however, she provides some good “street prices” for more “mainstream” configurations of Intel Nehalem-EP and AMD Shanghai systems. See her results and conclusions on AMD’s blog.


Shanghai Economics 101 – Conclusion

May 6, 2009

In past entries, we’ve looked only at the high-end processors as applied to system prices, and we’ll continue to use those as references through the end of this one. We’ll take a look at other price/performance tiers in a later blog, but we want to finish up on the same footing as we began; again, with an eye to how these systems play in a virtualization environment.

We decided to finish this series with an analysis of real-world application instead of just theory. We keep seeing 8-to-1, 16-to-1 and 20-to-1 consolidation ratios (VM-to-host) being offered as “real world” in today’s environment, so we wanted to analyze what that means from an economic standpoint.

The Fallacy of Consolidation Ratios

First, consolidation ratios that speak in terms of VM-to-host are not very informative. For instance, a 16-to-1 consolidation ratio sounds good until you realize it was achieved on a $16,000 4Px4C platform. This ratio results in a $1,000-per-VM cost to the consolidator.

In contrast, take the same 16-to-1 ratio on a $6,000 2Px4C platform and it results in a $375-per-VM cost to the consolidator: a savings of more than 60%. The key to the savings is in the vCPU-to-core consolidation ratio (provided sufficient memory exists to support it). In the first example that ratio was 1:1, but in the second example the ratio is 2:1. Can we find 16:1 vCPU-to-core ratios out there? Sure, in test labs, but in the enterprise we think the valid range of vCPU-to-core consolidation ratios is much more conservative, ranging from 1:1 to 8:1, with the average (or sweet spot) falling somewhere between 3:1 and 4:1.
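To make the arithmetic explicit, here is a minimal sketch of the two scenarios above; it assumes one vCPU per VM, which is what yields the 1:1 and 2:1 vCPU-to-core ratios:

```python
# Cost-per-VM and vCPU-to-core ratio for the two consolidation examples above.
scenarios = [
    # (platform, total cores, platform cost in $, VMs consolidated onto it)
    ("4Px4C platform", 16, 16000, 16),
    ("2Px4C platform", 8, 6000, 16),
]

for platform, cores, cost, vms in scenarios:
    print(f"{platform}: {vms}:1 VM-to-host, "
          f"{vms // cores}:1 vCPU-to-core, "  # assumes 1 vCPU per VM
          f"${cost / vms:,.0f}/VM")
```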

Second, we must note that memory is a growing aspect of the virtualization equation. Modern operating systems no longer “sip” memory and 512MB for a Windows or Linux VM is becoming more an exception than a rule. That puts pressure on both CPU and memory capacity as driving forces for consolidation costs. As operating system “bloat” increases, administrative pressure to satisfy their needs will mount, pushing the “provisioned” amount of memory per VM ever higher.

Until “hot add” memory is part of DRS planning and the requisite operating systems support it, system admins will be forced to either overcommit memory, purchase memory based on peak needs, or purchase memory based on average needs and trust DRS systems to handle the balancing act. In any case, memory is a growing factor in systems consolidation and virtualization.

Modeling the Future

Using data from the University of Chicago as a baseline and extrapolating forward through 2010, we’ve developed a simple model to predict vMEM and vCPU allocation trends. This approach establishes three key metrics (already used in previous entries) that determine/predict system capacity: Average Memory/VM (vMVa), Average vCPU/VM (vCVa) and Average vCPU/Core (vCCa).

Average Memory per VM (vMVa)

Average memory per VM is determined by taking the allocated memory of all VMs in a virtualized system – across all hosts – and dividing it by the total number of VMs in the system (not including non-active templates). This number is assumed to grow as virtualization moves from consolidation to standardized deployment.
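As an illustration of how these three metrics fall out of a VM inventory, consider the following sketch; the inventory records and field layout are hypothetical, not part of the original model:

```python
# Hypothetical VM inventory: (name, allocated memory in MB, vCPUs, is_template)
inventory = [
    ("web01", 2048, 2, False),
    ("web02", 2048, 2, False),
    ("db01", 4096, 4, False),
    ("gold-image", 1024, 1, True),  # non-active template: excluded
]
total_cores = 8  # physical cores across all hosts in the system

vms = [(mem, vcpu) for _, mem, vcpu, tmpl in inventory if not tmpl]
vMVa = sum(mem for mem, _ in vms) / len(vms)       # Average Memory/VM
vCVa = sum(vcpu for _, vcpu in vms) / len(vms)     # Average vCPU/VM
vCCa = sum(vcpu for _, vcpu in vms) / total_cores  # Average vCPU/Core

print(f"vMVa = {vMVa:.0f}MB/VM, vCVa = {vCVa:.2f} vCPU/VM, vCCa = {vCCa:.2f} vCPU/core")
```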


AMD’s New Opteron

April 23, 2009

AMD’s announcement yesterday came with some interesting technical tidbits about its new server platform strategy that will affect its competitiveness in the virtualization marketplace. I want to take a look at the two new server platforms and contrast them with what is available today and see what that means for our AMD-based eco-systems in the months to come.

Initially, the introduction of more cores to the mix is good for virtualization, allowing us to scale more gracefully and confidently than with hyper-threading. While hyper-threading is reported to increase scheduling efficiency in vSphere, it is not effectively a core. Until Nehalem-EX is widely available and we can evaluate 4P performance of hyper-threading in loaded virtual environments, I’m comfortable awarding hyper-threading a 5% performance bonus – all things being equal.

AMD's Value Shift

What’s Coming?

That said, where is AMD going with Opteron in the near future and how will that affect Opteron-based eco-systems? At least one thing is clear: compatibility is assured and performance – at the same thermal footprint – will go up. So let’s look at the ramifications of the new models/sockets and compare them to our well-known 2000/8000 series to glimpse the future.

A fundamental shift away from DDR2 and towards DDR3 for the new sockets is a major difference. Like the Phenom II, Core i7 and Nehalem processors, the new Opteron will be a DDR3 specimen. Assuming DDR3 pricing continues to trend down and the promise of increased memory bandwidth is realized in HT3/DCA2-equipped Opterons, DDR3 will deliver solid performance in 4000 and 6000 configurations.

Opteron 6000: Socket G34

From the announcement, G34 is analogous to the familiar 8000-series line with one glaring exception: no 8P on the road-map. In the 2010-2011 time frame, we’ll see 8-core, 12-core and 16-core variants, with a new platform being introduced in 2012. Meanwhile, the 6000-series will support 4 channels of “unbuffered” or “registered” DDR3 across up to 12 DIMMs per socket (3 banks by 4 channels). Assuming 6000 will support DDR3/1600, the theoretical bandwidth of a 4-channel design would yield memory bandwidths in the 40-50GB/sec range per socket (about twice Istanbul’s).

AMD 2010-2013 Road-map

With a maximum module density of 16GB, a 12-DIMM by 4-socket system could theoretically contain 768GB of DDR3 memory. In 2011, that equates to 12GB/core in a 4-way, 64-core server. At 4:1 consolidation ratios for typical workloads, that’s 256 VMs/host at 3GB/VM (4GB/VM with page sharing) and an average of 780MB/sec of memory bandwidth per VM. I think the math holds up pretty well against today’s computing norms and trends.
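Those back-of-the-envelope numbers are straightforward to verify. A minimal sketch, assuming the 16-core 2011 variant and treating the 40-50GB/sec figure as per-socket bandwidth:

```python
# Theoretical 4-socket G34 capacity and per-VM figures from the text above.
sockets, dimms_per_socket, dimm_gb = 4, 12, 16
cores_per_socket = 16  # assumed: the 16-core variant slated for 2011
consolidation = 4      # 4:1 vCPU-to-core, assuming 1 vCPU per VM

total_mem_gb = sockets * dimms_per_socket * dimm_gb  # 768GB
total_cores = sockets * cores_per_socket             # 64 cores
vms = total_cores * consolidation                    # 256 VMs

print(f"{total_mem_gb}GB total, {total_mem_gb // total_cores}GB/core, "
      f"{vms} VMs at {total_mem_gb // vms}GB/VM")
for bw_gbs in (40, 50):  # per-socket bandwidth estimate from the text
    print(f"@{bw_gbs}GB/sec/socket: {sockets * bw_gbs * 1024 / vms:.0f}MB/sec per VM")
```

This reproduces the 768GB, 12GB/core and 3GB/VM figures, and brackets per-VM bandwidth between 640 and 800MB/sec; the 780MB/sec average quoted above sits near the top of that range.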


Virtualizing Microsoft Small Business Server 2008

February 23, 2009

Microsoft Small Business Server 2008 appears to be a good option for “Microsoft shops” looking to consolidate Active Directory Domain Controller, Windows Update Services, Security and Virus Protection, SharePoint and Exchange functions on a single server. In practice, however, it takes a really “big” server to support this application – even for a very small office environment.

According to Microsoft’s system requirements page, a target system will need a 2GHz core, 4GB of RAM and 60GB of disk space to be viable (minimum). As with past experiences with Microsoft’s products, this “minimum system specification” is offered as a “use this only if you want to under-perform so horribly as to be unusable” configuration.

In reality, even with a clean, single-user installation given two 2.3GHz cores and 4GB of RAM, the server struggles with the minimum amount of memory. The “real” minimum should have been specified as 6GB instead of the optimistic 4GB. Where does the memory go?

It appears that Microsoft Exchange-related memory utilization weighs in at over 1.2GB (including 600MB for “store.exe”). Another 1.2GB is consumed by IIS, SharePoint and SQL. Forefront Client Security-related memory usage is around 1GB all by itself. That’s 3.4GB of “application space” drag in Windows Small Business Server 2008. Granted, the “killer app” for SBS is Exchange 2007 and some might argue SharePoint integration as well, so the use of the term “drag” may not be appropriate. However, let’s just agree that 4GB is a ridiculously low (optimistic) minimum.
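Tallying those observations shows how quickly the 4GB “minimum” evaporates; a trivial sketch using the figures above:

```python
# Approximate "application space" memory in a clean SBS 2008 install (GB),
# using the figures observed above.
usage = {
    "Exchange (incl. ~600MB store.exe)": 1.2,
    "IIS + SharePoint + SQL": 1.2,
    "Forefront Client Security": 1.0,
}

app_total = sum(usage.values())
print(f"Application drag: {app_total:.1f}GB, leaving {4.0 - app_total:.1f}GB "
      f"of the 4GB minimum for the OS and everything else")
```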