
VMware PartnerExchange 2010 – Day 1-2

February 9, 2010

View of the Mandalay Bay from VMware's Alumni Lounge

It’s my second day at the beautiful Mandalay Bay in Las Vegas, Nevada and VMware PartnerExchange 2010. Yesterday was filled with travel and a generous “Tailgate Party” with burgers, dogs, beverages and lots of VMware geeks! I managed to catch the last quarter of the game from the Mandalay Bay Poker Room where I added to my chip stack at the 1/2 No-Limit Texas Hold ‘Em tables. Then it was early to bed – about 9PM PST – where I studied for the upcoming VCP410 exam.

Today (Monday) was occupied with a partners-only VMware Certified Professional, Version 4, Preparation Course which outlined the VCP4 Blueprint, question examples and test-taking strategies. The “best answer,” multiple-choice format of the VCP410 exam promises to offer me some challenges as I apply black-and-white logic to a few shades-of-grey questions. The best strategy to overcome such an obstacle: read the question in its entirety, eliminate all wrong answers, then choose the answer(s) that best satisfy the entire question. A key example is this one from the online “mock-up” exam:

What is the maximum number of vNetwork switch ports per ESX host and vCenter Server instance?

a.  4,088 for vNetwork standard switches; 4,096 for vNetwork Distributed switches

b.  4,096 for both types of switches

c.  4,088 for vNetwork standard switches; 6,000 for vNetwork distributed switches

d.  512 for both types of virtual switches

Well, it might have been obvious that “c” is the “correct” answer, but “a” comes right off of Page 6 of the vSphere Configuration Maximums guide. Both are solidly “correct” answers; it’s just that “c” speaks to both the ESX question and the vCenter question, making it more correct. However, neither is completely correct, since vDS ports are bound by both vCenter and the ESX host, while vSS ports are bound only by the ESX host. Since neither answer “a” nor “c” specifies which limitation it is answering – host or vCenter – it is left to subjective reasoning to infer the intent. According to Jon Hall (VMware, Florida), the most ports any single vNetwork switch can have in any one host is 4,088 – regardless of type. Therefore, to reach the “total virtual network ports per host” (vDS and vSS ports), at least one switch of each type must exist. Alone, either type can only reach 4,088 ports; however, the Configuration Maximums document never spells this out for the vNetwork Distributed Switch. Hopefully this exception will be foot-noted in the next revision of the document. [Note: the additional information about vDS-type vNetwork switches that Jon provided logically invalidates “a” as a response.]
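
To make the interplay of these limits concrete, here is a quick back-of-the-envelope sketch in Python. It only encodes the figures quoted above (4,088 per switch, 4,096 total per host, 6,000 vDS ports per vCenter) for illustration – it is not a restatement of the Configuration Maximums guide, and the function names are my own.

```python
# Back-of-the-envelope sketch of the vSphere 4 port maximums discussed
# above; figures come from the mock exam and Jon's clarification, so
# treat them as illustrative rather than authoritative.
PORTS_PER_SWITCH = 4088        # most ports any single vNetwork switch can have on a host
TOTAL_PORTS_PER_HOST = 4096    # combined vSS + vDS ports on one ESX host
VDS_PORTS_PER_VCENTER = 6000   # vDS ports across a vCenter Server instance

def within_host_limits(switch_port_counts):
    """switch_port_counts: one port count per vNetwork switch on the host."""
    return (all(p <= PORTS_PER_SWITCH for p in switch_port_counts)
            and sum(switch_port_counts) <= TOTAL_PORTS_PER_HOST)

def within_vcenter_limit(vds_ports_per_host):
    """vds_ports_per_host: vDS port counts from every host under one vCenter."""
    return sum(vds_ports_per_host) <= VDS_PORTS_PER_VCENTER

# A single switch of either type tops out at 4,088 ports; per the
# reasoning above, it takes more than one switch (one of each type)
# to reach the 4,096-port host total.
print(within_host_limits([4088]))          # True  -- one switch, 4,088 ports
print(within_host_limits([4088, 8]))       # True  -- 4,096 ports total
print(within_host_limits([4096]))          # False -- exceeds the per-switch cap
print(within_vcenter_limit([4088, 2000]))  # False -- 6,088 vDS ports exceeds the vCenter bound
```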

Following the VCP4 Prep Course, I “recharged” in the Alumni Lounge. VMware had snacks and drinks to quell the appetite and lots of power outlets to restore my iPhone and laptop. While I waited, I contacted the wife and got the 4-1-1 on our baby, checked e-mail and ran through the “mock-up” exam a couple of times. Then it was off to the Welcome Reception at the VMware Experience Hall where sponsors and exhibitors had their wares on display.

iPhone Screen Capture of the ESX Host Running Nehalem-EX, 4P/16C/32T

iPhone Screen Capture of the ESX Host Running Nehalem-EX, 4P/32C/64T

Just inside the Hall – across from the closest beverage station – was Intel’s booth, and the boys in blue were demonstrating vMotion over 10GE NICs. Yes, it was fast (as you’d expect), but the real kick was the “upcoming” 10GBase-T adapters set to challenge the current price-performance leader: 10GBase-CR (SFP+ direct-attach). At under $400/port for 10GE, it’s hard to remember a reason for using 1Gbps NICs… Oh yes, the prohibitive per-port cost of 10GE switches. Arista Networks to the rescue???

Intel was also showing their “modular server” system. Unfortunately, the current offering doesn’t allow for SAS JBOD expansion in a meaningful way (read: running NexentaStor on one or two of the “blades”), but after discussing the SAS issue with the guys in the blue booth, interests were piqued. Evan, expect a call from Intel’s server group… Seriously, with 14x 2.5″ drives in a SAS-Expander-interconnected chassis, NexentaStor + SSD + 15K SAS would rock!

Last but not least, Intel was proudly showing their 4P Nehalem-EX running VMware ESX with 512GB of RAM (DDR3) and demonstrating 64 active threads (pictured). This build-out offers lots of virtualization goodness at a heretofore unknown price point. Suffice it to say, at 1.8GHz it’s not a screamer, but the RAS features are headed in the right direction. When you rope together 64 threads (about 125-250 VMs) and 1TB worth of VMs (yes, 1TB RAM – about $250K worth using “on-loan Samsung parts”), you are talking about a lot of eggs in one basket. By enhancing the RAS capabilities of these giant systems, component-failure mitigation becomes less catastrophic – eventually allowing only a few VMs to be impacted by a point failure instead of ALL running VMs on the box.

vCenter ESX Host Status Showing 512GB of RAM

In case you haven’t seen an ESX host with 512GB of available RAM, check out the screen capture (excuse the iPhone quality) to the right. That’s about $33K worth of DDR3 memory sitting in that box, and assuming the EX processors run $2K apiece and allowing $6K for the remainder of the system, that’s nearly $6K/VM in this demo: fantastically decadent! Of course – and in all due fairness to the boys in blue – VM density was not the goal of this demonstration: RAS was, and the 2-bit error scrubbing – while as painful as watching paint dry – is pretty cool and soon to be needed (as indicated above) for systems with this capacity.
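
For what it’s worth, here is the arithmetic behind that back-of-the-envelope claim, sketched in Python. The figures are the rough estimates from the paragraph above (my own ballpark numbers, not Intel’s), and the implied VM count is simply derived from the $6K/VM remark.

```python
# Rough cost math behind the "$6K/VM" remark -- ballpark estimates only.
ram_cost = 33_000          # ~512GB of DDR3 at the prices cited above
cpu_cost = 4 * 2_000       # four Nehalem-EX sockets at ~$2K apiece
rest_of_system = 6_000     # chassis, board, storage, etc.

total = ram_cost + cpu_cost + rest_of_system
print(f"demo box, all-in: ~${total:,}")                   # ~$47,000
print(f"VMs implied by $6K/VM: ~{round(total / 6_000)}")  # roughly 8 running VMs
```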

Other vendors visited were Wyse and Xsigo. The boys in yellow (Wyse) were pimping their thin/zero clients with some compelling examples of PCoIP (Wyse 20p) and MMR (Wyse r90lew). The PCoIP demos featured end-to-end hardware Teradici cards displaying clips from Avatar, while the MMR demo featured 720p movie clips from an IMAX cut of dogfight training. The PCoIP demo was flawless, and the upcoming MMR enhancements – though a bit rough in the beta I saw – were nothing short of impressive.

No, that's not Xsigo's secret sauce: it's the chocolate fountain at VMware's Welcome Reception.

Considering that the MMR-capable thin client was running a 1.5GHz AMD Sempron, the 720p Windows Media stream looked all the better. Looking back at the virtual machine from the ESX console, only about 10-15% of a core was being consumed to “render” the video. But that’s the beauty of MMR: redirect the processor-intensive decoding to the end-point and just send the stream un-decoded. While PCoIP is a win in LANs with knowledge workers and call-center applications, the MMR-based thin clients look pretty good for Education and YouTube-happy C-level employees looking to catch up on their Hulu…
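
To illustrate why that host-side number stays so low, here is a purely conceptual sketch – not Wyse’s or VMware’s actual MMR code, and the codec names are hypothetical placeholders – of the redirect-versus-render decision described above.

```python
# Conceptual sketch only: media the endpoint can decode is passed through
# still-compressed, so the ESX host never burns cycles decoding frames.
ENDPOINT_CODECS = {"wmv9", "wma", "mpeg4"}   # what the thin client claims it can decode

def deliver(stream_codec: str) -> str:
    """Decide how the host should handle a media stream for this client."""
    if stream_codec in ENDPOINT_CODECS:
        # MMR path: ship the compressed bitstream; the Sempron-class
        # thin client does the decode locally.
        return "redirect-to-endpoint"
    # Fallback: decode/composite on the host and send screen updates,
    # which is where the host CPU cost would otherwise pile up.
    return "render-on-host"

print(deliver("wmv9"))    # redirect-to-endpoint
print(deliver("theora"))  # render-on-host
```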

I managed to catch the Xsigo boys as the night wound down, and they assured me that “mom’s cooking” back at the HQ. “Very soon” we should be hearing about a Xsigo I/O Director option that is a better fit for ROBO and SME deployments. The best part about Xsigo’s I/O virtualization technology in VMware applications: it delivers without a proprietary blade or server requirement! I’m really looking forward to getting some Xsigo into the SOLORI lab this summer…
