Tuesday, February 9, 2010

Quick Take - VMware PartnerExchange 2010: Day 3

With about 2.5 hours of sleep and the VCP410 test looming, Tuesday took on a different tone than the previous two days. My calendar was full:

  • 5:30am - Wake up and check e-mail/blog/systems back in CST

  • 7:00am - Breakfast at the VMware Experience Hall

  • 8:30am - Keynote with Carl Eschenbach, EVP Worldwide Sales & Field Ops, VMware

  • 10:00am - ESXi Convergence Roadmap Session

  • 11:15am - View4 Workload Sizing and Testing

  • 12:15pm - Lunch in the VMware Alumni Lounge

  • 1:30pm - vSphere4 Advanced Configuration Topics

  • 3:45pm - VCP410 Test

  • 5:15pm - View Composer Tips, Tricks and Best Practices

  • 6:45pm - Check-in at home

  • 7:00pm - Update blog and check/respond to e-mail


First, the keynote with Carl and the gang was awesome! VMware took a really aggressive attitude towards the competition, including Citrix (virtual desktop) and Microsoft (virtual data center). To sum up the conversation: VMware is intent on carrying the Q4/09 momentum through 2011, extending its lead in virtual data center and cloud computing capabilities. But Carl's not happy with just the traditional server market, and VMware wants to own the virtual desktop space - virtually putting the squeeze on Citrix as the walls close in around them.

With 60-70% of net new Citrix VDI builds being deployed on VMware's ESX servers, it makes me wonder why Citrix would drive its XenApp customers to VDI - in the form of XenDesktop4 - by offering a 2-for-1 trade-in program. Isn't this like asking its own clients to reconsider the value proposition of XenApp - in essence turning vendor-locked accounts into a new battleground with VMware? If the momentum shifts towards VMware View4.x and VMware accelerates the pace on product features (including management and integration), as suggested by Carl's aggressive tone today, where does that leave XenDesktop and Citrix?

[caption id="attachment_1439" align="alignright" width="150" caption="The VMware Express Mobile Data Center"]The VMware Express Trailer[/caption]

The VMware Express: Coming to a City Near You!


VMware introduced its "data center on wheels" to the PartnerExchange 2010 audience today and I got a chance to get on-board and take a look. The build-out was clean and functional with 60+ Gbps of external interconnects waiting for a venue. Inside the Express was a data center, a conference room and several demonstration stations showing VMware vSphere and View4 demos.

[caption id="attachment_1438" align="alignleft" width="113" caption="6 ton Mobile A/C for Express' Data Center"]6 ton Mobile A/C for Express' Data Center[/caption]

VMware rolled out the red carpet for the PartnerExchange 2010 attendees. To the right - above the fifth wheel - is the conference room. All the way in the back (to the left) is the data center portion. In between the data center and the conference room lies the demonstration area, with external plasma screens in kick-panels displaying slide decks and demonstration materials.

[caption id="attachment_1444" align="alignright" width="92" caption="The Express' Diesel Generator - Capable of Powering Things for Nearly 2-days"]The Express' Diesel Generator - Capable of Powering Things for Nearly 2-days[/caption]

Up front - behind the cab - rest six tons of air conditioning mounted to the trailer. This keeps the living area inside habitable and the data center (about 70-80 sq. ft.) cool enough to run some serious virtualization equipment. Mounted directly behind the driver's cabin is the diesel generator, capable of powering the entire operation for better than 40 hours when external power is unavailable. Today, however, the VMware Express was taking advantage of "house power" provided by Mandalay Bay's conference center.

Where the rubber met the road was inside the data center, currently occupied by two racks: one from EMC/Cisco and one powered by MDS/NetApp/Xsigo. Both featured 12TB of raw storage and high-density Nehalem-EP solutions. In the right corner, the heavyweight EMC/Cisco bundle was powered by Cisco's UCS B-series platform, featuring eight Nehalem 2P blades per 6U chassis fed by a pair of Cisco 4900-series converged switches. In the left corner, the super middleweight MDS Micro QUADv-series "mini-blade" solution featured eight Nehalem 2P blades across two 2U chassis, fed by a pair of Xsigo I/O directors delivering converged network and SAN traffic tunneled over InfiniBand interconnects.

[caption id="attachment_1441" align="alignleft" width="300" caption="Two-of-Three Racks are Currently Occupied by EMC/Cisco and MDS/NetApp/Xsigo"]Still More Capacity for Additional Hardware Sponsors[/caption]

It will be interesting to see how the drive arrays survive the journey as the VMware Express travels across the country over the next year. Meanwhile, this tractor trailer is packing 60 blades' worth of serious virtualization hardware destined for a town near you. VMware is currently looking for additional sponsors from the partner community to expand its tour, and access to the VMware Express will be prioritized based on partner status and/or sponsorship.

VCP410 Test Passed - Waiting for Official Notification


With the VCP410 test in the books, I'm now waiting for official notification from VMware of my VCP4 status. According to my "Examination Score Report," I should receive notice from VMware within 30 days, having met all of the requirements for "VMware Certified Professional on vSphere 4 Certification" and testing above the Certified Instructor minimums.

As a systems and network architect, I found the "interface-related" questions somewhat more challenging than the "design and configure" fare. However, the test was pretty well balanced and left me with well over 25 minutes to go back over questions I'd checked for review and finalize those answers. I logged out of the exam with 18 minutes left on the clock. My recommendations for those looking to pass the VCP410:

  1. Work with vSphere in a hands-on capacity for several days before taking the test, making good mental notes related to interface operations inside and outside of vCenter

  2. Know the minimums and maximums for ESX and vCenter configurations

  3. Understand storage zoning, masking and configuration

  4. Go over the VCP blueprint on your own before seeking additional assistance

  5. Remember the test is based on the GA release and not the "current" release, so "correct" answers may differ slightly from "reality"

  6. Get more than 2.5 hours sleep the night before you take the exam

  7. Schedule the exam in the morning - while you're fresh - not the afternoon following meetings, etc.

  8. Dig into topics on the VCP Forum online


That about does it for day number three in Las Vegas, Nevada. It's time to shuffle-up and deal!

Monday, February 8, 2010

[caption id="attachment_1422" align="alignleft" width="134" caption="View of the Mandalay Bay from VMware's Alumni Lounge"][/caption]

It's my second day at the beautiful Mandalay Bay in Las Vegas, Nevada and VMware PartnerExchange 2010. Yesterday was filled with travel and a generous "Tailgate Party" with burgers, dogs, beverages and lots of VMware geeks! I managed to catch the last quarter of the game from the Mandalay Bay Poker Room where I added to my chip stack at the 1/2 No-Limit Texas Hold 'Em tables. Then it was early to bed - about 9PM PST - where I studied for the upcoming VCP410 exam.

Today (Monday) was occupied with a partners-only VMware Certified Professional, Version 4, Preparation Course, which outlined the VCP4 Blueprint, question examples and test-taking strategies. The "best answer," multiple-choice format of the VCP410 exam promises to offer me some challenges as I apply black-and-white logic to a few shades-of-grey questions. The best strategy to overcome such an obstacle: read the question in its entirety, eliminate all wrong answers, then choose the answer(s) that best satisfy the entire question. A key example from the on-line "mock-up" exam:
What is the maximum number of vNetwork switch ports per ESX host and vCenter Server instance?

a.  4,088 for vNetwork standard switches; 4,096 for vNetwork Distributed switches

b.  4,096 for both types of switches

c.  4,088 for vNetwork standard switches; 6,000 for vNetwork distributed switches

d.  512 for both types of virtual switches

Well, it might have been obvious that "c" is the "correct" answer, but "a" is right off of Page 6 of the vSphere Configuration Maximums guide. Both are solidly "correct" answers; it's just that "c" speaks to both the ESX question and the vCenter question, making it more correct. However, neither is completely correct, since vDS ports are bound by both vCenter and the ESX host, while vSS ports are bound only by the ESX host. Since neither answer "a" nor "c" specifies which limitation it is answering - host or vCenter - it is left to subjective reasoning to infer the intent. According to Jon Hall (VMware, Florida), the most ports any one vNetwork switch can have in any one host is 4,088 - regardless of type. Therefore, to reach the "total virtual network ports per host" (vDS and vSS ports), at least one switch of each type must exist; alone, either type can only reach 4,088 ports. However, the Configuration Maximums document never spells this out for the vNetwork Distributed Switch. Hopefully this exception will be foot-noted in the next revision of the document. [Note: the additional information Jon provided about vDS-type vNetwork switches logically invalidates "a" as a response.]
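The port math above can be sketched in a few lines. This is just my illustration of the reasoning - the constants reflect my reading of the Configuration Maximums guide, and the helper function is hypothetical, not anything from VMware:

```python
# Illustrative sketch of the vSphere 4 vNetwork port-limit reasoning.
# Constants are my reading of the Configuration Maximums guide.
MAX_PORTS_PER_SWITCH = 4088      # any single vSS or vDS within one host
MAX_PORTS_PER_HOST = 4096        # total vNetwork switch ports per ESX host
MAX_VDS_PORTS_PER_VCENTER = 6000 # vDS ports, bound at the vCenter level

def host_ports(switch_port_counts):
    """Total vNetwork ports a host carries for a list of switches."""
    assert all(p <= MAX_PORTS_PER_SWITCH for p in switch_port_counts), \
        "no single switch may exceed 4,088 ports"
    total = sum(switch_port_counts)
    assert total <= MAX_PORTS_PER_HOST, "host total capped at 4,096"
    return total

# A single switch - of either type - tops out below the per-host total...
print(host_ports([MAX_PORTS_PER_SWITCH]))      # 4088
# ...so reaching 4,096 requires at least two switches (one of each type).
print(host_ports([MAX_PORTS_PER_SWITCH, 8]))   # 4096
```

This is why neither answer "a" nor "c" fully captures the limits: the 4,088 cap applies per switch per host, while the 6,000 figure is a vCenter-level bound.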

Following the VCP4 Prep Course, I "recharged" in the Alumni Lounge. VMware had snacks and drinks to quell the appetite and lots of power outlets to restore my iPhone and laptop. While I waited, I contacted the wife and got the 4-1-1 on our baby, checked e-mail and ran through the "mock-up" exam a couple of times. Then it was off to the Welcome Reception at the VMware Experience Hall where sponsors and exhibitors had their wares on display.

[caption id="attachment_1424" align="alignright" width="123" caption="iPhone Screen Capture of the ESX Host Running Nehalem-EX, 4P/32C/64T"]iPhone Screen Capture of the ESX Host Running Nehalem-EX, 4P/32C/64T[/caption]

Just inside the Hall - across from the closest beverage station - was Intel's booth, and the boys in blue were demonstrating vMotion over 10GE NICs. Yes, it was fast (as you'd expect), but the real kick was the "upcoming" 10GE Base-T adapters set to challenge the current price-performance leader: 10GE Base-CR (also supporting SFP+). At under $400/port for 10GE, it's hard to remember a reason for using 1Gbps NICs... Oh yes, the prohibitive per-port cost of 10GE switches. Arista Networks to the rescue???

Intel was also showing their "modular server" system. Unfortunately, the current offering doesn't allow for SAS JBOD expansion in a meaningful way (read: running NexentaStor on one or two of the "blades"), but after discussing the issue of SAS love with the guys in the blue booth, interests were piqued. Evan, expect a call from Intel's server group... Seriously, with 14x 2.5" drives in a SAS Expander interconnected chassis, NexentaStor + SSD + 15K SAS would rock!

Last but not least, Intel was proudly showing their 4P Nehalem-EX running VMware ESX with 512GB of RAM (DDR3) and demonstrating 64 active threads (pictured). This build-out offers lots of virtualization goodness at a heretofore unknown price point. Suffice to say, at 1.8GHz it's not a screamer, but the RAS features are headed in the right direction. When you rope together 64 threads (about 125-250 VMs) and 1TB worth of VMs (yes, 1TB RAM - about $250K worth using "on-loan" Samsung parts), you are talking about a lot of eggs in one basket. By enhancing the RAS capabilities of these giant systems, component failures become less catastrophic - eventually allowing only a few VMs to be impacted by a point failure instead of ALL running VMs on the box.

[caption id="attachment_1425" align="alignleft" width="150" caption="vCenter ESX Host Status Showing 512GB of RAM"][/caption]

In case you haven't seen an ESX host with 512GB of available RAM, check out the screen capture (excuse the iPhone quality) to the left. That's about $33K worth of DDR3 memory sitting in that box; assuming the EX processors run $2K apiece and allowing $6K for the remainder of the system, that's nearly $6K/VM in this demo: fantastically decadent! Of course - and in all due fairness to the boys in blue - VM density was not the goal in this demonstration: RAS was, and the 2-bit error scrubbing - while as painful as watching paint dry - is pretty cool and soon to be needed (as indicated above) for systems with this capacity.
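For the curious, here's the back-of-the-envelope math behind that per-VM figure. The hardware prices are the rough estimates quoted above; the VM count is purely my assumption (the demo ran only a handful of VMs), so treat the result as illustrative:

```python
# Back-of-the-envelope cost math for the Nehalem-EX demo box.
# Hardware figures are the rough estimates from the post; the
# VM count is an assumption, chosen to match "nearly $6K/VM".
ram_cost = 33_000        # ~512GB of DDR3, per the estimate above
cpu_cost = 4 * 2_000     # four Nehalem-EX processors at ~$2K apiece
chassis_cost = 6_000     # remainder of the system

total = ram_cost + cpu_cost + chassis_cost
vms_in_demo = 8          # assumed VM count for the RAS-focused demo

print(total)                    # 47000
print(total / vms_in_demo)      # 5875.0 -> "nearly $6K/VM"
```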

Other vendors visited were Wyse and Xsigo. The boys in yellow (Wyse) were pimping their thin/zero clients with some compelling examples of PCoIP (Wyse 20p) and MMR (Wyse r90lew). The PCoIP demos featured end-to-end hardware Teradici cards displaying clips from Avatar, while the MMR demo featured 720p movie clips from an IMAX cut of dogfight training. While PCoIP was polished and flawless, the upcoming MMR enhancements - though flawed in the beta I saw - were nothing short of impressive.

[caption id="attachment_1428" align="alignright" width="225" caption="No, that's not Xsigo's secret sauce: it's the chocolate fountain at VMware's Welcome Reception."][/caption]

Considering that the MMR-capable thin client was running a 1.5GHz AMD Sempron, the 720p Windows Media stream looked all the better. Looking back at the virtual machine from the ESX console, only about 10-15% of a core was being consumed to "render" the video. But that's the beauty of MMR: redirect the processor-intensive decoding to the end-point and just send the stream un-decoded. While PCoIP is a win in LANs with knowledge workers and call center applications, MMR-based thin clients look pretty good for education and YouTube-happy C-level employees looking to catch up on their Hulu...

I managed to catch the Xsigo boys as the night wound down, and they assured me that "mom's cooking" back at HQ. "Very soon" we should be hearing about a Xsigo I/O Director option that is a better fit for ROBO and SME deployments. The best part about Xsigo's I/O virtualization technology in VMware applications: it delivers without a proprietary blade or server requirement! I'm really looking forward to getting some Xsigo into the SOLORI lab this summer...