Monday, September 28, 2009

In Part 4 of this series we created two vSphere virtual machines - one running ESX and one running ESXi - from a set of master images we can use for rapid deployment in case we want to expand the number of ESX servers in our lab. We showed you how to use NexentaStor to create snapshots of NFS and iSCSI volumes and create ZFS clone images from them. We then showed you how to stage the startup of the VSA and ESX hosts to "auto-start" the lab on boot-up.

In this segment, Part 5, we will create a VMware Virtual Center (vCenter) virtual machine and place the ESX and ESXi machines under management. Using this vCenter instance, we will complete the configuration of ESX and ESXi using some of the new features available in vCenter.

Part 5, Managing our ESX Cluster-in-a-Box


With our VSA and ESX servers purring along in the virtual lab, the only thing stopping us from moving forward with vMotion is the absence of a working vCenter to control the process. Once we have vCenter installed, we have 60 days to evaluate and test vSphere before the trial license expires.

Prepping vCenter Server for vSphere


We are going to install Microsoft Windows Server 2003 STD for the vCenter Server operating system. We chose Server 2003 STD since we have limited CPU and memory resources to commit to the management of the lab and because our vCenter has no need of 64-bit resources in this use case.

Since one of our goals is to have a fully functional vMotion lab with reasonable performance, we want to create a vCenter virtual machine with at least the minimum requirements satisfied. In our 24GB lab server, we have committed 20GB to ESX, ESXi and the VSA (8GB, 8GB and 4GB, respectively). Our base ESXi instance consumes 2GB, leaving only 2GB for vCenter - or does it?

Memory Use in ESXi


VMware ESX (and ESXi) does a good job of conserving resources by limiting commitments for memory and CPU. This is not unlike any virtual-memory-capable system that puts a premium on "real" memory by moving less frequently used pages to disk. With a lot of idle virtual machines, this ability alone can create significant over-subscription possibilities for VMware; this is why it is possible for 32GB worth of VMs to run on a 16-24GB host.

Do we really want this memory paging to take place? The answer - for the consolidation use cases - is usually "yes." This is because consolidation is born out of the need to aggregate underutilized systems in a more resource efficient way. Put another way, administrators tend to provision systems based on worst case versus average use, leaving 70-80% of those resources idle in off-peak times. Under ESX's control those underutilized resources can be re-tasked to another VM without impacting the performance of either one.

On the other hand, our ESX and VSA virtual machines are not the typical use case. We intend to fully utilize their resources and let them determine how to share them in turn. Imagine a good number of virtual machines running on our virtualized ESX hosts: will they perform well with the added hardship of memory paging? Also, when we begin to use vMotion, those CPU and memory resources will appear on BOTH virtualized ESX servers at the same time.

It is pretty clear that if all of our lab storage is committed to the VSA, we do not want to page its memory. Remember that any additional memory not in use by the SAN OS in our VSA is employed as ARC cache for ZFS to increase read performance. Paging memory that is assumed to be "high performance" by NexentaStor would result in poor storage throughput. The key to "recursive computing" is knowing how to anticipate resource bottlenecks and deploy around them.

This raises the question: how much memory is left after reserving 4GB for the VSA? To figure that out, let's look at what NexentaStor uses at idle with 4GB provisioned:

[Screenshot: NexentaStor's RAM footprint with 4GB provisioned, at idle.]

As you can see, we have specified a 4GB reservation, which appears as "4233 MB" of Host Memory consumed (4096MB + 137MB). Looking at the "Active" memory, we see that - at idle - NexentaStor is using about 2GB of host RAM for the OS and to support the couple of file systems mounted on the host ESXi server (recursively).

Additionally, we need to remember that each VM has a memory overhead to consider, and it increases with the vCPU count. For the four-vCPU ESX/ESXi servers, the overhead is about 220MB each; the NexentaStor VSA consumes an additional 140MB with its two vCPUs. Totaling up the memory plus overhead identifies a commitment of at least 21,828MB to run the VSA and both ESX guests - that leaves a little under 1.5GB for vCenter if we used a 100% reservation model.
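For those keeping score at home, a quick back-of-the-envelope sketch in Python sums only the figures itemized in this post - so it comes in a bit under the 21,828MB total above - but it reaches the same bottom line: a little under 1.5GB left over for vCenter.

# Back-of-the-envelope memory budget for the 24GB lab host (values in MB).
# Reservations and per-VM overheads are the figures quoted above; the 2GB
# "base ESXi" figure is the host consumption cited earlier in this post.
reservations = {"ESX": 8192, "ESXi": 8192, "VSA": 4096}   # guest reservations
overheads    = {"ESX": 220,  "ESXi": 220,  "VSA": 140}    # VMM overhead per VM

host_total = 24 * 1024   # physical RAM in the lab server
host_base  = 2 * 1024    # consumed by the base ESXi instance

committed = sum(reservations.values()) + sum(overheads.values())
remaining = host_total - host_base - committed

print("Committed to guests: %d MB" % committed)   # 21,060 MB itemized
print("Left for vCenter:    %d MB" % remaining)   # roughly 1.4GB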

Memory Over Commitment


The same concerns about memory hold true for our ESX and ESXi hosts - albeit in a less obvious way. We obviously want to "reserve" the memory required by the VMM - about 2.8GB and 2GB for ESX and ESXi, respectively. Additionally, we want to avoid over-subscription of memory on the host ESXi instance - if at all possible - since it will already be busy running our virtual ESX and ESXi machines.

Friday, September 25, 2009

Quick Take: HP Blade Tops 8-core VMmark w/OC'd Memory

HP's ProLiant BL490c G6 server blade now tops the VMware VMmark table for 8-core systems - just squeaking past rack servers from Lenovo and Dell with a score of 24.54@17 tiles: a new 8-core record. The half-height blade was equipped with two quad-core Intel Xeon X5570 processors (Nehalem-EP, 130W TDP) and 96GB of ECC Registered DDR3-1333 memory (12x 8GB, 2 DIMMs per channel).

In our follow-up, we found that HP's on-line configuration tool does not allow for DDR3-1333 memory, so we went to the street for a comparison. For starters, we examined the on-line price from HP with DDR3-1066 memory plus the added QLogic QMH2462 Fibre Channel adapter ($750) and additional NC360m dual-port Gigabit Ethernet controller ($320), which came to a grand total of $28,280 for the blade (about $277/VM, not including blade chassis or SAN storage).

Stripping memory from the build-out results in a $7,970 floor for the hardware, sans memory. Going to the street to find 8GB sticks with DDR3-1333 ratings and HP support yielded the Kingston KTH-PL313K3/24G kit (3x 8GB DIMMs), of which we would need three to complete the build-out. At $4,773 per kit, the completed system comes to $22,289 (about $218/VM, not including chassis or storage) - which may do more to demonstrate Kingston's value in the marketplace than HP's penchant for "over-priced" memory.
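If you want to check our per-VM math, here is a minimal Python sketch; the only assumption beyond the prices quoted above is the standard six-VM VMmark tile (17 tiles = 102 VMs):

# Rough cost-per-VM comparison for the two BL490c G6 build-outs above.
TILES = 17
VMS_PER_TILE = 6          # standard VMmark tile
vms = TILES * VMS_PER_TILE

hp_list_buildout = 28280              # HP on-line configuration w/DDR3-1066
street_buildout  = 7970 + 3 * 4773    # memory-less floor + 3x Kingston 24GB kits

for label, total in (("HP list, DDR3-1066", hp_list_buildout),
                     ("Street memory, DDR3-1333", street_buildout)):
    print("%s: $%d total, about $%.2f/VM" % (label, total, total / float(vms)))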

Now, the interesting disclosure from HP's testing team is this:

[Image: Notes from HP's VMmark submission.]

While this appears to boost memory performance significantly for HP's latest run (compared to the 24.24@17 tiles score back in May, 2009) it does so at the risk of running the Nehalem-EP memory controller out of specification - essentially, driving the controller beyond the rated load. It is hard for us to imagine that this specific configuration would be vendor supported if used in a problematic customer installation.

SOLORI's Take: Those of you following closely may be asking yourselves: "Why did HP choose to over-clock the memory controller in this run by pushing a 1066MHz, 2DPC limit to 1333MHz?" It would appear the answer is self-evident: the extra 6% was needed to put them over the Lenovo machine. This issue raises a new question about the VMmark validation process: "Should out-of-specification configurations be allowed in the general benchmark corpus?" It is our opinion that VMmark should represent off-the-shelf, fully-supported configurations only - not esoteric configuration tweaks and questionable over-clocking practices.

Should there be an "unlimited" category in the VMmark arena? Who knows? How many enterprises knowingly commit their mission-critical data and processes to systems running over-clocked processors and over-driven memory controllers? No hands? That's what we thought... Congratulations anyway to HP for clawing their way to the top of the VMmark 8-core heap.

Monday, September 21, 2009

AMD Chipsets Launched: Fiorano and Kroner Platforms to Follow

The Channel Register is reporting on the launch of AMD's motherboard chipsets which will drive new socket-F based Fiorano and Kroner platforms as well as the socket G34 and C32 based Maranello and San Marino platforms. The Register also points out that no tier one PC maker is announcing socket-F solutions based on the new chipsets today. However, motherboard and "barebones" maker Supermicro is also announcing new A+ server, blade and workstation variants using the new AMD SR5690 and SP5100 chipsets, enabling:

  • GPU-optimized designs: Support up to four double-width GPUs along with two CPUs and up to 3 additional high-performance add-on cards.

  • Up to 10 quad-processor (MP) or dual-processor (DP) Blades in a 7U enclosure: Industry-leading density and power efficiency with up to 240 processor cores and 640GB memory per 7U enclosure.

  • 6Gb/s SAS 2.0 designs: Four-socket and two-socket server and workstation solutions with double the data throughput of previous generation storage architectures.

  • Universal I/O designs: Provide flexible I/O customization and investment protection.

  • QDR InfiniBand support option: Integrated QDR IB switch and UIO add-on card solution for maximum I/O performance.

  • High memory capacity: 16 DIMM models with high capacity memory support to dramatically improve memory and virtualization performance.

  • PCI-E 2.0 Slots plus Dual HT Links (HT3) to CPUs: Enhance motherboard I/O bandwidth and performance. Optimal for QDR IB card support.

  • Onboard IPMI 2.0 support: Reduces remote management costs.


Eco-systems built around Supermicro's venerable AS2021M - based on the NVidia nForce Pro 3600 chipset - can now be augmented with the Supermicro AS2021A variant built on AMD's SR5690/SP5100 pairing. Besides offering HT3.0 and an on-board Winbond WPCM450 KVM-over-IP BMC module, the new iteration includes support for the SR5690's IOMMU function (experimentally supported by VMware), 16 DDR2 800/667/533 DIMMs, and four PCI-E 2.0 slots - all in the same, familiar 2U chassis with eight 3.5" hot-swap bays.

AMD's John Fruehe outlines AMD's market approach for the new chipsets in his "AMD at Work" blog today. Based on the same basic logic/silicon, the SR5690, SR5670 and SR5650 all deliver PCI-E 2.0 and HT3.0 but at differing levels of power consumption and PCI Express lanes to their respective platforms. Paired with appropriate "power and speed" Opteron variant, these platforms offer system designers, virtualization architects and HPC vendors greater control over price-performance and power-performance constraints that drive their respective environments.

AMD chose the occasion of the Embedded Systems Conference in Boston to announce its new chipset to the world. Citing performance-per-watt advantages that could enhance embedded systems in the telecom, storage and security markets, AMD's press release highlighted three separate vendors with products ready to ship based on the new AMD chipsets.

Monday, September 14, 2009

Quick Take: DRAM Price Follow-Up

As anticipated, global DRAM prices have continued their upward trend through September, 2009. We reported on August 4, 2009 about the DDR3 and DDR2 price increases that - coupled with a short-fall in DDR3 production - have caused a temporary shift of the consumer market towards DDR2-based designs.

Last week, the Inquirer also reported that DRAM prices were on the rise and that the trend will result in parity between DDR2 and DDR3 prices. MaximumPC ran the Inquirer's story urging its readers to buy now as the tide rises on both fronts. DRAMeXchange is reporting a significant revenue gain to the major players in the DRAM market as a result of this well orchestrated ballet of supply and demand. The net result for consumers is higher prices across the board as the DDR2/DDR3 production cross-over point is reached.

[Chart: 2Q2009 worldwide DRAM revenue]


SOLORI's Take: DDR2 is a fading bargain in the server markets, and DIMM vendors like Kingston are working to maintain a stable source of DDR2 components through the end of 2009. Looking at our benchmark tracking components, we project 8GB DDR2 DIMMs to average $565/DIMM by the end of 2009. Beyond that, expect 8GB/DDR2 to hit $600/DIMM by the end of H2/2010, with lower pricing on 8GB/DDR3-1066 - in the $500/DIMM range - if supply can keep up with the new system demand created by continued growth in the virtualization market.

Benchmark Server Memory Pricing

DDR2 Series (1.8V) | Price Jun '09 | Price Sep '09 | Change
4GB 800MHz DDR2 ECC Reg with Parity CL6 DIMM Dual Rank, x4 (5.4W) | $100.00 | $117.00 | up 17%
4GB 667MHz DDR2 ECC Reg with Parity CL5 DIMM Dual Rank, x4 (5.94W) | $80.00 | $103.00 | up 29%
8GB 667MHz DDR2 ECC Reg with Parity CL5 DIMM Dual Rank, x4 (7.236W) | $396.00 | $433.00 | up 9%

DDR3 Series (1.5V) | Price Jun '09 | Price Sep '09 | Change
4GB 1333MHz DDR3 ECC Reg w/Parity CL9 DIMM Dual Rank, x4 w/Therm Sen (3.96W) | $138.00 | $151.00 | up 10%
4GB 1066MHz DDR3 ECC Reg w/Parity CL7 DIMM Dual Rank, x4 w/Therm Sen (5.09W) | $132.00 | $151.00 | up 15%
8GB 1066MHz DDR3 ECC Reg w/Parity CL7 DIMM Dual Rank, x4 w/Therm Sen (6.36W) | $1035.00 | $917.00 | down 11.5%

SOLORI's 2nd Take: Samsung has been driving the DRAM roller coaster in an effort to dominate the market. With Samsung's 40-nm 2Gb DRAM production ramping by year end, the chip maker's influence could create a disruptive position in the PC and server markets by driving 8GB/DDR3 prices into the sub-$250/DIMM range by 2H/2010. Meanwhile, Hynix - the #2 DRAM maker - chases with 40-nm 1Gb DDR3, giving Samsung the opportunity to repeat its 2008/2009 gambit in 2010 and making it increasingly harder for competitors to get a foot-hold in the DDR3 market.

Samsung also has its eye on the future, with 16GB and 32GB DIMMs already exhibited using 50-nm 2Gb parts claiming a 20% power savings over the current line of memory. With 40-nm 2Gb parts, Samsung is claiming up to 30% additional power savings. To put this into perspective, eight 32GB DIMMs would consume about 60% of the power used by thirty-two 8GB DIMMs (a configuration requiring a 4P+ server). In a virtualization context, that is enough memory to enable 100 virtual machines with 2.5GB of memory each without over-subscription. Realistically, we expect to see 16GB DDR3 DIMMs at $1,200/DIMM by 2H/2010 - if everything goes according to plan.

Sunday, September 13, 2009

Quick Take: Magny-Cours Spotted, Pushed to 3GHz for wPrime

Andreas Galistel at NordicHardware posted an article showing a system running a pair of engineering samples of the Magny-Cours processor running at 3.0GHz. Undoubtedly these images were culled from a report "leaked" on XtremeSystems forums showing a "DINAR2" motherboard with SR5690 chipset - in single and dual processor installation - running Magny-Cours at the more typical pre-release speed of 1.7GHz.

We know that Magny-Cours is essentially an MCM of Istanbul delivered in the rectangular socket G34 package. One illuminating detail in the two posts is the reported "reduction" in L3 cache from 12MB (6MB x 2 in the MCM) to 10MB (5MB x 2 in the MCM). Where did the additional cache go? That's easy: since a 2P Magny-Cours installation is essentially a 4P Istanbul configuration, these processors have the new HT Assist feature enabled - giving 1MB of cache from each chip in the MCM over to HT Assist.

"wPrime uses a recursive call of Newton's method for estimating functions, with f(x) = x² - k, where k is the number we're sqrting, until Sgn(f(x)/f'(x)) does not equal that of the previous iteration, starting with an estimation of k/2. It then uses an iterative calling of the estimation method a set amount of times to increase the accuracy of the results. It then confirms that √(k)² = k to ensure the calculation was correct. It repeats this for all numbers from 1 to the requested maximum."

- wPrime site


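The quoted description boils down to Newton's method for square roots. As a rough illustration only - our own simplified Python sketch, not wPrime's actual code or stopping rule - the core of the technique looks like this:

# Simplified sketch of Newton's method on f(x) = x*x - k, started at k/2,
# as the wPrime description outlines. wPrime's real stopping rule (watching
# the sign of f(x)/f'(x)) and its extra refinement passes are not reproduced.
def newton_sqrt(k, iterations=20):
    x = k / 2.0 if k > 1 else 1.0     # crude starting estimate
    for _ in range(iterations):
        x -= (x * x - k) / (2.0 * x)  # Newton step for f(x) = x^2 - k
    return x

# "confirms that sqrt(k)^2 = k" over a range, as the benchmark does at scale
for k in range(1, 33):
    r = newton_sqrt(float(k))
    assert abs(r * r - k) < 1e-9
print("verified square-root estimates for 1..32")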

Another intriguing thing about the XtremeSystems post in particular is the reported wPrime 32M and 1024M completion times. Compared to the hyper-threading-enabled 2P Xeon W5590 (130W TDP) running wPrime 32M at 3.33GHz (3.6GHz turbo) in 3.950 seconds, the 2P 3.0GHz Magny-Cours completed wPrime 32M in an unofficial 3.539 seconds - about 10% quicker while running a 10% slower clock. Through the myopic lens of this single result, it would appear AMD's choice of "real cores" versus hyper-threading delivers its punch.

SOLORI's Take: As a "reality check," we can compare the reigning quad-socket, quad-core Opteron 8393 SE results in wPrime 32M and wPrime 1024M: 3.90 and 89.52 seconds, respectively. Adjusted for clock and core count versus its Shanghai cousin, the Magny-Cours engineering samples - at 3.54 and 75.77 seconds, respectively - turned in times about 10% slower than our calculus predicted. While still "record breaking" for 2P systems, we expected the Magny-Cours/Istanbul cores to out-perform Shanghai clock-for-clock - even at this stage of the game.

Due to the multi-threaded nature of the wPrime benchmark, it is likely that the HT Assist feature - enabled in a 2P Magny-Cours system by default - is the cause of the discrepancy. By reducing the available L3 cache by 1MB per die - 4MB of L3 cache total - HT Assist actually could be creating a slow-down. However, there are several things to remember here:

  • These are engineering samples qualified for 1.7GHz operation

  • Speed enhancements were performed with tools not yet adapted to Magny-Cours

  • The author indicated a lack of control over AMD's Cool 'n Quiet technology which could have made "as tested" core clocks somewhat lower than what CPUz reported (at least during the extended tests)

  • It is speculated that AMD will ship Magny-Cours at 2.2GHz (top bin), making the 2.6+ GHz results non-typical

  • The BIOS and related dependencies are likely still being "baked"


The more "typical" engineering sample speed tests posted on the XtremeSystems forum track with the 3.0GHz overclock results: at a more realistic clock speed of 2.6GHz, the 2P Magny-Cours turned in 3.947 seconds and 79.625 seconds for wPrime 32M and 1024M, respectively. Even at that speed, the 24-core system is on par with the 2P Nehalem system clocked nearly a GHz faster. Oddly, Intel reports the W5590 as not supporting "turbo" or hyper-threading, although it is clear from actual testing that Intel's marketing material is incorrect.

Assuming Magny-Cours improves slightly on its way to market, we already know how 24-core Istanbul stacks up against 16-thread Nehalem in VMmark and what that means for Nehalem-EP. This partly explains the marketing shift as Intel tries to position Nehalem-EP as destined for workstations instead of servers. Whether you consider this move a prelude to the ensuing Nehalem-EX v. Magny-Cours combat to come, or an attempt to keep Intel's server chip power average down by eliminating the 130W+ parts from the "server" list, Intel and AMD will each attempt to win the war before the first shot is fired. Either way, we see nothing that disrupts the price-performance and power-performance comparison models that dominate the server markets.

[Ed: The 10% difference is likely due to the fact that the author was unable to get "more than one core" clocked at 3.0GHz. Likewise, he was uncertain that all cores were reliably clocking at 2.6GHz for the longer wPrime tests. Again, this engineering sample was designed to run at 1.7GHz and was not likely "hand picked" to run at much higher clocks. He speculated that some form of dynamic core clocking linked to temperature was affecting clock stability - perhaps due to some AMD-P tweaks in Magny-Cours.]

Wednesday, September 9, 2009

Quick Take: Dell/Nehalem Take #2, 2P VMmark Spot




The new 1st runner-up spot for VMmark in the "8 core" category was taken yesterday by Dell's R710 - just edging out the previous second-place HP ProLiant BL490c G6 by 0.1% - a virtual dead heat. Equipped with a pair of Xeon X5570 processors ($1,386/ea, bulk list) and 96GB of registered DDR3/1066 (12x8GB), the 2U rack-mount R710 weighs in with a tile ratio of 1.43 over 102 VMs:

  • Dell R710 w/redundant high-output power supply, ($18,209)

  • 2 x Intel Xeon X5570 Processors (included)

  • 96GB ECC DDR3/1066 (12×8GB) (included)

  • 2 x Broadcom NetXtreme II 5709 dual-port GigabitEthernet w/TOE (included)

  • 1 x Intel PRO 1000VT quad-port GigabitEthernet (1x PCIe-x4 slot, $529)

  • 3 x QLogic QLE2462 FC HBA (1x PCIe slot, $1,219/ea)

  • 1 x LSI1078 SAS Controller (on-board)

  • 8 x 15K SAS OS drive, RAID10 (included)

  • Required ProSupport package ($2,164)

  • Total as Configured: $24,559 ($241/VM, not including storage)


Three Dell/EMC CX3-40f arrays were used as the storage backing for the test. The storage system included 8GB of cache, 2 enclosures and 15 15K disks per array, delivering 19 LUNs at about 300GB each. Intel's Hyper-Threading and "Turbo Boost" were enabled - for 8-thread, 3.33GHz core clocking - as was VT; however, embedded SATA and USB were disabled, as is common practice.

At about $1,445/tile ($241/VM), the new "second dog" delivers its best at a 20% price premium over Lenovo's "top dog" - although the non-standard OS drive configuration makes up about half of the difference, with Dell's mandatory support package making up the remainder. Using a simple RAID1 SAS configuration and eliminating the support package would have dropped the cost to $20,421 - a dead heat with Lenovo at $182/VM.

Comparing the Dell R710 to the 2P, 12-core benchmark HP DL385 G6 Istanbul system at 15.54@11 tiles:

  • HP DL385 G6  ($5,840)

  • 2 x AMD 2435 Istanbul Processors (included)

  • 64GB ECC DDR2/667 (8×8GB) ($433/DIMM)

  • 2 x Broadcom 5709 dual-port GigabitEthernet (on-board)

  • 1 x Intel 82571EB dual-port GigabitEthernet (1x PCIe slot, $150/ea)

  • 1 x QLogic QLE2462 FC HBA (1x PCIe slot, $1,219/ea)

  • 1 x HP SAS Controller (on-board)

  • 2 x SAS OS drive (included)

  • $10,673/system total (versus $14,696 complete from HP)


Direct pricing shows Istanbul's numbers at $1,336/tile ($223/VM), which is a 7.5% per-VM savings over the Dell R710. Going to the street - for memory only - changes the Istanbul picture to $970/tile ($162/VM), representing a 33% savings over the R710.
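The per-tile and per-VM arithmetic behind these comparisons is straightforward; here is a short Python sketch using the as-configured totals above and, again, the standard six-VM VMmark tile:

# Per-tile / per-VM math for the R710 vs. DL385 G6 comparison above.
def cost_breakdown(total, tiles, vms_per_tile=6):
    vms = tiles * vms_per_tile
    return total / float(tiles), total / float(vms)

configs = {
    "Dell R710 (17 tiles)":     (24559, 17),
    "HP DL385 G6, direct (11)": (14696, 11),
    "HP DL385 G6, street (11)": (10673, 11),
}

_, r710_vm = cost_breakdown(24559, 17)   # baseline for the savings column
for name, (total, tiles) in configs.items():
    per_tile, per_vm = cost_breakdown(total, tiles)
    savings = 100 * (1 - per_vm / r710_vm)
    print("%-28s $%7.0f/tile  $%6.0f/VM  (%.1f%% vs. R710)" % (name, per_tile, per_vm, savings))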

SOLORI's Take: Istanbul continues to offer a 20-30% CAPEX value proposition against Nehalem in the virtualization use case - even without the IOMMU and higher memory bandwidth promised in upcoming Magny-Cours. With the HE parts running around $500 per processor, the OPEX benefits are there for Istanbul too. It is difficult to understand why HP wants to charge $900/DIMM for 8GB PC2-5300 sticks when they are available on the street for 50% less - that's a 100% markup. Looking at what HP charges for 8GB DDR3/1066 - $1,700/DIMM - they are at least consistent. HP's memory pricing practice makes one thing clear - customers are not buying large memory configurations from their system vendors...

On the contrary, Dell appears to be happy to offer decent prices on 8GB DDR3/1066 with their R710 at approximately $837/DIMM - almost par with street prices.  Looking to see if this parity held up with Dell's AMD offerings, we examined the prices offered with Dell's R805: while - at $680/DIMM - Dell's prices were significantly better than HP's, they still exceeded the market by 50%. Still, we were able to configure a Dell R805 with AMD 2435's for much less than the equivalent HP system:

  • Dell R805 w/redundant power ($7,214)

  • 2 x AMD 2435 Istanbul Processors (included)

  • 64GB ECC DDR2/667 (8×8GB) ($433/ea, street)

  • 4 x Broadcom 5708 GigabitEthernet (on-board)

  • 1 x Intel PRO/1000 PT dual-port GigabitEthernet (1x PCIe slot, included)

  • 1 x QLogic QLE2462 FC HBA (1x PCIe slot, included)

  • 1 x Dell PERC SAS Controller (on-board)

  • 2 x SAS OS drive (included)

  • $10,678/system total (versus $12,702 complete from Dell)


This offering from Dell should be able to deliver performance equivalent to HP's DL385 G6 - and similar savings per VM compared to the Nehalem-based R710. Even at the $12,702 price as delivered from Dell, the R805 represents a potential $192/VM price point - about $50/VM (25%) savings over the R710.

Tuesday, September 8, 2009

Quick-Take: VMworld 2009 Wrap-Up

VMworld 2009 in San Francisco started off with a crash and a fist fight, but ended without further incident. If you're looking for what happened, it would be hard to beat Duncan Epping's link summary of the San Francisco VMworld 2009 at Yellow-Bricks, so we won't even try. Likewise, Chad Sakac has some great EMC viewpoints on his Virtualgeek blog, and - fresh from his new book release - Scott Lowe has some great detail about the VMworld keynotes, events and sessions he attended.

There is a great no-spin commentary on VMworld's "softer underbelly" on Jon William Toigo's Drunken Data blog - especially the post about Xsigo's participation in VMworld 2009. Also, Brian Madden has a great wrap-up video of interviews from the VMworld floor, including VMware's Client Virtualization Platform (CVP) and the software implementation of Teradici's PC-over-IP.

AMD's IOMMU was on display using a test mule with two 12-core 6100 processors and an SR5690 chipset. The targets were a FirePro graphics card and a Solarflare 10GE NIC. For IOMMU-based virtualization to have broad appeal, hardware device segmentation must be supported in a manner compatible with vMotion (live migration). No segmentation was hinted at in AMD's demo (for the FirePro), but the fact that vSphere + IOMMU + Magny-Cours equated to enough stability to openly demonstrate the technology says a lot about the maturity of AMD's upcoming chips and chipsets. On the other hand, Solarflare's demonstration previewed - in 10GE - what could be possible in a future version of IOV for GPUs:
"The flexible vNIC demonstration will highlight the Solarstorm server adapter’s scalable, virtualized architecture, supporting 100s of virtual machines and 1000s of vNICs. The Solarstorm vNIC architecture provides flexible mapping of vNICs, so that each guest OS can have its own vNIC, as well as traffic management, enabling prioritization and isolation of IP flows between vNICs."

- Solarflare Press Release



SOLORI's Take: The controversy surrounding VMware's "focus" on the VMware "sphere" of products was a non-starter. The name VMworld does not stand for "Virtualization World" - it stands for "VMware World" - and denying competitors "marketing access" to that venue seems like a reasonable restriction. While it may seem like a strong-arm tactic to some, insisting that vendors/partners are there "for VMworld only" - and hence restricting cross-marketing efforts in and around the venue - makes it more difficult for direct competitors to play the "NASCAR-style marketing" game (as Toigo calls it).

VMworld is a showcase for technologies driving the virtualization eco-system as seen from VMware's perspective. While there are a growing number of competitors for virtualization mind-share, VMware's pace and vision - to date - has been driven by careful observation of use-case more so than innovation for innovation's sake. It is this attention to business need that has made VMware successful and what defines VMworld's focus - and it is in that light that VMworld 2009 looks like a great success.
http://www.networkworld.com/news/2009/090309-vmworld-vmware-roundup.html

Friday, September 4, 2009

vSphere Client in Windows7

Until there is an updated release of the VMware vSphere Client, running the client on a Windows7 system will require a couple of tricks. While the basic process outlined in these notes accomplishes the task well, the use of additional "helper" batch files is not necessary. By adding the path to the "System.dll" library to your user's environment, the application can be launched from the standard icon without further modification.

First, add the XML changes at the end of the "VpxClient.exe.config" file. The end of your config file will now look something like this:
  <runtime>
    <developmentMode developerInstallation="true"/>
  </runtime>
</configuration>

Once the changes are made, save the "VpxClient.exe.config" file (if your workstation is secured, you may need "Administrator" privileges to save the file). Next, copy the "System.dll" file from the "%WINDIR%\Microsoft.NET\Framework\v2.0.50727" folder on an XP/Vista machine to a newly created "lib" folder in the VpxClient's directory. Now, you will need to update the user environment to reflect the path to "System.dll" to complete the "developer" hack.

To do this, right-click on the "Computer" menu item on the "Start Menu" and select "Properties." In the "Control Panel Home" section, click on "Advanced system settings" to open the "System Properties" control panel. Now, click on the "Environment Variables..." button to open the Environment Variables control panel. If "DEVPATH" is already defined, simply add a semi-colon (";") to the existing path and add the path to your copied "System.dll" file (not including "System.dll") to the existing path. If it does not exist, create a new variable called "DEVPATH" and enter the path string in the "Variable Value" field.

[Screenshot: System Properties, Environment Variables panel in Windows7]



The path begins with either %ProgramFiles% or %ProgramFiles(x86)%, depending on whether 32-bit or 64-bit Windows7 is installed, respectively. Once the path is entered into the environment and the "System.dll" file is in place, the vSphere Client will launch and run without additional modification. Remember to remove the DEVPATH modification to the environment when a Windows7-compatible vSphere Client is released.
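For those who would rather script the DEVPATH change than click through the dialogs, here is a rough Python 3 sketch using the standard winreg module - our own convenience, not part of VMware's procedure. The install path shown is only an example; substitute the folder where you actually created the "lib" directory:

# Append the vSphere Client "lib" folder (holding the copied System.dll) to
# the per-user DEVPATH variable. Log off and back on afterward so newly
# launched programs pick up the change. The path below is an example only.
import winreg

LIB_PATH = r"%ProgramFiles(x86)%\VMware\Infrastructure\Virtual Infrastructure Client\Launcher\lib"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, "Environment", 0,
                    winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
    try:
        current, _ = winreg.QueryValueEx(key, "DEVPATH")
    except FileNotFoundError:
        current = ""
    # Append with a semi-colon if DEVPATH already exists, per the steps above
    new_value = current + ";" + LIB_PATH if current else LIB_PATH
    # REG_EXPAND_SZ lets Windows expand %ProgramFiles(x86)% when it is read
    winreg.SetValueEx(key, "DEVPATH", 0, winreg.REG_EXPAND_SZ, new_value)
print("DEVPATH now set to:", new_value)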

Note that this workaround is not supported by VMware and that the use of the DEVPATH variable could have unforeseen consequences in your specific computing environment. Therefore, appropriate considerations should be made prior to the implementation of this "hack." While SOLORI presents this information "AS-IS" without warranty of any kind, we can report that this workaround is effective for our Windows7 workstations, however ...