In this segment, Part 5, we will create a VMware vCenter Server virtual machine and bring the ESX and ESXi machines under its management. Using this vCenter instance, we will complete the configuration of ESX and ESXi using some of the new features available in vCenter.
Part 5, Managing our ESX Cluster-in-a-Box
With our VSA and ESX servers purring along in the virtual lab, the only thing stopping us from moving forward with vMotion is the absence of a working vCenter instance to control the process. Once we have vCenter installed, we have 60 days to evaluate and test vSphere before the trial license expires.
Prepping vCenter Server for vSphere
We are going to install Microsoft Windows Server 2003 STD for the vCenter Server operating system. We chose Server 2003 STD since we have limited CPU and memory resources to commit to the management of the lab and because our vCenter has no need of 64-bit resources in this use case.
Since one of our goals is to have a fully functional vMotion lab with reasonable performance, we want to create a vCenter virtual machine with at least the minimum requirements satisfied. In our 24GB lab server, we have committed 20GB to ESX, ESXi and the VSA (8GB, 8GB and 4GB, respectively). Our base ESXi instance consumes 2GB, leaving only 2GB for vCenter - or does it?
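As a quick sanity check on that arithmetic, here is a back-of-the-envelope sketch of the budget (all values in MB; these are the planning figures above, not measured values):

```python
# Rough memory budget for the 24GB lab host (all values in MB).
# Figures are the planning numbers from the text, not measured values.
HOST_RAM = 24 * 1024          # physical RAM in the lab server

allocations = {
    "ESX guest":  8 * 1024,   # virtualized ESX host
    "ESXi guest": 8 * 1024,   # virtualized ESXi host
    "VSA":        4 * 1024,   # NexentaStor virtual storage appliance
    "host ESXi":  2 * 1024,   # approximate footprint of the base ESXi install
}

remaining = HOST_RAM - sum(allocations.values())
print(f"Left over for vCenter: {remaining} MB")   # about 2048 MB on paper
```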
Memory Use in ESXi
VMware ESX (and ESXi) does a good job of conserving resources by limiting commitments for memory and CPU. This is not unlike any virtual-memory-capable system that puts a premium on "real" memory by moving less frequently used pages to disk. With a lot of idle virtual machines, this ability alone creates significant over-subscription possibilities for VMware; this is why it is possible to run 32GB worth of VMs on a 16-24GB host.
Do we really want this memory paging to take place? The answer - for consolidation use cases - is usually "yes." This is because consolidation is born out of the need to aggregate underutilized systems in a more resource-efficient way. Put another way, administrators tend to provision systems based on worst-case rather than average use, leaving 70-80% of those resources idle during off-peak times. Under ESX's control, those underutilized resources can be re-tasked to another VM without impacting the performance of either one.
On the other hand, our ESX and VSA virtual machines are not the typical use case. We intend to fully utilize their resources and let them determine how to share them in turn. Imagine a good number of virtual machines running on our virtualized ESX hosts: will they perform well with the added hardship of memory paging? Also, once we begin to use vMotion, those CPU and memory resources will appear on BOTH virtualized ESX servers at the same time.
It is pretty clear that if all of our lab storage is committed to the VSA, we do not want to page its memory. Remember that any additional memory not in use by the SAN OS in our VSA is employed as ARC cache for ZFS to increase read performance. Paging memory that is assumed to be "high performance" by NexentaStor would result in poor storage throughput. The key to "recursive computing" is knowing how to anticipate resource bottlenecks and deploy around them.
This raises the question: how much memory is left after reserving 4GB for the VSA? To figure that out, let's look at what NexentaStor uses at idle with 4GB provisioned:
[Image: NexentaStor's RAM footprint with 4GB provisioned, at idle.]

As you can see, we have specified a 4GB reservation, which appears as "4233 MB" of Host Memory consumed (4096MB + 137MB of virtualization overhead). Looking at the "Active" memory, we see that - at idle - the NexentaStor VM is using about 2GB of host RAM for its OS and to support the couple of file systems mounted by the host ESXi server (recursively).
Additionally, we need to remember that each VM carries a memory overhead that increases with its vCPU count. For the four-vCPU ESX/ESXi servers, the overhead is about 220MB each; the NexentaStor VSA consumes an additional 140MB with its two vCPUs. Totaling up the configured memory plus overhead identifies a commitment of at least 21,828MB to run the VSA and both ESX guests - that leaves a little under 1.5GB for vCenter if we used a 100% reservation model.
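The tally works out roughly as sketched below, using the approximate overhead figures quoted above. Actual per-VM overhead grows with vCPU count and configured memory, which is why the precise commitment cited here (21,828MB) runs a bit higher than this rough sum:

```python
# Configured memory plus per-VM overhead (all values in MB).
# Overhead figures are the approximate values quoted in the text;
# real overhead depends on vCPU count and configured memory size.
guests = {
    #              (configured RAM, approx. overhead)
    "ESX guest":   (8 * 1024, 220),   # 4 vCPUs
    "ESXi guest":  (8 * 1024, 220),   # 4 vCPUs
    "VSA":         (4 * 1024, 140),   # 2 vCPUs
}

total = sum(ram + overhead for ram, overhead in guests.values())
print(f"Minimum commitment for the three guests: {total} MB")

# With the host ESXi's own footprint added, a 100% reservation model
# leaves well under 2GB of the 24GB host for a vCenter VM.
```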
Memory Over-Commitment
The same concerns about memory hold true for our ESX and ESXi hosts - albeit in a less obvious way. We obviously want to "reserve" the memory required by the VMM - about 2.8GB and 2GB for ESX and ESXi, respectively. Additionally, we want to avoid over-subscription of memory on the host ESXi instance - if at all possible - since it will already be busy running our virtual ESX and ESXi machines.