Monday, August 17, 2009

In-the-Lab: Full ESX/vMotion Test Lab in a Box, Part 1

There are many features in vSphere worth exploring, but doing so requires committing time, effort, testing, training and hardware resources. In this feature, we'll investigate a way - using your existing VMware facilities - to reduce the time, effort and hardware needed to test and train-up on vSphere's ESXi, ESX and vCenter components. We'll start with a single hardware server running the free VMware ESXi as the "lab mule" and install everything we need on top of that system.

Part 1, Getting Started


To get started, here are the major hardware and software items you will need to follow along:

Recommended Lab Hardware Components



  • One 2P, 6-core AMD "Istanbul" Opteron system

  • Two 500-1,500GB Hard Drives

  • 24GB DDR2/800 Memory

  • Four 1Gbps Ethernet Ports (4x1, 2x2 or 1x4)

  • One 4GB SanDisk "Cruzer" USB Flash Drive

  • Either of the following:

    • One CD-ROM with VMware-VMvisor-Installer-4.0.0-164009.x86_64.iso burned to it

    • An IP/KVM management card to export ISO images to the lab system from the network




Recommended Lab Software Components



  • One ISO image of NexentaStor 2.x (for the Virtual Storage Appliance, VSA, component)

  • One ISO image of ESX 4.0

  • One ISO image of ESXi 4.0

  • One ISO image of vCenter Server 4

  • One ISO image of Windows Server 2003 STD (for vCenter installation and testing)


For the hardware items to work, you'll need to check your system components against the VMware HCL and community-supported hardware lists. For best results, always disable (in BIOS) or physically remove all unsupported or unused hardware - this includes communication ports, USB, software RAID, etc. Doing so will reduce potential hardware conflicts from unsupported devices.

The Lab Setup


We're first going to install VMware ESXi 4.0 on the "test mule" and configure the local storage for maximum use. Next, we'll create three (3) virtual machines to create our "virtual testing lab" - deploying ESX, ESXi and NexentaStor directly on top of our ESXi "test mule." All subsequent test VMs will run in either of the virtualized ESX platforms from shared storage provided by the NexentaStor VSA.

[caption id="attachment_929" align="aligncenter" width="450" caption="ESX, ESXi and VSA running atop ESXi"]ESX, ESXi and VSA running atop ESXi[/caption]

Next up, a quick-and-easy install of ESXi to USB Flash...

Installing ESXi to Flash


This is actually a very simple part of the lab installation. ESXi 4.0 installs to flash directly from the basic installer provided on the ESXi disk. In our lab, we use the IP/KVM's "virtual CD" capability to mount the ESXi ISO from network storage and install it over the network. If using an attached CD-ROM drive, just put the disk in, boot and follow the instructions on-screen. We've produced a blog post showing how to "Install ESXi 4.0 to Flash" if you need more details - screen shots are provided.

Once ESXi reboots for the first time, you will need to configure the network cards in a manner appropriate for your lab's networking needs. This represents your first decision point: will the "virtual lab" be isolated from the rest of your network? If the answer is yes, one NIC will be plenty for management since all other "virtual lab" traffic will be contained within the ESXi host. If the answer is no - say you want to have two or more "lab mules" working together - then consider the following common needs:

  • One dedicated VMotion/Management NIC

  • One dedicated Storage NIC (iSCSI initiator)

  • One dedicated NIC for Virtual Machine networks


We recommend the following interface configurations (a command-line sketch follows the figure below):

  • Using one redundancy group

    • Add all NICs to the same group in the configuration console

    • Use NIC Teaming Failover Order to dedicate one NIC to  management/VMotion and one NIC to iSCSI traffic within the default vSwitch

    • Load balancing will be based on port ID



  • Using two redundancy groups (2 NIC per group)

    • Add only two NICs to the management group in the configuration console

    • Use NIC Teaming Failover Order to dedicate one NIC to  management/VMotion traffic within the default vSwitch (vSwitch0)

    • From the VI Client, create a new vSwitch, vSwitch1, with the remaining two NICs

    • Use either port ID (default) or hash load balancing depending on your SAN needs




[caption id="attachment_938" align="aligncenter" width="395" caption="Our switch ports and redundancy groups - 2-NICs using port ID load balancing, 2-NICs using IP hash load balancing."]Port and switch interconnections.[/caption]

Test the network configuration by failing each port and making sure that all interfaces provide equal function. If you are new to VMware networking concepts, stick to the single redundancy group until your understanding matures - it will save time and hair... If you are a veteran looking to hone your ESX4 or vSphere skills, then you'll want to tailor the network to fit your intended lab use.

Next, we cover some ESXi topics for first-timers...

First-Time Users


First-time users of VMware will now have a basic installation of ESXi and may be wondering where to go next. If the management network test has not been verified, now is a good time to do it from the console. This test will ping the DNS servers and gateway configured for the management port, as well as perform a "reverse lookup" of the IP address (an in-addr.arpa request for name resolution based on the IP address). If you have not added the IP address of the ESXi host to your local DNS server, this item will fail.
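If the console test reports a failure and you want to check the same items by hand, they can be verified from any management workstation. A minimal sketch, with a placeholder hostname and address:

    # Can we reach the management address at all?
    ping 192.168.1.50

    # Forward and reverse DNS for the ESXi host - the reverse (PTR/in-addr.arpa)
    # lookup is the piece the console test checks
    nslookup esxi-lab-01.example.local
    nslookup 192.168.1.50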

[caption id="attachment_945" align="aligncenter" width="450" caption="Testing the ESXi Management Port Connectivity"]Testing the ESXi Management Port Connectivity[/caption]

Once the initial management network is set up and tests good, we simply launch a web browser from the workstation we'll be managing from and enter the ESXi host's address as shown on the console screen:

[caption id="attachment_946" align="aligncenter" width="450" caption="Management URL From Console Screen"]Management URL From Console Screen[/caption]

The ESXi host's embedded web server will provide a link to "Download vSphere Client" to your local workstation for installation. We call this the "VI Client" in the generic sense. The same URL provides links to VMware vCenter, the vSphere Documentation, the vSphere Remote CLI installer and virtual appliance, and the Web Services SDK. For now, we only need the VI Client installed.

[caption id="attachment_947" align="aligncenter" width="334" caption="vSphere VI Client Login"]vSphere VI Client Login[/caption]

Once installed, log in to the VI Client using the "root" user and the password established when you configured ESXi's management interface. The "root" password should not be something easily guessed, as a hacker owning your ESX console could have serious security consequences. Once logged in, we'll turn our attention to the advanced network configuration.

Initial Port Groups for Hardware ESXi Server


If you used two redundancy groups like we do, you should have at least four port groups defined: one virtual machine port group for each vSwitch and one VMkernel port group for each vSwitch. We wanted to enable two NICs for iSCSI/SAN network testing on an 802.3ad trunk group, and we wanted to be able to pass 802.1q VLAN-tagged traffic to the virtual ESX servers on the other port group. We created the following (an equivalent command-line sketch follows the list):

[caption id="attachment_900" align="aligncenter" width="405" caption="vNetworking - notice the "stand by" adapter in vSwitch0 due to the active-standby selection. (Note we are not using vmnic0 and vmnic1.)"]vSwitch-example-01[/caption]




  • Virtual Switch vSwitch0

    • vSwitch of 56 ports, route by port ID, beacon probing, active adapter vmnic4, standby adapter vmnic2

    • Physical switch ports configured as 802.1q trunks, all VLANs allowed, VLAN1 untagged

      • Virtual Machine Port Group 1: "802.1q Only" - VLAN ID "4095"

      • Virtual Machine Port Group 2: "VLAN1 Mgt NAT" - VLAN ID "none"

      • VMkernel Port Group: "Management Network" - VLAN ID "none"





  • Virtual Switch vSwitch1

    • vSwitch of 56 ports, route by IP hash, link state only, active adapters vmnic0 and vmnic1

    • Physical switch ports configured as static 802.3ad trunk group, all VLANs allowed, VLAN2000 untagged

      • Virtual Machine Port Group 1: "VLAN2000 vSAN" - VLAN ID "none"

      • VMkernel Port Group 1: "VMkernel iSCSI200" - VLAN ID "none"

      • VMkernel Port Group 2: "VMkernel iSCSI201" - VLAN ID "none"
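For readers who prefer the command line, roughly the same port groups can be created with the vSphere Remote CLI (or the Tech Support Mode shell). This is only a sketch of our lab's layout - the names and VLAN IDs mirror the list above, and the VMkernel addresses are example values from our SAN subnets, so adjust everything to your own plan:

    # vSwitch0 port groups (VLAN 4095 passes all 802.1q tags through to the guest)
    esxcfg-vswitch -A "802.1q Only" vSwitch0
    esxcfg-vswitch -v 4095 -p "802.1q Only" vSwitch0
    esxcfg-vswitch -A "VLAN1 Mgt NAT" vSwitch0

    # vSwitch1 port groups for the SAN-facing networks
    esxcfg-vswitch -A "VLAN2000 vSAN" vSwitch1
    esxcfg-vswitch -A "VMkernel iSCSI200" vSwitch1
    esxcfg-vswitch -A "VMkernel iSCSI201" vSwitch1

    # Attach VMkernel interfaces for iSCSI (example addresses)
    esxcfg-vmknic -a -i 192.168.200.10 -n 255.255.255.128 "VMkernel iSCSI200"
    esxcfg-vmknic -a -i 192.168.200.140 -n 255.255.255.128 "VMkernel iSCSI201"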






This combination of vSwitches and port groups allows for the following base scenarios:

  1. Virtual ESX servers can connect to any VLAN through interfaces connected to "802.1q Only" port group;

  2. Virtual ESX servers can be managed via interfaces connected to "VLAN1 Mgt NAT" port group;

  3. Virtual ESX servers can access storage resources via interfaces connected to "VLAN2000 vSAN" port group;

  4. The hardware ESXi server can access storage resources on either of our lab SAN networks (192.168.200.0/25 or 192.168.200.128/25) to provide resources beyond the available direct attached storage (mainly for ISOs, canned templates and backup images).



Next, we take advantage of that direct attached storage...

Using Direct Attached Storage


We want to use the direct attached storage (DAS) disks as a virtual storage backing for our VSA (virtual storage appliance). To do so, we'll configure the local storage. In some installations, VMware ESXi will have found one of the two DAS drives and configured it as "datastore" in the Datastores list. The other drive will be "hidden," awaiting partitioning and formatting. We can access this from the VI Client by clicking the "Configuration" tab and selecting the "Storage" link under "Hardware."



[caption id="attachment_948" align="alignright" width="292" caption="ESXi may use a portion of the first disk for housekeeping and temporary storage. Do not delete these partitions, but the remainder of the disk can be used for virtual machines."]ESXi may use a portion of the first disk for housekeeping and temporary storage. Do not delete these partitions, but the remainder of the disk can be used for virtual machines.[/caption]

Note: We use a naming convention for our local storage to prevent conflicts when ESX hosts are clustered. This convention follows our naming pattern for the hosts themselves (i.e. vm01, vm02, etc.) such that local storage becomes vLocalStor[NN][A-Z] where the first drive of host "vm02" would be vLocalStor02A, the next drive vLocalStor02B, and so on.

If you have a "datastore" drive already configured, rename it according to your own naming convention and then format the other drive. Note that VMware ESXi will be using a small portion of the drive containing the "datastore" volume for its own use. Do not delete these partitions if they exist, but the remainder of the disk can be used for virtual machine storage.

If you do not see the second disk as an available volume, click the "Add Storage..." link and select "Disk/LUN" to tell the VI Client that you want a local disk (or FC LUN). The remaining drive should be selectable from the list on the next page - SATA storage should be identified as "Local ATA Disk..." and the capacity should indicate the approximate volume of storage available on the disk. Select it and click the "Next >" button.

The "Current Disk Layout" screen should show "the hard disk is blank" provided no partitions exist on the drive. If the disk has been recycled from another installation or machine, you will want to "destroy" the existing partions in favor of a single VMFS partion and click "Next."  For the "datastore name" enter a name consistent with your naming convention. As this is our second drive, we'll name ours vLocalStor02B and click "Next."

[caption id="attachment_950" align="aligncenter" width="405" caption="Selecting the default block size for ESX's attached storage volumes."]Selecting the default block size for ESX's attached storage volumes.[/caption]

The default block size on the next screen will determine the maximum supported single file size for this volume. The default setting is 1MB blocks, resulting in a maximum single file size of 256GB. This will be fine for our purposes as we will use multiple files for our VSA instead of one large monolithic file on each volume. If you have a different strategy, choose the block size that supports your VSA file requirements.
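If you want to double-check the result after the wizard finishes, here is a quick sketch of querying the new volume from the Remote CLI or Tech Support shell (assuming the vLocalStor02B name used above):

    # Report the VMFS block size, capacity and free space for the new volume
    vmkfstools -P -h /vmfs/volumes/vLocalStor02B

    # VMFS3 block size vs. maximum single file size, for reference:
    #   1MB -> 256GB,  2MB -> 512GB,  4MB -> 1TB,  8MB -> 2TB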

The base ESXi server is now complete. We've additionally enabled the iSCSI initiator and added a remote NFS volume containing ISO images to our configuration to speed up our deployment. While this is easy to do in a Linux environment, we expect most readers will be more comfortable in a Windows setting, so we've modified the approach for those users.
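As an aside, the NFS mount can also be scripted. A minimal sketch, assuming a placeholder NFS server "nfs01" exporting "/export/iso":

    # Mount an NFS export as a datastore named "isostore"
    esxcfg-nas -a -o nfs01 -s /export/iso isostore

    # Confirm the mount
    esxcfg-nas -l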

[caption id="attachment_951" align="aligncenter" width="405" caption="Right-click on the storage volume you want to browse and select "Browse Datastore..." to open a filesystem browser."]Right-click on the storage volume you want to browse and select "Browse Datastore..." to open a filesystem browser.[/caption]

The last step before we end Part 1 in our Lab series is uploading the ISO images to the ESXi server's local storage. This can easily be accomplished from the VI Client by browsing the local datastore, adding a folder named "iso" and uploading the appropriate ISO images to that directory. Once uploaded, these images will be used to install ESX, ESXi, NexentaStor, Windows Server 2003 and vCenter Server.
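If you would rather script the uploads than drag-and-drop them in the datastore browser, the Remote CLI's vifs utility can copy files into a datastore. A sketch with placeholder host and datastore names:

    # Create the "iso" folder on the datastore and upload an installer image to it
    vifs --server esxi-lab-01 --username root --mkdir "[vLocalStor02A] iso"
    vifs --server esxi-lab-01 --username root --put VMware-VMvisor-Installer-4.0.0-164009.x86_64.iso "[vLocalStor02A] iso/VMware-VMvisor-Installer-4.0.0-164009.x86_64.iso"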

To come in Parts 2 & 3: the benefits of ZFS and installing the NexentaStor developer's edition as a Virtual Storage Appliance for our "Test Lab in a Box" system...
