Initial Installation
XVS requires a virtual machine import, either with VMware Converter or by a manual process; we followed the manual route. From the command line, we converted the imported disk into an ESXi-compliant format and cleaned up the originals:
# clone the imported disk to a thin-provisioned VMDK
vmkfstools -i XVS.vmdk -d thin XVSnew.vmdk
# remove the original descriptor and extent files once the clone is verified
rm -f XVS.vmdk XVS-*
After conversion, you have a pair of 2GB virtual machines ready for configuration. We removed the legacy Ethernet adapter and hard disk that came with the inventory import, then added the converted disk back as an "existing" disk along with a new Ethernet controller (the Flexible adapter type).
We then added a 120GB virtual disk to each node using the local storage controllers: LSI 1068 SAS (RAID1) for node 1 and NVIDIA MCP55 Pro (RAID1) for node 2. Both nodes use the same 250GB Seagate ES.2 (RAID edition) drives.
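If you prefer the service console to the VI Client for that step, here is a minimal sketch; the datastore path and file name are our examples, not anything XVS requires:
# create a 120GB zeroed-thick data disk for node 1 (path and name are examples)
vmkfstools -c 120g -d zeroedthick /vmfs/volumes/local-node1/XVS1/XVS1_data.vmdk
The new VMDK is then attached to the node as an additional SCSI disk, just as you would through the Add Hardware wizard.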
We then cloned the systems and moved the clone to our second ESX test platform. After creating a new ID for the clone, we were ready to make the XVS nodes active.
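For anyone scripting the move, a rough sketch of registering the copied clone from the second host's service console; the datastore and .vmx path are hypothetical:
# register the copied VM on the second ESX host (path is an example)
vmware-cmd -s register /vmfs/volumes/local-node2/XVS2/XVS2.vmx
On first power-on, ESX detects that the VM has moved and offers to create the new identifier mentioned above.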
Configuring the XVS Nodes
The documentation provided by Xtravirt is step-by-step and includes plenty of screenshots. We will not reproduce them all here; suffice it to say it is worth downloading and reading through.
On boot-up of the first node, you notice two things: CentOS 5 (kernel 2.6.18-53.1.4.el5) and the "XVS Virtual SAN Main Menu" running in the VM's console. This is a step-by-step, 1-2-3 approach to configuring this fixed-configuration appliance, and in use it quickly becomes clear that it is meant to be run in pairs. If you exit the menu for any reason, run "configure_node" from the shell to return to menu operation.
Configure This Node
The first option is to configure the node identity and IP scheme. Since your test lab (and ours) will have its own subnetting, think about how the IP addressing will be used before configuration. Each node needs:
- A node identity, either 1 or 2
- One Heartbeat IP address
- One iSCSI Target address
The documentation recommends keeping the heartbeat and target addresses on the same subnet, but since they share the same network interface, that is really optional. (We'd like to see an advanced configuration where replication takes place over a dedicated interface, separate from the iSCSI target address.) Xtravirt also points out that the iSCSI interface must be on the same subnet and port group as the vmkernel interface in order to work with the iSCSI initiator on the ESX box; a sketch of that ESX-side setup follows the table below. We're using:
| Node | Heartbeat IP | Target IP |
|---|---|---|
| 1 | 192.168.100.234 | 192.168.100.236 |
| 2 | 192.168.100.235 | 192.168.100.237 |
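For reference, a rough sketch of the ESX-side preparation Xtravirt describes, done from the ESX 3.5 service console. The port group name, the vmkernel address 192.168.100.10, and the vmhba32 adapter name are our assumptions; substitute whatever your host actually uses, and make sure the software iSCSI client is allowed through the ESX firewall:
# vmkernel port group on the same vSwitch/subnet as the XVS target IPs (names and address are examples)
esxcfg-vswitch -A "iSCSI-VMkernel" vSwitch0
esxcfg-vmknic -a -i 192.168.100.10 -n 255.255.255.0 "iSCSI-VMkernel"
# enable the software iSCSI initiator, point it at node 1's target address, and rescan
esxcfg-swiscsi -e
vmkiscsi-tool -D -a 192.168.100.236 vmhba32
esxcfg-rescan vmhba32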
Configure XVS Disk
The next step is to configure the XVS disk. This commits the second virtual disk for use as the iSCSI target and completely destroys its contents, so if a local RDM is used, that drive will be erased. There are no additional options; once the disk is initialized, the operation is complete.
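As an aside, if you do go the local RDM route rather than a plain virtual disk, the mapping file can be created ahead of time from the service console. The device path and datastore below are examples only, and -z in place of -r would give a physical-compatibility mapping instead:
# create a virtual-compatibility RDM mapping file for a local disk (paths are examples)
vmkfstools -r /vmfs/devices/disks/vmhba1:0:0:0 /vmfs/volumes/local-node1/XVS1/XVS1_rdm.vmdk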
Once the XVS Node and XVS Disk configurations are both complete, it is time to synchronize the nodes. We suggest you reboot now to avoid problems with the next step.
Initial XVS Synchronization
After a reboot, return to the configuration menu on both appliances and, beginning with node 1, perform the initial XVS synchronization. We averaged about 13MB/sec (about 2 hours for 120GB) on the synchronization over the vmxnet interface, with roughly 500MHz of CPU activity.
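The appliance's replication is DRBD (as the graph captions below indicate), so if you have shell access on the nodes you can watch the sync progress outside the menu. This assumes nothing beyond stock DRBD behavior:
# DRBD reports sync percentage, speed, and estimated time remaining here
cat /proc/drbd
# or refresh it every five seconds
watch -n 5 cat /proc/drbd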
Activity and Load During Synchronization
[caption id="attachment_248" align="aligncenter" width="455" caption="DRBD Sync CPU, Node 1"]

[caption id="attachment_249" align="aligncenter" width="455" caption="DRBD Sync CPU, Node 2"]

[caption id="attachment_250" align="aligncenter" width="455" caption="DRBD Sync Disk, Node 1"]

[caption id="attachment_251" align="aligncenter" width="455" caption="DRBD Sync Disk, Node 2"]

[caption id="attachment_252" align="aligncenter" width="455" caption="DRBD Sync Traffic, Node 1"]

[caption id="attachment_247" align="aligncenter" width="455" caption="DRBD Sync Traffic, Node 2"]
