OnApp XEN to KVM disk migration

1. Introduction

The migration from XEN to KVM is a complex task that involves changes to the storage layout, the guest OS, and even OnApp itself if the networking configuration must be preserved.

StorPool provides the procedure to migrate the VM disks from XEN to KVM at the storage level.

The changes needed at the OS level of the VM and in the OnApp Control Panel are out of the scope of this document.

OnApp configures and boots the guest VMs differently for XEN and KVM, but the migration can be summarized in two common cases that share the same Preparation steps.

Attention

  • The manipulation of the VM disks must be done only on VMs that are powered off via the OnApp Control Panel!

  • Resizing the VM disks while the migration is in progress is not supported!

2. Preparation steps

  1. Figure out which XEN VM disks need to be migrated.

  2. Create a shell KVM VM with the same disk sizes as the XEN VM. Power off the KVM VM.

  3. Note the OnApp VM and disk identifiers for both the XEN and KVM VMs. They are needed when calling the xen2kvmDisk.sh helper tool and are referenced further as XEN_DISK_IDENTIFIER, KVM_DISK_IDENTIFIER and KVM_VM_IDENTIFIER (see the example after these steps).

  4. The XEN VM’s guest OS should be reconfigured to support the KVM devices on next boot (this part is not covered in the guide).

  5. Power off the XEN VM.
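
For example, the identifiers can be kept as shell variables on the host where the commands will be executed. The values below are purely illustrative placeholders; use the identifiers shown in the OnApp Control Panel:

# Placeholder values only - take the real identifiers from the OnApp Control Panel
XEN_DISK_IDENTIFIER=xxxxxxxxxxxxxx
KVM_DISK_IDENTIFIER=yyyyyyyyyyyyyy
KVM_VM_IDENTIFIER=wbrduubehanmjz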

Hint

For Linux-based VMs the guest OS reconfiguration can be done after the XEN VM is powered off, with the help of an external helper program invoked by the ./xen2kvmDisk.sh tool via the -p command line argument.

Tip

It is possible to convert a XEN VM to a KVM VM without a shell KVM VM. See the addendum.

3. Migration Case-1: Same disk structure (common for Windows and FreeBSD)

In this case the migration is relatively simple: the StorPool volumes backing the KVM VM disks are replaced with clones of the XEN VM disks. For each VM disk run the following command:

cd /usr/local/lvmsp
./xen2kvmDisk.sh -y -x $XEN_DISK_IDENTIFIER -k $KVM_DISK_IDENTIFIER -b

Note

The -b command line argument is mandatory to trigger the Migration Case-1 handling.

Without the -y command line argument the script only shows the commands to execute, without actually running them.
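
For example, to preview the exact commands for a Case-1 migration without executing them:

cd /usr/local/lvmsp
./xen2kvmDisk.sh -x $XEN_DISK_IDENTIFIER -k $KVM_DISK_IDENTIFIER -b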

Tip

The OnApp VMs based on the FreeBSD templates have a second disk marked as a “swap” disk. Looking at its content, however, it is a raw disk formatted with ext2 that is used for provisioning the VM’s OS.

4. Migration Case-2: XEN VM with raw disk, KVM with partition table (Linux)

To reduce the VM downtime, two layers of device mapper devices are assembled in advance. The first layer is a linear device mapper that presents the raw XEN disk as a partition for use by the KVM-based VM. For the actual data migration, a RAID1 device mapper is assembled on top of the linear device mapper assembly. The three-step migration procedure is outlined below.
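
Once the KVM VM is powered on in step (2) and the stack is assembled, it can be inspected on the KVM host with the standard device-mapper tools (the device names are created by LVMSP and will differ per VM and disk):

# List the device mapper devices as a tree (RAID1 on top of the linear mapping)
dmsetup ls --tree
# Show the state and synchronization progress of the assembled devices
dmsetup status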

4.1. (1) Preparing the KVM disks

In this step:

  1. The KVM VM disk will be resized and repartitioned. There are two possible options:

    1. w/ a DOS-compatible partition table (OnApp’s default). The volume must be extended by 32 sectors * 512 bytes = 16384 bytes (16 KiB).

    2. w/o a DOS-compatible partition table. The volume must be extended by 2048 sectors * 512 bytes = 1048576 bytes (1 MiB).

  2. The XEN VM disk volume will be cloned. The name of the cloned volume is further referenced as $CLONED_XEN_VM_DISK_VOLUME.

  3. The KVM VM disk volume will be tagged in StorPool with the tag xen=$CLONED_XEN_VM_DISK_VOLUME.

The xen2kvmDisk.sh helper can be used to complete these steps. It is preferred to execute the commands on the server where the KVM VM was provisioned (the provisioning is completed on a compute node if there is no backup server).

cd /usr/local/lvmsp
./xen2kvmDisk.sh -y -x $XEN_DISK_IDENTIFIER -k $KVM_DISK_IDENTIFIER

Note

To enable the non-DOS-compatible mode, append the -c argument:

cd /usr/local/lvmsp
./xen2kvmDisk.sh -y -x $XEN_DISK_IDENTIFIER -k $KVM_DISK_IDENTIFIER -c

Tip

It is possible to call an external program providing the block device of the cloned XEN volume as its single argument:

cd /usr/local/lvmsp
./xen2kvmDisk.sh -y -x $XEN_DISK_IDENTIFIER -k $KVM_DISK_IDENTIFIER -p helper

The helper program will be called with the attached block device as its only argument. For example, the helper could mount the provided block device and alter the files inside to address the changes needed to enable the KVM mode of the image. There are three generic example files located in the /usr/local/lvmsp/ folder that can be used as a reference - xen2kvm.c5x, xen2kvm.c6x and xen2kvm.c7x, used to test the migration procedure with the default OnApp templates for CentOS 5, 6 and 7 respectively.
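
Below is a minimal sketch of such a helper, assuming a Linux guest whose root filesystem occupies the whole raw XEN disk; the shipped xen2kvm.c5x/c6x/c7x files should be used as the actual reference, and the required changes depend on the guest OS:

#!/bin/bash
# Hypothetical -p helper sketch; $1 is the block device of the cloned XEN volume.
set -e
DEV="$1"
MNT="$(mktemp -d)"
# The raw XEN disk has no partition table, so the filesystem can be mounted directly.
mount "$DEV" "$MNT"
# Adjust the guest configuration for KVM here, e.g. /etc/fstab, console and
# bootloader settings (see xen2kvm.c5x, xen2kvm.c6x and xen2kvm.c7x for real examples).
umount "$MNT"
rmdir "$MNT"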

4.2. (2) Migrating the data from the raw XEN disk to KVM disk with a partition

In this step:

  1. The KVM VM should be powered on. LVMSP will assemble the device mapper entries.

  2. If the VM check does not show any issues, the operator should issue a command that will attach the raw KVM volumes to the prepared RAID1 array.

  3. The kernel will start mirroring the data from the assembled linear device mapper to the raw KVM volume.

  4. The synchronization process should be monitored via the provided script, and once the data is in sync the volumes should be marked as ready.

Note

The commands issued at this stage are intentionally left for manual operation so that the operators have better control over the migration process. During the RAID re-synchronization the data will be copied from the source to the destination volume, so the available free space on the cluster should be monitored. It is recommended to migrate the VMs in reasonable batches, followed by deletion of the XEN VMs and their volumes.
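
One way to keep an eye on the free space during the batch migrations is the StorPool CLI (the subcommand below is an assumption and may differ between StorPool CLI versions):

# Assumed subcommand: show per-template provisioned and stored space on the cluster
storpool template status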

Start the KVM VM and check that the VM is working properly.

If there are no issues the synchronization of the data to the raw KVM disk will be initiated with the following command on the KVM host:

cd /usr/local/lvmsp
./xen2kvmDisk.sh -y -m $KVM_VM_IDENTIFIER

On each subsequent invocation of the script with the same arguments it will do one of the following:

  • Print the progress of the RAID1 synchronization.

  • When the synchronization of all VM disks is complete, it will:

    • Remove the StorPool tag xen=$CLONED_XEN_VM_DISK_VOLUME from the KVM volume.

    • Tag the XEN volume and its clone with vs=$KVM_VM_IDENTIFIER.

Note

If the $KVM_VM_IDENTIFIER is omitted the script will process all local running VMs.

Tip

There are RAID1 performance tuning options described in RAID1 resync tuning that could improve the migration speed.

4.3. (3) Finalizing the disk migration

In this step:

  1. The VM will be migrated (or rebooted via the OnApp Control Panel). In the process LVMSP will destroy the device mapper devices and do a clean attach of the raw KVM volumes.

  2. The XEN VM should be deleted, which will free the space allocated by the XEN disk volumes.

To start using the StorPool volume directly, do a last VM migration to another KVM host (or a VM reboot) from the OnApp Control Panel.

The obsolete XEN volumes should be deleted with xen2kvmDisk.sh once the VM is successfully migrated:

cd /usr/local/lvmsp/
./xen2kvmDisk.sh -y -z -m wbrduubehanmjz
# [wbrduubehanmjz] Deleting XEN volume  ...
# (0) storpool -B volume xen-taxyjo3k36tg5w:fgwgtpeirhcqup delete xen-taxyjo3k36tg5w:fgwgtpeirhcqup

Hint

The script will delete all tagged XEN volume clones if the -m flag is omitted.
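
For example, to clean up all tagged XEN volume clones in one go:

cd /usr/local/lvmsp/
./xen2kvmDisk.sh -y -z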

5. Optimization and tuning

5.1. RAID1 resync tuning

The resync process is managed by the kernel in a way that reduces the impact on the RAID1 device performance. There are two variables that can be tweaked to improve the resync performance:

  • /proc/sys/dev/raid/speed_limit_min - the minimum guaranteed resync speed when there are active BIOs (default: 1000)

  • /proc/sys/dev/raid/speed_limit_max - the maximum resync speed allowed per device (default: 200000)

The values are in KiB/s and can be changed using the sysctl tool. For example, to set the guaranteed speed to 2 MB/s and to not exceed 350 MB/s:

sysctl -w dev.raid.speed_limit_min=2000
sysctl -w dev.raid.speed_limit_max=350000
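
The current values can be checked with:

sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max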

Note

There is no golden rule for tuning these values. The VMs' I/O latencies should be monitored, and if there are issues the values should be adjusted accordingly.

6. Addendum

It is possible to convert a XEN VM to a KVM VM without a shell KVM VM by altering the OnApp database:

UPDATE virtual_machines SET hypervisor_id="KVM_HV_ID" WHERE identifier="XEN_VM_IDENTIFIER";

Also, in step (1) of Migration Case-2, provide the XEN VM disk identifier for both the XEN and KVM disk identifier arguments:

cd /usr/local/lvmsp
./xen2kvmDisk.sh -y -x $XEN_DISK_IDENTIFIER -k $XEN_DISK_IDENTIFIER