Reverting an OpenNebula volume from a snapshot

Common restore scenarios

Saving snapshots in the primary cluster

Use the following steps:

  1. Get the names of existing (i.e., not previously deleted) volumes related to a specific VM

  2. Find all remote exported snapshots for a specific volume in the primary cluster

  3. Transfer a remote snapshot locally

Saving snapshots in the backup cluster

Use the following steps:

  1. Get the names of existing (i.e., not previously deleted) volumes related to a specific VM

  2. Find all remote exported snapshots for a specific volume in the primary cluster

  3. Create a snapshot copy in the backup cluster for preservation purposes

Reverting a VM while preserving the volume

Use the following steps:

  1. Get the names of existing (i.e., not previously deleted) volumes related to a specific VM

  2. Find all remote exported snapshots for a specific volume in the primary cluster

  3. Transfer a remote snapshot locally

  4. Undeploy the VM

  5. Preserve the old volume

  6. Create a new volume based on the snapshot

Afterward, use the Monitor transfer progress step to follow the transfer.

Reverting a VM without preserving the volume

Use the following steps:

  1. Get the names of existing (i.e., not previously deleted) volumes related to a specific VM

  2. Find all remote exported snapshots for a specific volume in the primary cluster

  3. Transfer a remote snapshot locally

  4. Undeploy the VM

  5. Delete the old volume

  6. Create a new volume based on the snapshot

Afterward, use the Monitor transfer progress step to follow the transfer.

Note

Removing the old volumes from OpenNebula itself is trivial, as they are regular images.

Revert procedure steps

Use the steps below to complete the scenarios listed above.

Note that all commands can be run from any machine with StorPool installed, as well as from the ONE control node (frontend), since it has access to the StorPool management API.
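
Get the names of existing (i.e., not previously deleted) volumes related to a specific VM

A minimal sketch, assuming the VM's volumes carry the nvm=<VM_ID> tag that the tagging commands later in this guide rely on:

# storpool -j volume list | jq -r '.data[]|select(.tags.nvm=="<VM_ID>")|.name'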

Find all remote exported snapshots for a specific volume in the primary cluster

# storpool -j snapshot list remote | jq -r '.data.snapshots[]|select(.name|endswith("<volume_name>"))|[.location,.remoteId,(.creationTimestamp|strftime("%c")),.name]|@csv'

Example output:

"Customer-Backup-01","xc7.b.84awn","Sat Jul 2 03:18:00 2022","spvc___1656731834___xc7___one-img-1-123-0"
"Customer-Backup-01","xc7.b.8428j","Sat Jul 2 04:17:41 2022","spvc___1656735434___xc7___one-img-1-123-0"
"Customer-Backup-01","xc7.b.8424i","Sat Jul 2 05:17:23 2022","spvc___1656739034___xc7___one-img-1-123-0"
"Customer-Backup-01","xc7.b.844p7","Sat Jul 2 06:18:02 2022","spvc___1656742634___xc7___one-img-1-123-0"

Transfer a remote snapshot locally

# storpool snapshot <snapshot_name> template <template_name> remote <remote_cluster_location_name> <snapshot_global_id>

Example:

# storpool snapshot spvc___1656731834___xc7___one-img-1-123-0 template one-ds-1 remote Customer-Backup-01 xc7.b.84awn

Here spvc___1656731834___xc7___one-img-1-123-0 is the local name chosen for the snapshot (used for illustration; the name can be set by the user), one-ds-1 is the template that corresponds to the number of the datastore, Customer-Backup-01 is the remote cluster location (you can list the existing locations with storpool location list), and xc7.b.84awn is the global ID by which the snapshot is selected.
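
If you need the location and global ID of the most recent exported snapshot for a volume in one go, a sketch building on the jq filter from the previous step:

# storpool -j snapshot list remote | jq -r '[.data.snapshots[]|select(.name|endswith("<volume_name>"))]|max_by(.creationTimestamp)|[.location,.remoteId,.name]|@csv'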

Monitor transfer progress

Monitoring can be done with /usr/lib/storpool/transfer_progress. Alternatively, you can run the following on the primary (receiving) cluster:

# storpool -j volume quickStatus | jq -r '.data[]|select(.recoveringFromRemote==true)|[.name,.upSoonChainsCount,(.size/33554432)]|@csv'

Example output (columns: name, remaining objects, total objects):

"spvc___1656731834___xc7___one-img-1-123-0",160,32000
"spvc___1656743536___xc7___one-img-11-169-1",215,6400
"spvc___1656743592___xc7___one-img-43-509-0",25,12800

Attention

You don’t need the transfer to finish to start using the snapshot.

Undeploy the VM

Attention

Make sure that the volume of the VM is not in use! You can check if the volume is attached anywhere in the cluster via storpool attach list on any node.
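
For example (a sketch; <volume_name> is the StorPool volume of the VM, such as one-img-1-123-0):

# storpool attach list | grep <volume_name>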

Run the following on the ONE frontend node:

# onevm undeploy --hard <ID>

Make sure the VM’s status is UNDEPLOYED:

# onevm show <ID>
# tail -f /var/log/one/<ID>.log
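
To watch just the state fields (a sketch; the exact onevm show layout may vary between OpenNebula versions):

# onevm show <ID> | grep STATE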

Preserve the old volume

In OpenNebula (frontend):

# oneimage create -d <ID> --name <image_name> --type DATABLOCK --size <size_in_MB>

Where <ID> represents the datastore number (can be seen via onedatastore list), <image_name> is chosen by the user, and for <size_in_MB> it is recommended to use the provisioned (total) size of the volume being recovered.
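
A quick way to read that size from StorPool in MiB (a sketch; <old_volume_name> is the volume you are about to preserve, e.g. one-img-1-123-0):

# storpool -j volume list | jq -r '.data[]|select(.name=="<old_volume_name>")|(.size/1048576)'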

The oneimage create command will output a specific ID number corresponding to an empty StorPool volume named one-img-<ID>.

In StorPool:

To preserve the old volume as the image we just created, first delete the empty StorPool volume backing that image:

# storpool volume <one-img-ID> delete <one-img-ID>

Rename the old volume to the name of the volume we just deleted:

# storpool volume one-img-1-123-0 rename <one-img-ID>

Mark the volume to not be backed up:

# storpool volume <one-img-ID> tag vc-policy=no

Mark the volume so that it is not part of the VM:

# storpool volume <one-img-ID> tag nvm=
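
To double-check the result, you can list the tags of the preserved volume (a sketch reusing the filter from the Create a new volume based on the snapshot step):

# storpool -j volume list | jq -r '.data[]|select(.name=="<one-img-ID>")|[.tags]'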

Delete the old volume

# storpool volume <volume_name> delete <volume_name>

Create a new volume based on the snapshot

Note

The name of the snapshot is the same as the one used in the Transfer a remote snapshot locally step.

# storpool volume <volume_name> parent <snapshot_name>

Because OpenNebula does not automatically apply a snapshot's tags to manually created volumes, you will need to check the existing tags of the snapshot:

# storpool -j snapshot list | jq -r '.data[]|select(.name=="<snapshot_name>")|[.tags]'

Then assign all of them to the volume:

# storpool volume <volume_name> tag nvm=<VM_ID> tag vc-policy=<volumecare_policy>

The volume tags should be the same as the ones on the snapshot. You can double-check them with the following:

# storpool -j volume list | jq -r '.data[]|select(.name=="<volume_name>")|[.tags]'

To save some space, you can also remove the snapshot:

Note

Deleting the snapshot will not affect the volume; if the transfer is still running, the snapshot will only actually go away once the transfer completes.

# storpool snapshot <snapshot_name> delete <snapshot_name>

Create a snapshot copy in the backup cluster for preservation purposes

On the backup cluster, create a volume from the desired snapshot, take a new snapshot of that volume, and delete the temporary volume. You can then export the newly created snapshot back to the primary cluster:

# storpool volume <volume_name> parent <backup_snapshot_name>
# storpool volume <volume_name> snapshot <preserved_snapshot_name>
# storpool volume <volume_name> delete <volume_name>
# storpool snapshot <preserved_snapshot_name> export <remote_cluster_location>
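
After the export, the preserved snapshot should appear in the primary cluster's remote snapshot list; you can verify this with a variant of the filter from the Find all remote exported snapshots step (run on the primary cluster):

# storpool -j snapshot list remote | jq -r '.data.snapshots[]|select(.name=="<preserved_snapshot_name>")|[.location,.remoteId,.name]|@csv'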