Volume and snapshot operations

Here you can find how to perform some important operations in a multi-cluster or multi-site setup. These volume and snapshot operations are performed using the storpool CLI tool; for details, see Volumes and Snapshots.

Exporting snapshots

A snapshot in one of the clusters can be exported and become visible in all clusters in the location it was exported to. For example, a snapshot called snap1 can be exported using the following command:

user@hostA # storpool snapshot snap1 export location_b

It becomes visible in Cluster_B (which is part of location_b) and can be listed this way:

user@hostB # storpool snapshot list remote
-------------------------------------------------------------------------------------------------------
| location   | remoteId             | name     | onVolume | size         | creationTimestamp   | tags |
-------------------------------------------------------------------------------------------------------
| location_b | locationAId.aId.1    | snap1    |          | 107374182400 | 2019-08-11 15:18:02 |      |
-------------------------------------------------------------------------------------------------------

A snapshot can also be exported to the location of the source cluster where it resides. This way it becomes visible to all sub-clusters in this location.
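For example, if snap1 resides in Cluster_A (which is part of location_a), exporting it to its own location makes it visible to the other sub-clusters there:

user@hostA # storpool snapshot snap1 export location_a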

Cloning remote snapshots

Any exported snapshot can be cloned locally. You need its globalId, which can be obtained by running a storpool command for a volume with the -f option and specifying JSON or raw output; see Mode.
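As an illustrative sketch, the globalId of an exported snapshot also appears as remoteId in the remote listing, so it can be extracted from JSON output, for example with jq. The placement of -f before the subcommand and the data wrapper in the response are assumptions here:

user@hostB # storpool -f json snapshot list remote | jq -r '.data[].remoteId'
locationAId.aId.1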

For example, to clone a remote snapshot with globalId of locationAId.aId.1 locally:

user@hostB # storpool snapshot snap1_clone template hybrid remote location_a locationAId.aId.1
[Figure: Cluster A (Bridge A) and Cluster B (Bridge B) connected via their bridges. The snapshot snap1 with globalId locationAId.aId.1 in Cluster A is transferred to Cluster B, where its local clone is named snap1_clone and keeps the same globalId.]

The clone of the snapshot in Cluster_B will be named snap1_clone, with all parameters taken from the hybrid template.

Note

The name of the snapshot in Cluster_B could also be exactly the same as in the source cluster; this applies to all sub-clusters in a multi-cluster setup, as well as to clusters in different locations in a multi-site setup.

The transfer will start immediately. Only written parts of the snapshot are transferred between the sites. If snap1 has a size of 100 GB, but only 1 GB of data was ever written to the volume before it was snapshotted, approximately 1 GB will eventually be transferred between the two (sub-)clusters.

If another snapshot snap2 based on snap1 is later created in the source cluster and then exported, the actual transfer will again include only the differences between snap1 and snap2, since snap1 already exists in Cluster_B:

[Figure: Cluster A holds snap1 (locationAId.aId.1) and its child snap2 (locationAId.aId.2); Cluster B holds the corresponding snap1_clone and snap2_clone with the same globalIds. Only the differences between snap1 and snap2 travel over the bridge connection from Cluster A to Cluster B.]
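The incremental transfer needs no extra options; the same export and clone commands as before can be used for snap2 (names and the globalId follow the figure above):

user@hostA # storpool snapshot snap2 export location_b
user@hostB # storpool snapshot snap2_clone template hybrid remote location_a locationAId.aId.2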

The globalId for this snapshot will be the same for all sites it has been transferred to.

Creating a remote backup of a volume

The volume backup feature is in essence a set of steps that automate the backup procedure for a particular volume. For example, to back up a volume named volume1 in Cluster_A to Cluster_B:

user@hostA # storpool volume volume1 backup Cluster_B

The above command will actually trigger the following set of events:

  1. Creates a local temporary snapshot of volume1 in Cluster_A to be transferred to Cluster_B.

  2. Exports the temporary snapshot to Cluster_B.

  3. Instructs Cluster_B to initiate the transfer for this snapshot.

  4. Exports the transferred snapshot in Cluster_B to be visible from Cluster_A.

  5. Deletes the local temporary snapshot.

For example, if a backup operation has been initiated for a volume called volume1 in Cluster_A, the progress of the operation can be followed using this command:

user@hostA # storpool snapshot list exports
-------------------------------------------------------------
| location   | snapshot     | globalId          | backingUp |
-------------------------------------------------------------
| location_b | volume1@1433 | locationAId.aId.p | true      |
-------------------------------------------------------------

Once this operation completes, the temporary snapshot will no longer be visible as an export, and a snapshot with the same globalId will be visible remotely:

user@hostA # storpool snapshot list remote
------------------------------------------------------------------------------------------------------
| location   | remoteId          | name    | onVolume    | size         | creationTimestamp   | tags |
------------------------------------------------------------------------------------------------------
| location_b | locationAId.aId.p | volume1 | volume1     | 107374182400 | 2019-08-13 16:27:03 |      |
------------------------------------------------------------------------------------------------------
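Completion can also be awaited from the shell. Here is a minimal sketch that polls the exports list until the temporary volume1@... snapshot disappears; the polling interval is arbitrary:

user@hostA # while storpool snapshot list exports | grep -q 'volume1@'; do sleep 10; done
user@hostA # storpool snapshot list remote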

Note

You must have a template configured in mgmtConfig backupTemplateName in Cluster_B for this to work (see Changing default template).

Creating an atomic remote backup for multiple volumes

Sometimes a set of volumes is used simultaneously in the same virtual machine; for example, different filesystems for a database and its journal. To be able to restore all volumes to the same point in time, you can initiate a group backup with this command:

user@hostA # storpool volume groupBackup Cluster_B volume1 volume2

Note

The same underlying feature is used by VolumeCare for keeping consistent snapshots of all volumes on a virtual machine.
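The progress of a group backup can be followed the same way as for a single volume, via the exports list; presumably one temporary snapshot per volume will be visible while the transfers are running:

user@hostA # storpool snapshot list exports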

Restoring a volume from remote snapshot

To restore a volume to a previous state from a remote snapshot:

  1. Create a local snapshot from the remotely exported one:

    user@hostA # storpool snapshot volume1-snap template hybrid remote location_b locationAId.aId.p
    OK
    

    Here is what each part of the above example means, from left to right:

    • volume1-snap - the name of the local snapshot that will be created.

    • template hybrid - specifies the replication and placement for the locally created snapshot.

    • remote location_b locationAId.aId.p - tells StorPool where to look for this snapshot and what its globalId is.

    If the bridges and the connection between the locations are operational, the transfer will begin immediately.

  2. Create a volume with the newly created snapshot as a parent:

    user@hostA # storpool volume volume1-tmp parent volume1-snap
    
  3. Finally, the volume clone has to be attached where it is needed.

The last two steps can be adjusted to rename the old volume to a different name, and then create a volume with the original name directly from the restored snapshot. This is handled differently by different orchestration systems. The procedure for restoring multiple volumes from a group backup requires the same set of steps.
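A hedged sketch of that variant, assuming the CLI supports renaming a volume with a rename subcommand (volume1-old is an illustrative name):

user@hostA # storpool snapshot volume1-snap template hybrid remote location_b locationAId.aId.p
user@hostA # storpool volume volume1 rename volume1-old
user@hostA # storpool volume volume1 parent volume1-snap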

See VolumeCare node info for an example implementation.

Note

If the snapshot transfer has not completed yet when the volume is created, read operations on an object that has not yet been transferred will be forwarded through the bridge and processed by the remote cluster.