VolumeCare

1. Overview

VolumeCare is a StorPool service that creates and manages atomically consistent snapshots of volumes, based on defined retention policies. Its main objective is to provide the ability to roll a volume back to a previous state.

The service also detects whether multiple volumes belong to the same virtual machine (based on tags added by the orchestration/integration) and creates crash-consistent snapshots for the whole VM. It also allows the administrator to revert by virtual machine ID instead of volume by volume.

VolumeCare supports working modes and policies for multi-cluster configurations. With these, a backup cluster can be used to store the snapshots instead of, or along with, the primary cluster. In such a configuration a VolumeCare service runs in each of the clusters.

2. General configuration

The VolumeCare configuration is stored in the StorPool cluster’s key-value store in the form of a section-based configuration file. It can be viewed and edited with the storpool_vcctl tool: storpool_vcctl config show prints the current configuration and storpool_vcctl config edit opens an editor to alter it.

Note

Initially, the VolumeCare configuration can be created in a text file in /etc/storpool/volumecare.conf and the VolumeCare daemon will transfer it automatically to the key-value store. Once the configuration is transferred, the text file will become obsolete.

The configuration file consists of four types of sections: format, volumecare, policy, and template.

2.1. [format] Section

The [format] section is mandatory and describes the version of the format for the configuration through the version=X.Y option. Currently, only version 1.0 is available.
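
For example:

[format]
version=1.0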

2.2. [volumecare] Section

The [volumecare] section describes working parameters for the service.

A mode option should be specified; it defines the working routine of the service and can be one of the following:

  • normal - for a single cluster with snapshots kept in it

  • primary - for a primary (multi) cluster which will send snapshots to a backup cluster

  • backup - for a backup cluster which only stores snapshots from other clusters

  • multi_backup - for a multiclustered backup cluster which only stores snapshots from other clusters

  • primary_backup - for a situation of two clusters sending backups to each other

Note

A VolumeCare instance working in backup or multi_backup mode can have multiple primary clusters sending backups to it. This is natively supported, provided that the policies defined in all the primary clusters do not contradict each other and all of them are also defined in the backup cluster.

When the primary or primary_backup mode is selected, a remote location also needs to be specified in the configuration. It tells VolumeCare which backup cluster should receive snapshots from this cluster. This is achieved by setting the remote=<location-name> option. Its value should be the same as a remote location name seen in storpool location list.
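
For example, a primary cluster sending snapshots to a backup location named cust-tier2 (the same name used in the example configurations later in this document) would contain:

[volumecare]
mode=primary
driver=storpool
remote=cust-tier2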

Note

As of version 1.21 the remote can be overridden per policy by setting remote=<location-name> directly in the policy definition.
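
A minimal sketch of such a per-policy override, with a hypothetical policy name and remote location name:

[policy:cust-dr]
mode=keep-daily-remote
interval=2
days=7
remote=cust-tier3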

Note

As of version 1.23 you can use two subclusters of the same StorPool multicluster as a primary and backup location. To do this, the use_cluster_id=1 setting must be applied in both clusters.
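
A sketch of the relevant fragment, assuming the option lives in the [volumecare] section like the other working parameters; it would be added to the existing section in both subclusters:

[volumecare]
use_cluster_id=1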

By default, no tag except vc-policy is inherited by VolumeCare’s snapshots from the source volumes. To enable selective tag inheritance, use the inherit_tags option. It accepts a list of tags that will be inherited from each volume to all of its snapshots. If a tag from inherit_tags is missing on a volume, its snapshots will not have this tag defined at all.

For example, consider the following volumes and VolumeCare configuration fragment:

# VolumeCare configuration fragment
[volumecare]
inherit_tags=my_tag,my-other-tag

# Volumes
| vc-inherit-tags-example-vol-1 | ... | my_tag=one                                                                    |
| vc-inherit-tags-example-vol-2 | ... | my_tag=two skip_the_tag=three                                                 |
| vc-inherit-tags-example-vol-3 | ... | my-other-tag=six my_tag=four skip_the_tag=five                                |
| vc-inherit-tags-example-vol-4 | ... | my-other-tag=nine my_tag=seven skip_the_tag=eight                             |
| vc-inherit-tags-example-vol-5 | ... | skip_this_other_other_tag=eleven skip_this_other_tag=twelve skip_this_tag=ten |
| vc-inherit-tags-example-vol-6 | ... |                                                                               |

The resulting snapshots from this policy will be:

| <VC_PREFIX>--vc-inherit-tags-example-vol-1 | my_tag=one vc-policy=<CONFIGURED_VC_POLICY>                     |
| <VC_PREFIX>--vc-inherit-tags-example-vol-2 | my_tag=two vc-policy=<CONFIGURED_VC_POLICY>                     |
| <VC_PREFIX>--vc-inherit-tags-example-vol-3 | my-other-tag=six my_tag=four vc-policy=<CONFIGURED_VC_POLICY>   |
| <VC_PREFIX>--vc-inherit-tags-example-vol-4 | my-other-tag=nine my_tag=seven vc-policy=<CONFIGURED_VC_POLICY> |
| <VC_PREFIX>--vc-inherit-tags-example-vol-5 | vc-policy=<CONFIGURED_VC_POLICY>                                |
| <VC_PREFIX>--vc-inherit-tags-example-vol-6 | vc-policy=<CONFIGURED_VC_POLICY>                                |

The last thing that needs to be specified is the driver option. Currently, only driver=storpool is supported.

Policy and template sections are described further down in this document.

3. Advanced configuration

Some additional options for the [volumecare] section are available to fine-tune the VolumeCare behavior (an illustrative fragment follows the list):

  • scan_interval_s - default: 60 (seconds). Configures how often the VolumeCare daemon will re-scan the cluster for changes. Higher values could be beneficial in large backup clusters. Lowering this to less than 30 seconds requires extreme caution, as re-scanning is a time-heavy operation.

  • care_obsolete_check_tout_s - default: 600 (10 minutes). The VolumeCare will periodically check if snapshots have expired. This sets the maximum amount of time between these checks.
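
For example, a large backup cluster that should be re-scanned less often might use a fragment like this (the values are illustrative only):

[volumecare]
scan_interval_s=120
care_obsolete_check_tout_s=900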

3.1. Task control

Internally, the VolumeCare daemon has a queue of scheduled tasks to perform. Below are a few options that can delay task execution in different ways.

The two options care_max_ops and care_min_wait_s should be used together. They instruct VolumeCare not to execute too many tasks in a row: after executing the configured number of tasks immediately one after the other, it will wait the configured number of seconds.

  • care_max_ops - default: 10. The number of tasks that can be executed without a pause.

  • care_min_wait_s - default: 1 (second). Time in seconds to wait (pause) between batches of task executions.

A finer method of task control allows spreading/pacing the snapshot deletion and creation tasks. This is achieved through three parameters (an illustrative fragment combining the pacing options follows the list):

  • create_delay - default: 0 (seconds, floating point). Time in seconds to wait between executing two snapshot creation tasks.

  • delete_delay - default: 0 (seconds, floating point). Time in seconds to wait between executing two snapshot deletion tasks.

  • create_delete_delay - default: 0 (seconds, floating point). Time in seconds to wait between executing a creation and a deletion task.
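
For example (the values are examples only; see the warning below before changing them):

[volumecare]
care_max_ops=10
care_min_wait_s=1
create_delay=0.5
delete_delay=0.5
create_delete_delay=0.5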

Warning

Delaying snapshot tasks too much can lead to VolumeCare not being able to execute all the needed operations thus failing to do its job. Use these options with care and after discussing them with the StorPool support team.

4. Retention policies

VolumeCare creates and manages snapshots based on retention policies. They are defined in the configuration in sections following the [policy:<policy-name>] template.

The policy name is defined in the section header.

The essence of each policy is defined by the policy mode, so the mode option should be specified in each policy section. Depending on the policy mode, other options should also be specified in the policy section to provide the policy parameters.

The list of policies for a cluster (or a set of clusters) is immutable: new policies can be added, but existing ones must not be removed or modified. This is required because otherwise the state of the already created snapshots would be undefined.

Below is the current list of policy modes:

4.1. Local

By default, StorPool snapshots are created in their parent volume’s template. This can be overridden by setting template=<XXX> in the [volumecare] section of the configuration. Templates can also be overridden per policy by setting template=<YYY> in each policy definition section ([policy:<PPP>]).
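
An illustrative fragment, with hypothetical template and policy names:

[volumecare]
template=hybrid

[policy:important]
mode=exp
template=ssd-only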

4.1.1. basic (stopgap)

This mode has two parameters - snapshots and interval (in hours). For each entity (volume or virtual machine) with this policy, VolumeCare will keep the specified number of snapshots, each of which is created at the specified interval. For example, 4 snapshots with an interval of 6 hours means that the oldest snapshot will be 18-24 hours old and there will be three more, spaced 6 hours apart.
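
A policy definition matching the example below could look like this:

[policy:stopgap-5-6]
mode=stopgap
snapshots=5
interval=6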

Example for a virtual machine snapshotted with stopgap, snapshots=5, interval=6:

State ID: 1581687345 -- VM 1000 @ 1581687345 (loc) -- volumes: volume-test2, volume-test9 -- stopgap-5-6 -- 2020-02-14 15:35:45 (0h 18m ago)
State ID: 1581665745 -- VM 1000 @ 1581665745 (loc) -- volumes: volume-test2, volume-test9 -- stopgap-5-6 -- 2020-02-14 09:35:45 (6h 18m ago)
State ID: 1581644145 -- VM 1000 @ 1581644145 (loc) -- volumes: volume-test2, volume-test9 -- stopgap-5-6 -- 2020-02-14 03:35:45 (12h 18m ago)
State ID: 1581622545 -- VM 1000 @ 1581622545 (loc) -- volumes: volume-test2, volume-test9 -- stopgap-5-6 -- 2020-02-13 21:35:45 (18h 18m ago)
State ID: 1581600945 -- VM 1000 @ 1581600945 (loc) -- volumes: volume-test2, volume-test9 -- stopgap-5-6 -- 2020-02-13 15:35:45 (1d 0h 18m ago)

4.1.2. exp

Mode exp has an exponentially increasing interval between the snapshots. Currently it’s parameterless and keeps 12-13 snapshots with the following ages:

  • 4 from the last 3 hours

  • 2-3 aged 3-12 hours

  • 2-3 aged 12-24 hours

  • 4 aged 24-48 hours

Example for a volume snapshotted with exp:

State ID: 1581687584 -- spvc___1581687584___loc___exp-test (loc) -- exp -- 2020-02-14 15:39:44 (0h 14m ago)
State ID: 1581683984 -- spvc___1581683984___loc___exp-test (loc) -- exp -- 2020-02-14 14:39:44 (1h 14m ago)
State ID: 1581680384 -- spvc___1581680384___loc___exp-test (loc) -- exp -- 2020-02-14 13:39:44 (2h 14m ago)
State ID: 1581676784 -- spvc___1581676784___loc___exp-test (loc) -- exp -- 2020-02-14 12:39:44 (3h 14m ago)
State ID: 1581673184 -- spvc___1581673184___loc___exp-test (loc) -- exp -- 2020-02-14 11:39:44 (4h 14m ago)
State ID: 1581665984 -- spvc___1581665984___loc___exp-test (loc) -- exp -- 2020-02-14 09:39:44 (6h 14m ago)
State ID: 1581655184 -- spvc___1581655184___loc___exp-test (loc) -- exp -- 2020-02-14 06:39:44 (9h 14m ago)
State ID: 1581640784 -- spvc___1581640784___loc___exp-test (loc) -- exp -- 2020-02-14 02:39:44 (13h 14m ago)
State ID: 1581629984 -- spvc___1581629984___loc___exp-test (loc) -- exp -- 2020-02-13 23:39:44 (16h 14m ago)
State ID: 1581604784 -- spvc___1581604784___loc___exp-test (loc) -- exp -- 2020-02-13 16:39:44 (23h 14m ago)
State ID: 1581575984 -- spvc___1581575984___loc___exp-test (loc) -- exp -- 2020-02-13 08:39:44 (1d 7h 14m ago)
State ID: 1581543584 -- spvc___1581543584___loc___exp-test (loc) -- exp -- 2020-02-12 23:39:44 (1d 16h 14m ago)

4.1.3. keep-daily

The mode has two parameters - interval and days. It will create a snapshot every interval hours. All snapshots created in the last 24 hours will be kept, and snapshots older than 24 hours will be reduced to one per day. All snapshots older than days days will be deleted. Essentially, it is the same as stopgap-remote (see below), but all snapshots are kept locally.
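
A matching policy definition for the example below could be:

[policy:keep-daily]
mode=keep-daily
interval=1
days=7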

Example for a virtual machine snapshotted with keep-daily, interval=1, days=7:

State ID: 1581687584 -- VM 1024 @ 1581687584 (loc) -- volumes: test -- keep-daily -- 2020-02-14 15:39:44 (0h 14m ago)
State ID: 1581683984 -- VM 1024 @ 1581683984 (loc) -- volumes: test -- keep-daily -- 2020-02-14 14:39:44 (1h 14m ago)
State ID: 1581680384 -- VM 1024 @ 1581680384 (loc) -- volumes: test -- keep-daily -- 2020-02-14 13:39:44 (2h 14m ago)
State ID: 1581676784 -- VM 1024 @ 1581676784 (loc) -- volumes: test -- keep-daily -- 2020-02-14 12:39:44 (3h 14m ago)
State ID: 1581673184 -- VM 1024 @ 1581673184 (loc) -- volumes: test -- keep-daily -- 2020-02-14 11:39:44 (4h 14m ago)
State ID: 1581669584 -- VM 1024 @ 1581669584 (loc) -- volumes: test -- keep-daily -- 2020-02-14 10:39:44 (5h 14m ago)
State ID: 1581665984 -- VM 1024 @ 1581665984 (loc) -- volumes: test -- keep-daily -- 2020-02-14 09:39:44 (6h 14m ago)
State ID: 1581662384 -- VM 1024 @ 1581662384 (loc) -- volumes: test -- keep-daily -- 2020-02-14 08:39:44 (7h 14m ago)
State ID: 1581658784 -- VM 1024 @ 1581658784 (loc) -- volumes: test -- keep-daily -- 2020-02-14 07:39:44 (8h 14m ago)
State ID: 1581655184 -- VM 1024 @ 1581655184 (loc) -- volumes: test -- keep-daily -- 2020-02-14 06:39:44 (9h 14m ago)
State ID: 1581651584 -- VM 1024 @ 1581651584 (loc) -- volumes: test -- keep-daily -- 2020-02-14 05:39:44 (10h 14m ago)
State ID: 1581647984 -- VM 1024 @ 1581647984 (loc) -- volumes: test -- keep-daily -- 2020-02-14 04:39:44 (11h 14m ago)
State ID: 1581644384 -- VM 1024 @ 1581644384 (loc) -- volumes: test -- keep-daily -- 2020-02-14 03:39:44 (12h 14m ago)
State ID: 1581640784 -- VM 1024 @ 1581640784 (loc) -- volumes: test -- keep-daily -- 2020-02-14 02:39:44 (13h 14m ago)
State ID: 1581637184 -- VM 1024 @ 1581637184 (loc) -- volumes: test -- keep-daily -- 2020-02-14 01:39:44 (14h 14m ago)
State ID: 1581633584 -- VM 1024 @ 1581633584 (loc) -- volumes: test -- keep-daily -- 2020-02-14 00:39:44 (15h 14m ago)
State ID: 1581629984 -- VM 1024 @ 1581629984 (loc) -- volumes: test -- keep-daily -- 2020-02-13 23:39:44 (16h 14m ago)
State ID: 1581626384 -- VM 1024 @ 1581626384 (loc) -- volumes: test -- keep-daily -- 2020-02-13 22:39:44 (17h 14m ago)
State ID: 1581622784 -- VM 1024 @ 1581622784 (loc) -- volumes: test -- keep-daily -- 2020-02-13 21:39:44 (18h 14m ago)
State ID: 1581619184 -- VM 1024 @ 1581619184 (loc) -- volumes: test -- keep-daily -- 2020-02-13 20:39:44 (19h 14m ago)
State ID: 1581615584 -- VM 1024 @ 1581615584 (loc) -- volumes: test -- keep-daily -- 2020-02-13 19:39:44 (20h 14m ago)
State ID: 1581611984 -- VM 1024 @ 1581611984 (loc) -- volumes: test -- keep-daily -- 2020-02-13 18:39:44 (21h 14m ago)
State ID: 1581608384 -- VM 1024 @ 1581608384 (loc) -- volumes: test -- keep-daily -- 2020-02-13 17:39:44 (22h 14m ago)
State ID: 1581604784 -- VM 1024 @ 1581604784 (loc) -- volumes: test -- keep-daily -- 2020-02-13 16:39:44 (23h 14m ago)
State ID: 1581583184 -- VM 1024 @ 1581583184 (loc) -- volumes: test -- keep-daily -- 2020-02-13 10:39:44 (1d 5h 14m ago)
State ID: 1581496784 -- VM 1024 @ 1581496784 (loc) -- volumes: test -- keep-daily -- 2020-02-12 10:39:44 (2d 5h 14m ago)
State ID: 1581410384 -- VM 1024 @ 1581410384 (loc) -- volumes: test -- keep-daily -- 2020-02-11 10:39:44 (3d 5h 14m ago)
State ID: 1581323984 -- VM 1024 @ 1581323984 (loc) -- volumes: test -- keep-daily -- 2020-02-10 10:39:44 (4d 5h 14m ago)
State ID: 1581237584 -- VM 1024 @ 1581237584 (loc) -- volumes: test -- keep-daily -- 2020-02-09 10:39:44 (5d 5h 14m ago)
State ID: 1581151184 -- VM 1024 @ 1581151184 (loc) -- volumes: test -- keep-daily -- 2020-02-08 10:39:44 (6d 5h 14m ago)
State ID: 1581064784 -- VM 1024 @ 1581064784 (loc) -- volumes: test -- keep-daily -- 2020-02-07 10:39:44 (7d 5h 14m ago)

4.1.4. nosnap

Use this mode for a policy that does not create snapshots. Nosnap policies are usually used in two scenarios - as a default that is overridden per entity, or as an override of another default.
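
For example (using the names from the example configurations later in this document), a nosnap policy can serve as the cluster-wide default that is overridden for a specific template:

[policy:no]
mode=nosnap

[template:*]
policy=no

[template:one-ds-0]
policy=stopgap-short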

4.2. Remote

Remote policies are able to keep snapshots in a remote location. A default remote can be set for all policies and overridden per policy.

By default, backup clusters will copy snapshots into the SP_BRIDGE_TEMPLATE template. This can be overridden by setting template=<XXX> in the [volumecare] section of the configuration. Templates can also be overridden per policy by setting template=<YYY> in each policy definition section ([policy:<PPP>]).

Primary clusters still support the global and per-policy template override.

For each policy, a head_template=<HHH> can be set in the backup clusters. This instructs the service to keep the newest transferred snapshot in the <HHH> template. Once a newer snapshot is present, snapshots will be moved to their main template.
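
A sketch of a backup-cluster policy using these options, with hypothetical template names:

[policy:cust-main-remote]
mode=keep-daily-remote
interval=2
days=7
template=backup-hdd
head_template=backup-ssd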

Warning

Please note that the per-policy template option can be different in the primary and backup cluster for the same policy. Each VolumeCare instance interprets it locally and searches for that template in its own cluster. It is also non-mandatory, so it can be used in only one of the locations if necessary.

4.2.1. basic-mirror (stopgap-mirror)

This mode is available only in primary/backup/primary_backup mode and has two parameters - interval and snapshots. Essentially it is the same as the local stopgap mode (see above), but copies all snapshots to the backup cluster as well.
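
The policy definition for the example below could be:

[policy:stopgap-mirror]
mode=stopgap-mirror
snapshots=3
interval=24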

Example for a volume snapshotted with stopgap-mirror, interval=24, snapshots=3:

State ID: 1581867697 -- spvc___1581867697___loc___test (loc) -- stopgap-mirror -- 2020-02-16 17:41:37 (16h 27m ago)
State ID: 1581781297 -- spvc___1581781297___loc___test (loc) -- stopgap-mirror -- 2020-02-15 17:41:37 (1d 16h 27m ago)
State ID: 1581694897 -- spvc___1581694897___loc___test (loc) -- stopgap-mirror -- 2020-02-14 17:41:37 (2d 16h 27m ago)
State ID: 1581867697 -- spvc___1581867697___loc2___test (loc2) -- stopgap-mirror -- 2020-02-16 17:41:37 (16h 27m ago)
State ID: 1581781297 -- spvc___1581781297___loc2___test (loc2) -- stopgap-mirror -- 2020-02-15 17:41:37 (1d 16h 27m ago)
State ID: 1581694897 -- spvc___1581694897___loc2___test (loc2) -- stopgap-mirror -- 2020-02-14 17:41:37 (2d 16h 27m ago)

4.2.2. basic-remote

Added in version 1.12. Essentially it follows the same logic as the stopgap and stopgap-mirror modes (see above), but all snapshots except the most recent one are kept in the backup cluster only. The parameters are also the same - interval and snapshots.
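
The corresponding policy definition for the example below could be:

[policy:basic-remote]
mode=basic-remote
snapshots=3
interval=24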

Example for a volume snapshotted with basic-remote, interval=24, snapshots=3:

State ID: 1581867697 -- spvc___1581867697___loc___test (loc) -- basic-remote -- 2020-02-16 17:41:37 (16h 27m ago)
State ID: 1581867697 -- spvc___1581867697___loc2___test (loc2) -- basic-remote -- 2020-02-16 17:41:37 (16h 27m ago)
State ID: 1581781297 -- spvc___1581781297___loc2___test (loc2) -- basic-remote -- 2020-02-15 17:41:37 (1d 16h 27m ago)
State ID: 1581694897 -- spvc___1581694897___loc2___test (loc2) -- basic-remote -- 2020-02-14 17:41:37 (2d 16h 27m ago)

4.2.3. keep-daily-remote (stopgap-remote)

This mode is available only in primary/backup/primary_backup mode and has two parameters - interval and days. It creates a snapshot every interval hours and copies all of them to the configured backup cluster. The service will keep all snapshots from the last 24 hours in both clusters. Snapshots older than 24 hours will be reduced to one per day and will be kept in the backup cluster only. Snapshots older than days days will be deleted.
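
A policy definition matching the example below could be:

[policy:stopgap-remote]
mode=stopgap-remote
interval=1
days=7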

Example for a virtual machine snapshotted with stopgap-remote, interval=1, days=7:

State ID: 1581687584 -- VM 1572 @ 1581687584 (loc) -- volumes: test -- stopgap-remote -- 2020-02-14 15:39:44 (0h 14m ago)
State ID: 1581683984 -- VM 1572 @ 1581683984 (loc) -- volumes: test -- stopgap-remote -- 2020-02-14 14:39:44 (1h 14m ago)
State ID: 1581680384 -- VM 1572 @ 1581680384 (loc) -- volumes: test -- stopgap-remote -- 2020-02-14 13:39:44 (2h 14m ago)
State ID: 1581676784 -- VM 1572 @ 1581676784 (loc) -- volumes: test -- stopgap-remote -- 2020-02-14 12:39:44 (3h 14m ago)
State ID: 1581673184 -- VM 1572 @ 1581673184 (loc) -- volumes: test -- stopgap-remote -- 2020-02-14 11:39:44 (4h 14m ago)
State ID: 1581669584 -- VM 1572 @ 1581669584 (loc) -- volumes: test -- stopgap-remote -- 2020-02-14 10:39:44 (5h 14m ago)
State ID: 1581665984 -- VM 1572 @ 1581665984 (loc) -- volumes: test -- stopgap-remote -- 2020-02-14 09:39:44 (6h 14m ago)
State ID: 1581662384 -- VM 1572 @ 1581662384 (loc) -- volumes: test -- stopgap-remote -- 2020-02-14 08:39:44 (7h 14m ago)
State ID: 1581658784 -- VM 1572 @ 1581658784 (loc) -- volumes: test -- stopgap-remote -- 2020-02-14 07:39:44 (8h 14m ago)
State ID: 1581655184 -- VM 1572 @ 1581655184 (loc) -- volumes: test -- stopgap-remote -- 2020-02-14 06:39:44 (9h 14m ago)
State ID: 1581651584 -- VM 1572 @ 1581651584 (loc) -- volumes: test -- stopgap-remote -- 2020-02-14 05:39:44 (10h 14m ago)
State ID: 1581647984 -- VM 1572 @ 1581647984 (loc) -- volumes: test -- stopgap-remote -- 2020-02-14 04:39:44 (11h 14m ago)
State ID: 1581644384 -- VM 1572 @ 1581644384 (loc) -- volumes: test -- stopgap-remote -- 2020-02-14 03:39:44 (12h 14m ago)
State ID: 1581640784 -- VM 1572 @ 1581640784 (loc) -- volumes: test -- stopgap-remote -- 2020-02-14 02:39:44 (13h 14m ago)
State ID: 1581637184 -- VM 1572 @ 1581637184 (loc) -- volumes: test -- stopgap-remote -- 2020-02-14 01:39:44 (14h 14m ago)
State ID: 1581633584 -- VM 1572 @ 1581633584 (loc) -- volumes: test -- stopgap-remote -- 2020-02-14 00:39:44 (15h 14m ago)
State ID: 1581629984 -- VM 1572 @ 1581629984 (loc) -- volumes: test -- stopgap-remote -- 2020-02-13 23:39:44 (16h 14m ago)
State ID: 1581626384 -- VM 1572 @ 1581626384 (loc) -- volumes: test -- stopgap-remote -- 2020-02-13 22:39:44 (17h 14m ago)
State ID: 1581622784 -- VM 1572 @ 1581622784 (loc) -- volumes: test -- stopgap-remote -- 2020-02-13 21:39:44 (18h 14m ago)
State ID: 1581619184 -- VM 1572 @ 1581619184 (loc) -- volumes: test -- stopgap-remote -- 2020-02-13 20:39:44 (19h 14m ago)
State ID: 1581615584 -- VM 1572 @ 1581615584 (loc) -- volumes: test -- stopgap-remote -- 2020-02-13 19:39:44 (20h 14m ago)
State ID: 1581611984 -- VM 1572 @ 1581611984 (loc) -- volumes: test -- stopgap-remote -- 2020-02-13 18:39:44 (21h 14m ago)
State ID: 1581608384 -- VM 1572 @ 1581608384 (loc) -- volumes: test -- stopgap-remote -- 2020-02-13 17:39:44 (22h 14m ago)
State ID: 1581604784 -- VM 1572 @ 1581604784 (loc) -- volumes: test -- stopgap-remote -- 2020-02-13 16:39:44 (23h 14m ago)
State ID: 1581687584 -- VM 1572 @ 1581687584 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-14 15:39:44 (0h 14m ago)
State ID: 1581683984 -- VM 1572 @ 1581683984 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-14 14:39:44 (1h 14m ago)
State ID: 1581680384 -- VM 1572 @ 1581680384 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-14 13:39:44 (2h 14m ago)
State ID: 1581676784 -- VM 1572 @ 1581676784 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-14 12:39:44 (3h 14m ago)
State ID: 1581673184 -- VM 1572 @ 1581673184 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-14 11:39:44 (4h 14m ago)
State ID: 1581669584 -- VM 1572 @ 1581669584 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-14 10:39:44 (5h 14m ago)
State ID: 1581665984 -- VM 1572 @ 1581665984 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-14 09:39:44 (6h 14m ago)
State ID: 1581662384 -- VM 1572 @ 1581662384 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-14 08:39:44 (7h 14m ago)
State ID: 1581658784 -- VM 1572 @ 1581658784 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-14 07:39:44 (8h 14m ago)
State ID: 1581655184 -- VM 1572 @ 1581655184 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-14 06:39:44 (9h 14m ago)
State ID: 1581651584 -- VM 1572 @ 1581651584 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-14 05:39:44 (10h 14m ago)
State ID: 1581647984 -- VM 1572 @ 1581647984 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-14 04:39:44 (11h 14m ago)
State ID: 1581644384 -- VM 1572 @ 1581644384 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-14 03:39:44 (12h 14m ago)
State ID: 1581640784 -- VM 1572 @ 1581640784 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-14 02:39:44 (13h 14m ago)
State ID: 1581637184 -- VM 1572 @ 1581637184 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-14 01:39:44 (14h 14m ago)
State ID: 1581633584 -- VM 1572 @ 1581633584 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-14 00:39:44 (15h 14m ago)
State ID: 1581629984 -- VM 1572 @ 1581629984 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-13 23:39:44 (16h 14m ago)
State ID: 1581626384 -- VM 1572 @ 1581626384 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-13 22:39:44 (17h 14m ago)
State ID: 1581622784 -- VM 1572 @ 1581622784 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-13 21:39:44 (18h 14m ago)
State ID: 1581619184 -- VM 1572 @ 1581619184 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-13 20:39:44 (19h 14m ago)
State ID: 1581615584 -- VM 1572 @ 1581615584 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-13 19:39:44 (20h 14m ago)
State ID: 1581611984 -- VM 1572 @ 1581611984 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-13 18:39:44 (21h 14m ago)
State ID: 1581608384 -- VM 1572 @ 1581608384 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-13 17:39:44 (22h 14m ago)
State ID: 1581604784 -- VM 1572 @ 1581604784 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-13 16:39:44 (23h 14m ago)
State ID: 1581583184 -- VM 1572 @ 1581583184 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-13 10:39:44 (1d 5h 14m ago)
State ID: 1581496784 -- VM 1572 @ 1581496784 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-12 10:39:44 (2d 5h 14m ago)
State ID: 1581410384 -- VM 1572 @ 1581410384 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-11 10:39:44 (3d 5h 14m ago)
State ID: 1581323984 -- VM 1572 @ 1581323984 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-10 10:39:44 (4d 5h 14m ago)
State ID: 1581237584 -- VM 1572 @ 1581237584 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-09 10:39:44 (5d 5h 14m ago)
State ID: 1581151184 -- VM 1572 @ 1581151184 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-08 10:39:44 (6d 5h 14m ago)
State ID: 1581064784 -- VM 1572 @ 1581064784 (loc2) -- volumes: test -- stopgap-remote -- 2020-02-07 10:39:44 (7d 5h 14m ago)

4.2.4. keep-daily-split

This mode is available only in primary/backup/primary_backup mode and has two parameters - interval and days. It creates a snapshot every interval hours. The service will keep all snapshots from the last 24 hours in the primary clusters. In the backup cluster, snapshots will be reduced to one per day. Snapshots older than days days will be deleted.
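
A policy definition matching the example below could be:

[policy:keep-daily-split]
mode=keep-daily-split
interval=1
days=7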

Example for a virtual machine snapshotted with keep-daily-split, interval=1, days=7:

State ID: 1581687584 -- VM 1572 @ 1581687584 (loc) -- volumes: test -- keep-daily-split -- 2020-02-14 15:39:44 (0h 14m ago)
State ID: 1581683984 -- VM 1572 @ 1581683984 (loc) -- volumes: test -- keep-daily-split -- 2020-02-14 14:39:44 (1h 14m ago)
State ID: 1581680384 -- VM 1572 @ 1581680384 (loc) -- volumes: test -- keep-daily-split -- 2020-02-14 13:39:44 (2h 14m ago)
State ID: 1581676784 -- VM 1572 @ 1581676784 (loc) -- volumes: test -- keep-daily-split -- 2020-02-14 12:39:44 (3h 14m ago)
State ID: 1581673184 -- VM 1572 @ 1581673184 (loc) -- volumes: test -- keep-daily-split -- 2020-02-14 11:39:44 (4h 14m ago)
State ID: 1581669584 -- VM 1572 @ 1581669584 (loc) -- volumes: test -- keep-daily-split -- 2020-02-14 10:39:44 (5h 14m ago)
State ID: 1581665984 -- VM 1572 @ 1581665984 (loc) -- volumes: test -- keep-daily-split -- 2020-02-14 09:39:44 (6h 14m ago)
State ID: 1581662384 -- VM 1572 @ 1581662384 (loc) -- volumes: test -- keep-daily-split -- 2020-02-14 08:39:44 (7h 14m ago)
State ID: 1581658784 -- VM 1572 @ 1581658784 (loc) -- volumes: test -- keep-daily-split -- 2020-02-14 07:39:44 (8h 14m ago)
State ID: 1581655184 -- VM 1572 @ 1581655184 (loc) -- volumes: test -- keep-daily-split -- 2020-02-14 06:39:44 (9h 14m ago)
State ID: 1581651584 -- VM 1572 @ 1581651584 (loc) -- volumes: test -- keep-daily-split -- 2020-02-14 05:39:44 (10h 14m ago)
State ID: 1581647984 -- VM 1572 @ 1581647984 (loc) -- volumes: test -- keep-daily-split -- 2020-02-14 04:39:44 (11h 14m ago)
State ID: 1581644384 -- VM 1572 @ 1581644384 (loc) -- volumes: test -- keep-daily-split -- 2020-02-14 03:39:44 (12h 14m ago)
State ID: 1581640784 -- VM 1572 @ 1581640784 (loc) -- volumes: test -- keep-daily-split -- 2020-02-14 02:39:44 (13h 14m ago)
State ID: 1581637184 -- VM 1572 @ 1581637184 (loc) -- volumes: test -- keep-daily-split -- 2020-02-14 01:39:44 (14h 14m ago)
State ID: 1581633584 -- VM 1572 @ 1581633584 (loc) -- volumes: test -- keep-daily-split -- 2020-02-14 00:39:44 (15h 14m ago)
State ID: 1581629984 -- VM 1572 @ 1581629984 (loc) -- volumes: test -- keep-daily-split -- 2020-02-13 23:39:44 (16h 14m ago)
State ID: 1581626384 -- VM 1572 @ 1581626384 (loc) -- volumes: test -- keep-daily-split -- 2020-02-13 22:39:44 (17h 14m ago)
State ID: 1581622784 -- VM 1572 @ 1581622784 (loc) -- volumes: test -- keep-daily-split -- 2020-02-13 21:39:44 (18h 14m ago)
State ID: 1581619184 -- VM 1572 @ 1581619184 (loc) -- volumes: test -- keep-daily-split -- 2020-02-13 20:39:44 (19h 14m ago)
State ID: 1581615584 -- VM 1572 @ 1581615584 (loc) -- volumes: test -- keep-daily-split -- 2020-02-13 19:39:44 (20h 14m ago)
State ID: 1581611984 -- VM 1572 @ 1581611984 (loc) -- volumes: test -- keep-daily-split -- 2020-02-13 18:39:44 (21h 14m ago)
State ID: 1581608384 -- VM 1572 @ 1581608384 (loc) -- volumes: test -- keep-daily-split -- 2020-02-13 17:39:44 (22h 14m ago)
State ID: 1581604784 -- VM 1572 @ 1581604784 (loc) -- volumes: test -- keep-daily-split -- 2020-02-13 16:39:44 (23h 14m ago)
State ID: 1581669584 -- VM 1572 @ 1581669584 (loc2) -- volumes: test -- keep-daily-split -- 2020-02-14 10:39:44 (5h 14m ago)
State ID: 1581583184 -- VM 1572 @ 1581583184 (loc2) -- volumes: test -- keep-daily-split -- 2020-02-13 10:39:44 (1d 5h 14m ago)
State ID: 1581496784 -- VM 1572 @ 1581496784 (loc2) -- volumes: test -- keep-daily-split -- 2020-02-12 10:39:44 (2d 5h 14m ago)
State ID: 1581410384 -- VM 1572 @ 1581410384 (loc2) -- volumes: test -- keep-daily-split -- 2020-02-11 10:39:44 (3d 5h 14m ago)
State ID: 1581323984 -- VM 1572 @ 1581323984 (loc2) -- volumes: test -- keep-daily-split -- 2020-02-10 10:39:44 (4d 5h 14m ago)
State ID: 1581237584 -- VM 1572 @ 1581237584 (loc2) -- volumes: test -- keep-daily-split -- 2020-02-09 10:39:44 (5d 5h 14m ago)
State ID: 1581151184 -- VM 1572 @ 1581151184 (loc2) -- volumes: test -- keep-daily-split -- 2020-02-08 10:39:44 (6d 5h 14m ago)
State ID: 1581064784 -- VM 1572 @ 1581064784 (loc2) -- volumes: test -- keep-daily-split -- 2020-02-07 10:39:44 (7d 5h 14m ago)

4.2.5. mhdm (minutes-hours-days-months)

This mode is available only in primary/backup/primary_backup mode and has four parameters - minute_interval, minute_count, days and months. It creates a snapshot every minute_interval minutes. The service will keep the last minute_count of these snapshots. All older snapshots from the last 24 hours will be reduced to one per hour in the primary clusters. Backup clusters will keep daily snapshots for days days. Older snapshots will be reduced to one per month. Snapshots older than months months will be deleted.
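
A policy definition matching the example below could be:

[policy:mhdm]
mode=mhdm
minute_interval=15
minute_count=16
days=7
months=6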

Example for a virtual machine snapshotted with mhdm, with minute_interval=15, minute_count=16, days=7, months=6:

State ID: 1581687584 -- VM 1572 @ 1581687584 (loc) -- volumes: test -- mhdm -- 2020-02-14 15:39:44 (0h 14m ago)
State ID: 1581686684 -- VM 1572 @ 1581686684 (loc) -- volumes: test -- mhdm -- 2020-02-14 15:24:44 (0h 29m ago)
State ID: 1581685784 -- VM 1572 @ 1581685784 (loc) -- volumes: test -- mhdm -- 2020-02-14 15:09:44 (0h 44m ago)
State ID: 1581684884 -- VM 1572 @ 1581684884 (loc) -- volumes: test -- mhdm -- 2020-02-14 14:54:44 (0h 59m ago)
State ID: 1581683984 -- VM 1572 @ 1581683984 (loc) -- volumes: test -- mhdm -- 2020-02-14 14:39:44 (1h 14m ago)
State ID: 1581683084 -- VM 1572 @ 1581683084 (loc) -- volumes: test -- mhdm -- 2020-02-14 14:24:44 (1h 29m ago)
State ID: 1581682184 -- VM 1572 @ 1581682184 (loc) -- volumes: test -- mhdm -- 2020-02-14 14:09:44 (1h 44m ago)
State ID: 1581681284 -- VM 1572 @ 1581681284 (loc) -- volumes: test -- mhdm -- 2020-02-14 13:54:44 (1h 59m ago)
State ID: 1581680384 -- VM 1572 @ 1581680384 (loc) -- volumes: test -- mhdm -- 2020-02-14 13:39:44 (2h 14m ago)
State ID: 1581679484 -- VM 1572 @ 1581679484 (loc) -- volumes: test -- mhdm -- 2020-02-14 13:24:44 (2h 29m ago)
State ID: 1581678584 -- VM 1572 @ 1581678584 (loc) -- volumes: test -- mhdm -- 2020-02-14 13:09:44 (2h 44m ago)
State ID: 1581677684 -- VM 1572 @ 1581677684 (loc) -- volumes: test -- mhdm -- 2020-02-14 12:54:44 (2h 59m ago)
State ID: 1581676784 -- VM 1572 @ 1581676784 (loc) -- volumes: test -- mhdm -- 2020-02-14 12:39:44 (3h 14m ago)
State ID: 1581675884 -- VM 1572 @ 1581675884 (loc) -- volumes: test -- mhdm -- 2020-02-14 12:24:44 (3h 29m ago)
State ID: 1581674984 -- VM 1572 @ 1581674984 (loc) -- volumes: test -- mhdm -- 2020-02-14 12:09:44 (3h 44m ago)
State ID: 1581674084 -- VM 1572 @ 1581674084 (loc) -- volumes: test -- mhdm -- 2020-02-14 11:54:44 (3h 59m ago)
State ID: 1581673184 -- VM 1572 @ 1581673184 (loc) -- volumes: test -- mhdm -- 2020-02-14 11:39:44 (4h 14m ago)
State ID: 1581669584 -- VM 1572 @ 1581669584 (loc) -- volumes: test -- mhdm -- 2020-02-14 10:39:44 (5h 14m ago)
State ID: 1581665984 -- VM 1572 @ 1581665984 (loc) -- volumes: test -- mhdm -- 2020-02-14 09:39:44 (6h 14m ago)
State ID: 1581662384 -- VM 1572 @ 1581662384 (loc) -- volumes: test -- mhdm -- 2020-02-14 08:39:44 (7h 14m ago)
State ID: 1581658784 -- VM 1572 @ 1581658784 (loc) -- volumes: test -- mhdm -- 2020-02-14 07:39:44 (8h 14m ago)
State ID: 1581655184 -- VM 1572 @ 1581655184 (loc) -- volumes: test -- mhdm -- 2020-02-14 06:39:44 (9h 14m ago)
State ID: 1581651584 -- VM 1572 @ 1581651584 (loc) -- volumes: test -- mhdm -- 2020-02-14 05:39:44 (10h 14m ago)
State ID: 1581647984 -- VM 1572 @ 1581647984 (loc) -- volumes: test -- mhdm -- 2020-02-14 04:39:44 (11h 14m ago)
State ID: 1581644384 -- VM 1572 @ 1581644384 (loc) -- volumes: test -- mhdm -- 2020-02-14 03:39:44 (12h 14m ago)
State ID: 1581640784 -- VM 1572 @ 1581640784 (loc) -- volumes: test -- mhdm -- 2020-02-14 02:39:44 (13h 14m ago)
State ID: 1581637184 -- VM 1572 @ 1581637184 (loc) -- volumes: test -- mhdm -- 2020-02-14 01:39:44 (14h 14m ago)
State ID: 1581633584 -- VM 1572 @ 1581633584 (loc) -- volumes: test -- mhdm -- 2020-02-14 00:39:44 (15h 14m ago)
State ID: 1581629984 -- VM 1572 @ 1581629984 (loc) -- volumes: test -- mhdm -- 2020-02-13 23:39:44 (16h 14m ago)
State ID: 1581626384 -- VM 1572 @ 1581626384 (loc) -- volumes: test -- mhdm -- 2020-02-13 22:39:44 (17h 14m ago)
State ID: 1581622784 -- VM 1572 @ 1581622784 (loc) -- volumes: test -- mhdm -- 2020-02-13 21:39:44 (18h 14m ago)
State ID: 1581619184 -- VM 1572 @ 1581619184 (loc) -- volumes: test -- mhdm -- 2020-02-13 20:39:44 (19h 14m ago)
State ID: 1581615584 -- VM 1572 @ 1581615584 (loc) -- volumes: test -- mhdm -- 2020-02-13 19:39:44 (20h 14m ago)
State ID: 1581611984 -- VM 1572 @ 1581611984 (loc) -- volumes: test -- mhdm -- 2020-02-13 18:39:44 (21h 14m ago)
State ID: 1581608384 -- VM 1572 @ 1581608384 (loc) -- volumes: test -- mhdm -- 2020-02-13 17:39:44 (22h 14m ago)
State ID: 1581604784 -- VM 1572 @ 1581604784 (loc) -- volumes: test -- mhdm -- 2020-02-13 16:39:44 (23h 14m ago)
State ID: 1581669584 -- VM 1572 @ 1581669584 (loc2) -- volumes: test -- mhdm -- 2020-02-14 10:39:44 (5h 14m ago)
State ID: 1581583184 -- VM 1572 @ 1581583184 (loc2) -- volumes: test -- mhdm -- 2020-02-13 10:39:44 (1d 5h 14m ago)
State ID: 1581496784 -- VM 1572 @ 1581496784 (loc2) -- volumes: test -- mhdm -- 2020-02-12 10:39:44 (2d 5h 14m ago)
State ID: 1581410384 -- VM 1572 @ 1581410384 (loc2) -- volumes: test -- mhdm -- 2020-02-11 10:39:44 (3d 5h 14m ago)
State ID: 1581323984 -- VM 1572 @ 1581323984 (loc2) -- volumes: test -- mhdm -- 2020-02-10 10:39:44 (4d 5h 14m ago)
State ID: 1581237584 -- VM 1572 @ 1581237584 (loc2) -- volumes: test -- mhdm -- 2020-02-09 10:39:44 (5d 5h 14m ago)
State ID: 1581151184 -- VM 1572 @ 1581151184 (loc2) -- volumes: test -- mhdm -- 2020-02-08 10:39:44 (6d 5h 14m ago)
State ID: 1581064784 -- VM 1572 @ 1581064784 (loc2) -- volumes: test -- mhdm -- 2020-02-07 10:39:44 (7d 5h 14m ago)
State ID: 1578472784 -- VM 1572 @ 1578472784 (loc2) -- volumes: test -- mhdm -- 2020-01-08 10:39:44 (37d 5h 14m ago)
State ID: 1575880784 -- VM 1572 @ 1575880784 (loc2) -- volumes: test -- mhdm -- 2019-12-09 10:39:44 (67d 5h 14m ago)
State ID: 1573288784 -- VM 1572 @ 1573288784 (loc2) -- volumes: test -- mhdm -- 2019-11-09 10:39:44 (97d 5h 14m ago)
State ID: 1570696784 -- VM 1572 @ 1570696784 (loc2) -- volumes: test -- mhdm -- 2019-10-10 11:39:44 (127d 5h 14m ago)
State ID: 1568104784 -- VM 1572 @ 1568104784 (loc2) -- volumes: test -- mhdm -- 2019-09-10 11:39:44 (157d 5h 14m ago)
State ID: 1565512784 -- VM 1572 @ 1565512784 (loc2) -- volumes: test -- mhdm -- 2019-08-11 11:39:44 (187d 5h 14m ago)

4.2.6. remote-backup

This mode is available only in primary/backup/primary_backup mode and has a few parameters - minute_interval, keep_minutely, keep_hourly, keep_weekly, keep_daily, keep_monthly and keep_local. It creates a snapshot every minute_interval minutes. The service will keep the last keep_minutely of these snapshots. In addition to these, one snapshot from each of the last keep_hourly hours will be kept. Similarly, one snapshot from each of the last keep_daily days, keep_weekly weeks and keep_monthly months will be kept. All snapshots are kept on the remote site. The primary location will keep only the last keep_local snapshots (defaults to 1, can be set to 0).
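
A policy definition matching the example below could be (keep_local left at its default of 1):

[policy:remote-backup]
mode=remote-backup
minute_interval=15
keep_minutely=2
keep_hourly=8
keep_daily=7
keep_weekly=2
keep_monthly=7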

Example for a virtual machine snapshotted with remote-backup, with minute_interval=15, keep_minutely=2, keep_hourly=8, keep_daily=7, keep_weekly=2, keep_monthly=7:

State ID: 1581687584 -- VM 1572 @ 1581687584 (loc) -- volumes: test -- mhdm -- 2020-02-14 15:39:44 (0h 14m ago)       | minute, hour (local)
State ID: 1581687584 -- VM 1572 @ 1581687584 (loc2) -- volumes: test -- mhdm -- 2020-02-14 15:39:44 (0h 14m ago)      | minute, hour
State ID: 1581686684 -- VM 1572 @ 1581686684 (loc2) -- volumes: test -- mhdm -- 2020-02-14 15:24:44 (0h 29m ago)      | minute
State ID: 1581683984 -- VM 1572 @ 1581683984 (loc2) -- volumes: test -- mhdm -- 2020-02-14 14:39:44 (1h 14m ago)      | hour
State ID: 1581680384 -- VM 1572 @ 1581680384 (loc2) -- volumes: test -- mhdm -- 2020-02-14 13:39:44 (2h 14m ago)      | hour
State ID: 1581676784 -- VM 1572 @ 1581676784 (loc2) -- volumes: test -- mhdm -- 2020-02-14 12:39:44 (3h 14m ago)      | hour
State ID: 1581673184 -- VM 1572 @ 1581673184 (loc2) -- volumes: test -- mhdm -- 2020-02-14 11:39:44 (4h 14m ago)      | hour
State ID: 1581669584 -- VM 1572 @ 1581669584 (loc2) -- volumes: test -- mhdm -- 2020-02-14 10:39:44 (5h 14m ago)      | hour, day
State ID: 1581665984 -- VM 1572 @ 1581665984 (loc2) -- volumes: test -- mhdm -- 2020-02-14 09:39:44 (6h 14m ago)      | hour
State ID: 1581662384 -- VM 1572 @ 1581662384 (loc2) -- volumes: test -- mhdm -- 2020-02-14 08:39:44 (7h 14m ago)      | hour
State ID: 1581583184 -- VM 1572 @ 1581583184 (loc2) -- volumes: test -- mhdm -- 2020-02-13 10:39:44 (1d 5h 14m ago)   | day
State ID: 1581496784 -- VM 1572 @ 1581496784 (loc2) -- volumes: test -- mhdm -- 2020-02-12 10:39:44 (2d 5h 14m ago)   | day
State ID: 1581410384 -- VM 1572 @ 1581410384 (loc2) -- volumes: test -- mhdm -- 2020-02-11 10:39:44 (3d 5h 14m ago)   | day, week
State ID: 1581323984 -- VM 1572 @ 1581323984 (loc2) -- volumes: test -- mhdm -- 2020-02-10 10:39:44 (4d 5h 14m ago)   | day
State ID: 1581237584 -- VM 1572 @ 1581237584 (loc2) -- volumes: test -- mhdm -- 2020-02-09 10:39:44 (5d 5h 14m ago)   | day
State ID: 1581151184 -- VM 1572 @ 1581151184 (loc2) -- volumes: test -- mhdm -- 2020-02-08 10:39:44 (6d 5h 14m ago)   | day
State ID: 1581064784 -- VM 1572 @ 1581064784 (loc2) -- volumes: test -- mhdm -- 2020-02-07 10:39:44 (7d 5h 14m ago)   | month
State ID: 1580805584 -- VM 1572 @ 1580805584 (loc2) -- volumes: test -- mhdm -- 2020-02-04 10:39:44 (10d 5h 14m ago)  | week
State ID: 1578472784 -- VM 1572 @ 1578472784 (loc2) -- volumes: test -- mhdm -- 2020-01-08 10:39:44 (37d 5h 14m ago)  | month
State ID: 1575880784 -- VM 1572 @ 1575880784 (loc2) -- volumes: test -- mhdm -- 2019-12-09 10:39:44 (67d 5h 14m ago)  | month
State ID: 1573288784 -- VM 1572 @ 1573288784 (loc2) -- volumes: test -- mhdm -- 2019-11-09 10:39:44 (97d 5h 14m ago)  | month
State ID: 1570696784 -- VM 1572 @ 1570696784 (loc2) -- volumes: test -- mhdm -- 2019-10-10 11:39:44 (127d 5h 14m ago) | month
State ID: 1568104784 -- VM 1572 @ 1568104784 (loc2) -- volumes: test -- mhdm -- 2019-09-10 11:39:44 (157d 5h 14m ago) | month
State ID: 1565512784 -- VM 1572 @ 1565512784 (loc2) -- volumes: test -- mhdm -- 2019-08-11 11:39:44 (187d 5h 14m ago) | month

5. Policy resolution

Each entity (volume or virtual machine) should have a policy applied to it. Only one policy can be applied to an entity at a time. The policy for each entity is resolved from the following sources, in order of precedence:

  • volumes, by adding a tag vc-policy=<policy-name> to the volume;

  • templates, via [template:<template-name>] section in the configuration with the policy=<policy-name> option in it;

  • the whole cluster, via the [template:*] section (see the note below).

A virtual machine has the common policy of all its volumes. All volumes of a virtual machine must have the same retention policy; otherwise it is considered a misconfiguration and the virtual machine will be ignored by the service.
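
For example, with the fragment below (names taken from the example configurations later in this document), every volume in template one-ds-0 gets the stopgap-short policy, every other volume gets the no policy, and an individual volume can still be switched to another policy by tagging it with vc-policy=cust-main:

[template:one-ds-0]
policy=stopgap-short

[template:*]
policy=no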

Note

Before version 1.19, [template:*] is mandatory and VolumeCare will refuse to start without it. As of version 1.19 it is populated with policy=no by default if not present.

6. storpool_vcctl

Note

In older StorPool versions the storpool_vcctl tool is located in the /root/storpool/volumecare directory.

To control VolumeCare, StorPool provides the storpool_vcctl tool. Its usage is:

usage: storpool_vcctl [-h] [-C CONFIGFILE] {show,list,status,revert} ...

The following sub-commands are available:

6.1. config

Show the present configuration:

usage: storpool_vcctl config show

Edit the configuration:

usage: storpool_vcctl config edit

The above opens an editor and validates the configuration before overwriting the present config in the key-value store.

6.2. show

usage: storpool_vcctl show vm VMTAG=VMID
       storpool_vcctl show volume VOLUME

This sub-command shows the snapshots created for a volume or virtual machine. Below are two examples:

# storpool_vcctl show volume test
volume: test (loc) -- stopgap-short
   State ID: 1581928784 -- spvc___1581928784___loc___test (loc) -- stopgap-short -- 2020-02-17 10:39:44 (0h 12m ago)
   State ID: 1581925184 -- spvc___1581925184___loc___test (loc) -- stopgap-short -- 2020-02-17 09:39:44 (1h 12m ago)
   State ID: 1581921584 -- spvc___1581921584___loc___test (loc) -- stopgap-short -- 2020-02-17 08:39:44 (2h 12m ago)
test (loc) total: 3; local: 3; remote: 0

Here you can see three snapshots for a single volume, 1 hour apart.

# storpool_vcctl show vm nvm=2
vm: nvm=2 (loc) -- stopgap-short
 State ID: 1581928784 -- NVM 2 @ 1581928784 (loc) -- volumes: test5, test6, test7, test8 -- stopgap-short -- 2020-02-17 10:39:44 (0h 9m ago)
 State ID: 1581925184 -- NVM 2 @ 1581925184 (loc) -- volumes: test5, test6, test7, test8 -- stopgap-short -- 2020-02-17 09:39:44 (1h 9m ago)
 State ID: 1581921584 -- NVM 2 @ 1581921584 (loc) -- volumes: test5, test6, test7, test8 -- stopgap-short -- 2020-02-17 08:39:44 (2h 9m ago)
nvm=2 (loc) total: 3; local: 3; remote: 0

In this example, the orchestration uses the nvm tag to mark the volumes belonging to a virtual machine. The status shows that there are 4 drives on the VM, and they have 3 previous snapshots, 1 hour apart.

6.3. list

usage: storpool_vcctl list {volumes,vms,policies}

This sub-command lists the affected volumes, VMs and active policies. Examples:

# storpool_vcctl list vms
cvm=1 (loc)
cvm=1024 (loc)
nvm=1 (loc)
nvm=2 (loc)

A setup of four VMs, two identified with the nvm tag and two with the cvm tag.

# storpool_vcctl list volumes
bridge-test (loc)
company-test (loc)
one-img-9 (loc)
35f12f61-5c43-48f1-8036-3b3c254e8a54 (loc)
one-img-93 (loc)
one-img-94 (loc)
one-img-95 (loc)
one-img-96 (loc)
one-img-97 (loc)
one-img-98 (loc)
8a293540-33fc-4f33-8aa2-a761cbe0684e (loc)

This is a short list of the standalone volumes the service takes care of.

# storpool_vcctl list policies

[policy:stopgap-short]
mode=stopgap
snapshots=3
interval=1

[policy:cust-main]
mode=exp

[policy:no]
mode=nosnap

[policy:keep-daily]
mode=keep-daily
interval=2
days=7

6.4. status

This sub-command is the same as show, but for all VMs and volumes.

6.5. nodeinfo

Shows information about the VolumeCare daemon running on the node. JSON output is available with -J/--json.

# storpool_vcctl nodeinfo
package version: 1.27
running version: 1.27
config in StorPool KVS: True
pid: 3531823
status: active

The active status indicates whether the daemon on this node is the active one.

6.6. revert

For a specific volume revert (OpenNebula) check this page.

Warning

PLEASE MAKE SURE you have read and understood the implications before using this command.

usage: storpool_vcctl revert [-h] {rename,delete} {volume,vm} ...
       storpool_vcctl revert {rename,delete} vm [-N] vm_id state_id
       storpool_vcctl revert {rename,delete} volume [-N] volume_id state_id

This sub-command reverts a specified volume or VM to a previous state. Consider the following constraint: the volumes must not be attached to any node.

The first argument (rename or delete) specifies what will be done to the existing volumes that will be reverted:

rename

Snapshot the volumes before reverting. If a virtual machine with multiple volumes is being reverted, an atomic group snapshot will be created. You can recognize these snapshots in one of the following ways:

  • By their name in the vcrevert_VOLUMENAME_TIMESTAMP format, where TIMESTAMP is the current Unix time.

  • By the pair of tags vc-revert=GLOBALID and vc-ts=TIMESTAMP, where GLOBALID is the StorPool global ID of the volume.

Anonymous (unnamed) volumes will have anonymous tagged snapshots, while named volumes will have named snapshots.

Note

These snapshots have to be removed manually.

delete

Only revert the volumes, effectively deleting the current data.

Note

Use delete only if you are absolutely sure you will not need the current data stored in the volumes.

A state is defined by a Unix timestamp and can be obtained via the show sub-command.

The -N option will not make any changes, but will report what would be done. It is strongly recommended to run every command first with -N, to make sure it does what is expected.
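
For example, a dry run of reverting the volume from the show example above to one of its listed states (keeping the current data as a renamed snapshot) could look like this:

# storpool_vcctl revert rename volume -N test 1581921584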

7. Changing a policy

The policies are best thought of as immutable objects; thus, when a change is required, the old policy needs to be kept until the new policy is applied. This immutability is not enforced, because rare exceptions require actual changes in an existing policy, but it is strongly encouraged.

7.1. Single-cluster

Policy changes are best performed by creating a new policy and, when possible, leaving the snapshots in the old one to expire. Any changes in /etc/storpool/volumecare.conf require restarting the storpool_volumecare service (presently running on the first node in a cluster, if configured).

7.2. Multiple clusters

Changing a remote policy requires the change to occur in both the local and the remote clusters; otherwise the policy goes out of sync. A monitoring alert for out-of-sync policies is in development and will report such issues once ready.

8. Example configurations

8.1. Single-cluster

[format]
version=1.0

[volumecare]
mode=normal
driver=storpool

[policy:stopgap-short]
mode=stopgap
snapshots=3
interval=1

[policy:cust-main]
mode=exp

[policy:no]
mode=nosnap

[policy:keep-daily]
mode=keep-daily
interval=2
days=7

[template:one-ds-0]
policy=stopgap-short

[template:one-ds-1]
policy=cust-main

[template:*]
policy=no

8.2. Primary cluster

[format]
version=1.0

[volumecare]
mode=primary
driver=storpool
remote=cust-tier2

[policy:cust-main-remote]
mode=stopgap-remote
interval=2
days=7

[policy:cust-main]
mode=keep-daily
interval=2
days=7

[policy:no]
mode=nosnap

[template:one-ds-0]
policy=cust-main-remote

[template:one-ds-1]
policy=cust-main

[template:*]
policy=no

8.3. Backup cluster

[format]
version=1.0

[volumecare]
mode=backup
driver=storpool

[policy:cust-main-remote]
mode=stopgap-remote
interval=2
days=7

[policy:cust-main]
mode=keep-daily
interval=2
days=7

[policy:no]
mode=nosnap

[template:*]
policy=no

8.4. Two clusters sending backups to each other

[format]
version=1.0

[volumecare]
mode=primary_backup
driver=storpool
remote=cluster2

[policy:cust-main-remote]
mode=keep-daily-remote
interval=2
days=7

[policy:cust-main]
mode=keep-daily
interval=2
days=7

And the corresponding configuration of the second cluster:

[format]
version=1.0

[volumecare]
mode=primary_backup
driver=storpool
remote=cluster1

[policy:cust-main-remote]
mode=keep-daily-remote
interval=2
days=7

[policy:cust-main]
mode=keep-daily
interval=2
days=7

9. VolumeCare Changelog

1.29

  • Added the vcprobe tool to fetch information from the running VolumeCare daemon for monitoring purposes.

  • VolumeCare will now periodically (every minute) check for changes in its configuration, and will restart if it finds a difference with its currently loaded configuration.

  • Fixed a rare bug with tracking copies of snapshots in backup clusters caused by virtual machines with the same ID.

1.28

  • Updated the storpool_vcctl revert procedure. It now works with remote snapshots, and uses StorPool VolumeRevert calls.

1.27.3

  • Added multicluster mode for storpool_vcctl status -M to fetch information from the whole multicluster in primary locations.

1.27.1

  • The remote-backup mode will now rotate snapshots waiting for transfer in the primary cluster, instead of stacking them.

  • Fixed mhdm policy mode, which sometimes kept 1 minute snapshot more than it should.

  • storpool_vcctl revert bugfix for VMs with anonymous volumes.

1.27

  • Implemented a pacing feature for VolumeCare’s snapshot creation and deletion tasks.

  • Fix doubling of log messages in some places

1.26.1

  • Added storpool_vcctl nodeinfo that shows information about the currently installed/running VolumeCare on the node. Json output available.

  • General fixes for moving the VolumeCare daemon with the active StorPool management

1.26

  • High Availability: VolumeCare is now installed by default on each mgmt node in the cluster. The daemons will always be running everywhere with only one of them actively executing snapshot operations - the one on the active mgmt node.

  • VolumeCare configuration is now kept in the key-value store of the StorPool cluster and manipulated through storpool_vcctl config <option>.

1.25

  • Add a global and per-policy template override for snapshot creation in primary and local clusters.

1.24

  • Add the remote-backup policy.

  • Introduce location status tracking for backup clusters as well. Backups will now not be deleted when the location they came from is unreachable.

  • Fix a problem in the last snapshot keeping mechanism from 1.21

1.23

  • Introduce the per-policy template and head_template settings.

  • Introduce the use_cluster_id option to handle the primary-backup scenario in the same StorPool multicluster.

1.22

  • Add StorPool volume to snapshot map in the storpool_vcctl json output.

1.21

  • Remote locations can now be set (overridden) per policy in primary clusters.

  • Add the mhdm policy.

  • A small feature for last snapshot keeping.

1.20

  • Add support for primary_backup mode. This allows a cluster to serve as a primary for itself and backup for its backup cluster. This enables a configuration with two clusters backing up in each other.

1.19.1

  • Fixes a small chance of group snapshots having different timestamps

1.19

  • Add support for multiclustered backup clusters

  • Renamed policies (with backwards compatible alias):

    • stopgap -> basic

    • stopgap-mirror -> basic-mirror

    • stopgap-remote -> keep-daily-remote

  • Add the no policy with mode nosnap if it does not exist in the config

  • template:* is no longer a mandatory section in the configuration; it is populated with policy=no if not present

  • Add a new policy keep-daily-split

  • Move some constants to the volumecare section of the config:

    • scan_interval_s - the re-scan interval for the volumecare in seconds

    • care_max_ops - maximum number of operations per CARE task run

    • care_min_wait_s - minimum number of seconds to wait between two CARE task runs

    • care_obsolete_check_tout_s - maximum number of seconds between obsolete snapshots check

  • The bridge connection test snapshot now contains the cluster id in its name

1.18.1

  • Fix a bug with tags on internal vc “fake” snapshot objects

1.18

  • Add support for multicluster with backup

  • Add the vc-nv tag to track number of volumes for vm snapshots

  • Now only backup clusters do not see recovering snapshots

1.17

  • Add the vc-orig tag to track incoming location

1.16

  • Add support for anonymous (unnamed) volumes

  • Backups now wait for remote unexport before deleting a snapshot

  • Fix an issue with a too long name + tags for snapshots

1.15

  • Small bugfix of the CARE task rescheduling with the same timestamp

1.14

  • vcctl: add json output for status

  • vcctl: add option to hide entities with policy nosnap

  • vcctl: show vms when searching for volume

  • vcctl: add local and remote filtering to show and status commands

  • Implement a timeout for the obsolete snapshots check

  • vcctl: add internal storpool snapshot stats to status

  • Add a jq module for transfers in the primary clusters

1.13

  • Do not unexport in the deletion tasks in primary clusters

  • Add the inherit_tags functionality

1.12

  • Added basic-remote policy

  • Fixed some bugs in stopgap-mirror

  • vcctl: added verbose output; vm storpool snapshot names can be seen there

  • vcctl: implemented show policy

  • vcctl: show and list accept location as well

1.11

  • Do not add export/unexport tasks for snapshots that are pending deletion.

1.10

  • Fixed the location deducing of some volumes.

1.09

  • Backup clusters got “next_remote” for exporting snapshots to one more place.

1.08

  • Fixed a race condition bug on transfer/delete snapshot in primary clusters.

  • stopgap-remote now deletes the non-daily snapshots from the primary immediately when they expire (age > 24h), instead of keeping them to be transferred to the backup.

1.07

  • Track snapshot exports and unexports with events.

  • Protection for duplicating delete/export/unexport/copy tasks.

  • Use only location ids internally, remote option in the config is not affected.

  • Don’t show recovering snapshots from the storpool driver. It is conceptually wrong. A snapshot is present when it is recovered.

  • Backup clusters now manually export snapshots after recovering. This is needed so that VM volume snapshots appear at the same time.

  • Default retry policy changed to: initial - 3 min, increment - x2.

1.06

  • VolumeCare in normal mode does not scan for remote snapshots.

1.05

  • Primary clusters do the exports per entity one by one.

  • Backup clusters export a special dummy snapshot to the primary for location tracking. Primary clusters do not export snapshots to down locations.

1.04

  • Reschedule the care task immediately if there are unhandled events.

  • Primary clusters unexport snapshots after they are transferred.

1.03

  • Add stopgap-mirror policy.

  • Fix the issue with timeutils.now that could cause 15s premature task execution

1.02

  • Add -L/--location option to vcctl status; queries volumes/vms only from the given location. Locations are raw (e.g. bbht - the first part of CLUSTER_ID).

  • Volumecare in primary mode now exports the snapshots sorted by age, oldest first (in the same manner the volumecare in backup mode invokes copyFromRemote).

  • Volumecare in backup mode now does not require the “remote” option in the config. Backup mode natively supports multiple primary clusters and that option was actually never used when running in backup mode.

1.01

  • Fix vcctl show volume <volume_name> not working.

1.0

  • initial