Advanced configuration variables
The main configuration information is provided in OpenNebula configuration. Here you can find additional details about configuration variables.
Managing a datastore on another StorPool cluster
It is possible to use OpenNebula's Cluster configuration to manage hosts that are members of different StorPool clusters. To achieve this, set the StorPool API credentials (see Network communication) in the datastore template:
SP_AUTH_TOKEN: The AUTH token of the StorPool API to use for this datastore. String.
SP_API_HTTP_HOST: The IP address of the StorPool API to use for this datastore. IP address.
SP_API_HTTP_PORT: (optional) The port of the StorPool API to use for this datastore. Number.
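For illustration, here is a minimal sketch of appending these attributes to a datastore template with onedatastore update; the datastore ID (100) and the credential values are placeholders for your environment:
# append the StorPool API credentials to the datastore template
# (the datastore ID 100 and the values below are placeholders)
cat >/tmp/ds-storpool-api.txt <<'EOF'
SP_API_HTTP_HOST = "10.1.2.3"
SP_API_HTTP_PORT = "81"
SP_AUTH_TOKEN = "0123456789abcdef"
EOF
onedatastore update --append 100 /tmp/ds-storpool-api.txt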
In rare cases the driver cannot determine the StorPool node ID. To resolve this, set the SP_OURID variable in the OpenNebula Host template (see Identification and voting).
SP_OURID: The node ID in the StorPool configuration. Number.
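A minimal sketch of setting the variable on a host with onehost update; the host name node1 and the value 2 are placeholders:
# set the StorPool node ID on the corresponding OpenNebula host
# (the host name "node1" and the value 2 are placeholders)
echo 'SP_OURID = "2"' > /tmp/host-ourid.txt
onehost update --append node1 /tmp/host-ourid.txt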
VM checkpoint file on a StorPool volume
Requires qemu-kvm-ev installed on the hypervisors. Please follow OpenNebula’s Nodes Platform notes for installation instructions.
Reconfigure OpenNebula to use local actions for save and restore:
VM_MAD = [ ... ARGUMENTS = "... -l ...save=tmsave,restore=tmrestore" ]
Set SP_CHECKPOINT_BD=1 to enable direct save/restore to a StorPool-backed block device:
ONE_LOCATION=/var/lib/one
echo "SP_CHECKPOINT_BD=1" >> $ONE_LOCATION/remotes/addon-storpoolrc
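To apply these changes, the OpenNebula daemon typically needs a restart (for the VM_MAD change) and the updated addon-storpoolrc needs to be synced to the hosts, for example:
# restart oned to pick up the VM_MAD change
systemctl restart opennebula
# propagate the updated addon-storpoolrc to the hosts
su - oneadmin -c "onehost sync --force"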
Disk snapshot limits
It is possible to limit the number of snapshots per disk. When the limit is reached, the addon returns an error which is logged in the VM log, and the ERROR variable is set in the VM's Info tab with a relevant error message.
There are three levels of configuration possible:
Global in $ONE_LOCATION/remotes/addon-storpoolrc
The snapshot limit can be set globally in addon-storpoolrc:
ONE_LOCATION=/var/lib/one
echo "DISKSNAPSHOT_LIMIT=15" >> $ONE_LOCATION/remotes/addon-storpoolrc
Per SYSTEM datastore
Set the DISKSNAPSHOT_LIMIT variable in the SYSTEM datastore template with the desired limit. The limit will be enforced on all VMs that are provisioned on the given SYSTEM datastore.
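For illustration, a minimal sketch of appending the limit to a SYSTEM datastore template with onedatastore update; the datastore ID (0) and the limit value are placeholders:
# append the limit to the SYSTEM datastore template
# (the datastore ID 0 and the value 15 are placeholders)
echo 'DISKSNAPSHOT_LIMIT = "15"' > /tmp/ds-disksnapshot-limit.txt
onedatastore update --append 0 /tmp/ds-disksnapshot-limit.txt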
Per VM
Note: Only members of the oneadmin group can alter this variable.
Open the VM's Info tab and add to the Attributes:
DISKSNAPSHOT_LIMIT=15
Re-configure the ‘VM snapshot’ to do atomic disk snapshots
Append the following string to the VM_MAD’s ARGUMENTS:
-l snapshotcreate=snapshot_create-storpool,snapshotrevert=snapshot_revert-storpool,snapshotdelete=snapshot_delete-storpool
Edit /etc/one/oned.conf and set KEEP_SNAPSHOTS=YES in the VM_MAD section:
VM_MAD = [ ... KEEP_SNAPSHOTS = "yes", ... ]
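For reference, a minimal sketch of how the relevant parts of the VM_MAD section might look after both changes; the other arguments are abbreviated with "..." and depend on the existing configuration:
VM_MAD = [
    ...
    ARGUMENTS = "... -l snapshotcreate=snapshot_create-storpool,snapshotrevert=snapshot_revert-storpool,snapshotdelete=snapshot_delete-storpool",
    KEEP_SNAPSHOTS = "yes",
    ...
]
# restart oned to refresh the configuration
systemctl restart opennebula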
It is possible to do fsfreeze/fsthaw while the VM disks are being snapshotted:
ONE_LOCATION=/var/lib/one
echo "VMSNAPSHOT_FSFREEZE=1" >> $ONE_LOCATION/remotes/addon-storpoolrc
VM snapshot limits
To enable VM snapshot limits:
VMSNAPSHOT_ENABLE_LIMIT=1
The snapshot limit can be set globally in addon-storpoolrc:
VMSNAPSHOT_LIMIT=10
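A minimal sketch of setting both variables globally, following the same addon-storpoolrc pattern used above:
ONE_LOCATION=/var/lib/one
echo "VMSNAPSHOT_ENABLE_LIMIT=1" >> $ONE_LOCATION/remotes/addon-storpoolrc
echo "VMSNAPSHOT_LIMIT=10" >> $ONE_LOCATION/remotes/addon-storpoolrc
# propagate the changes to the hosts
su - oneadmin -c "onehost sync --force"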
It is possible to override the globally defined values in the SYSTEM datastore configuration.
Also, it is possible to override them per VM by defining the variables in the VM's USER_TEMPLATE (the attributes at the bottom of Sunstone's "VM Info" tab). In this case, it is possible (and recommended) to restrict the variables to be managed only by members of the 'oneadmin' group by setting in /etc/one/oned.conf:
cat >>/etc/one/oned.conf <<EOF
VM_RESTRICTED_ATTR = "VMSNAPSHOT_LIMIT"
VM_RESTRICTED_ATTR = "DISKSNAPSHOT_LIMIT"
EOF
# restart oned to refresh the configuration
systemctl restart opennebula
Send volume snapshot to a remote StorPool cluster when deleting a volume
There is an option to send the last snapshot to a remote StorPool cluster when deleting a disk image. The configuration is as follows:
REMOTE_BACKUP_DELETE=<remote_location>:<optional_arguments>
remote_location - the name of the remote StorPool cluster to send the volume snapshot to
optional_arguments - extra arguments
For example, to send the volume snapshot to a remote cluster named clusterB and tag the volumes with del=y, set:
REMOTE_BACKUP_DELETE="clusterB:tag del=y"
Monitoring space usage
In OpenNebula there are three probes for datastore-related monitoring. The IMAGE datastore is monitored from the front-end, and the SYSTEM datastore and the VM disks (and their snapshots) are monitored from the hosts. As this addon does not allow direct access to the StorPool API from the hosts, the needed data must be provided to the related probes. This is done by the monitor_helper-sync script, which is run via a cron job on the front-end. The script collects the needed data and propagates it (if needed) to the hosts. Here is the default configuration:
# path to the json files on the hosts
# (must be writable by the oneadmin user)
SP_JSON_PATH="/tmp/"
# Path to the json files on the front-end
# (must be writable by the oneadmin user)
SP_FE_JSON_PATH="/var/cache/addon-storpool-monitor"
# datastore stats JSON and the command that generates it
SP_TEMPLATE_STATUS_JSON="storpool_template_status.json"
SP_TEMPLATE_STATUS_RUN="storpool -j template status"
# VM disk stats JSON and the command that generates it
SP_VOLUME_SPACE_JSON="storpool_volume_usedSpace.json"
SP_VOLUME_SPACE_RUN="storpool -j volume usedSpace"
# VM disk snapshot stats JSON and the command that generates it
SP_SNAPSHOT_SPACE_JSON="storpool_snapshot_space.json"
SP_SNAPSHOT_SPACE_RUN="storpool -j snapshot space"
# Copy (scp) the JSON files to the remote hosts
MONITOR_SYNC_REMOTE="YES"
# Do template propagation. When there is a change in the template attributes,
# propagate the change to all current volumes based on the template
SP_TEMPLATE_PROPAGATE="YES"
# Uncomment to enable debugging
#DEBUG_MONITOR_SYNC=1
When there is a shared filesystem between the one-fe and the HV nodes, there is no need to copy the JSON files. In this case, the SP_JSON_PATH variable must be altered to point to a shared folder, and MONITOR_SYNC_REMOTE must be set to NO. The configuration change can be made in the /var/lib/one/remotes/addon-storpoolrc or /var/lib/one/remotes/monitor_helper-syncrc configuration file.
Note: The shared filesystem must be mounted on the same system path on all nodes as on the one-fe!
For example, if the shared filesystem is mounted on /sharedfs:
cat >>/var/lib/one/remotes/monitor_helper-syncrc <<EOF
# datastores stats on the sharedfs
export SP_JSON_PATH="/sharedfs"
# disabled remote sync
export MONITOR_SYNC_REMOTE="NO"
EOF
Multiverse - using multiple OpenNebula instances on a single StorPool cluster
For each OpenNebula instance, set a different string in the ONE_PX variable.
It is possible to change the instance tag name (nloc) by setting the LOC_TAG variable.
The ${LOC_TAG} tag value is set using the LOC_TAG_VAL variable, where the default is LOC_TAG_VAL=${ONE_PX}.
It is possible to change the default volume tag name (nvm) by setting the VM_TAG variable.
The ${VM_TAG} tag value has the format ${VM_ID}.
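For illustration, a minimal sketch of giving one OpenNebula instance its own prefix; the value "one-a" is a placeholder and the same pattern applies to each instance:
# set a unique prefix for this OpenNebula instance
# (the value "one-a" is a placeholder)
echo 'ONE_PX="one-a"' >> /var/lib/one/remotes/addon-storpoolrc
# propagate the changes to the hosts
su - oneadmin -c "onehost sync --force"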
Delayed delete of VM disks
When a VM is terminated, a non-persistent image is detached, or a VM backup is restored, the corresponding StorPool volume is deleted. To prevent accidental volume deletion (wrong detach/termination), there is an option to rename the volume to an “anonymous” snapshot and mark it for deletion by StorPool after a given time.
# For example to set the delay to 12 hours
echo 'DELAY_DELETE="12h"' >> /var/lib/one/remotes/addon-storpoolrc
# and propagate the changes
su - oneadmin -c "onehost sync --force"
Notes:
There is no configuration option to disable DELAY_DELETE, but defining a relatively small time, like DELAY_DELETE=1s, should work as an alternative.
If the data needs to be recovered immediately, contact StorPool support for assistance.
More information
For details about the StorPool VolumeCare policy tags, see VolumeCare.
For details about the StorPool QoS Class tags, see QoS class configuration.
For details about VM domain XML tweaking, see Deploy tweaks.