Installing the StorPool Proxmox integration
Install the StorPool storage plugin
Perform these steps on all the Proxmox VE hosts that need to access StorPool-backed volumes and snapshots:
Make sure the StorPool client (the storpool_block service) is installed on the Proxmox host.

Point Apt at the StorPool Debian package repository backports suite for your release. For example, this can be bookworm-backports if you are running Proxmox VE 8 (based on bookworm). This is what your /etc/apt/sources.list.d/storpool-backports.sources file would look like:

Types: deb deb-src
URIs: https://repo.storpool.com/public/contrib/debian/
Suites: bookworm-backports
Components: main
Signed-By: /usr/share/keyrings/storpool-keyring.gpg
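Instead of hard-coding the suite, the stanza can be generated for the host's own release. The helper below is a sketch of ours (not a StorPool tool), assuming VERSION_CODENAME as provided by /etc/os-release on Debian-based systems:

```shell
#!/bin/sh
# Sketch: emit the StorPool backports Apt stanza for a given Debian codename,
# so the same snippet works on any release-based PVE host. The repository
# URL and keyring path are copied from the example above; the function name
# is a hypothetical helper, not part of the StorPool packages.

storpool_sources() {
    codename="$1"   # e.g. "bookworm" for Proxmox VE 8
    cat <<EOF
Types: deb deb-src
URIs: https://repo.storpool.com/public/contrib/debian/
Suites: ${codename}-backports
Components: main
Signed-By: /usr/share/keyrings/storpool-keyring.gpg
EOF
}

# On a real host (as root):
#   . /etc/os-release
#   storpool_sources "$VERSION_CODENAME" > /etc/apt/sources.list.d/storpool-backports.sources
storpool_sources bookworm
```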
Install the pve-storpool package:

apt update
apt install pve-storpool
Some PVE services will be restarted, but running guests on the node will not be affected. If using PVE HA, you need to manually restart the pve-ha-lrm service:

systemctl restart pve-ha-lrm.service

This is a standard procedure also used in PVE updates, and it is not expected to affect HA resources in the cluster.
Upgrading the StorPool plugin
As with the installation, some services will be restarted, but no guests will be interrupted.
apt update
apt install --only-upgrade pve-storpool
systemctl restart pve-ha-lrm.service   # if using PVE HA
Upgrading StorPool watchdog
No guests will be interrupted.
apt update
apt install --only-upgrade pve-storpool-watchdog
systemctl restart sp-watchdog-mux.service
systemctl restart pve-ha-lrm.service   # if using PVE HA
Check the status of the StorPool and Proxmox installation
Make sure all of the following conditions are met:
The StorPool client (see storpool_block) is operational:
# systemctl status storpool_block.service
storpool_block.service - StorPool block device client service
   Loaded: loaded (/usr/lib/systemd/system/storpool_block.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2025-02-03 16:06:02 GMT; 22h ago
 Main PID: 2149 (storpool_block)
    Tasks: 3
   Memory: 0B
   CGroup: /storpool.slice/storpool-common.slice/storpool_block.service
           ├─2149 /usr/bin/perl /usr/sbin/storpool_block
           └─2391 /usr/sbin/storpool_block.bin -l -p /run/storpool_block.bin.pid -a b4ct.b -P /var/run/storpool -i 1 -b 0 -W 6
The StorPool configuration includes the API access variables:
# storpool_confshow -e SP_API_HTTP_HOST SP_API_HTTP_PORT SP_AUTH_TOKEN SP_OURID
SP_API_HTTP_HOST=10.9.8.7
SP_API_HTTP_PORT=81
SP_AUTH_TOKEN=1234567890987654321
SP_OURID=1
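A scripted form of this check can catch a missing or empty variable across many hosts. The function below is a sketch of ours, demonstrated against the sample output; on a live host, pipe the real storpool_confshow output in instead:

```shell
#!/bin/sh
# Sketch: fail if any of the four API access variables is missing or empty
# in VAR=value output read on stdin. The sample heredoc mirrors the
# storpool_confshow output shown above.

check_sp_api_vars() {
    awk -F= '
        $2 != "" { seen[$1] = 1 }
        END {
            n = split("SP_API_HTTP_HOST SP_API_HTTP_PORT SP_AUTH_TOKEN SP_OURID", want, " ")
            for (i = 1; i <= n; i++)
                if (!seen[want[i]]) { print "missing: " want[i]; exit 1 }
            print "all API variables present"
        }'
}

check_sp_api_vars <<'EOF'
SP_API_HTTP_HOST=10.9.8.7
SP_API_HTTP_PORT=81
SP_AUTH_TOKEN=1234567890987654321
SP_OURID=1
EOF
```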
The StorPool cluster sees this client as operational (see Services and Client):
storpool service list
storpool client status
The Proxmox cluster is operational and has a sensible name configured:
# pvesh get /cluster/status
┌──────────────┬───────────────┬─────────┬───────────────┬───────┬───────┬────────┬───────┬────────┬─────────┬─────────┐
│ id           │ name          │ type    │ ip            │ level │ local │ nodeid │ nodes │ online │ quorate │ version │
╞══════════════╪═══════════════╪═════════╪═══════════════╪═══════╪═══════╪════════╪═══════╪════════╪═════════╪═════════╡
│ cluster      │ ABCcomp-Abc01 │ cluster │               │       │       │        │     5 │        │       1 │       5 │
├──────────────┼───────────────┼─────────┼───────────────┼───────┼───────┼────────┼───────┼────────┼─────────┼─────────┤
│ node/ab07-16 │ ab07-16       │ node    │ 172.17.11.101 │ p     │     1 │      5 │       │      1 │         │         │
├──────────────┼───────────────┼─────────┼───────────────┼───────┼───────┼────────┼───────┼────────┼─────────┼─────────┤
│ node/ab08-05 │ ab08-05       │ node    │ 172.17.11.102 │ p     │     0 │      4 │       │      1 │         │         │
├──────────────┼───────────────┼─────────┼───────────────┼───────┼───────┼────────┼───────┼────────┼─────────┼─────────┤
│ node/ab08-15 │ ab08-15       │ node    │ 172.17.11.103 │ p     │     0 │      3 │       │      1 │         │         │
├──────────────┼───────────────┼─────────┼───────────────┼───────┼───────┼────────┼───────┼────────┼─────────┼─────────┤
│ node/px09-04 │ px09-04       │ node    │ 172.17.11.104 │ p     │     0 │      2 │       │      1 │         │         │
├──────────────┼───────────────┼─────────┼───────────────┼───────┼───────┼────────┼───────┼────────┼─────────┼─────────┤
│ node/px09-05 │ px09-05       │ node    │ 172.17.11.105 │ p     │     0 │      1 │       │      1 │         │         │
└──────────────┴───────────────┴─────────┴───────────────┴───────┴───────┴────────┴───────┴────────┴─────────┴─────────┘
# pvesh get /cluster/status -output-format json | jq -r '.[] | select(.id == "cluster") | .name' ABCcomp-Abc01
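The quorum flag can be extracted the same way as the name. The snippet below is a sketch run against a trimmed sample of the JSON (an assumption of ours for illustration); on a real cluster, substitute the live pvesh command:

```shell
#!/bin/sh
# Sketch: pull the cluster name and quorum flag out of the JSON form of
# /cluster/status. The sample document below is a trimmed, hypothetical
# stand-in for what `pvesh get /cluster/status -output-format json` returns.
status_json='[
  {"id": "cluster", "type": "cluster", "name": "ABCcomp-Abc01", "quorate": 1, "nodes": 5},
  {"id": "node/ab07-16", "type": "node", "name": "ab07-16", "online": 1}
]'

name=$(printf '%s' "$status_json" | jq -r '.[] | select(.id == "cluster") | .name')
quorate=$(printf '%s' "$status_json" | jq -r '.[] | select(.id == "cluster") | .quorate')

echo "cluster=$name quorate=$quorate"
```

A quorate value of 1 (as in the sample) is what you want to see before creating storage entries.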
Create a StorPool-backed Proxmox VE storage
Note
This part may be partly automated by a command-line helper tool.
Choose a StorPool template to use for the storage entry (see Listing templates):
storpool template list

Create a storage entry. Using the -extra-tags parameter, you can specify tags to be added to every StorPool volume for this storage entry, e.g. for managing Quality of service.

pvesm add \
    'storpool' \
    'sp-nvme' \
    -shared true \
    -content 'images' \
    -extra-tags 'qc=tier0' \
    -template 'nvme'
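If you want one storage entry per StorPool template, the loop below sketches generating the pvesm add commands. The template names (nvme, hybrid) and the qc tag value are examples of ours, and the commands are printed rather than executed so they can be reviewed first:

```shell
#!/bin/sh
# Sketch: print one `pvesm add` command per StorPool template, prefixing
# the storage name with "sp-". Template names and the tag value are
# illustrative; review the printed commands before running them.

gen_storage_cmds() {
    for tmpl in "$@"; do
        printf 'pvesm add storpool sp-%s -shared true -content images -extra-tags qc=tier0 -template %s\n' \
            "$tmpl" "$tmpl"
    done
}

gen_storage_cmds nvme hybrid
```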
Note
The StorPool integration for Proxmox does not support storing the “iso” content type (installation ISOs) on StorPool.
Make sure Proxmox VE can query the status of the created storage:
# pvesm status
Name             Type     Status           Total            Used       Available        %
local             dir     active        71021728        10464916        56903388   14.73%
local-lvm     lvmthin     active       148082688               0       148082688    0.00%
sp-nvme      storpool     active      1762116080        41516864      1720599216    2.36%
zfs           zfspool     active     17824612352           30157     17824582194    0.00%
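When the plugin is rolled out to many hosts, this check can be scripted. The function below is a sketch of ours, demonstrated on a sample of the table; on a host, pipe the real pvesm status output in:

```shell
#!/bin/sh
# Sketch: exit 0 only if the named storage appears as an active "storpool"
# entry in `pvesm status` output read on stdin. The heredoc sample mirrors
# the output shown above.

storage_active() {
    awk -v name="$1" '
        $1 == name && $2 == "storpool" && $3 == "active" { found = 1 }
        END { exit !found }'
}

if storage_active sp-nvme <<'EOF'
Name             Type     Status           Total            Used       Available        %
local             dir     active        71021728        10464916        56903388   14.73%
sp-nvme      storpool     active      1762116080        41516864      1720599216    2.36%
EOF
then
    echo "sp-nvme is active"
fi
```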
Enable StorPool’s HCI HA watchdog
When you enable the StorPool HA watchdog replacement, the host is put into maintenance mode in the Proxmox VE cluster, which migrates all its HA resources to other machines. After the watchdog service has been replaced, the host is returned to online mode.
Note
Replacement must be done one host at a time!
Install the pve-storpool-watchdog package:

apt update
apt install pve-storpool-watchdog
To enable the StorPool watchdog replacement on a host:
/opt/storpool/pve/set-pve-watchdog storpool
You can also revert to the default PVE watchdog service if necessary:
/opt/storpool/pve/set-pve-watchdog pve
If the chosen watchdog service is already enabled, the script will exit early. You can verify the status of the StorPool and PVE watchdogs:
systemctl status sp-watchdog-mux.service
systemctl status watchdog-mux.service
The service that is currently disabled will be masked.
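The two service states can be folded into a single check. The function below is a sketch of ours; the systemctl invocation shown in the comment is the real one, but the states are passed in as arguments here so the logic can run standalone:

```shell
#!/bin/sh
# Sketch: report which watchdog mux is in use, given the is-active states
# of the two services. On a live host you would call:
#   watchdog_in_use "$(systemctl is-active sp-watchdog-mux.service)" \
#                   "$(systemctl is-active watchdog-mux.service)"

watchdog_in_use() {
    sp_state="$1"    # state of sp-watchdog-mux.service
    pve_state="$2"   # state of watchdog-mux.service
    if [ "$sp_state" = "active" ] && [ "$pve_state" != "active" ]; then
        echo storpool
    elif [ "$pve_state" = "active" ] && [ "$sp_state" != "active" ]; then
        echo pve
    else
        echo unknown
    fi
}

watchdog_in_use active inactive
```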