StorPool 20.0 release notes

20.0 revision 20.0.1095.734a81b7e

Released 02 Oct 2023

  • Additional collection of volume status (quickStatus) data for all volumes and snapshots in the cluster, for more comprehensive alerts.

20.0 revision 20.0.1032.aeda18feb

Released 13 Sep 2023

  • Adds support for recent Intel Optane devices.

20.0 revision 20.0.1005.c221f1691

Released 04 Sep 2023

  • Limited the storpool_logd default cached memory usage to 100 MiB.

  • Adds support for Kingston DC1500M.

20.0 revision 20.0.987.e0aa2a0f7

Released 28 Aug 2023

  • Adds storpool_qos, a service for updating per-volume and per-snapshot QoS settings.

    The new service tracks and configures storage tiers for all volumes and snapshots, based either on updates to their template, or on a dedicated qc tag specifying the system storage tier required by the orchestration for a particular volume or snapshot. For more information, see Quality of service.
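As an illustration of tag-driven tier selection, the sketch below builds a VolumeUpdate API request that sets the qc tag on a volume. This is an assumption-heavy sketch based on general StorPool HTTP API conventions, not the storpool_qos implementation; the endpoint path and payload shape should be checked against the API reference.

```python
import json

# Illustrative only: build the path and JSON body for a VolumeUpdate call
# that sets the "qc" tag used for storage-tier selection. The endpoint path
# is an assumption; consult the StorPool API reference before use.
def qos_tag_request(volume_name, tier):
    path = "/ctrl/1.0/VolumeUpdate/" + volume_name
    body = json.dumps({"tags": {"qc": tier}})
    return path, body
```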

20.0 revision 20.0.953.b098a1c1a

Released 17 Aug 2023

  • Fix for an issue in storpool_server that could cause a larger number of aggregations than necessary with some peculiar workloads.

  • Fix for the storpool_q internal API helper tool so that it can operate in environments with an HTTP proxy configured.

20.0 revision 20.0.920.e06a6829e

Released 10 Aug 2023

  • storpool_server - fix for a duplicate object left behind on snapshot delete, which caused the service to crash when a disk containing such an object was inserted.

20.0 revision 20.0.843.07f4dfc11

Released 27 Jul 2023

  • Adds support for the Micron 7450 MAX NVMe drive.

20.0 revision 20.0.787.54d165f71

Released 11 Jul 2023

  • Set OOM killer score adjustments for StorPool processes, to prevent the OOM killer from stopping the client during an OOM event in the storpool.slice memory cgroup.

  • storpool_iscsi:

    • Fix for a crash when a SCSI COMPARE AND WRITE operation arrives on a full queue.

    • Fix for a crash when a persistent reservations READ FULL STATUS command is issued and no persistent reservations have ever been set.
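The OOM protection mentioned above relies on the kernel's per-process score adjustment; here is a minimal sketch using the standard /proc interface (the actual values and service names StorPool uses are not shown here):

```python
import os

# Illustrative sketch: adjust a process's OOM "badness" via
# /proc/<pid>/oom_score_adj. -1000 disables OOM-killing for the process
# entirely; lowering the value requires CAP_SYS_RESOURCE.
def set_oom_score_adj(pid, score):
    # The kernel accepts values in [-1000, 1000].
    if not -1000 <= score <= 1000:
        raise ValueError("score must be within [-1000, 1000]")
    with open("/proc/%d/oom_score_adj" % pid, "w") as f:
        f.write(str(score))
```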

20.0 revision 20.0.768.9d77ff11e

Released 29 Jun 2023

  • storpool_server: disks now join the cluster significantly faster due to parallelized initialization.

20.0 revision 20.0.520.e3fc57b76

Released 07 Jun 2023

  • storpool_stat now gracefully handles error exit codes from inventory-collecting processes.

  • storpool_volumecare now postpones migrating the local configuration until the cluster upgrade is complete.

20.0 revision 20.0.508.e2623b204

Released 26 May 2023

  • Initial support for the Debian 11 and Proxmox operating systems.

  • storpool_stat now also sends the list of all kernels installed on the node, in order to trigger alerts for known-bad kernel versions.

20.0 revision 20.0.487.818a7cea4

Released 08 May 2023

  • Updated disk_init_helper to properly handle multipath-enabled NVMe devices.

  • Revised net_helper to generate a random MAC address for bond and bridge interfaces in all bonding modes. This resolves an issue where bridge interfaces would have a different MAC address than the underlying bond, causing local (intra-node) communication between services to fail.

  • The vf-genconf tool now creates a single VF interface for all mlx5_core-based NICs by default, so that a dummy interface is no longer required for bond-based configurations.
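Generating a random MAC for a bond or bridge interface, as net_helper does, amounts to producing a locally administered, unicast address. A minimal sketch (the actual generation logic in net_helper may differ):

```python
import random

# Illustrative sketch: generate a random locally administered, unicast MAC.
# Setting bit 0x02 of the first octet marks the address as locally
# administered; clearing bit 0x01 keeps it unicast.
def random_mac(rng=random):
    octets = [rng.randrange(256) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join("%02x" % o for o in octets)
```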

20.0 revision 20.0.473.ad8854a34

Released 28 Apr 2023

  • Fix for storpool_server so that it exits the cluster before waiting for all operations on ejected drives to complete, when all drives are ejected simultaneously.

20.0 revision 20.0.466.2a2e26fd9

Released 21 Apr 2023

  • storpool_volumecare is now a highly available service running on each API node

    The storpool_volumecare service now runs in high-availability mode on all nodes in the cluster that have the storpool_mgmt service installed, and processes operations on the currently active management node.

    In previous releases, the storpool_volumecare service was running on only one node in the cluster. In some cases, this resulted in the service being offline when a whole node went down, until the node returned or the service was manually migrated to a new node. This was usually acceptable, since most clusters rarely change their API nodes, so having to move the service to another node was a rare occurrence.

    Starting with this release, the storpool_volumecare service will be automatically installed on all API nodes in the cluster, and will migrate automatically to the active API node, thus making sure the service is active at all times.

    The volumecare installation module is no longer available, and the service gets installed on all nodes running the storpool_mgmt service (API). The service is kept running automatically on the presently active API node; for more information, see 6.3.2. Address for API management.

    The migration from the old configuration file is done automatically on the first start of the new version of the service. The existing configuration file will be migrated into the assigned key-value store.

    For more information about this service, see VolumeCare.

  • The SP_BRIDGE_TEMPLATE configuration option is now superseded by the mgmtConfig backupTemplateName option (see the CLI tutorial).

  • The storpool_stat service now collects only specific sysctl values instead of all of them, because reading some sysctls (such as vm.stat_refresh) triggers operations as a side effect.
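The side-effect problem above can be illustrated with a small collector sketch. The denylist below is illustrative and deliberately incomplete; vm.stat_refresh is a real example of a sysctl whose read triggers kernel work:

```python
# Illustrative sketch: read sysctl values via /proc/sys, refusing sysctls
# known to trigger kernel work when read. Not StorPool's actual collector.
SIDE_EFFECT_SYSCTLS = {"vm.stat_refresh"}  # example entry, not a complete list

def read_sysctl(name):
    if name in SIDE_EFFECT_SYSCTLS:
        raise ValueError("refusing to read side-effect sysctl: " + name)
    path = "/proc/sys/" + name.replace(".", "/")
    with open(path) as f:
        return f.read().strip()
```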

20.0 revision 20.0.441.61c5774c8

Released 08 Apr 2023

  • Fixed storpool_stat crashing if a volume disappears while stats are being collected for it.

  • New scrubbing procedure executed through cron, with a lower number of drives per run, for less impact on cluster operations.

  • New Management menu in the CLI, initially showing the progress of active API tasks.

20.0 revision 20.0.419.261c0e3b2

Released 18 Mar 2023

  • Adds support for the Micron 7450 PRO NVMe drive.

  • storpool_ctl now leaves storpool_nvmed running upon stop when the --expose-nvme option is not provided.

  • Added the ability to configure a custom SSH port for the storpool_havm service.

  • Fixed a CLI warning when reporting per-GiB IOPS or bandwidth limits.

20.0 revision 20.0.386.8ea42a7d8

Released 08 Mar 2023

  • New option for storpool_ctl to show only services that are not running or not enabled.

  • The output of storpool attach list CLI command now shows volume and snapshot tags.

20.0 revision 20.0.372.4cd1679db

Released 24 Feb 2023

20.0 revision 20.0.353.e5f7ee9c0

Released 16 Feb 2023

  • Fix for storpool_volumecare printing erroneous output on stdout, breaking JSON output.

  • Added an API option to control whether a volume and its snapshots are moved to the destination sub-cluster upon a multicluster attach operation.

20.0 revision 20.0.334.70c4fbda5

Released 03 Feb 2023

  • Fix for random read performance through storpool_bridge when a large maxRemoteRecoverRequests value is configured.

20.0 revision 20.0.325.7d16d036f

Released 01 Feb 2023

  • storpool_stat now gathers additional data about the installed package versions of the OpenNebula and CloudStack orchestrations.

  • Increased the maximum number of NVMe drives for a single node from 16 to 64.

20.0 revision 20.0.276.b05695816

Released 11 Jan 2023

  • The balancer tool now requires one of -F, -A or -R options to proceed, see Rebalancing StorPool in the User Guide.

  • Removed support for Debian 9.

  • The storpool_nvmed service now waits for all NVMe devices to be initialized by the OS, which fixes an issue with NVMe drives not coming back into the cluster after a reboot.

  • MetaV1 compatibility is removed; the minimum version supported for upgrade is now 19.01.2995.15aa353e8.

  • Increased the gratuitous ARP interval in storpool_iscsi to work around broken behavior in XenServer 7.x.
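The minimum-version requirement above can be expressed as a simple check; this is an illustrative sketch, not StorPool's actual upgrade validation. It compares only the numeric components of a revision string and ignores the trailing commit hash:

```python
# Illustrative sketch: check a revision string such as "19.01.2995.15aa353e8"
# against the minimum upgradable revision by its numeric components.
MIN_REVISION = (19, 1, 2995)

def upgrade_supported(revision):
    parts = revision.split(".")
    numeric = tuple(int(p) for p in parts[:3])  # major, minor, build
    return numeric >= MIN_REVISION
```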

20.0 revision 20.0.205.c5cbaeb49

Released 19 Dec 2022

  • storpool_mgmt - fix for a crash on an attempt to soft-eject a disk with a multicluster-attached volume.

20.0 revision

Released 12 Dec 2022

  • Adds support for Samsung PM1733 NVMe.

  • Fix for ARP resolution on the backup interface in the iSCSI multipath case with a missing cross-switch link.

  • Gathering additional data in storpool_stat:

    • status of iSCSI controller connections.

    • kdump service status.

    • the current kernel command line parameters.

    • the current sysctl configuration.

    • status of all systemd units.

20.0 revision

Released 30 Nov 2022

  • storpool_iscsi:

    • scalability improvements with 1000+ exports.

    • decreased failover times.

  • Automatically detect NVMe devices with StorPool signatures (removes the SP_NVME_PCI_ID configuration option).

  • Removed support for pre-Zen AMD processors (the amdfam10 architecture).

  • Fix for the storpool_vcctl tool failing with the status volume command.

  • Fix in storpool_server for latency induced during entry aggregation.

20.0 revision

Released 20 Oct 2022

  • Adds support for Dell-branded Samsung PM1725 and XS1715 NVMe devices.

  • Add handling for non-null-terminated NVMe serial and product numbers.

  • New default latency threshold values (more at Disk and journal performance tracking).
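The serial- and product-number handling noted above stems from NVMe Identify fields being fixed-size, space-padded ASCII with no NUL terminator, so they must be decoded by length rather than scanned for a terminator. A minimal sketch:

```python
# Illustrative sketch: decode a fixed-width NVMe ASCII field (e.g. the
# 20-byte serial number). Per the spec such fields are space-padded and
# not NUL-terminated; stray NUL bytes from some devices are tolerated too.
def decode_nvme_ascii(field: bytes) -> str:
    return field.rstrip(b" \x00").decode("ascii")
```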

20.0 revision 20.0.38.c5178becb

Released 03 Oct 2022

  • storpool_server now detects new trim operations that only cover ranges already being trimmed, and completes them immediately.
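The detection amounts to a range-coverage check; the sketch below is illustrative logic, not the actual storpool_server implementation. A new trim whose range is fully covered by pending trims has no new work to do:

```python
# Illustrative sketch: return True if the half-open range new_trim is fully
# covered by the union of the half-open ranges in old_trims.
def covered_by(new_trim, old_trims):
    start, end = new_trim
    for old_start, old_end in sorted(old_trims):
        if old_start > start:
            return False  # gap: part of the new trim is not covered
        start = max(start, old_end)
        if start >= end:
            return True
    return start >= end
```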

20.0 revision

Released 08 Sep 2022

  • storpool_volumecare - significant performance speedup in large backup clusters.

  • storpool_havm - highly available virtual machine service helper; for details, see 9.13. storpool_havm.

  • storpool_mgmt - fix for a crash on an attempt to rebase a volume or a snapshot to an already destroyed snapshot.

Previous release

19.4 revision 19.01.3152.2b7de29c0