Release notes

StorPool 21.0

21.0 revision 21.0.1041.42d32fec3

Released 22 Nov 2024

  • Disable needrestart for all StorPool services in Ubuntu 24.04 LTS so that it no longer restarts services upon a new installation.

  • Added a fix for a disk aggregation control bug in storpool_server that could lead to increased latency and degraded performance.

21.0 revision 21.0.971.f7fba2495

Released 23 Oct 2024

  • Introduced a new set of lower default network event timeouts that reduces the maximum latency outliers in some networks, presumably during periods of switch buffer congestion or overload.

  • Added a fix that speeds up rebalancing in EC clusters with irreversibly failed disks or nodes.

  • Added a more memory-efficient logging facility, which should no longer trigger alerts for the storpool_mgmt memory cgroup when many logs are cached.

  • Added a fix in storpool_iscsi to correctly handle netmasks that are not /24, /16, or /8 networks.
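
    The general principle behind such a fix can be illustrated with standard library code (this is not StorPool code): subnet membership must be computed from the full prefix length rather than assuming octet-aligned boundaries.

```python
# Illustration only: deciding whether an initiator address falls inside
# a portal network must honor arbitrary prefix lengths, not just the
# octet-aligned /8, /16, and /24 cases.
import ipaddress

def in_network(addr: str, network: str) -> bool:
    """Return True if addr belongs to network, for any prefix length."""
    return ipaddress.ip_address(addr) in ipaddress.ip_network(network, strict=False)

print(in_network("10.1.2.3", "10.1.2.0/23"))   # True: /23 spans 10.1.2.0-10.1.3.255
print(in_network("10.1.4.3", "10.1.2.0/23"))   # False: outside the /23 range
```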

  • Added fixes in multiple places in the upgrade tooling to fail early, preventing the same node from being updated more than once.

  • Added a fix to properly rotate the logs from the storpool_volumecare service.

21.0 revision 21.0.956.a36a6ff17

Released 12 Aug 2024

  • Updated the abandon-down-drives tool, part of the procedure for bringing a cluster back into running state faster after any of the following events:

    • An irreversibly failed drive.

    • A power-loss event.

  • Added a fix for the storpool_havm service to keep a VM running until a stable API is available again.

  • Updated VolumeCare to version 1.29.3; for details, see Change history.

  • Added improvements in the storpool_server service:

    • Added a fix for a stuck read from an erasure-coded (EC) 8+2 volume with disks missing on more than one node.

    • Added a fix for a complicated case of aggregating entries with interleaved trims and data.

    • Introduced the ability to delete and dematerialize newly created snapshots with a down disk.

    • Introduced fixes for slow or stuck rebalance operations in EC clusters during redundancy restore.

    • Added a fix for a crash during initial EC conversion.

  • Added a fix in storpool_initdisk to recognize disks whose final 4 KiB has been overwritten.

21.0 revision 21.0.841.983f5880c

Released 12 Aug 2024

  • storpool_mgmt: added a fix for a crash when creating a volume from a snapshot where the volume is larger than the snapshot.

  • storpool_mgmt: added configurable space aggregation algorithm parameters to lessen the latency impact from space aggregation when a large amount of data is deleted from a disk (details).

  • storpool_bd: added the SP_BD_NOSLEEP option to disable push-back on a full IO queue.
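
    As a sketch only, StorPool options of this kind are typically set as key/value pairs in the node's StorPool configuration; the exact value syntax and placement below are assumptions, so consult StorPool support before applying:

```ini
# Hypothetical /etc/storpool.conf fragment (value syntax assumed):
# disable push-back (sleeping) when the block device IO queue is full
SP_BD_NOSLEEP=1
```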

21.0 revision 21.0.809.72cdd84cd

Released 08 Jul 2024

  • Added support for Ubuntu 24.04 LTS.

  • Added support for the 6.8 line of kernels under Proxmox Virtual Environment 8.

  • storpool_mgmt: better handling of relocation/balancing in EC clusters.

21.0 revision 21.0.691.da4ac3daf

Released 26 Jun 2024

  • Performance improvements with mlx5_core-based NICs with lower interface speeds (2x10GE) on Linux kernels 5.15 and later.

  • VolumeCare update to version 1.29.2; for details, see Change history.

  • storpool_mgmt: handle a case with hanging reads in multicluster for snapshots that are being recovered from a remote (backup) cluster.

  • storpool_block, storpool_iscsi: fix a crash when the queue is full and there are requests for an idle volume.

21.0 revision 21.0.670.8737a7719

Released 07 Jun 2024

  • Disable VF creation on Mellanox ConnectX-6 Lx/Dx NICs with recent firmware due to multicast storm issues.

  • Blacklist the acpi_pad module, which sometimes results in IO stalls of the storage system that are very difficult to debug.
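
    For context, the standard Linux mechanism for blacklisting a module is a modprobe.d fragment like the one below; StorPool's own tooling may apply this differently, and the file name is illustrative:

```ini
# Hypothetical /etc/modprobe.d/blacklist-acpi_pad.conf
# "blacklist" stops automatic loading by alias;
# the "install" override also defeats explicit modprobe requests.
blacklist acpi_pad
install acpi_pad /bin/false
```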

21.0 revision 21.0.647.9a4154290

Released 17 May 2024

  • Fix in storpool_mgmt that was showing a lower-than-actual stored size for volumes and snapshots.

  • Fix in storpool_iscsi to work correctly with TCP Window Scaling = 1 (the default in Ubuntu 24.04).
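
    The setting referenced here is presumably the kernel's TCP window scaling sysctl; a minimal /etc/sysctl.d-style fragment showing the Ubuntu 24.04 default would look like this (illustrative only):

```ini
# TCP window scaling (RFC 1323/7323); "1" (enabled) is the default
# on modern Linux distributions, including Ubuntu 24.04.
net.ipv4.tcp_window_scaling = 1
```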

  • Fix in relocator to not overflow disks in erasure-coded clusters during rebalance.

21.0 revision 21.0.606.609fab857

Released 02 May 2024

  • Fix in storpool_mgmt to not falsely deplete objects on down disks.

21.0 revision 21.0.576.6d56902e2

Released 23 Apr 2024

  • Erasure Coding:

    • Improved relocator operation with erasure coding when re-balancing out ejected disks.

    • Improvements in maintenance set logic while there are erasure coding tasks in progress.

  • Added support for the WD Ultrastar DC SN650 series drives.

  • Updated the storpool_cg cgroups configuration tool to keep the last successful configuration in a continuously and automatically updated configuration file. The update also removes support for the -D / --dump-configfile option for writing the configuration to a file. For details, see Configuration options and parameters.

  • Added the -E/--expect option for storpool_cg print; it can be used for printing the expected memory usage. For details, see Verifying machine’s cgroups state and configurations.

21.0 revision 21.0.359.dfeb4244c

Released 05 Apr 2024

  • Fix for storpool_iscsi causing sessions to reconnect upon creating a snapshot.

  • VolumeCare update to version 1.29 (more info at Change history).

21.0 revision 21.0.330.28fa27b64

Released 07 Mar 2024

  • Reduced memory requirements for the storpool_iscsi service.

  • The storpool_beacon service no longer supports compatibility with releases earlier than 20.0.

  • Removed hardware acceleration support for bnx2x based network interface controllers.

  • Improved latency of operations in multi-cluster environments while there are active transfers.

21.0 revision 21.0.318.051ec1d80

Released 20 Feb 2024

  • storpool_mgmt: handle some cases of stale snapshots during multicluster migrations.

  • Performance improvements with storpool_iscsi and faster network (25+ GE).

  • Added support for the 6.5 line of kernels.

21.0 revision 21.0.288.e52a50c1a

Released 28 Jan 2024

  • Fix for a hang in the mlx5 driver due to queue overflow.

  • The update tooling now gets all services back up even if the command is interrupted.

21.0 revision 21.0.266.455001e21

Released 16 Jan 2024

  • Moved some network tuning sysctls to be activated only on bridge nodes.

  • The VolumeFreeze API call is being deprecated; until the deprecation is completed, the storpool CLI will show deprecation warnings.

  • Fix for mlx5_core-based NICs for hanging traffic due to incorrect signaling handling.

  • Added support for SK Hynix PE8110 E1.S series NVMe devices.

  • Improved the storpool_tree tool. For details, see StorPool tree.

21.0 revision 21.0.242.e4067e0e4

Released 04 Jan 2024

  • Hardware acceleration related fixes for the mlx5 driver with an older CentOS 7 kernel.

  • VolumeCare update to version 1.27.1 (more info at Change history).

  • New balancer options for better handling of degraded states: --ignore-down-disks and --empty-down-disks. For details, see Options.

  • New helper tool storpool_capacity_planner for better planning of hardware upgrades for different erasure coding schemes. For details, see StorPool capacity planner.

21.0 revision 21.0.216.701e0f8e0

Released 15 Dec 2023

  • Erasure Coding:

    • Performance and stability fixes with very large volumes.

    • storpool_mgmt: fix for handling rebalancing of a cluster with active erasure coding and a forgotten disk.

  • VolumeCare update to version 1.27 (more info at Change history).

21.0 revision 21.0.94.f7de41582

Released 25 Oct 2023

  • Improved API responsiveness during high load while performing remote recovery into a local hybrid pool.

21.0 revision 21.0.75.1e0880427

Released 13 Oct 2023

  • Introduced new feature: Erasure Coding

    StorPool introduces a new redundancy mechanism called erasure coding, which can be applied on systems based on NVMe drives. It can be used as a replacement for the standard replication mechanism offered by StorPool, reducing the overhead and the amount of hardware needed to store data reliably.

    Migration to an erasure-coded scheme happens online without interruption in the service, and is handled by the StorPool support team for the initial conversion of a running cluster.

    For more information on using erasure coding and the advantages it provides, see Erasure Coding.
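
    The overhead reduction can be sketched with generic arithmetic (this is not StorPool's capacity planner): a d+p erasure-coded scheme stores (d + p) / d bytes of raw capacity per byte of user data, versus a factor of k for k-way replication. The 8+2 scheme mentioned elsewhere in these notes is used as the example.

```python
# Back-of-the-envelope raw-capacity overhead for redundancy schemes.

def ec_overhead(data: int, parity: int) -> float:
    """Raw bytes stored per byte of user data for a data+parity EC scheme."""
    return (data + parity) / data

def replication_overhead(copies: int) -> float:
    """Raw bytes stored per byte of user data for k-way replication."""
    return float(copies)

print(replication_overhead(3))  # 3.0  -> triple replication needs 3x raw capacity
print(ec_overhead(8, 2))        # 1.25 -> EC 8+2 needs 1.25x, ~42% of the raw space
```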

  • Added support for Debian 12 and Proxmox Virtual Environment 8.

  • All services local to a node are now able to detect whether the node’s storpool_beacon service is hung.

  • Introduced an additional check in the update_rdma tool that the kernel modules to be reloaded exist for the running kernel.

  • Prevented StorPool services from using UDP port 27489, to avoid collisions with other services running on the node or in the storage network.

  • Added support for the Micron 9400 PRO.

  • Reduced the amount of logging from all StorPool services.

Previous releases