StorPool 21.0 Change Log

21.0 revision 21.0.647.9a4154290

Released 17 May 2024

  • Fixed storpool_mgmt reporting a lower-than-actual stored size for volumes and snapshots.

  • Fixed storpool_iscsi to work correctly with TCP window scaling enabled (TCP Window Scaling = 1, the default in Ubuntu 24.04).

  • Fixed the relocator so it no longer overfills disks in erasure-coded clusters during rebalancing.
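The storpool_iscsi fix above concerns the TCP window scaling kernel setting. One way to check whether it is enabled on a host is to read the sysctl from /proc; this is a minimal illustrative sketch, not a StorPool tool:

```python
from pathlib import Path

# The sysctl net.ipv4.tcp_window_scaling is exposed under /proc; "1" means
# TCP window scaling is enabled (the default on modern kernels, including
# Ubuntu 24.04). The path parameter exists only to make the helper testable.
def tcp_window_scaling_enabled(
    proc_path: str = "/proc/sys/net/ipv4/tcp_window_scaling",
) -> bool:
    return Path(proc_path).read_text().strip() == "1"
```

The same value can be inspected directly with `sysctl net.ipv4.tcp_window_scaling`.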

21.0 revision 21.0.606.609fab857

Released 02 May 2024

  • Fixed storpool_mgmt so it no longer falsely reports depleted objects on down disks.

21.0 revision 21.0.576.6d56902e2

Released 23 Apr 2024

  • Erasure Coding:

    • Improved relocator operation with erasure coding when rebalancing out ejected disks.

    • Improvements in maintenance set logic while there are erasure coding tasks in progress.

  • Added support for the WD Ultrastar DC SN650 series drives.

  • The storpool_cg cgroups configuration tool now keeps the last successful configuration in a continuously and automatically updated configuration file. This update also removes support for the -D / --dump-configfile option for writing the configuration to a file. For details, see Configuration options and parameters.

  • Added an -E/--expect option to storpool_cg print for displaying the expected memory usage. For details, see Verifying machine’s cgroups state and configurations.

21.0 revision 21.0.359.dfeb4244c

Released 05 Apr 2024

  • Fixed an issue where storpool_iscsi caused sessions to reconnect when a snapshot was created.

  • VolumeCare update to version 1.29 (more info at 9.  VolumeCare Changelog).

21.0 revision 21.0.330.28fa27b64

Released 07 Mar 2024

  • Reduced memory requirements for the storpool_iscsi service.

  • The storpool_beacon service no longer supports compatibility with releases earlier than 20.0.

  • Removed hardware acceleration support for bnx2x based network interface controllers.

  • Improved latency of operations in multi-cluster environments while there are active transfers.

21.0 revision 21.0.318.051ec1d80

Released 20 Feb 2024

  • storpool_mgmt: handled some cases of stale snapshots during multi-cluster migrations.

  • Performance improvements for storpool_iscsi on faster networks (25 GbE and above).

  • Added support for the 6.5 kernel line.

21.0 revision 21.0.288.e52a50c1a

Released 28 Jan 2024

  • Fixed a hang in the mlx5 driver caused by a queue overflow.

  • The update tooling now brings all services back up even if the command is interrupted.

21.0 revision 21.0.266.455001e21

Released 16 Jan 2024

  • Moved some network tuning sysctls so they are activated only on bridge nodes.

  • The VolumeFreeze API call is being deprecated; until the deprecation is completed, the storpool CLI will show deprecation warnings.

  • Fix for mlx5_core based NICs for hanging traffic due to incorrect signalization handling.

  • Added support for SK Hynix PE8110 E1.S series NVMe devices.

21.0 revision 21.0.242.e4067e0e4

Released 4 Jan 2024

  • Hardware acceleration related fixes for the mlx5 driver with an older CentOS 7 kernel.

  • VolumeCare update to version 1.27.1 (more info at 9.  VolumeCare Changelog).

  • New balancer options for better handling of degraded states: --ignore-down-disks and --empty-down-disks. For details, see 18.3.  Options.

  • New helper tool storpool_capacity_planner for better planning of hardware upgrades for different erasure coding schemes. For details, see StorPool Capacity Planner.

21.0 revision 21.0.216.701e0f8e0

Released 15 Dec 2023

  • Erasure Coding:

    • Performance and stability fixes with very large volumes.

    • storpool_mgmt: fixed rebalancing a cluster with active erasure coding when a disk has been forgotten.

  • VolumeCare update to version 1.27 (more info at 9.  VolumeCare Changelog).

21.0 revision 21.0.94.f7de41582

Released 25 Oct 2023

  • Improved API responsiveness during high load while performing remote recovery into a local hybrid pool.

21.0 revision 21.0.75.1e0880427

Released 13 Oct 2023

  • Introduced new feature: Erasure Coding

    StorPool introduces a new redundancy mechanism called erasure coding, which can be applied on systems based on NVMe drives. It can be used as a replacement for StorPool’s standard replication mechanism, reducing the overhead and the amount of hardware needed to store data reliably.

    Migration to an erasure-coded scheme happens online without interruption in the service, and is handled by the StorPool support team for the initial conversion of a running cluster.

    For more information on using erasure coding and the advantages it provides, see 14.2.  Erasure Coding.

  • Added support for Debian 12 and Proxmox Virtual Environment 8.

  • All services local to a node can now detect whether their storpool_beacon service has hung.

  • Introduced an additional check in the update_rdma tool that the kernel modules to be reloaded exist for the running kernel.

  • StorPool services no longer use UDP port 27489, preventing collisions with other services running on the node or in the storage network.

  • Added support for the Micron 9400 PRO.

  • Reduced the amount of logging from all StorPool services.
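The reduced overhead of erasure coding mentioned above comes down to simple arithmetic: a k+m scheme writes k data units plus m parity units, while triple replication is the special case k=1, m=2. The schemes below are illustrative examples, not a statement of which schemes StorPool supports:

```python
def raw_per_usable(data_units: int, parity_units: int) -> float:
    """Raw bytes written per usable byte for a k+m redundancy scheme.

    Triple replication is the special case k=1, m=2.
    """
    return (data_units + parity_units) / data_units

# Triple replication needs 3 raw bytes per usable byte; an example 4+2
# erasure-coded scheme needs only 1.5 -- half the raw capacity.
print(raw_per_usable(1, 2), raw_per_usable(4, 2))  # 3.0 1.5
```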

20.0 revision 20.0.1095.734a81b7e

Released 02 Oct 2023

  • monitor.pl now also collects volume status (quickStatus) data for all volumes and snapshots in the cluster, enabling more comprehensive alerts.

20.0 revision 20.0.1032.aeda18feb

Released 13 Sep 2023

  • Added support for recent Intel Optane devices.

Previous release here