StorPool Knowledge Base
  • Introduction
    • Overview
    • Architecture
    • Feature highlights
      • Scale-out, not scale-up
      • High performance
      • High availability and reliability
      • Commodity hardware
      • Shared block device
      • Co-existence with hypervisor software
      • Compatibility
      • CLI and API
      • Reliable support
    • More information
  • Installation and setup
    • Hardware requirements
      • Minimum StorPool cluster
      • Recommended StorPool cluster
      • How StorPool relies on hardware
        • CPU
        • RAM
        • Storage (HDDs / SSDs)
        • Network
      • Software compatibility
        • Operating systems
        • File systems
        • Hypervisors and cloud management/orchestration
    • StorPool capacity planner
      • Introduction
        • Why it is needed
        • How it works
      • Confirming supported modes
      • Usage
      • Mode support calculation
      • Examples
        • Online, checking for mode eligibility
        • Planning, adding disks to an existing cluster
    • Xeon Scalable BIOS & OS tuning
      • Intro
      • TL;DR
      • BIOS
      • BIOS Walkthrough
      • OS
      • Rationale
    • EPYC BIOS & OS tuning
      • Introduction
      • BIOS configuration
        • SuperMicro
        • HPE
      • OS configuration
      • Validating
        • Power settings
        • C-States
      • References
    • BIOS configuration via OS tools
      • Intro
      • SuperMicro
        • SR-IOV
        • NUMA nodes on AMD EPYC2
      • Dell
        • SR-IOV
      • HP
    • Storage devices
      • Journals
      • Using disk_init_helper
        • Example node
        • Discovering drives
          • Basic usage
          • Viewing configuration
          • Recognizing SSDs
          • Specifying a journal
        • Initializing drives
      • Manual partitioning
        • Creating partitions
        • Initializing a drive
        • Drive initialization options
    • Persistent memory support
      • About
      • Usage
      • History
    • Introduction to the network interface helper
      • About
      • How it works
      • Use cases
      • History
    • Network interfaces
      • Preparing interfaces
      • Automatic configuration
        • Exclusive interfaces mode
        • Active backup bond modes
        • LACP modes
        • Creating the configuration
        • Applying the configuration
        • Simple example
        • Advanced example
        • iSCSI configuration
      • Manual configuration
      • Network and storage controllers interrupts affinity
    • Initial iSCSI configuration
      • Configuring interfaces
      • Example setup
    • Advanced iSCSI setup
      • Routed setup overview
      • Routed setup configuration
      • Caveats with a complex iSCSI architecture
  • Administration guide
    • Adding and removing nodes
      • Prerequisites
      • Adding nodes
        • Prerequisites
        • Procedure
      • Removing nodes
        • Stopping drive usage
        • Updating cluster configuration
      • Recovering nodes
    • Node configuration options
      • Introduction
        • Configuration basics
        • Per host configuration
        • Using the files in /etc/storpool.conf.d/
        • Maintaining consistency
        • Minimal node configuration
      • Identification and voting
        • Node ID
        • Non-voting beacon node
        • Expected nodes
      • Network communication
        • Interfaces for StorPool cluster communication and StorPool block protocol
        • Address for API management
        • Port for API management
        • API authentication token
        • Ignore RX port option
        • Preferred port
        • Address for the bridge service
        • Interface for the bridge address
        • Resolve interface is bridge
        • Name of the bond interface
        • Bridge service network mask
        • Path to iproute2
        • Host and port for the Web interface
      • Drives
        • Exclude disks globally or per server instance
        • Disable drive ejection
        • Group owner for the StorPool devices
        • Permissions for the StorPool devices
        • Mirror directory
        • Mirror directory offset
        • NVMe SSD drives
        • Resetting stuck NVMes
      • Monitoring and issue reports
        • Cluster name
        • Cluster ID
        • Local user for debug data collection
        • Addresses for sending issue reports
        • Deleting local reports
      • Cgroup options
        • Enabling cgroups
        • StorPool RDMA module
        • Options for StorPool services
        • More information
      • NVMe target service
        • Routing for NVMe target
        • Network interface for NVMe target
        • BGP speaker for routed NVMe target
      • iSCSI options
        • Cgroups configuration
        • Network interface to use
        • Enabling routing
        • Configuring BGP speaker
        • More information
      • Miscellaneous options
        • C state latency
        • Cache size
        • CLI prompt string
        • Configuring the StorPool log daemon service
        • Free space for reports
        • Internal write-back caching
        • Issue reports location
        • Logging for opening or closing of StorPool devices
        • Restart automatically in case of crash
        • Type of sleep
        • Type of sleep for the bridge service
        • Working directory
    • Node maintenance
      • Features
      • More information
    • Control groups
      • About kernel control groups
        • Cgexec
        • Slices
      • StorPool and Cgroups
        • Cgroup configuration
        • Memory configuration
        • Cpuset configuration
      • Introduction to storpool_cg
        • Before you start
        • Format
        • Viewing results before applying them
        • Understanding results
      • Configuration options and parameters
        • Configuration options
        • Saving a configuration as a file
        • Resetting all saved parameters
        • Viewing the configuration
        • Loading configuration from a file
        • Configuration parameters
        • Cpuset isolation for the machine.slice
        • Examples
          • Changing the system.slice and the user.slice
          • Resetting a saved setting
      • Creating cgroups configurations for hypervisors
        • Setting slice limits
        • Setting number of CPUs
        • Overriding services detection
        • Overriding hardware acceleration
        • Setting memory for the kernel
      • Creating cgroups configurations for dedicated storage and hyperconverged machines
        • Dedicated storage machines
          • Cache size
          • Number of servers
        • Hyperconverged machines
      • Configuring multiple similar machines
      • Verifying a machine’s cgroups state and configuration
        • storpool_cg print
          • NUMA nodes and cpuset slices
          • Memory usage
          • Expected memory usage
        • storpool_cg check
        • storpool_process
      • Updating already configured machines
        • Migrating to new-style configuration
        • Migrating machines where SYSTEM_LIMIT or USER_LIMIT was previously set manually
    • Hugepages
      • Setting up hugepages
      • The storpool_hugepages tool
      • For new installations
      • For installed machines
    • Drive management
      • Introduction
      • Adding drives
      • Ejecting drives
      • Removing a drive without replacement
      • Replacing a drive without balancing-out the old drive
        • Ejecting the old drive
        • Adding the new drive
      • Replacing an ejected drive
      • Removing an ejected drive without replacement
      • Recovering drives
      • More information
    • Redundancy
      • Replication
        • Triple replication
        • Dual replication
      • Erasure Coding
        • Features
        • Redundancy schemes
        • FAQ
    • Automatic drive tests
    • Volumes and snapshots
      • Creating a volume
      • Deleting a volume
      • Renaming a volume
      • Resizing a volume
      • Snapshots
      • Creating a snapshot of a volume
      • Creating a volume based on an existing snapshot (a.k.a. clone)
      • Deleting a snapshot
      • Rebase to null (a.k.a. promote)
      • Rebase
      • Example use of snapshots
      • More information
    • Setting IOPS and bandwidth limits
      • About
      • Examples
      • More information
    • Quality of service
      • Introduction
      • Defining tiers
      • Setting tiers
      • More information
    • Background services
      • storpool_beacon
      • storpool_server
      • storpool_block
      • storpool_mgmt
      • storpool_bridge
      • storpool_controller
      • storpool_nvmed
      • storpool_stat
      • storpool_qos
      • storpool_iscsi
      • storpool_abrtsync
      • storpool_cgmove
      • storpool_havm
      • storpool_logd
    • Managing services with storpool_ctl
      • Supported actions
      • Getting status
      • Starting services
      • Enabling services
      • Disabling services
      • Stopping services
    • Rebalancing the cluster
      • Overview
      • Rebalancing procedure
      • Options
      • Restoring volume redundancy on a failed drive
      • Restoring volume redundancy for two failed drives (single-copy situation)
      • Adding new drives and rebalancing data on them
      • Restoring volume redundancy by rebalancing data to another placement group
      • Decommissioning a live node
      • Decommissioning a dead node
      • Resolving imbalances in the drive usage
      • Resolving imbalances in the drive usage with three-node clusters
      • Reverting balancer to a previous state
      • Reading the output of storpool balancer disks
        • Balancer tool output
      • Errors from the balancer tool
      • Miscellaneous
    • Multi-server
      • Configuration
      • Helper
    • Introduction to multi-cluster mode
      • Basic concepts
      • Use cases
      • Implementation
      • Volume naming
    • Multi-site and multi-cluster
      • Multi-cluster
      • Multi-site
      • Setup
      • Connecting two clusters
        • Cluster A
        • Cluster B
      • Bridge redundancy
        • Separate IP addresses
        • Single IP failed over between the nodes
      • Bridge throughput performance
        • Network
        • CPU
        • Disks throughput
      • Exports
      • Remote clones
      • Creating a remote backup on a volume
      • Creating an atomic remote backup for multiple volumes
      • Restoring a volume from remote snapshot
      • Remote deferred deletion
      • Volume and snapshot move
        • Volume move
        • Snapshot move
    • iSCSI overview
      • Introduction
      • How iSCSI functions in StorPool
      • Configuring StorPool iSCSI targets
    • Setting iSCSI targets
      • A quick overview of iSCSI
      • Prerequisites
      • Defining a portal group
      • Multi-tenant configuration
      • Defining portals
      • Configuring initiators
        • Restricting the IP addresses
        • Setting CHAP authentication
        • Removing initiators
      • Configuring iSCSI targets
        • Creating iSCSI targets
        • Removing iSCSI targets
        • Exporting iSCSI targets
        • Un-exporting iSCSI targets
      • Obtaining details
    • Connecting a VMware ESXi host to StorPool iSCSI
      • Introduction
      • Configuring a VMware ESXi host
      • Optimizing multipath performance
      • Configuring vSwitch
    • Connecting a Windows Server 2012-2022 host to StorPool iSCSI
      • Introduction
      • Configuring Multipath I/O (MPIO)
        • Prerequisites for MPIO
        • Installation
      • Reliability settings
      • Network settings
      • Connecting to targets
      • Connecting additional paths
    • Configuring a Linux initiator for StorPool iSCSI
      • iSCSI target checks
      • Multipath
        • Configure multipath
      • iSCSI Initiator
        • Set block device timeout
        • Discover and connect to target
        • Check multipath
        • Verify timeouts
      • Failover and recovery
    • Enable TRIM/Discard operations for KVM-based clouds
      • Verify the storage solution
      • Virtualization stack
      • Virtual machine type
      • Virtual drives discard
      • Guest configuration
      • Hints
  • Operations guide
    • Monitoring alerts
      • Alerts categories
      • Severity levels
      • Cluster status alerts
        • agscore-entries
        • agscore-space
        • attachments-count
        • balancer
        • bridge
        • bridgestatus
        • client
        • clusteruptime
        • controller
        • dematerialization
        • disbalance
        • disk
        • diskentries
        • disk-missing-pg
        • disk-journals
        • disk-pending-errors
        • disk-recoveries
        • disk-softeject
        • disk-test
        • disk-to-test
        • diskobjects
        • disks
        • diskspace
        • features
        • iscsi
        • iscsi-backup
        • iscsi-ctrlrs
        • latthreshold-cli
        • latthreshold-disk
        • locations
        • maintenances
        • mgmt
        • mgmtConfig
        • monitoringdata
        • needbalance
        • network
        • onapp-bkp-vol
        • placementgroup-drives
        • quorum
        • reformat
        • relocator
        • server
        • snapfromremote
        • snaplen
        • snaprepl
        • snaptargets
        • snaptmpl
        • tasks
        • template
        • totalvolumes
        • volumecare-local
        • volumecare-policies
        • volumecare-remote
        • volumecare-svc
        • volumerepl
        • volumes
        • volumesizes
        • volumetargets
        • volumetmpl
      • Per-host alerts
        • apichecks
        • configfile
        • configuration
        • hw-ecc
        • initiators-iscsi
        • kernels
        • lldp
        • rootcgprocess
        • portals-iscsi
        • status
      • Metrics-based alerts
        • cgroups
        • cpustat
        • dataholes
        • diskerrors
        • io-latencies
        • iolatmon
        • service-latency
        • service-load
        • stats
        • status
      • Others
        • billingdata
    • Monitoring metrics collected
      • Overview
      • What data is collected
        • Cluster status
        • Per-host status
        • Performance monitoring
        • Metadata
        • User data
      • How the data is collected
      • How the data is sent
      • How the data is processed
    • Common monitoring via the StorPool API
      • Introduction
      • Internal elements
        • Tasks
        • Attachments
      • Visible elements
        • Networks
        • Services
          • Server
          • Client
          • Management
          • iSCSI
          • Bridge
          • Disks
          • Templates
          • Volumes
        • General cluster status
          • Disks
          • Networks
          • Services
          • Cluster services
    • Metrics collected and available from StorPool
      • Overview
      • Internals
        • storpool_stat operations
        • Data interpolation between s1 and m1 retention policies
        • Disk usage and IO of InfluxDB databases
      • Data structure
      • Measurements reference
        • bridgestatus
        • cpustat
        • disk
        • diskiostat
        • diskstat
        • iostat
        • iscsisession
        • memstat
        • netstat
        • servicestat
        • task
        • template
        • per_host_status
    • StorPool analytics
      • Overview
      • Basics
      • Example scenarios for use
        • Single drive creating delays
        • Balancing CPU usage on hypervisors
        • Checking for CPU starvation for StorPool services
      • Dashboards reference
        • Home
        • CPU
          • CPU stat
          • Per-CPU stat
          • Per-service CPU stat
          • Total CPU non-SP stat
          • Total CPU stat
        • Servers
          • All server stat
          • General server stat
          • Per-disk backend stat
          • Per-host backend stat
        • Clients
          • All client stat
          • General client stat
          • Per-host client stat
        • Memory
          • Cgroup memory
          • Cgroup memory per node
        • Volume
          • Per-volume stat
          • Top volumes
        • Network
          • Network service stat
        • System disk
          • SP disk stat
          • System disk stat
        • iSCSI
          • iSCSI stats per initiator
          • iSCSI stats per node/network
          • iSCSI stats per target
          • iSCSI totals per initiator
        • Template
          • Template usage
          • Template usage - internal
        • Disk
          • Disk usage
          • Disk usage - internal
          • Single disk usage - internal
        • Custom
    • Disk and journal performance tracking
      • Why it is needed
      • How it works
      • Typical use cases
      • History
    • Adding a drive to a running cluster
      • Drives stress tests
        • How disk_tester works
        • How to read the results
      • Partition and init drive
        • HDD on 3108 MegaRaid controller
          • Locating the disk
          • Prepare HDD
          • Create partitions
          • Initialize the HDD
        • SATA SSD drive
        • NVMe drive
          • Partitioning the NVMe
          • Adjusting configuration
          • Initializing the NVMe
          • Adjusting hugepages
      • Adjusting cgroups
      • Restarting the required services (only for NVMe drives)
      • Manually adding a drive initialized with --no-notify to the cluster
      • Checking that the disk was added to the cluster
      • Adding the drive to a placement group and balancing data
    • Ejected disk
      • Checking the drives
        • Check if the drive is actually available
        • Check if this is a repeated event
        • Check if the drive is causing delays for the cluster operations
      • Returning the drive to the cluster
      • Removing and re-balancing out
        • Collect information on the drive
          • Location information
          • Error information
        • Forget and start balancing
          • Remove drive from placement groups and from list in the cluster
          • Make sure the drive is flagged as ejected
          • Clean up RAID VDs, if applicable
          • Start locate function, if available
          • Balance out the drive (restore redundancy)
        • Monitor progress
        • After the drive is replaced, disable locator lights
    • Testing a StorPool drive
      • Put node in maintenance
      • Perform test
        • Check disk’s server instance
        • No single drives on server instances
        • Single drive on server instance
      • Validate result
        • Disk comes back
        • Disk doesn’t come back
    • Cleanup of stale backup volumes and snapshots
      • Introduction
      • Volumes attached to nodes
      • Volumes exported through iSCSI (Hypervisor nodes)
      • Volumes held by the device mapper (Backup nodes)
    • StorPool tree
      • Usage
      • Properties
      • Options
      • Examples
      • More information
    • Troubleshooting
      • Normal state of the system
        • All nodes in the storage cluster are up and running
        • All configured StorPool services are up and running
        • Working cgroup memory and cpuset isolation is properly configured
        • All network interfaces are properly configured
        • All drives are up and running
        • There are no hanging active requests
      • Degraded state
        • Degraded state due to service issues
        • Degraded state due to host OS misconfiguration
        • Degraded state due to network interface issues
        • Drive/Controller issues
      • Critical state
        • API service failure
        • Server service failure
        • Client service failure
        • Network interface or Switch failure
        • Hard Drive/SSD failures
        • Hanging requests in the cluster
  • Reference guide
    • CLI tutorial
      • Using the standard shell
      • Using the interactive shell
      • Error messages
      • Multi-cluster mode
    • CLI reference
      • Location
      • Cluster
      • Remote bridge
        • Registering and de-registering
        • Minimum deletion delay
        • Listing registered remote bridges
        • Status of remote bridges
      • Network
      • Server
      • Services
      • Kubernetes
      • Disk
        • Disk list main info
        • Disk list additional information
        • Disk list server internal information
        • Disk list performance information
        • Ejecting disks and internal server tests
      • Fault sets
      • Placement groups
      • Volumes
        • Volume parameters
        • Listing all volumes
        • Listing exported volumes
        • Volume status
        • Used space estimation
        • Listing disk sets and objects
        • Managing volumes
      • Snapshots
        • Creating snapshots
        • Listing snapshots
        • Volume operations
        • Deleting snapshots
        • Snapshot parameters
        • Remote snapshots
      • Attachments
      • Templates
        • Creating
        • Listing
        • Getting status
        • Changing parameters
        • Renaming
        • Deleting
      • Client
      • iSCSI
        • Creating a portal group
        • Registering an initiator
        • Exporting a volume
        • Getting iSCSI configuration
        • Getting active sessions
        • Operations
        • Using iscsi_tool
        • Using iscsi_targets
      • Relocator
        • Turning on and off
        • Displaying status
        • Additional relocator commands
      • Balancer
      • Tasks
        • More information
      • Maintenance mode
      • Management menu
      • Management configuration
        • Listing current configuration
        • Local and remote recovery
        • Miscellaneous parameters
        • Snapshot dematerialization
        • Multi-cluster parameters
        • Reusing server on disk failure
        • Changing default template
        • Cluster maintenance mode
        • Latency thresholds
        • Aggregate score parameters
      • Mode
    • API reference
    • Disaster Recovery Engine API
  • VolumeCare
    • Configuration options
      • About
      • [format] section
      • [volumecare] section
        • Mode
        • Tags
        • Driver
        • Tuning
        • Task control
      • More information
    • Retention policies
      • About
      • Policy resolution
      • Changing a policy
        • Single-cluster
        • Multiple clusters
    • Retention policy modes
      • Local
        • basic (stopgap)
        • exp
        • keep-daily
        • nosnap
      • Remote
        • basic-mirror (stopgap-mirror)
        • basic-remote
        • keep-daily-remote (stopgap-remote)
        • keep-daily-split
        • mhdm (minutes-hours-days-months)
        • remote-backup
    • VolumeCare control tool
      • About
      • config
      • show
      • list
      • status
      • node info
      • revert
    • Example configurations
      • Single-cluster
      • Primary cluster
      • Backup cluster
      • Two clusters sending backups to each other
    • Change history
      • 1.29.3
      • 1.29
      • 1.28
      • 1.27.3
      • 1.27.1
      • 1.27
      • 1.26.1
      • 1.26
      • 1.25
      • 1.24
      • 1.23
      • 1.22
      • 1.21
      • 1.20
      • 1.19.1
      • 1.19
      • 1.18.1
      • 1.18
      • 1.17
      • 1.16
      • 1.15
      • 1.14
      • 1.13
      • 1.12
      • 1.11
      • 1.10
      • 1.09
      • 1.08
      • 1.07
      • 1.06
      • 1.05
      • 1.04
      • 1.03
      • 1.02
      • 1.01
      • 1.0
  • Disaster Recovery Engine
    • About DRE
      • Overview
      • Integration
      • Availability
    • How DRE works
      • DRE procedures
        • Overview
        • Protection models
        • Disaster recovery procedures
      • Create a DR service for a VM
        • Prerequisites
        • Procedure
        • More information
      • Delete a DR service for a VM
        • Prerequisites
        • Procedure
        • More information
      • Update a DR service
        • Prerequisites
        • Procedure
        • More information
      • Test VM failover
        • Prerequisites
        • Procedure
        • More information
      • VM failover
        • Prerequisites
        • Procedure
        • More information
      • Terminology
        • Cloud Management Platform (CMP)
        • Service Portal
        • DR Engine
        • StorPool Storage
        • Zone
        • Recovery point
        • Disaster recovery service (DR service)
        • VM metadata
        • End-user
    • DRE compatibility and integration with cloud management platforms
      • StorPool supported integrations
      • Integrating a CMP with the DRE
  • StorPool integrations
    • CloudStack
      • Introduction
        • Plugin overview
        • CloudStack overview
          • Primary and Secondary storage
          • ROOT and DATA volumes
        • More information
      • StorPool advantages
        • Bypassing secondary storage
        • Using hyper-converged setup
        • Tiers and availability
        • Built-in functionality vs. additional packages
        • Volume and snapshot deletion
        • Temporarily backing up volumes before deletion
      • Installation and configuration
        • Installing StorPool
        • Setting up a StorPool PRIMARY storage pool in CloudStack
        • Configuring the plugin
        • More information
      • Plugin settings
      • Using QoS
        • Before you start
        • Configuration
        • Creating Disk Offering for each tier
        • Creating a VM with QoS
        • More information
      • Plugin functionality
        • Creating template from a snapshot
        • Creating ROOT volume from templates
        • Creating a ROOT volume from an ISO image
        • Creating a DATA volume
        • Creating volume from snapshot
        • Resizing volumes
        • Creating snapshots
        • Reverting volume to snapshot
        • Migrating volumes to other Storage pools
        • Virtual machine snapshot and group snapshot
        • BW/IOPS limitations
        • Support for host HA
        • Supported operations for volume encryption
        • More information
    • OnApp
      • OnApp acceptance tests
        • Installation
        • Acceptance tests
          • 1. Test that proper CPU and RAM are reported to OnApp for each hypervisor
          • 2. Test creating a VPS
          • 3. Test live migration between two nodes with StorPool
          • 4. Test resizing a volume for a VPS
          • 5. Test adding a volume as a drive for a VPS
          • 6. Test shrinking a volume used as a drive for a VPS
          • 7. Test backing up a VPS (only in cluster with available backup server configured with StorPool OnApp integration)
          • 8. Test restoring a VPS from backup (only in cluster with available backup server configured with StorPool OnApp integration)
          • 9. Test removing a volume configured for the VPS
          • 10. Test undeploy/deletion of the VPS
      • OnApp LVM to LVMSP disk migration
        • Limitations
        • Preparation
        • Data synchronization
        • Finalizing
        • RAID1 resync tuning
      • OnApp XEN to KVM disk migration
        • Introduction
        • Preparation steps
        • Migration case 1: Same disk structure (common for Windows and FreeBSD)
        • Migration case 2: XEN VM with raw disk, KVM with partition table (Linux)
          • (1) Preparing the KVM disks
          • (2) Migrating the data from the raw XEN disk to KVM disk with a partition
          • (3) Finalizing the disk migration
        • Optimization and tuning
          • Raid1 resync tuning
        • Addendum
      • OnApp KVM to KVM with virtio migration procedure
        • Preparation
          • Prepare OnApp’s DB
          • Alter the VM’s content
          • Update the VM’s metadata in the OnApp DB
          • Finalize
      • Known issues and FAQ
        • Known issues
          • Unable to Delete VM - logical volume(s) not found
        • Frequently asked questions
          • CloudBoot
    • OpenNebula
      • OpenNebula Integration
      • Support Life Cycles for OpenNebula Environments
        • Fully-Managed OpenNebula Cloud (FMOC)
        • StorPool add-on for OpenNebula (add-on)
      • Revert OpenNebula volume from a snapshot
        • 1. Common restore scenarios
          • 1.1. Saving snapshots in the primary cluster
          • 1.2. Saving snapshots in the backup cluster
          • 1.3. Reverting a VM
        • 2. Revert procedure steps
          • 2.1. Get the names of existing (i.e., not previously deleted) volumes related to a specific VM
          • 2.2. Find all remote exported snapshots for a specific volume in the primary cluster
          • 2.3. Transfer a remote snapshot locally
          • 2.4. Monitor transfer progress
          • 2.5. Undeploy the VM
          • 2.6. Preserve the old volume as a snapshot (optional)
          • 2.7. Revert the volume based on the snapshot
          • 2.8. Create a snapshot copy in the backup cluster for preservation purposes
      • Copying a VM between clusters
        • Using multi-site
        • Transferring a VM
          • Exporting a snapshot
          • Importing a snapshot
        • More information
    • OpenStack
    • Kubernetes
      • StorPool CSI
        • Overview
        • Prerequisites
        • Deployment
      • Usage example
    • Proxmox VE
      • Installing the StorPool Proxmox integration
        • Install the StorPool storage plugin
        • Upgrading the StorPool plugin
        • Check the status of the StorPool and Proxmox installation
        • Create a StorPool-backed Proxmox VE storage
        • Enable StorPool’s HCI HA watchdog
      • Changelog
        • [0.5.1] - 2025-03-18
          • Fixes
          • Changes
          • Additions
        • [0.5.0] - 2025-02-04
          • Fixes
          • Additions
        • [0.4.1] - 2024-11-15
          • Additions
        • [0.3.3] - 2024-08-07
          • Fixes
        • [0.3.2] - 2024-06-28
          • Fixes
          • Additions
        • [0.3.1] - 2024-06-17
          • Fixes
        • [0.3.0] - 2024-06-12
          • Additions
        • [0.2.4] - 2024-03-29
          • Fixes
          • Additions
          • Other changes
        • [0.2.3] - 2023-12-11
          • Fixes
        • [0.2.2] - 2023-09-06
          • Fixes
          • Additions
          • Other changes
        • [0.2.1] - 2023-07-12
          • Fixes
          • Additions
          • Other changes
        • [0.2.0] - 2023-06-01
          • Incompatible changes
          • Additions
          • Fixes
          • Other changes
        • [0.1.0] - 2023-05-28
          • Started
      • StorPool HA watchdog replacement
      • Using Proxmox VE with StorPool and Veeam Backup and Replication
        • Enable locking of VMs during backup
      • Development guidelines
        • Object names stored in the Proxmox VE database
        • Volume and snapshot tags
      • FAQ
        • How can we use Cloud-init with StorPool?
  • Release notes
    • StorPool 21.0
      • 21.0 revision 21.0.1212.bbe140b37
      • 21.0 revision 21.0.1143.39aeed538
      • 21.0 revision 21.0.1096.ccbc168c7
      • 21.0 revision 21.0.1041.42d32fec3
      • 21.0 revision 21.0.971.f7fba2495
      • 21.0 revision 21.0.956.a36a6ff17
      • 21.0 revision 21.0.841.983f5880c
      • 21.0 revision 21.0.809.72cdd84cd
      • 21.0 revision 21.0.691.da4ac3daf
      • 21.0 revision 21.0.670.8737a7719
      • 21.0 revision 21.0.647.9a4154290
      • 21.0 revision 21.0.606.609fab857
      • 21.0 revision 21.0.576.6d56902e2
      • 21.0 revision 21.0.359.dfeb4244c
      • 21.0 revision 21.0.330.28fa27b64
      • 21.0 revision 21.0.318.051ec1d80
      • 21.0 revision 21.0.288.e52a50c1a
      • 21.0 revision 21.0.266.455001e21
      • 21.0 revision 21.0.242.e4067e0e4
      • 21.0 revision 21.0.216.701e0f8e0
      • 21.0 revision 21.0.94.f7de41582
      • 21.0 revision 21.0.75.1e0880427
    • Previous releases
      • StorPool 20.0 release notes
        • 20.0 revision 20.0.1095.734a81b7e
        • 20.0 revision 20.0.1032.aeda18feb
        • 20.0 revision 20.0.1005.c221f1691
        • 20.0 revision 20.0.987.e0aa2a0f7
        • 20.0 revision 20.0.953.b098a1c1a
        • 20.0 revision 20.0.920.e06a6829e
        • 20.0 revision 20.0.843.07f4dfc11
        • 20.0 revision 20.0.787.54d165f71
        • 20.0 revision 20.0.768.9d77ff11e
        • 20.0 revision 20.0.520.e3fc57b76
        • 20.0 revision 20.0.508.e2623b204
        • 20.0 revision 20.0.487.818a7cea4
        • 20.0 revision 20.0.473.ad8854a34
        • 20.0 revision 20.0.466.2a2e26fd9
        • 20.0 revision 20.0.441.61c5774c8
        • 20.0 revision 20.0.419.261c0e3b2
        • 20.0 revision 20.0.386.8ea42a7d8
        • 20.0 revision 20.0.372.4cd1679db
        • 20.0 revision 20.0.353.e5f7ee9c0
        • 20.0 revision 20.0.334.70c4fbda5
        • 20.0 revision 20.0.325.7d16d036f
        • 20.0 revision 20.0.276.b05695816
        • 20.0 revision 20.0.205.c5cbaeb49
        • 20.0 revision 20.0.202.79fc357c5
        • 20.0 revision 20.0.172.8ebac5311
        • 20.0 revision 20.0.93.78df908ec
        • 20.0 revision 20.0.38.c5178becb
        • 20.0 revision 20.0.19.1a208ffab
        • Previous release
      • StorPool 19.4 release notes
        • 19.4 revision 19.01.3152.2b7de29c0
        • 19.4 revision 19.01.3106.45589969c
        • 19.4 revision 19.01.3061.a03558598
        • 19.4 revision 19.01.3006.6d2a7fccf
        • 19.4 revision 19.01.2995.15aa353e8
        • 19.4 revision 19.01.2975.d4308b0d0
        • 19.4 revision 19.01.2930.57ca5627f
        • 19.4 revision 19.01.2894.c16b8c152
        • 19.4 revision 19.01.2888.26d18ba04
        • 19.4 revision 19.01.2878.075480123
        • 19.4 revision 19.01.2877.2ee379917
        • 19.4 revision 19.01.2795.61bf1bd1d
        • 19.4 revision 19.01.2794.6d8d69281
        • 19.4 revision 19.01.2778.3c99182fa
        • 19.4 revision 19.01.2741.1490a1793
        • 19.4 revision 19.01.2701.c2377e67a
        • 19.4 revision 19.01.2686.1f4cf6e1d
        • 19.4 revision 19.01.2646.0ec2ea57b
        • 19.4 revision 19.01.2627.d3811f42a
        • 19.4 revision 19.01.2624.ae6abe68f
        • 19.4 revision 19.01.2609.d51d58af3
        • Previous release
      • StorPool 19.3 release notes
        • 19.3 revision 19.01.2592.cf99471bd
        • 19.3 revision 19.01.2571.5eb9133c9
        • 19.3 revision 19.01.2562.7b73fb02d
        • 19.3 revision 19.01.2545.66f61a9cd
        • 19.3 revision 19.01.2539.30ba167e1
        • 19.3 revision 19.01.2465.6f77d00cd
        • 19.3 revision 19.01.2456.977787e12
        • 19.3 revision 19.01.2439.425b19e8d
        • 19.3 revision 19.01.2426.40f100363
        • 19.3 revision 19.01.2401.48d842f1d
        • 19.3 revision 19.01.2333.b24f05090
        • 19.3 revision 19.01.2321.a82720c45
        • 19.3 revision 19.01.2319.27c976e3e
        • 19.3 revision 19.01.2318.10e55fce0
        • 19.3 revision 19.01.2268.656ce3b10
        • 19.3 revision 19.01.2216.14df508f2
        • 19.3 revision 19.01.2199.8700d0744
        • Previous release
      • StorPool 19.2 release notes
        • 19.2 revision 19.01.2173.0ca29830b
        • 19.2 revision 19.01.2156.b87fc49d8
        • 19.2 revision 19.01.2137.71224a0ce
        • 19.2 revision 19.01.2120.40c434b67
        • 19.2 revision 19.01.2099.32074a501
        • 19.2 revision 19.01.2084.34da44f18
        • 19.2 revision 19.01.2068.b4804fe04
        • 19.2 revision 19.01.2016.c4bc17cda
        • 19.2 revision 19.01.2012.bc34dd830
        • 19.2 revision 19.01.1991.f5ec6de23
        • 19.2 revision 19.01.1957.1a9a9bb68
        • 19.2 revision 19.01.1949.1aea86986
        • 19.2 revision 19.01.1946.0b0b05206
        • 19.2 revision 19.01.1942.0464eda88
        • 19.2 revision 19.01.1907.2389092f3
        • 19.2 revision 19.01.1905.5a6c7e113
        • 19.2 revision 19.01.1879.4679753db
        • 19.2 revision 19.01.1816.16fa37c0d
        • 19.2 revision 19.01.1813.f4697d8c2
        • 19.2 revision 19.01.1795.5b374e835
        • 19.2 revision 19.01.1732.02297f62b
        • 19.2 revision 19.01.1720.8c71b2ec3
        • 19.2 revision 19.01.1689.6a0aa758b
        • 19.2 revision 19.01.1656.1d6d61d3f
        • Previous release
      • StorPool 19.1 release notes
        • 19.1 revision 19.01.1628.1b627d0
        • 19.1 revision 19.01.1548.00e5a5633
        • 19.1 revision 19.01.1511.0b533fb
        • 19.1 revision 19.01.1468.90a9873
        • 19.1 revision 19.01.1413.b807f92
        • 19.1 revision 19.01.1376.ce07826
        • 19.1 revision 19.01.1357.39c014c
        • 19.1 revision 19.01.1346.dd68fa2c6
        • 19.1 revision 19.01.1293.cfcb869
        • 19.1 revision 19.01.1217.1635af7
        • 19.1 revision 19.01.1108.02703b8c5
        • 19.1 revision 19.01.1025.0baac06a6
        • 19.1 revision 19.01.878.7b1f83e3d
        • 19.1 revision 19.01.759.024d1bd
        • 19.1 revision 19.01.742.47c6e9c
        • 19.1 revision 19.01.719.c40dd8c
        • 19.1 revision 19.01.544.39e62dbad
        • 19.1 revision 19.01.496.6d2c5bf83
        • 19.1 revision 19.01.385.fc63315ef
        • 19.1 revision 19.01.375.684a0da12
        • 19.1 revision 19.01.355.896c5ebaf
        • 19.1 revision 19.01.318.daa3c5938
        • 19.1 revision 19.01.301.f1b25e7
        • 19.1 revision 19.01.271.de5921845
        • 19.1 revision 19.01.212.66fed3091
        • 19.1 revision 19.01.199.b98e9d4
        • 19.1 revision 19.01.182.666093099
      • StorPool 18.0 release notes
        • 18.02.1030.2e4eab8
        • 18.02.953.79d8ee7
        • 18.02.944.478cc9f
        • 18.02.886.f6b2fcf20
        • 18.02.847.b09fd4bec
        • 18.02.763.0aa70d7
        • 18.02.458.ac2f823
        • 18.02.370.2b8c3c3
        • 18.02.334.c204ed2
        • 18.02.206.06d240e
        • 18.02.178.9d29cd7
        • 18.02.164.7277e8c
  • StorPool FAQ
    • Common
      • What is the upgrade policy and compatibility between versions?
      • How to calculate the estimated usable disk space for a hybrid cluster?
      • What is the current capacity, provisioned and available space in the cluster?
      • What is the current thin provisioning gain of the cluster?
      • How does StorPool handle writes?
      • Why are there partitions on all SATA drives used by StorPool?
      • Why do the StorPool processes seem to be at 100% CPU usage all the time?
      • What addresses does StorPool use for monitoring?
      • What is required when I add/change memory modules on a hypervisor?
      • What happened to the User guide?
    • Exceptions
      • StorPool not working on vlan interface on I350 NIC
    • Erasure coding
      • Is Erasure Coding enabled for a whole cluster, or is it enabled on a per-volume basis?
      • Are there any expected performance issues that need to be considered?
      • What hardware configurations are supported?
      • What are the supported erasure coding schemes and their respective overheads?
      • Do the required regular snapshots offset some of the space savings?
      • Would I need additional space during the conversion from triple replication to erasure coding?
      • Is there an ideal minimum volume size?
      • How frequently does StorPool recalculate the parity blocks - with each change of the data blocks, or with another method?
      • What is the impact of erasure coding on the network load?
      • Is there a difference between the performance of the different erasure coding schemes?
      • Is there any delay in user operations when regular snapshots are being created?
      • What are the chances that erasure coding can’t recover from a disk failure?
    • How much is the used and free space?
      • How much is the current usage?
        • Short answer
        • Long answer
      • How do we calculate the usage charge?
      • How much space can you provision and how much data can you store on your cluster?
      • Why is there space used in a new/empty cluster?
    • Understanding your StorPool Storage report
      • Key terms explained
      • Attachment definitions