Hardware requirements
All distributed storage systems depend heavily on the underlying hardware. A few aspects help StorPool achieve maximum performance and are best considered in advance. Each node in the cluster can act as a server, a client, an iSCSI target, or any combination of these; hardware requirements vary depending on the role.
Note
The system parameters listed in the sections below are intended to serve as initial guidelines. For detailed information about the supported hardware and software, see the StorPool System Requirements document.
You can also contact StorPool’s Technical Account Management for a detailed hardware requirement assessment.
Minimum StorPool cluster
3 industry-standard x86 servers;
any x86-64 CPU with 4 threads or more;
32 GB ECC RAM per node (8+ GB used by StorPool);
any hard drive controller in JBOD mode;
3x SATA3 hard drives or SSDs;
dedicated dual 10 GbE LAN.
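As a quick sanity check, a Linux node can be compared against the CPU and RAM figures above with a few lines of Python. This is a rough sketch, not an official StorPool tool; the thresholds simply mirror the list above, and the RAM threshold is set slightly low because the kernel reserves some memory before MemTotal is reported.

    # Rough sketch (not an official StorPool tool): compare a Linux node
    # against the minimum CPU/RAM figures listed above.
    import os

    MIN_CPU_THREADS = 4            # "any x86-64 CPU with 4 threads or more"
    MIN_RAM_KIB = 30 * 1024 ** 2   # just under 32 GiB, allowing for kernel reservations

    def mem_total_kib():
        # /proc/meminfo reports MemTotal in KiB
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1])
        raise RuntimeError("MemTotal not found in /proc/meminfo")

    threads = os.cpu_count()
    ram_kib = mem_total_kib()
    print(f"CPU threads: {threads} (minimum {MIN_CPU_THREADS})")
    print(f"RAM: {ram_kib / 1024 ** 2:.1f} GiB (minimum ~32 GB)")
    ok = threads >= MIN_CPU_THREADS and ram_kib >= MIN_RAM_KIB
    print("Meets the minimum CPU/RAM figures." if ok else "Below the minimum CPU/RAM figures.")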
Recommended StorPool cluster
5 industry-standard x86 servers;
out-of-band management (IPMI, iLO, iDRAC, or similar) desirable;
Intel Nehalem generation (or newer) Xeon processor(s);
64 GB ECC RAM or more in every node;
any hard drive controller in JBOD mode;
dedicated dual 25 GbE or faster LAN;
2+ NVMe drives per storage node.
How StorPool relies on hardware
CPU
Under increased system load, CPUs can become saturated with system interrupts. To avoid the negative effects of this, StorPool's server and client processes are pinned to one or more dedicated CPU cores. This significantly improves both overall performance and performance consistency.
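For illustration, the same pinning technique is available to any Linux process. The sketch below is not StorPool's actual configuration (StorPool manages its own process placement); it simply shows the standard Linux mechanism, with a hypothetical core number:

    # Illustration of CPU pinning on Linux, the isolation technique described
    # above. StorPool configures its own processes; this is not StorPool tooling.
    import os

    DEDICATED_CORE = 3  # hypothetical core reserved for a storage process

    os.sched_setaffinity(0, {DEDICATED_CORE})  # 0 = the calling process
    print("Now restricted to CPU(s):", os.sched_getaffinity(0))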
RAM
ECC memory can detect and correct the most common kinds of in-memory data corruption, keeping the memory system immune to single-bit errors. Using ECC memory is an essential requirement for node reliability; in fact, StorPool is not designed to work with non-ECC memory.
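One way to verify that a node actually reports ECC memory is the Linux kernel's EDAC subsystem, sketched below. Note the caveat: an empty result does not prove the RAM is non-ECC, since the platform's EDAC driver may simply not be loaded.

    # Rough check via the kernel's EDAC subsystem: ECC-capable memory
    # controllers show up as mc0, mc1, ... when an EDAC driver is loaded.
    from pathlib import Path

    mcs = sorted(p.name for p in Path("/sys/devices/system/edac/mc").glob("mc[0-9]*"))
    if mcs:
        print("EDAC reports ECC memory controllers:", ", ".join(mcs))
    else:
        print("No EDAC controllers found; confirm ECC in BIOS or with dmidecode.")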
Storage (HDDs / SSDs)
StorPool makes efficient use of the drives themselves. Replication and data integrity are core functionality, so RAID controllers are not required and all storage devices can be connected as JBOD. Hard drives are journaled, typically on an NVMe drive such as one from the Intel Optane series. Alternatively, when write-back cache is available on a RAID controller, it can be used in a StorPool-specific way to provide power-loss protection for the data written on the hard disks. None of this is necessary for SATA SSD pools.
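The sketch below (again an illustration, not StorPool tooling) inventories a node's raw block devices and separates rotational drives, which need journaling, from SSD/NVMe ones, using the standard Linux sysfs "rotational" flag:

    # Inventory raw block devices, splitting HDDs (journal required) from
    # SSD/NVMe, using /sys/block/<dev>/queue/rotational.
    from pathlib import Path

    for dev in sorted(Path("/sys/block").iterdir()):
        if dev.name.startswith(("loop", "ram", "dm-", "md", "sr")):
            continue  # skip virtual, stacked, and optical devices; JBOD means raw disks
        rot_file = dev / "queue" / "rotational"
        if not rot_file.exists():
            continue
        rotational = rot_file.read_text().strip() == "1"
        print(f"{dev.name}: {'HDD (journal required)' if rotational else 'SSD/NVMe'}")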
Network
StorPool is a distributed system, which means the network is an essential part of it. Designed for efficiency, StorPool combines data transfers from multiple nodes in the cluster. This greatly improves data throughput compared to access to local devices, even when those are SSDs or NVMe drives.
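As a back-of-the-envelope illustration (all numbers below are assumptions, not measurements), a read served by several nodes in parallel is bounded by the client's network links rather than by any single drive:

    # Assumed round numbers, for illustration only.
    local_nvme_gbps = 3.0   # sequential read of one local NVMe drive, GB/s
    link_gbps = 25 / 8      # one 25 GbE link, ~3.1 GB/s of payload (ideal)
    serving_nodes = 4       # nodes supplying data in parallel

    aggregate = min(serving_nodes * local_nvme_gbps,  # what the remote drives can push
                    2 * link_gbps)                    # dual 25 GbE at the client
    print(f"single local NVMe:   {local_nvme_gbps:.1f} GB/s")
    print(f"striped remote read: {aggregate:.1f} GB/s (client NIC bound)")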
Software compatibility
Operating systems
Linux (various distributions)
Windows, VMware, and Citrix Xen through standard protocols (iSCSI)
File systems
Developed and optimized for Linux, StorPool is very well tested on CentOS, Ubuntu, and Debian. It is compatible and well tested with the ext4 and XFS file systems, and with any system designed to work with a block device, e.g. databases and cluster file systems (such as GFS2 or OCFS2). StorPool can also be used with no file system, for example when storing VM images directly on volumes. StorPool is compatible with other technologies from the Linux storage stack, such as LVM, dm-cache/bcache, and LIO.
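For example, a StorPool volume can be formatted and mounted like any other Linux block device. The sketch below assumes a volume already attached to the node under StorPool's /dev/storpool/<name> path; the volume name is hypothetical, the commands require root, and mkfs is destructive:

    # Hedged sketch: ext4 on an attached StorPool volume, handled exactly like
    # any other Linux block device. Run as root; mkfs.ext4 destroys existing data.
    import subprocess

    DEVICE = "/dev/storpool/testvolume"   # hypothetical volume name
    MOUNTPOINT = "/mnt/testvolume"

    subprocess.run(["mkfs.ext4", DEVICE], check=True)
    subprocess.run(["mkdir", "-p", MOUNTPOINT], check=True)
    subprocess.run(["mount", DEVICE, MOUNTPOINT], check=True)
    print(f"{DEVICE} mounted at {MOUNTPOINT}")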
Hypervisors and cloud management/orchestration
You can use the following software:
KVM
LXC/Containers
OpenStack
OpenNebula
OnApp
CloudStack
You could also use any other technology compatible with the Linux storage stack.