Connecting a Windows Server 2016 host to StorPool iSCSI

When a StorPool volume needs to be accessed by hosts which cannot run the StorPool native client service (e.g. Windows Server), it may be accessed using the iSCSI protocol.

1. Short Overview of iSCSI

The iSCSI remote block device access protocol is a client-server protocol allowing clients (referred to as “initiators” or “hosts”) to read and write data to logical disks (referred to as “targets”) exported by iSCSI servers. The iSCSI servers listen on portals (TCP ports, usually 3260, on specific IP addresses); these portals can be grouped into the so-called portal groups (also identified by a TCP port and IP address) to provide fine-grained access control or load balancing for the iSCSI connections.

A short summary of the terms:

  • IQN - unique text identifier, used by initiators and targets;

  • Initiator - a client for the iSCSI protocol, identified by an IQN;

  • Target - a logical SCSI storage device, which provides one or more LUNs;

  • Portal - iSCSI network endpoint for targets, identified by IQN, IP address and TCP port;

  • Portal group - iSCSI network endpoint that handles the load-balancing and access-control to the portals, identified by IQN, IP address and TCP port;

The StorPool implementation of iSCSI provides portal groups to the initiators. The configuration is done via StorPool’s CLI and is described in the configuration section below.

2. How iSCSI functions in StorPool

The StorPool implementation of iSCSI:

  • provides a way to mark StorPool volumes as accessible to iSCSI initiators;

  • defines iSCSI portals where the nodes running the StorPool iSCSI service listen for connections from initiators;

  • defines portal groups over these portals;

  • exports StorPool volumes (iSCSI targets) to iSCSI initiators in the portal groups.

To simplify the configuration of the iSCSI initiators, and also to provide load balancing and failover, each portal group has at least one floating IP address that is automatically brought up on only a single StorPool node at a given moment; the initiators are configured to connect to this floating address, authenticating if necessary, and then are redirected to the portal of the StorPool node that actually exports the target (volume) that they need to access.

In the simplest setup, there is a single portal group with a floating IP address and a single portal for each StorPool node that runs the iSCSI service. All the initiators connect to the floating IP address and are redirected to the correct node. For quality of service or fine-grained access control, more portal groups may be defined and some volumes may be exported via more than one portal group.

3. Configuring the StorPool iSCSI target

For a detailed configuration of StorPool iSCSI, please refer to the 9.15. iSCSI section of our User Guide.

4. Configuring a Windows Server 2016 host

4.1. Reliability settings

The aim of these settings is twofold:
The first is that, in case of network or related problems, the initiators should be able to wait for the issues to be resolved and then resume regular operations instead of returning errors to the end users. This way, an intermittent problem will have no lasting effect; it has otherwise been observed that after a connectivity failure longer than the default timeouts, the initiators, or the VMs running on them, need to be restarted to resume normal operations.
The second is that, in case of partial or brief failures, the initiators should be able to fail over as quickly as possible, to minimize any stalls of operations that may be felt by the end users.


You should edit the following values (given in hexadecimal, with the decimal equivalent in brackets) found in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<Microsoft iSCSI Initiator>\Parameters:
  • MaxRequestHoldTime to ffffffff (4294967295)

  • LinkDownTime to 00000005 (5)

  • SrbTimeoutDelta to 00000005 (5)

  • PortalRetryCount to 0000012c (300)

  • EnableNOPOut to 00000001 (1)

You should also edit the following values:
  • TimeOutValue found in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk to 00015180 (86400)

  • PDORemovePeriod found in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\mpio\Parameters to 00015180 (86400)

  • MaxRequestHoldTime: The maximum time (in seconds) for which requests will be queued if the connection to the target is lost and is being retried. After this hold period, requests are failed and the block device (disk) is removed from the system. Setting the value to ffffffff (4294967295) makes the hold time infinite, preventing this from happening.

  • LinkDownTime: Determines how long requests will be held in the device queue and retried if the connection to the target is lost. In contrast to MaxRequestHoldTime, you should set this value to 00000005 (5), as it is important to keep the “freeze” time of operations to a minimum in case of a link failure and a subsequent failover. If MPIO is installed, this value is used; if MPIO is not installed, MaxRequestHoldTime is used instead.

  • SrbTimeoutDelta: This value is used to increment the timeout set by class drivers. The value can be set as low as 5 seconds; if it is set lower than that, the initiator will effectively use 15 seconds instead. Setting the value to 00000005 (5) lowers the maximum “freeze” time of operations during a link failure by an additional 10 seconds.

  • PortalRetryCount: This value is used to determine how many times a connect request to a target portal should be retried if the portal is down. We recommend setting the value to 0000012c (300).

  • EnableNOPOut: Setting the value to 00000001 (1) enables the initiator to send heartbeat packets periodically, checking connectivity to the target and forcing reconnect when interruptions occur.

  • TimeOutValue: The maximum waiting time (in seconds) for delayed disk operations before Windows produces errors. We recommend setting the value to 00015180 (86400).

  • PDORemovePeriod: Specifies a physical device object (PDO) removal period, in seconds. This period is the length of time the server waits after all paths to a PDO have failed before it removes the PDO. We recommend setting the value to 00015180 (86400).

Please make sure to restart Windows to ensure that the changes made to the registry have been applied.
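The registry changes above can also be applied from an elevated PowerShell prompt. The sketch below is illustrative only: it assumes the Microsoft iSCSI Initiator instance key is 0000, which may differ on your system (check the DriverDesc value under each numbered subkey of the class key), and the mpio key exists only after the MPIO feature has been installed.

```powershell
# Assumption: the initiator instance key is 0000; verify via the DriverDesc value.
$iscsi = 'HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\0000\Parameters'

reg.exe add $iscsi /v MaxRequestHoldTime /t REG_DWORD /d 0xffffffff /f
reg.exe add $iscsi /v LinkDownTime       /t REG_DWORD /d 0x5        /f
reg.exe add $iscsi /v SrbTimeoutDelta    /t REG_DWORD /d 0x5        /f
reg.exe add $iscsi /v PortalRetryCount   /t REG_DWORD /d 0x12c      /f
reg.exe add $iscsi /v EnableNOPOut       /t REG_DWORD /d 0x1        /f

# Disk and MPIO timeouts (0x15180 = 86400 seconds = 24 hours)
reg.exe add 'HKLM\SYSTEM\CurrentControlSet\Services\Disk'            /v TimeOutValue    /t REG_DWORD /d 0x15180 /f
reg.exe add 'HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters' /v PDORemovePeriod /t REG_DWORD /d 0x15180 /f

Restart-Computer   # reboot so the registry changes take effect
```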

4.2. Basic network settings

All network adapters to be used for iSCSI traffic should have only IP addresses and subnet masks set up - all other options/settings should be blank:


4.3. Basic connection to targets

Once the StorPool iSCSI target volume has been exported, and the network adapters on the Windows host are configured, it’s time to connect the initiator.

In Server Manager, go to Tools → iSCSI Initiator → Target: enter the floating IP address of StorPool’s iSCSI service and hit “Quick Connect …”:


An additional menu should pop up with the discovered target name and a progress report message reading “Login Succeeded”.

Then, go to the “Volumes and Devices” tab and hit the “Auto Configure” button:


The exported volume(s) should appear in the “Volume List” as shown above.

After that, go to Server Manager → Tools → Computer Management → Storage → Disk Management:


The exported volume(s) should appear in the list as unallocated disk(s).

Note: In order to use the disks, they need to be brought online and initialized.
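The same connection can also be made with the built-in PowerShell iSCSI cmdlets. A minimal sketch, assuming the floating IP address of StorPool’s iSCSI service is 192.168.42.247 (a placeholder; substitute your own):

```powershell
# Discover targets behind the floating IP (placeholder address)
New-IscsiTargetPortal -TargetPortalAddress 192.168.42.247

# List the discovered target IQNs
Get-IscsiTarget

# Connect to all discovered targets; sessions persist across reboots
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

# Bring the new disk(s) online so they can be initialized
Get-Disk | Where-Object OperationalStatus -eq 'Offline' | Set-Disk -IsOffline $false
```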

4.4. Configuring Multipath I/O (MPIO)

4.4.1. Prerequisites for Multipath I/O (MPIO)

  • An additional network interface for iSCSI should be reserved for MPIO

  • A second floating IP address should be defined for the portal group

  • An additional portal should be created with an IP address belonging to the same network as the initiator on the Windows host

4.4.2. Installation and connection to targets

In Windows Server 2016, the Multipath I/O functionality needs to be installed from the “Add Roles and Features Wizard” after installation.

To install the MPIO feature go to Server Manager → Manage → Add Roles and Features → Features and select Multipath I/O, then hit “Next” and “Install”.


After the installation is complete, a server restart is needed.

MPIO needs to have iSCSI support enabled, so go to Server Manager → Tools → MPIO → Discover Multi-Paths:


Select “Add support for iSCSI devices”, then click “Add”. A prompt will appear asking to reboot once more; this should be done before continuing with the next steps.
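The installation and iSCSI-support steps above can also be done from an elevated PowerShell prompt; a sketch (the two reboots are still required):

```powershell
# Install the Multipath I/O feature, then reboot
Install-WindowsFeature -Name Multipath-IO
Restart-Computer

# After the reboot: enable MPIO claiming of iSCSI devices
# (equivalent to "Add support for iSCSI devices" in the MPIO dialog), then reboot again
Enable-MSDSMAutomaticClaim -BusType iSCSI
Restart-Computer
```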

Once the reboot is finished, go back to the “iSCSI Initiator”, select a target and click “Connect”:


In the pop-up menu, click on “Enable multi-path” and then hit “Advanced…”:


In the “Advanced Settings” window, select the Local adapter and the address belonging to the iSCSI interface, then select the floating IP address of StorPool’s iSCSI service:


Then repeat the previous two steps, this time selecting the second configured floating IP address of StorPool’s iSCSI service:


Note: More than two simultaneous connections can be configured by repeating the process described above.
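From PowerShell, the two multipath connections correspond to two Connect-IscsiTarget calls, one per floating IP address. All addresses below are placeholders; substitute the IQN reported by Get-IscsiTarget, your portal-group floating IPs, and the local iSCSI interface addresses:

```powershell
# Placeholder: take the real IQN from Get-IscsiTarget
$iqn = (Get-IscsiTarget).NodeAddress

# First path: first floating IP, reached via the first local iSCSI interface
Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress 192.168.42.247 `
    -InitiatorPortalAddress 192.168.42.11 -IsMultipathEnabled $true -IsPersistent $true

# Second path: second floating IP, reached via the second local iSCSI interface
Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress 192.168.43.247 `
    -InitiatorPortalAddress 192.168.43.11 -IsMultipathEnabled $true -IsPersistent $true
```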

To check whether Multipath I/O is working, select the connected target and click on “Devices…”:


Select a disk and click on “MPIO…”:


More than one path should be visible in the list of paths:
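The paths can also be inspected from the command line with the mpclaim utility that ships with the MPIO feature:

```powershell
mpclaim.exe -s -d      # lists the MPIO-claimed disks and their load-balance policy
mpclaim.exe -s -d 0    # shows the individual paths of MPIO disk 0
```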