Connecting a VMware ESXi host to StorPool iSCSI

When a StorPool volume needs to be accessed by hosts which cannot run the StorPool client service (e.g., VMware hypervisors), it may be exported using the iSCSI protocol.

1. Introduction

For general information about using the iSCSI remote block device access protocol with StorPool, see Short overview of iSCSI.

2. Configuring a VMware ESXi host

Once the StorPool volume has been exported as an iSCSI target, it is time to connect the initiator. This requires administrative privileges and configured iSCSI connectivity; for more information, refer to the official VMware documentation.
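
If the software iSCSI adapter has not been enabled on the host yet, it can be enabled and its vmhba# name confirmed from ESXCLI. This is a minimal sketch, assuming the standard software iSCSI adapter is used:

# esxcli iscsi software set --enabled=true
# esxcli iscsi adapter list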

Log in to the VMware vSphere Web Client as an administrator and follow the steps below:

  1. Browse to the host in the vSphere Web Client navigator.

  2. Click Configure > Storage Adapters, then select the vmhba# adapter to configure.

  3. Under Adapter Details, click the Targets tab.

  4. Configure the Dynamic Discovery method.

    In Dynamic Discovery > Add... > enter the IP address or DNS name of the StorPool iSCSI system, click OK, and rescan the iSCSI adapter. An equivalent ESXCLI sequence is shown after the note below.

Note

If the iSCSI target is not visible under iSCSI Server Location, check the connectivity between the VMware ESXi host and the StorPool iSCSI portal.
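
As an alternative to the web client, the same dynamic discovery target can be added and the adapter rescanned via ESXCLI. This is a minimal sketch: vmhba# is the iSCSI adapter from step 2, <portal_ip> is a placeholder for the StorPool iSCSI portal address, and the default iSCSI port 3260 is assumed.

# esxcli iscsi adapter discovery sendtarget add --adapter=vmhba# --address=<portal_ip>:3260
# esxcli storage core adapter rescan --adapter=vmhba#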

After the iSCSI Server Location is visible under Dynamic Discovery, close the window, navigate to Datastores > Storage > click New Datastore.

  1. Select Type as VMFS > Next.

  2. Name and device selection > enter a datastore name and select the host > select the LUN to proceed > Next.

  3. Select the VMFS version (select the latest version unless you have a specific reason to choose an older one) > Next.

  4. Partition configuration > use the whole disk and keep the default block size > Next.

  5. Click Finish to create the new datastore.

Note

Once the datastore is available, virtual hard disks of VMs may be stored on it, thus utilizing the StorPool iSCSI service.

Note

After a new datastore has been created, some time is required before it comes online and becomes active.
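
The result can also be verified from the command line. The following is a sketch; the grep pattern assumes the device identifiers of StorPool volumes contain "StorPool":

# esxcli storage core device list | grep -i storpool
# esxcli storage filesystem list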

3. Optimizing Multipath Performance

When using multipath, there is a considerable performance improvement from setting the path selection policy (PSP) to switch paths after every 1 I/O operation instead of the default 1000.

First, switch to the Round Robin PSP, which balances the load across all active storage paths. One way to do this is by executing the following via ESXCLI:

# esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp=VMW_SATP_ALUA

Note

The multipathing policy can also be changed per device from the web client: go to Host -> Configure -> Storage Devices, select the checkbox of the volume, open the Properties tab -> Actions -> Edit Multipathing, and set the Path selection policy to Round Robin (VMware).
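
The same per-device change can be made from ESXCLI as well. This is a sketch, with <device_id> standing for the device identifier (for StorPool volumes it typically starts with t10.StorPool):

# esxcli storage nmp device set --device=<device_id> --psp=VMW_PSP_RR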

Then, make an IOPS value of 1, together with the Round Robin PSP, the default for newly claimed StorPool volumes by issuing the following command:

# esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "StorPool" -M "iSCSI DISK" -P "VMW_PSP_RR" -O "iops=1"

Note that the above does not change the PSP settings of existing, already claimed volumes. To set the IOPS policy to 1 for a specific StorPool volume, execute the following:

# esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=<volume_name>

Or update the setting on all StorPool volumes at once via:

# for vol in $(esxcfg-scsidevs -c | awk '{print $1}' | grep t10.StorPool) ; do esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=$vol ; done
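
To verify that the Round Robin PSP and the IOPS value have been applied to a device, you can inspect its multipathing configuration (a sketch, using the same <device_id> placeholder as above):

# esxcli storage nmp device list --device=<device_id>

The Path Selection Policy and its device config in the output should report Round Robin with an iops value of 1.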

4. Configuring vSwitch

The following is a breakdown of the commands for setting up a vSwitch for iSCSI with NIC teaming and failover via ESXCLI.

Note

In this example, the vSwitch is configured with one active uplink per port group, so that each port group uses the correct underlying path.

To create a new vSwitch with the desired name:

# esxcli network vswitch standard add -v <vswitch_name>

To set its MTU to 9000 (in case jumbo frames are being used; otherwise, you can use 1500):

# esxcli network vswitch standard set -m 9000 -v <vswitch_name>

To add NICs to vSwitch:

# esxcli network vswitch standard uplink add -u <iface_name1> -v <vswitch_name>
# esxcli network vswitch standard uplink add -u <iface_name2> -v <vswitch_name>

To set the failover policy for the vSwitch:

# esxcli network vswitch standard policy failover set -a <iface_name1>,<iface_name2> -v <vswitch_name>

To create two port groups into the vSwitch:

# esxcli network vswitch standard portgroup add -p <portgroup_name1> -v <vswitch_name>
# esxcli network vswitch standard portgroup add -p <portgroup_name2> -v <vswitch_name>

To assign them suitable VLANs:

# esxcli network vswitch standard portgroup set -p <portgroup_name1> -v <vlan1>
# esxcli network vswitch standard portgroup set -p <portgroup_name2> -v <vlan2>

To set the port group failover policy, with one active uplink per port group:

# esxcli network vswitch standard portgroup policy failover set -a <iface_name1> -p <portgroup_name1>
# esxcli network vswitch standard portgroup policy failover set -a <iface_name2> -p <portgroup_name2>

To create the VMkernel interfaces on the port groups, with an MTU of 9000 if jumbo frames are used:

# esxcli network ip interface add -m 9000 -p <portgroup_name1>
# esxcli network ip interface add -m 9000 -p <portgroup_name2>

To assign IP addresses to the VMkernel interfaces:

# esxcli network ip interface ipv4 set -i <vmkiface_name1> -I <address1> -N <netmask> -t static
# esxcli network ip interface ipv4 set -i <vmkiface_name2> -I <address2> -N <netmask> -t static
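
Finally, the resulting configuration can be reviewed and jumbo-frame connectivity to the StorPool iSCSI portal checked. This is a sketch; the interface name and <portal_ip> are placeholders, and the 8972-byte payload accounts for the IP and ICMP headers at an MTU of 9000:

# esxcli network vswitch standard list -v <vswitch_name>
# esxcli network ip interface list
# vmkping -I <vmkiface_name1> -d -s 8972 <portal_ip>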