Ensuring that Queue Depths are Not the Bottleneck
Queue Depth settings need to be maximized for:
1. The storage adapter whose data needs to be cached.
2. The SSD or PCIe flash card that will act as the cache device.
3. The guest OS VM (refer to the next section).
Run esxtop, then press 'u' to switch to the disk device view, where the DQLEN field shows the per-device queue depth. (Press 'd' for the storage adapter view, where the AQLEN field shows the adapter queue depth.)
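If you prefer the command line over interactive esxtop, the same information can be pulled from the ESXi shell. A minimal sketch, assuming SSH access to the host (device names and output layout vary by ESXi release):

```shell
# List storage adapters and the drivers bound to them
esxcli storage core adapter list

# Show per-device details; the "Device Max Queue Depth" field is the
# value that matters for the recommendations below
esxcli storage core device list | grep -iE "display name|queue depth"
```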
Queue Depth for NVMe / PCIe SSDs is set by the VMware NVMe driver and is always very high (> 2000).
Queue Depth for SAS SSDs should typically be equal to or greater than 256.
Queue Depth for SAS Hard Drives should be equal to or greater than 32.
Queue Depth for iSCSI, FCoE or FC adapters should be greater than 512.
Queue Depth for SATA SSDs and disks is 1 in VMware, so even if the SSD is capable of much higher IOPS, the default SATA driver in VMware advertising a Queue Depth of 1 restricts caching performance considerably. Both SATA SSDs (as cache devices) and SATA hard drives (as backend disks) should therefore be avoided, unless fronted by a RAID controller or HBA (HBA/RAID controller Queue Depths are greater than 512).
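To see why a Queue Depth of 1 is so limiting, Little's Law gives a rough upper bound: max IOPS ≈ queue depth / average service time. A quick back-of-the-envelope sketch (the 0.2 ms per-IO latency is an assumed, illustrative figure, not a measured one):

```shell
# Little's Law: max IOPS is bounded by queue_depth / service_time.
# With QD=1, even a fast SSD is capped at a few thousand IOPS;
# a RAID controller advertising QD=32 lifts the same device 32x.
awk 'BEGIN {
  lat_ms = 0.2                                # assumed avg service time per IO
  printf "QD=1:  %.0f IOPS\n", 1  * 1000 / lat_ms
  printf "QD=32: %.0f IOPS\n", 32 * 1000 / lat_ms
}'
# QD=1:  5000 IOPS
# QD=32: 160000 IOPS
```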
You don’t typically need to do this, but if you want to measure the maximum storage IOPS your infrastructure is capable of, using Iometer or a similar testing tool, you’d want to configure the VMkernel with the maximum allowed Queue Depth: set the kernel parameter Disk.SchedNumReqOutstanding to 255.
Go to the ESXi host ‘Configuration’ tab > ‘Software’ > ‘Advanced Settings’. Change Disk.SchedNumReqOutstanding from its default of 32 to 255.
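The same change can be scripted from the ESXi shell. Note that on ESXi 5.5 and later, Disk.SchedNumReqOutstanding moved from a host-wide advanced option to a per-device parameter; a sketch of both forms (naa.xxxx is a placeholder, substitute your actual device identifier):

```shell
# Older ESXi releases (pre-5.5): host-wide advanced setting
esxcli system settings advanced set -o /Disk/SchedNumReqOutstanding -i 255

# ESXi 5.5 and later: set per device (naa.xxxx is a placeholder device ID)
esxcli storage core device set -d naa.xxxx -O 255

# Verify the new outstanding-IO limit on the device
esxcli storage core device list -d naa.xxxx | grep -i "outstanding"
```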
You should also use VMware’s PVSCSI (paravirtual SCSI) driver in Windows VMs instead of the default LSI Logic driver; the PVSCSI driver can push much larger amounts of IO through a Windows VM.