
The reason for high latency in Ontap Select, and how to fix it

Even when Ontap Select uses high-performance enterprise SSDs, it shows high latencies because storage IO must traverse quite a few file system layers. With Ontap Select, you first create a separate VMFS Datastore on each locally attached SSD or HDD on each host in the ESXi cluster. Ontap Select then pools these individual VMware Datastores across hosts, deploys Netapp’s file system on this clustered pool of Datastores, and presents the resulting shared storage back to VMware over iSCSI or NFS, on top of which you again create VMFS Datastores, this time for VM storage. In addition, Ontap Select replicates all data to media in another ESXi host over a VMware network. The picture below depicts the storage IO path for reads and writes in Ontap Select. This long-winded storage IO path is the reason for the high latencies in Ontap Select.

Read and Write IO Path in Netapp Ontap Select when VirtuCache is NOT installed in ESXi host
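To make the layering concrete, here is a small Python sketch that models the IO path described above as a chain of layers whose latencies simply add up. The layer names are taken from the description above; the per-layer latency figures are purely hypothetical placeholders, not measurements.

```python
# Toy model: end-to-end latency of a write in Ontap Select is roughly the
# sum of the latencies of every layer the IO has to traverse.
# The per-layer numbers below are hypothetical placeholders, NOT measurements.

ontap_select_write_path = [
    ("VMFS Datastore for the VM (on top of Ontap Select storage)", 0.2),  # ms, hypothetical
    ("iSCSI/NFS hop from ESXi to the Ontap Select storage VM",     0.3),  # ms, hypothetical
    ("Netapp file system inside Ontap Select",                     0.5),  # ms, hypothetical
    ("Replication to media in the partner ESXi host",              1.0),  # ms, hypothetical
    ("VMFS Datastore on the locally attached SSD/HDD",             0.2),  # ms, hypothetical
    ("Physical SSD/HDD write",                                     0.5),  # ms, hypothetical
]

def total_latency_ms(path):
    """Latency accumulates across layers: the total is the sum of each hop."""
    return sum(latency for _name, latency in path)

if __name__ == "__main__":
    for name, latency in ontap_select_write_path:
        print(f"{latency:>5.1f} ms  {name}")
    print(f"{total_latency_ms(ontap_select_write_path):>5.1f} ms  total (illustrative)")
```

The exact numbers do not matter; the point is that every extra layer and network hop adds to the total, which is why the VM sees high latency even when the underlying SSDs are fast.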
VirtuCache configuration to improve Ontap Select performance.

VirtuCache works with Ontap Select storage in the same way that it works with traditional SAN storage appliances. All reads and writes from VMs that are on Ontap Select storage are cached by VirtuCache to SSD or RAM in the ESXi host. The only caveat is that Ontap Select needs to be configured over iSCSI, not NFS, for VirtuCache to work.

In VirtuCache, you would apply the ‘Write-Back 1 Replica’ caching policy to Datastores or VMs. This policy caches all reads and writes from VMs to the in-host RAM / SSD that you assigned to VirtuCache. It also mirrors the write cache to cache media in another host in the ESXi cluster. This is to protect against data loss if a host were to fail.
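Conceptually, the ‘Write-Back 1 Replica’ policy behaves like the Python sketch below: a write is acknowledged to the VM once it sits in the local cache media and in a replica on another host’s cache media, and it is destaged to the backend (Ontap Select) later. This is only an illustration of the policy described above, not VirtuCache’s actual implementation; all class and method names are made up.

```python
# Conceptual sketch of a write-back cache with one replica.
# Names are hypothetical; this is not VirtuCache's real code.

class WriteBackCacheWithReplica:
    def __init__(self, local_cache, peer_cache, backend):
        self.local_cache = local_cache   # in-host RAM/SSD cache media
        self.peer_cache = peer_cache     # cache media in another ESXi host
        self.backend = backend           # Ontap Select datastore
        self.dirty = {}                  # blocks not yet written to the backend

    def write(self, block_id, data):
        # The write is acknowledged once it reaches the local cache and the
        # replica on the peer host -- the slower backend is not in this path.
        self.local_cache[block_id] = data
        self.peer_cache[block_id] = data
        self.dirty[block_id] = data
        return "ack"                     # VM sees low write latency

    def read(self, block_id):
        # Reads are served from the local cache when possible.
        if block_id in self.local_cache:
            return self.local_cache[block_id]
        data = self.backend[block_id]    # cache miss: go to Ontap Select
        self.local_cache[block_id] = data
        return data

    def destage(self):
        # Dirty blocks are flushed to the backend in the background.
        for block_id, data in list(self.dirty.items()):
            self.backend[block_id] = data
            del self.dirty[block_id]
```

The key point is that the backend write happens off the latency-critical path, while the replica on a second host protects the dirty data if the caching host fails.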

The pictures below depict the storage IO path for reads and writes in Ontap Select when VirtuCache is installed in the ESXi hosts. As a result of the shorter IO path, VM storage latencies are much lower.

Write IO Path in Ontap Select when VirtuCache is installed in ESXi host

Read IO Path in Ontap Select when VirtuCache is installed in ESXi host
Before / After storage IO performance testing using Iometer.

Below are the results from an Iometer test performed from within a VM that resides on an Ontap Select Datastore, with and without VirtuCache. It is a straightforward 75/25 random read / random write test using a 4KB block size. Ontap Select was running on four locally attached 960GB Samsung PM863 enterprise SATA SSDs in a RAID5 config in each host in the cluster, and it was mirroring all data between hosts over a 40Gbps network.

| VirtuCache configuration | VM Read MBps | VM Write MBps | VM Read Latency (ms) | VM Write Latency (ms) |
|---|---|---|---|---|
| No VirtuCache | 35 | 8 | 5 | 7 |
| VirtuCache in ‘Write-Back 1 Replica’ mode, caching to RAM | 170 | 58 | 1 | 1 |
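As a quick sanity check, the improvement factors implied by the table above can be computed directly (latencies are taken to be in milliseconds, consistent with the summary below):

```python
# Improvement factors derived from the Iometer results in the table above.
before = {"read_mbps": 35,  "write_mbps": 8,  "read_latency_ms": 5, "write_latency_ms": 7}
after  = {"read_mbps": 170, "write_mbps": 58, "read_latency_ms": 1, "write_latency_ms": 1}

print(f"Read throughput:  {after['read_mbps'] / before['read_mbps']:.1f}x higher")              # ~4.9x
print(f"Write throughput: {after['write_mbps'] / before['write_mbps']:.1f}x higher")            # ~7.2x
print(f"Read latency:     {before['read_latency_ms'] / after['read_latency_ms']:.0f}x lower")   # 5x
print(f"Write latency:    {before['write_latency_ms'] / after['write_latency_ms']:.0f}x lower") # 7x
```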

Summary

More valuable than the stats in the table above is the fact that for real-life customer workloads, VirtuCache keeps VM-level read and write latencies under 5ms at all times, and so it allows latency-sensitive workloads to be deployed on Ontap Select.
