- 12 Dell R940 rack servers running ESXi 6.7.
- 240TB of in-use storage in HPE MSA 1050 and 2050.
- The customer’s main workloads were electronic medical records (EMR) applications, which are write intensive.
- Their X-ray and MRI software moved large volumes of images in small block sizes.
- With over two hundred terabytes of data, these conditions resulted in high latencies from the HPE MSA.
- VirtuCache was installed in each ESXi host, along with one 3.2TB Samsung PM1725 PCIe SSD.
- It then automatically cached VM reads and writes to this 3.2TB SSD. In effect, VirtuCache was caching to roughly 38TB (12 hosts x 3.2TB) of high-speed NVMe SSD across the ESXi cluster.
- As a result, almost all reads and writes were serviced from in-host PCIe SSDs.
- By caching 'hot' data to in-host PCIe SSD, VirtuCache kept VM read and write latencies under 5ms at all times, considerably improving the performance of applications running within the VMs.
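The core idea above, keeping frequently accessed ('hot') blocks on fast local media and evicting cold ones back to shared storage, can be illustrated with a minimal LRU write-back cache sketch. This is a toy model under assumed semantics, not VirtuCache internals; all names and sizes are illustrative.

```python
from collections import OrderedDict

class HotBlockCache:
    """Toy LRU write-back cache: hot blocks live on fast local media
    (a dict here); cold blocks are evicted to slower shared storage.
    Illustrative only -- not how VirtuCache is actually implemented."""

    def __init__(self, capacity_blocks, backing_store):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()       # block_id -> data, in LRU order
        self.backing = backing_store     # slow shared storage (a dict here)

    def read(self, block_id):
        if block_id in self.cache:       # cache hit: serve from fast media
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.backing[block_id]    # cache miss: fetch from slow storage
        self._insert(block_id, data)
        return data

    def write(self, block_id, data):
        # Write-back: the write completes against fast media; the block
        # reaches backing storage only when it is later evicted.
        self._insert(block_id, data)

    def _insert(self, block_id, data):
        self.cache[block_id] = data
        self.cache.move_to_end(block_id)
        if len(self.cache) > self.capacity:
            old_id, old_data = self.cache.popitem(last=False)
            self.backing[old_id] = old_data   # flush evicted block

backing = {i: f"block-{i}" for i in range(10)}
cache = HotBlockCache(capacity_blocks=3, backing_store=backing)
cache.read(1); cache.read(2); cache.read(1)   # block 1 is now "hot"
```

Repeated reads and writes of the same blocks stay on the fast device, which is why a cache far smaller than total storage can still absorb most of the IO.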
The Virtunet Difference

The customer selected VirtuCache over the MSA's internal Caching and Tiering features because:
HPE MSA’s internal caching and tiering features improve only read performance.1 VirtuCache, in contrast, improves the performance of both reads and writes for the MSA, and it does so by caching to SSD or RAM inside the VMware host. There are other differences as well.
SSD in the ESXi host performs better than SSD in the appliance. With VirtuCache, the cache media sits on the motherboard of the VMware host whose CPU consumes the hot data, whereas a storage appliance’s SSD sits behind the shared storage network and storage controllers.
You can cache to host RAM or NVMe / PCIe SSD with VirtuCache. Both options are much higher performing than the Seagate SAS SSDs that MSA’s internal Caching / Tiering uses.
Controller bottleneck results in high latencies for small block IO. HPE uses RAID controller processors even in its highest performing all-flash MSAs. These RAID controller processors are slower than the x86 CPUs that are now the standard controller processors in storage appliances. RAID controller processors make the MSA cheaper, but they also cause it to choke under large amounts of small block IO, because small block IO is CPU intensive.2 VirtuCache, on the other hand, uses ESXi host CPUs for caching operations, not storage appliance CPUs. With access to far more CPU than the MSA, VirtuCache is very effective at accelerating small block IO.
VirtuCache can cache to much larger amounts of SSD. The HPE MSA can cache to only 4TB of SSD per pool, while VirtuCache can cache to 6TB of SSD per host. Because VirtuCache can use much larger SSD capacities, it aims to service almost all storage IO from cache.
1 – Pages 5 and 6 of this document (https://h20195.www2.hpe.com/v2/getpdf.aspx/A00015961ENW.pdf?) state that the MSA has an SSD read cache and a 4GB read+write memory cache. The MSA therefore cannot cache writes to SSD. It does cache writes in the 2GB of RAM on the controller, but this is grossly inadequate: with storage utilization likely in the tens of terabytes, you would need cache capacity in terabytes to achieve a high cache hit ratio.
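The sizing argument in the footnote can be checked with back-of-the-envelope arithmetic. The 10% hot working set below is an assumed figure for illustration; the 240TB and 12 x 3.2TB numbers come from the case study.

```python
# Rough cache-sizing arithmetic for the footnote's claim (illustrative).
in_use_tb = 240                    # in-use storage from the case study
hot_fraction = 0.10                # ASSUMED fraction of data that is "hot"
working_set_tb = in_use_tb * hot_fraction   # 24 TB of hot data

msa_write_cache_tb = 2 / (1024 * 1024)      # 2 GB controller RAM, in TB
virtucache_tb = 12 * 3.2                    # 12 hosts x 3.2 TB PCIe SSD

# Fraction of the hot working set each cache can hold:
msa_coverage = min(1.0, msa_write_cache_tb / working_set_tb)
vc_coverage = min(1.0, virtucache_tb / working_set_tb)
print(f"MSA controller RAM covers {msa_coverage:.6%} of the working set")
print(f"VirtuCache SSDs cover {vc_coverage:.0%} of the working set")
```

Even if the hot fraction is far smaller than assumed here, gigabytes of controller RAM cannot cover a terabyte-scale working set, while the cluster-wide SSD capacity can.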
2 – Three reasons why small block IO is CPU intensive: First, because the blocks are small, VMs can issue small block IO at a much higher rate than large block IO. Second, whether a block is 1MB or 4KB, the storage appliance processor spends the same number of cycles processing it. Third, if the IO is also random, appliance processor usage rises further, since large amounts of metadata must be scanned to read or write random blocks.
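The first two reasons in the footnote combine into simple arithmetic: at a fixed throughput, shrinking the block size multiplies the number of IOs, and each IO costs the controller roughly the same CPU regardless of size. A sketch with an assumed 1 GB/s of traffic:

```python
# At the same throughput, smaller blocks mean proportionally more IOs.
# Per the footnote, each IO costs roughly the same controller CPU
# whether the block is 4KB or 1MB, so IOPS tracks CPU load.
throughput_mb_s = 1024              # ASSUMED 1 GB/s of VM traffic

def iops(block_kb):
    """IOs per second needed to sustain the throughput at this block size."""
    return throughput_mb_s * 1024 // block_kb

print(iops(1024))   # 1 MB blocks -> 1,024 IOPS
print(iops(4))      # 4 KB blocks -> 262,144 IOPS, 256x the controller work
```

This 256x multiplier on controller cycles is why the imaging workload's small-block IO overwhelmed the MSA's RAID controller processors.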