VMware’s vFlash Read Cache (VFRC) versus Virtunet’s Read+Write Caching

The biggest difference is that we cache both reads and writes, while VMware's VFRC caches only reads. Caching writes improves the performance not only of writes but also of reads, even in a read-dominated workload. [1]

There are other differences as well:

  1. With VFRC, the in-host SSD has to be carved out among VMs, with capacity manually assigned to each VMDK. No such per-VM assignment of cache capacity is required with VirtuCache;

  2. With VFRC, if a host fails, or even during manual vMotion, VMs don't migrate unless the target hosts have adequate spare SSD capacity to honor the SSD reservations of incoming VMs. We support VMware HA and DRS without any restrictions;

  3. VFRC supports only SSDs, while VirtuCache supports both SSD and RAM;

  4. VMware View and XenDesktop do not support VFRC; we support both;

  5. With VFRC, an SSD failure interrupts storage IO; not so with VirtuCache.

For a complete list of differences, please review the table below and its accompanying footnotes:

Feature: Caching reads and/or writes
VMware VFRC: Caches only reads. [1]
Virtunet VirtuCache: Caches both reads and writes. [1]

Feature: Administrator overhead
VMware VFRC: Requires manually carving out SSD capacity and attaching it to each VM disk. It also requires the administrator to know the block size of every application and to configure it on a per-VM-disk basis. There is no obvious way for the administrator to know how much SSD capacity to assign to each VM, and determining the application block size for every application is onerous and requires another storage utility. [2]
Virtunet VirtuCache: All frequently used data is automatically cached to the in-host caching media, without the administrator needing to assign SSD capacity or block size on a per-VM basis.

Feature: Support for in-host SSD or in-host RAM
VMware VFRC: Supports only SSD. [2]
Virtunet VirtuCache: Supports SSD, RAM, or a combination of the two. If both RAM and SSD are used for caching on the same host, RAM acts as Tier-1 and SSD as Tier-2 (see the sketch after this table).

Feature: What happens when the SSD fails?
VMware VFRC: All storage IO from the VM fails. [3]
Virtunet VirtuCache: Storage IO does not fail, though VMs go back to being as slow as they were before VirtuCache was deployed.

Feature: What happens when a VMware host fails and VMware High Availability kicks in?
VMware VFRC: VMs with VFRC restart on other hosts only if those hosts have spare SSD capacity to honor the VFRC SSD reservations of the VMs that were on the failed host. [4]
Virtunet VirtuCache: VMware HA and DRS are seamlessly and automatically supported, without any restrictions. VMs restart on other hosts regardless of the VirtuCache and SSD/RAM configuration.

Feature: vMotion
VMware VFRC: When you manually vMotion a VM, you must specify whether to move the cache with the VM or invalidate it beforehand. If you move the cache, the target host needs spare SSD capacity to honor the SSD reservation of the incoming VM, and vMotion takes longer to complete because the entire cache moves with the VM at once. [5]
Virtunet VirtuCache: vMotion is seamlessly and automatically supported, without any restrictions. The cache moves with the VM gradually, as data is requested by the applications running in the VM. This keeps vMotion speed unaffected and ensures that reads and writes are continuously served from local or remote cache (a sketch follows footnote 5 below).

Feature: Support for VDI
VMware VFRC: XenDesktop and Horizon View 6.x and 7.x do not support VFRC. [6]
Virtunet VirtuCache: VDI with both XenDesktop and Horizon View is a big use case for us.

Feature: Caching misaligned blocks
VMware VFRC: This is the most important reason why you might not see improved storage performance. VFRC caches only blocks that are aligned to a 4KB boundary, and most blocks in Windows and Linux guest OSes are not aligned. [7]
Virtunet VirtuCache: VirtuCache caches all blocks, regardless of whether they are aligned.

Feature: Need to specify block size
VMware VFRC: You need to specify the block size for each VM you want to accelerate, namely the size in which the application running within that VM issues IO requests. Assigning the wrong block size degrades storage performance. Finding the block size of each VM's application and then assigning it to each VMDK takes additional effort and another storage utility. [8][9]
Virtunet VirtuCache: VirtuCache automatically discovers the block size of the applications running in VMs and caches blocks in the same size in which the IO requests are issued, even when the application block size varies widely (for example, in databases). There is no need to manually define block size in VirtuCache.

Feature: ESXi editions supported
VMware VFRC: VFRC is available only in the Enterprise Plus edition of ESXi.
Virtunet VirtuCache: VirtuCache is supported on all ESXi editions.
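
The table notes that when RAM and SSD are both used on the same host, RAM acts as Tier-1 and SSD as Tier-2. As a rough illustration, here is a minimal sketch in Python; the LRU replacement policy, the tier sizes, and the promotion-on-hit rule are assumptions for illustration, not a description of VirtuCache's actual algorithm.

```python
from collections import OrderedDict

class TieredCache:
    """Two-tier read cache sketch: RAM is Tier-1, SSD is Tier-2."""

    def __init__(self, ram_blocks, ssd_blocks):
        self.ram = OrderedDict()    # Tier-1: smallest, fastest
        self.ssd = OrderedDict()    # Tier-2: larger, still far faster than the array
        self.ram_blocks = ram_blocks
        self.ssd_blocks = ssd_blocks

    def read(self, block, fetch_from_backend):
        if block in self.ram:                 # Tier-1 hit
            self.ram.move_to_end(block)
            return self.ram[block]
        if block in self.ssd:                 # Tier-2 hit: promote to RAM
            data = self.ssd.pop(block)
        else:                                 # miss: fetch from the storage array
            data = fetch_from_backend(block)
        self._insert_ram(block, data)
        return data

    def _insert_ram(self, block, data):
        self.ram[block] = data
        if len(self.ram) > self.ram_blocks:   # demote RAM's LRU block to SSD
            old_block, old_data = self.ram.popitem(last=False)
            self.ssd[old_block] = old_data
            if len(self.ssd) > self.ssd_blocks:
                self.ssd.popitem(last=False)  # evict SSD's LRU block entirely

# Usage:
cache = TieredCache(ram_blocks=2, ssd_blocks=4)
print(cache.read(42, lambda b: f"block-{b} from the array"))
```

The point of the hierarchy is simply that the hottest blocks are served at RAM latency while the SSD holds the larger warm set.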

[1] VFRC is write-through caching software; VirtuCache is write-back caching software. Write-through caching accelerates only reads, while write-back caching accelerates both reads and writes. VFRC is not the only read caching software; there are other vendors as well, like Infinio, Atlantis Computing, and Datrium. These vendors' main arguments for why read caching suffices are, first, that most traditional IT workloads are read dominated, and second, that by caching only reads, the storage appliance is freed up to better service writes. Both statements are true, but neither is a strong enough argument for not caching writes. In fact, not caching writes also slows down reads. Here is the reason: in all real-world applications, reads and writes are interspersed on the same thread. Since this software does not cache writes, pending writes in the queue back up even the read requests behind them on the same thread. So even though those reads could, by themselves, be accelerated by read caching software, they are slowed down because they are interleaved with writes on the same thread. Keep in mind that a synthetic testing tool like IOmeter can be configured to put reads and writes on separate threads, in which case reads are accelerated independently of writes, but this is not a real-life use case.
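
To make the distinction concrete, here is a minimal sketch of the two policies in Python. The latency constants and the backend_write stand-in are assumptions for illustration, not any vendor's code.

```python
import time

BACKEND_LATENCY = 0.005   # assumed: ~5 ms round trip to the storage array
CACHE_LATENCY = 0.0001    # assumed: ~0.1 ms to in-host SSD or RAM

def backend_write(block, data):
    """Stand-in for a write to the backend storage appliance."""
    time.sleep(BACKEND_LATENCY)

class WriteThroughCache:
    """Write-through: the write is acknowledged only once the slow backend has it."""
    def __init__(self):
        self.cache = {}

    def write(self, block, data):
        self.cache[block] = data    # populate the cache for later reads
        backend_write(block, data)  # ...but the application still waits on the backend
        return "ack"                # ack arrives at backend latency

class WriteBackCache:
    """Write-back: the write is acknowledged as soon as the fast local cache has it."""
    def __init__(self):
        self.cache = {}
        self.dirty = set()          # blocks not yet flushed to the backend

    def write(self, block, data):
        time.sleep(CACHE_LATENCY)
        self.cache[block] = data
        self.dirty.add(block)
        return "ack"                # ack arrives at cache latency

    def flush(self):                # runs asynchronously in a real implementation
        for block in list(self.dirty):
            backend_write(block, self.cache[block])
        self.dirty.clear()
```

Because the write-back acknowledgement returns at cache latency, a read queued behind that write on the same thread is released that much sooner, which is the effect described above.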

Secondly, it so happens that write-intensive applications are also the ones of highest value to the customer, as is the case with transaction processing, ERP, healthcare EMR, order processing, and similar applications. Simply put, entering purchase orders (a write-intensive job) is higher value than running a report on historical purchase orders (a read-intensive job).

Hence both reads and writes need to be accelerated.

[2] https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.storage.doc/GUID-211CE783-51A8-4B65-BFF6-434D1C272308.html

[3] Search for the term 'Flash Read Cache is faulty or inaccessible' at https://www.vmware.com/support/vsphere6/doc/vsphere-esx-vcenter-server-60-release-notes.html

[4] https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.storage.doc/GUID-359B7C9F-DC7C-4E63-A36B-E0D8C92B1B10.html

[5] https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.storage.doc/GUID-14B65E14-5179-4E39-8D59-4986885F693D.html
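
Footnote 5 covers VFRC moving the entire cache at once during vMotion. The gradual alternative described in the vMotion row of the table can be sketched as an on-demand copy; this is a minimal illustration, and the remote-cache lookup below is an assumption, not VirtuCache's actual protocol.

```python
class PostMigrationCache:
    """Lazy cache migration sketch: after vMotion, the destination host's cache
    starts empty and fills from the cache left behind on the source host, one
    block at a time, as the VM's applications actually request data."""

    def __init__(self, remote_cache):
        self.local = {}              # cache on the destination host, initially empty
        self.remote = remote_cache   # cache left behind on the source host

    def read(self, block, fetch_from_backend):
        if block in self.local:                  # already migrated on demand
            return self.local[block]
        if block in self.remote:                 # still on the source host
            data = self.remote.pop(block)        # copy over only when requested
        else:
            data = fetch_from_backend(block)     # cold miss: go to the array
        self.local[block] = data
        return data
```

Spreading the copy over time in this way is what keeps vMotion itself fast while reads keep hitting either the local or the remote cache.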

[6] Search for the term 'Horizon 7 does not support vSphere Flash Read Cache' at https://docs.vmware.com/en/VMware-Horizon-7/7.1/rn/horizon-71-view-release-notes.html

[7] Search for the term 'Flash Read Cache filters out misaligned blocks' in the release notes linked below. The release note is for 5.5; however, this aspect of the architecture hasn't changed in subsequent releases, as can be verified by running IOmeter with misaligned blocks. Also, this note from VMware incorrectly states that blocks are aligned to 4KB boundaries in newer versions of Windows and Linux; they are not, as can easily be cross-referenced through various web articles on the topic. https://www.vmware.com/support/vsphere5/doc/vsphere-esx-vcenter-server-55-release-notes.html
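
For reference, 4KB alignment is just a check on an IO's starting offset. The filter below illustrates what an aligned-only cache admits; it is not VMware's code.

```python
BLOCK_BOUNDARY = 4096  # 4KB

def is_aligned(offset_bytes, boundary=BLOCK_BOUNDARY):
    """An IO is aligned if its starting offset falls on the boundary."""
    return offset_bytes % boundary == 0

def aligned_only_filter(io_requests):
    """Keep only the IOs that an aligned-only read cache would consider."""
    return [io for io in io_requests if is_aligned(io["offset"])]

# A guest issuing IO at offsets 512, 4096, 6144, and 8192 would see only
# the 4096 and 8192 requests admitted by an aligned-only cache:
requests = [{"offset": o, "length": 4096} for o in (512, 4096, 6144, 8192)]
print([io["offset"] for io in aligned_only_filter(requests)])  # [4096, 8192]
```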

[8] Search for the term 'This should be made to be the same size as the most commonly used block of the application running inside the Guest OS.' at https://cormachogan.com/2014/02/14/a-closer-look-at-vsphere-flash-read-cache-vfrc/

[9] Search for the term 'Setting the correct block size' in the white paper linked below. Even though this white paper is for 5.5, it is still valid for later ESXi releases; there have been no VFRC white paper updates since 5.5. https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/vmware-vfrc-performance-vsphere55-white-paper.pdf
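
Discovering the block size automatically, as described in the table, amounts to observing the sizes of the IO requests a VM actually issues. Here is a minimal sketch, assuming a trace of observed request sizes is available; this is not VirtuCache's actual detection code.

```python
from collections import Counter

def dominant_block_sizes(io_sizes_bytes, top=3):
    """Return the most common IO request sizes in an observed trace.

    A caching layer that sizes its cache blocks from such a trace avoids
    the manual per-VMDK block-size setting described in footnotes 8 and 9.
    """
    return Counter(io_sizes_bytes).most_common(top)

# Example: a database VM that mostly issues 8KB IO with occasional 64KB scans
trace = [8192] * 900 + [65536] * 80 + [4096] * 20
print(dominant_block_sizes(trace))  # [(8192, 900), (65536, 80), (4096, 20)]
```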