
VMware’s vFlash Read Cache (VFRC) versus Virtunet’s Read+Write Caching


VMware has discontinued VFRC in ESXi 7.x.0

Despite the end-of-life for VFRC, if you still want to review the differences between VFRC and VirtuCache, below are the four most important ones.

  1. We cache reads (‘Write-Through’ Caching) and writes (‘Write-Back’ Caching); VMware’s VFRC caches only reads.

  2. We improve the performance of any and all VMs (server and VDI). VFRC doesn’t support VDI.

  3. We require no ongoing administration. Caching in our case is fully automated, and all VMware features are seamlessly supported. VFRC, by contrast, requires administrator intervention for vMotion, for creating a new VM, for maintenance mode, and for VM restores from backup; it also requires knowledge of application block sizes and SSD capacity assignment per virtual disk, and many other tasks need admin oversight.

  4. We provide VM, cache, network, and storage appliance level metrics for throughput, IOPS, and latencies, and alerts to forewarn of failure events. VFRC doesn’t.

Below is a longer list of differences, cross-referenced with VMware-authored content. For each feature, VMware VFRC’s behavior is listed first, followed by Virtunet VirtuCache’s:

Caches Reads / Writes
VFRC: Caches only reads.
VirtuCache: Caches both reads and writes. Caching writes improves the performance of not only writes but also reads.1

Support for in-host SSD or RAM
VFRC: Can cache only to SSD.2
VirtuCache: Can cache to SSD, RAM, or a combination of the two.

Administrator Overhead
VFRC: Requires manually carving out and attaching SSD capacity to each VM disk. Also requires the admin to know the block sizes of all applications and configure them on a per-VM-disk basis.2
VirtuCache: SSD/RAM is assigned to VirtuCache at the host level (not the VM level). VirtuCache automatically detects application block size and dynamically allocates cache capacity per VM. There is no ongoing admin overhead.

What happens when the SSD fails?
VFRC: VMs stop working, since all storage operations from the VMs fail.3
VirtuCache: VMs continue to work, though they go back to being as slow as they were before VirtuCache was deployed. There is also no data loss when the SSD fails.

What happens when a host fails and HA kicks in?
VFRC: VMs with VFRC restart on other hosts only if those hosts have spare SSD capacity to honor the VFRC SSD reservations of the VMs that were on the failed host.4
VirtuCache: VMware HA is seamlessly and automatically supported, without any restrictions.

vMotion and DRS
VFRC: When you manually vMotion a VM, you must specify whether to move the cache with the VM or invalidate it before the vMotion. If you choose to move the cache, the target host needs spare SSD capacity to honor the SSD reservation of the incoming VM, and the vMotion takes longer to complete, since the entire cache moves with the VM at once.5 DRS is supported only for mandatory reasons like maintenance mode, and just as with vMotion, the VM is moved only to a host that has spare SSD capacity to honor its SSD reservation; otherwise the VM does not move.6
VirtuCache: vMotion and DRS are seamlessly and automatically supported, without any restrictions. The read cache moves with the VM gradually, as and when data is requested by the applications running in the VM; this keeps vMotion speed unaffected and ensures that reads are continuously served from host cache. VirtuCache automatically syncs the write cache on the source host with the backend storage when a vMotion is initiated, and once the VM is on the destination host, VirtuCache automatically starts caching writes to the cache media in the new host (see the sketch after this table). More details on how VirtuCache supports live/cold migration of storage, compute, and vswitch across hosts, clusters, datacenters, sites, vCenters, and datastores are on this link.

Support for VDI
VFRC: Does not work with XenDesktop or Horizon View.7
VirtuCache: VDI (XenDesktop, Horizon View, Windows Terminal Servers, XenApp) is a big use case for us.

Caching misaligned blocks
VFRC: This is the most important reason why you might not see improved storage performance. VFRC caches only blocks that are aligned to a 4KB boundary, and most storage IO in Windows and Linux guest OSes is not aligned.8
VirtuCache: Caches all blocks, regardless of whether they are aligned.

Need to specify block size
VFRC: You need to find out the block size in which the application (running within the VM) issues storage IO requests, and then assign that block size to the corresponding VMDK (that the application is installed on). You must do this for each VM you want to accelerate with VFRC, which takes considerable extra effort.9,10
VirtuCache: Automatically figures out the block size for all applications running in VMs and caches blocks in the same size that the IO requests are issued in, even when the application block size varies widely (for example, in databases). There is no need, and no way, to manually define block size in VirtuCache.

ESXi editions supported
VFRC: Available in the Enterprise Plus edition only.
VirtuCache: Supported on all ESXi editions.
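
Below is the sketch referenced in the vMotion and DRS row above: a minimal Python model of how a host-side cache can hand off during a live migration, flushing dirty Write-Back data to the array before the move and letting the destination host’s read cache warm lazily. All names (Backend, HostCache, vmotion_handoff) are hypothetical; this is a conceptual illustration, not VirtuCache’s actual implementation.

```python
# Conceptual sketch only -- not VirtuCache's code. All names are hypothetical.

class Backend:
    """Stand-in for the shared storage array."""
    def __init__(self):
        self.store = {}

    def write(self, block_id, data):
        self.store[block_id] = data

    def read(self, block_id):
        return self.store.get(block_id)

class HostCache:
    def __init__(self):
        self.clean = {}   # read-cached blocks; safe to discard
        self.dirty = {}   # Write-Back blocks not yet on the array

    def flush(self, backend):
        """Sync pending writes to backend storage (step 1 of the hand-off)."""
        for block_id, data in self.dirty.items():
            backend.write(block_id, data)
        self.dirty.clear()

    def read(self, block_id, backend):
        """Read-miss path: warms the cache lazily, one requested block at a time."""
        if block_id not in self.clean:
            self.clean[block_id] = backend.read(block_id)
        return self.clean[block_id]

def vmotion_handoff(src, dst, backend):
    # 1. Flush the source host's write cache so the array is authoritative.
    src.flush(backend)
    # 2. Deliberately do NOT copy the read cache; the destination cache
    #    fills as the VM re-reads blocks, so vMotion time is unaffected.
    return dst  # the VM now runs against dst, which warms on demand

# Usage: after the hand-off, reads on the destination repopulate the cache.
backend, src, dst = Backend(), HostCache(), HostCache()
src.dirty["blk7"] = b"pending write"
cache = vmotion_handoff(src, dst, backend)
print(cache.read("blk7", backend))  # b'pending write' -- served via the array
```

The lazy warm-up is what keeps vMotion duration unchanged, in contrast to VFRC’s all-at-once cache move.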

 

Cross-References:

0 – Refer to this link on vmware.com that says that VFRC is deprecated in vSphere 7.0.

1 – VFRC is Write-Through software; VirtuCache is Write-Back caching software. Write-Through caching accelerates only reads, while Write-Back caching accelerates both reads and writes. VFRC is not the only read caching software; other vendors, such as Infinio, JetStream Software, and Datrium, also cache only reads. Their main arguments for why read caching suffices are that most traditional IT workloads are read dominated, and that by caching only reads, the storage appliance is freed up to better service writes. Both statements, though true, are not strong enough arguments against caching writes. In fact, not caching writes also slows down reads. The reason is that in all real-world applications, reads and writes are interspersed on the same thread. Since these products do not cache writes, pending writes in the queue slow down the read requests behind them on the same thread. So even though those reads could by themselves be accelerated by read caching software, they too are slowed down because they are interspersed with writes on the same thread.

Secondly, write-intensive applications also tend to be the ones of highest value to the customer, as is the case with transaction processing, ERP, healthcare EMR, order processing, and other such applications. As a simple example, entering purchase orders (a write-intensive job) is likely a higher-value operation than running a report on historical purchase orders (a read-intensive job).

Hence both reads and writes need to be accelerated.
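
To make the distinction concrete, here is a minimal Python sketch of the two policies, assuming a simple key-value model of the cache and a deliberately slow backend. It is illustrative only, not any vendor’s implementation.

```python
# Minimal sketch of Write-Through vs. Write-Back caching. Illustrative only.
import time

class Backend:
    """Stand-in for a slow storage array."""
    def __init__(self):
        self.store = {}
    def read(self, key):
        time.sleep(0.01)           # simulate array latency
        return self.store.get(key)
    def write(self, key, value):
        time.sleep(0.01)
        self.store[key] = value

class WriteThroughCache:
    """Reads may hit the cache, but every write waits on the array."""
    def __init__(self, backend):
        self.backend, self.cache = backend, {}
    def read(self, key):
        if key not in self.cache:
            self.cache[key] = self.backend.read(key)
        return self.cache[key]
    def write(self, key, value):
        self.backend.write(key, value)   # synchronous: the thread stalls here,
        self.cache[key] = value          # delaying reads queued behind it

class WriteBackCache:
    """Writes are acknowledged from fast media and flushed to the array later."""
    def __init__(self, backend):
        self.backend, self.cache, self.dirty = backend, {}, set()
    def read(self, key):
        if key not in self.cache:
            self.cache[key] = self.backend.read(key)
        return self.cache[key]
    def write(self, key, value):
        self.cache[key] = value          # fast acknowledgment from cache media
        self.dirty.add(key)              # flushed asynchronously later
    def flush(self):
        for key in self.dirty:
            self.backend.write(key, self.cache[key])
        self.dirty.clear()

backend = Backend()
wb = WriteBackCache(backend)
wb.write("k", "v")          # returns immediately
print(wb.read("k"))         # 'v' served from cache
wb.flush()                  # dirty data synced to the array later
```

The synchronous write in the Write-Through path is where reads interspersed on the same thread queue up behind pending writes, which is the effect described above.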

2 – Search for ‘host-resident flash devices as a cache’ on this link on vmware.com.

3 – Search for the term ‘Flash Read Cache is faulty or inaccessible’ on this link on VMware.com.

4 – This link on VMware.com says this about the cluster – ‘If unreserved flash is insufficient to meet the virtual flash reservation, vSphere HA does not restart a virtual machine.’
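
A hypothetical sketch of that admission check: HA can restart a VFRC-enabled VM only on a host whose unreserved flash covers the VM’s vFlash reservation. Names and numbers below are illustrative.

```python
# Illustrative admission check based on the VMware doc quoted above.
def can_restart(vm_flash_reservation_gb, host_flash_free_gb):
    """Return True if the host can honor the VM's vFlash reservation."""
    return host_flash_free_gb >= vm_flash_reservation_gb

hosts_free_flash = {"esx02": 20, "esx03": 5}   # GB of unreserved flash per host
vm_reservation = 10                            # GB reserved by the failed VM
eligible = [h for h, free in hosts_free_flash.items()
            if can_restart(vm_reservation, free)]
print(eligible)  # ['esx02'] -- esx03 cannot honor the reservation
```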

5 – This link on VMware.com says: ‘If you plan to migrate Flash Read Cache contents, configure a sufficient virtual flash resource on the destination host.’

6 – This VMware link mentions that VM with VFRC has soft affinity to the host and there has to be spare SSD capacity in the cluster for DRS to work for VFRC enabled VMs.

7 – Search for the term ‘Horizon 7 does not support vSphere Flash Read Cache’ on this link on VMware.com.

8 – Search for the term ‘Flash Read Cache filters out misaligned blocks’ in the release notes link here. The release note is for 5.5, however, this aspect of the architecture hasn’t changed in subsequent releases, as can be proved by running IOmeter with misaligned blocks. Also, this note from VMware incorrectly states that blocks are aligned to 4KB boundaries in newer versions of Windows and Linux. They are not, as can be easily cross-referenced through various web articles on this topic.
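
For illustration, here is a small Python sketch of what such an alignment filter amounts to, alongside the ‘cache everything’ alternative. The function names are hypothetical.

```python
# Sketch of a 4 KB alignment filter as described in the release note above.
BLOCK_ALIGN = 4096  # bytes

def is_aligned(offset, length, align=BLOCK_ALIGN):
    """An IO is 'aligned' if both its offset and size fall on 4 KB boundaries."""
    return offset % align == 0 and length % align == 0

# VFRC-style policy: misaligned IO bypasses the cache and goes to the array.
def vfrc_cacheable(offset, length):
    return is_aligned(offset, length)

# VirtuCache-style policy (per the table above): cache every block regardless.
def virtucache_cacheable(offset, length):
    return True

print(vfrc_cacheable(8192, 4096))   # True  -- aligned, so it is cached
print(vfrc_cacheable(512, 4096))    # False -- misaligned, bypasses the cache
```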

9 – Search for the term ‘This should be made to be the same size as the most commonly used block of the application running inside the Guest OS.’ on this link.

10 – Search for the term ‘Setting the correct block size’ on this link on VMware.com. Even though this white paper is for 5.5, it is still valid for later ESXi releases; there have been no VFRC white paper updates since 5.5.
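
As an illustration of the alternative, here is a hypothetical Python sketch of inferring block size from observed IO rather than configuring it per VMDK. This is a conceptual model, not VirtuCache’s actual algorithm.

```python
# Hypothetical sketch -- not VirtuCache's actual algorithm. Shows how a cache
# can observe IO request sizes instead of requiring per-VMDK configuration.
from collections import Counter

class AdaptiveCache:
    def __init__(self):
        self.sizes = Counter()   # histogram of observed request sizes
        self.cache = {}          # keyed by (offset, size): no fixed block size

    def on_io(self, offset, size, data):
        self.sizes[size] += 1               # learn the workload's block sizes
        self.cache[(offset, size)] = data   # cache in the size the IO was issued

    def dominant_block_size(self):
        """Most common IO size seen so far (e.g. for reporting)."""
        return self.sizes.most_common(1)[0][0]

# A database issuing mixed 8K/64K IO needs no manual block-size setting:
c = AdaptiveCache()
for off, sz in [(0, 8192), (8192, 8192), (65536, 65536), (16384, 8192)]:
    c.on_io(off, sz, b"x" * sz)
print(c.dominant_block_size())  # 8192
```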
