
Versus Hyper-Converged Infrastructure, SSD Arrays, and Other Host Side Caching Vendors

How does VirtuCache compete with other host side caching software vendors?

We compete with PernixData FVP, Infinio Accelerator, and VMware VFRC.

Versus PernixData FVP (click on link for details). [End-of-lifed in 2018] We are most similar to PernixData: both products cache reads and writes, both have data protection strategies in place to protect against data loss in case of multiple simultaneous host failures, and neither requires networking or storage to be reconfigured.

Versus Infinio Accelerator (click on link for details). Infinio accelerates only reads; we accelerate both reads and writes. Infinio doesn’t support Linked Clones or non-persistent VDI; we support all VDI features in XenDesktop and Horizon. Infinio runs in ESXi userspace whereas we run in the ESXi kernel, so we can accelerate both VM and kernel IO while Infinio can accelerate only VM IO. Because we are in the VMware kernel, we also deliver lower latency for the same caching media.

Versus VMware VFRC (click on link for details). [End-of-lifed in ESXi 7.0] VFRC accelerates only reads; we accelerate reads and writes. We can cache to RAM and/or SSD; VFRC can use only SSD. VFRC also carries considerable administrative overhead: SSD capacity has to be manually assigned to each VM, there are restrictions on vMotion, and an SSD failure results in IO interruption.

 

How does VirtuCache compete with hyper-converged infrastructure vendors?

VirtuCache offers the single best feature of each of the competing architectures, HCI and converged infrastructure. Like HCI, VirtuCache puts high speed storage back in the ESXi hosts, which results in great storage performance. Like a traditional SAN based architecture, VirtuCache allows storage and compute to be scaled and maintained independently of each other, something that HCI fundamentally cannot offer.

Other differentiators between HCI and VirtuCache are listed below.

  1. VirtuCache requires no ongoing management, installs in minutes, and doesn’t even require maintenance mode to install. HCI, on the other hand, is more complicated to deploy and will most likely require a full hardware refresh for compute and storage. If caching to host RAM, VirtuCache requires no new hardware; if caching to SSD, it requires only an SSD in each host.
  2. VirtuCache recovers from failures much faster than HCI. In HCI, because the same motherboard is responsible for both storage and compute operations, host failure or maintenance mode results in degraded storage, and vice versa. For instance, host shutdown takes longer since all the local data needs to be redistributed to other hosts, and an SSD failure might take the whole host down. With host side caching, the in-host cache holds transient data only and the backend storage array remains the system of record, so recovery times after a host or SSD failure are only marginally longer (see the sketch after this list). More details on how failure situations are handled, including the storage IO path in VirtuCache and VSAN, are at this link.

  3. Less vendor lock-in with VirtuCache. Converged infrastructure has always had less vendor lock-in than HCI because compute connects to storage using standards based protocols like iSCSI or FC; as a result, a storage array or server vendor can be replaced with another. Installing host side caching doesn’t change this dynamic. Conversely, in HCI, a vendor specific protocol is used to connect locally attached storage to ESXi, so you are locked in to the HCI vendor for both storage and compute.

  4. If VirtuCache uses host RAM, it will be much faster than any HCI, since no HCI solution is capable of using RAM for storage IO acceleration.
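
To make point 2 concrete, below is a minimal Python sketch of why losing a host-side cache device costs performance but not data. It is an illustration only, not VirtuCache's actual implementation, and it models the simpler write-through case; as noted above, VirtuCache also caches writes and protects them against host failure. The class, method, and block names are hypothetical.

```python
# Illustrative sketch (not VirtuCache code): the backend array remains the
# system of record, so the host-side cache can be lost without losing data.

class HostSideCache:
    def __init__(self, backend):
        self.backend = backend      # dict standing in for the SAN array
        self.cache = {}             # dict standing in for host RAM/SSD cache

    def write(self, block, data):
        self.backend[block] = data  # write-through: the array is updated...
        self.cache[block] = data    # ...and the block is also kept in cache

    def read(self, block):
        if block in self.cache:     # cache hit: served from local RAM/SSD
            return self.cache[block]
        data = self.backend[block]  # cache miss: fetch from the array
        self.cache[block] = data    # admit the block on first access
        return data

    def cache_device_failed(self):
        self.cache = {}             # drop the cache: only speed is lost


backend = {}                        # the array's contents
host = HostSideCache(backend)
host.write("blk-1", b"payload")
host.cache_device_failed()          # simulate an SSD failure on the host
assert host.read("blk-1") == b"payload"   # still served from the array
```

Because the array still holds every acknowledged write, recovery after an SSD or host failure is limited to re-warming the cache, which is why the recovery times mentioned in point 2 are only marginally longer.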

How does VirtuCache compete with SSD based storage arrays?

VirtuCache competes with caching/tiering functionality in hybrid storage arrays and all-flash arrays.

The comparison below covers four approaches: tiering in a hybrid array (e.g. HP 3PAR, HP VSA), caching in a hybrid array (e.g. HP Nimble Storage, EMC VNX), an all flash array, and VirtuCache.

RAM:
  - Storage arrays (tiering, caching, or all-flash): RAM is not used in storage arrays; only SSDs are used.
  - VirtuCache: If VirtuCache is configured to use host RAM, it will be the highest performing storage solution on the market, since RAM is higher performing than any SSD.

SSD latencies:
  - Storage arrays: SSDs sit behind the storage controller and the network, so SSD latencies will be higher in a storage array than if the same SSD were in the host, as is the case with VirtuCache.
  - VirtuCache: The SSD is in each ESXi host, connected to the host CPU via a dedicated (not shared) SATA/SAS/PCIe bus. The PCIe/NVME bus is especially low latency, with bus speeds of 128Gbps.

Accelerates writes:
  - Tiering in hybrid array: No.
  - Caching in hybrid array: Depends on the array. Even if the appliance caches writes, it does so only to small amounts of NVRAM (4GB/8GB).
  - All flash array: Yes.
  - VirtuCache: Yes. Larger volumes of writes can be cached with VirtuCache than with a hybrid storage array.

Accelerates reads:
  - Tiering in hybrid array: Yes, though tiering is not real-time; frequently accessed data is moved to the SSD tier over time. Tiering is good for predictable, repetitive workload patterns, but not for random, bursty workloads.
  - Caching in hybrid array: Depends on the array; some cache in real-time, others don't.
  - All flash array: Yes.
  - VirtuCache: Yes. If a block is accessed once, we move it to cache immediately (in real-time), because we assume it is more likely than other blocks to be accessed again.

Support for the bursty, high volume, random workload that is typical in VMware:
  - Tiering in hybrid array: Tiering is not real-time, so for workloads where blocks are suddenly read by VMs in millisecond bursts, the data might still be on HDDs while those bursty reads come and go, since blocks need to be accessed repeatedly over a longer period (days) before tiering algorithms move them to SSDs.
  - Caching in hybrid array and all flash array: Yes, though lower end hybrid and all-flash appliances might get CPU constrained. In this type of workload, the storage controller CPUs need to churn through the data in the SSDs at a fast clip; if the appliance has only two low core count CPUs, for instance, the storage controllers might get CPU bound.
  - VirtuCache: VirtuCache distributes cache processing across hosts and uses the CPUs of the ESXi hosts. As a result it has access to a lot more CPU than a storage appliance does, so VirtuCache is not CPU constrained.

Administrator overhead:
  - Tiering in hybrid array: The admin needs to set the tiering schedule and aggressiveness, and some understanding of the workload pattern is required.
  - Caching in hybrid array: None.
  - All flash array: None.
  - VirtuCache: None.
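
The difference between tiering and real-time caching in the rows above can be made concrete with a short sketch. The code below is a deliberately simplified illustration, assuming a fixed promotion threshold and a periodic tiering scan; it is not any vendor's actual algorithm, and all names are hypothetical.

```python
# Contrast between tiering-style promotion (a block moves to SSD only after
# repeated reads and a periodic scan) and real-time caching (a block is
# admitted to cache on first access). Simplified illustration only.

from collections import Counter

PROMOTE_THRESHOLD = 3                 # reads needed before tiering promotes a block


class TieringArray:
    def __init__(self):
        self.ssd_tier = set()
        self.read_counts = Counter()

    def read(self, block):
        hit = block in self.ssd_tier  # cold blocks miss until the scan runs
        self.read_counts[block] += 1
        return hit

    def run_tiering_scan(self):       # runs periodically (hours or days apart)
        for block, count in self.read_counts.items():
            if count >= PROMOTE_THRESHOLD:
                self.ssd_tier.add(block)


class RealTimeCache:
    def __init__(self):
        self.cache = set()

    def read(self, block):
        hit = block in self.cache
        self.cache.add(block)         # admitted immediately on first access
        return hit


# A millisecond burst of reads against previously cold blocks:
burst = [f"blk-{i}" for i in range(5)] * 2    # each cold block read twice, quickly

tiering, caching = TieringArray(), RealTimeCache()
tiering_hits = sum(tiering.read(b) for b in burst)   # 0 hits: no scan has run yet
caching_hits = sum(caching.read(b) for b in burst)   # 5 hits: every second read hits
print(tiering_hits, caching_hits)                    # -> 0 5
```

The burst is over before the tiering scan ever runs, which is the point the burst-handling row above makes: real-time admission serves the second and later reads of each block from cache, while tiering still serves them from HDD.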

$/GB and $/IOPS comparison with Hybrid and All-Flash Storage Arrays.

The comparison below assumes that 8 ESXi hosts are connected to a storage array.

Prices are from 2021.

Hybrid Array: 100TB (usable) HDD storage with 2 controllers, each controller with 8TB tiering/caching SSDs.
  - Cost: $150,000
  - $/GB of storage capacity: $1.5/GB
  - IOPS: 400,000
  - $/IOPS: $0.38

All Flash Array: 100TB (usable) SAS/NVME SSD storage.
  - Cost: $250,000
  - $/GB of storage capacity: $2.5/GB
  - IOPS: 1,200,000
  - $/IOPS: $0.21

VirtuCache: 100TB (usable) HDD only storage array + VirtuCache with a 2TB NVME SSD in each of the 8 ESXi hosts.
  - Cost: $75,000
  - $/GB of storage capacity: $0.8/GB
  - IOPS: 2,400,000 (each NVME SSD does 300K IOPS, and there are 8 SSDs spread across 8 hosts, so 8 x 300K IOPS = 2.4M IOPS)
  - $/IOPS: $0.03

The comparison above shows that pairing VirtuCache (caching to an NVME SSD) with an inexpensive enterprise grade hard drive based SAN array, such as Dell Compellent, EMC Unity, HP MSA, Seagate, or Synology, yields the lowest $/GB of capacity and the lowest $/IOPS of performance.
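
As a sanity check, the $/GB and $/IOPS figures above follow from simple division over the listed numbers. The sketch below reproduces them; straight division gives $0.75/GB for the VirtuCache option, which the comparison rounds to $0.8/GB.

```python
# Reproducing the $/GB and $/IOPS figures from the 2021 prices listed above.
# Assumes decimal units (1 TB = 1000 GB), which matches the listed $/GB values.

USABLE_TB = 100
GB_PER_TB = 1000

options = {
    "Hybrid Array":            {"cost": 150_000, "iops": 400_000},
    "All Flash Array":         {"cost": 250_000, "iops": 1_200_000},
    # VirtuCache: 8 hosts x 300K IOPS per NVME SSD = 2.4M IOPS
    "HDD Array + VirtuCache":  {"cost": 75_000,  "iops": 8 * 300_000},
}

for name, o in options.items():
    per_gb = o["cost"] / (USABLE_TB * GB_PER_TB)
    per_iops = o["cost"] / o["iops"]
    print(f"{name}: ${per_gb:.2f}/GB, ${per_iops:.2f}/IOPS")

# Output:
#   Hybrid Array: $1.50/GB, $0.38/IOPS
#   All Flash Array: $2.50/GB, $0.21/IOPS
#   HDD Array + VirtuCache: $0.75/GB, $0.03/IOPS
```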
