Versus Other Host-Side Caching Vendors and SSD-Based Arrays

We compete primarily with all-flash arrays and with controller-based caching or tiering in hybrid arrays. We also compete with other host-side caching software vendors.

How does VirtuCache compete with SSD-based storage arrays?

| Parameter | Tiering in Hybrid Array (e.g. HP 3PAR, HP VSA) | Caching in Hybrid Array (e.g. HP Nimble Storage, EMC VNX) | All-Flash Array | VirtuCache |
|---|---|---|---|---|
| RAM | RAM is not used in storage arrays; only SSDs are used. | RAM is not used in storage arrays; only SSDs are used. | RAM is not used in storage arrays; only SSDs are used. | VirtuCache can cache to RAM as well as SSD. Because RAM is faster than any NVMe SSD, caching to RAM makes VirtuCache the highest performing solution on the market. |
| SSD latencies | SSDs are behind the storage controller and network, so SSD latencies are higher than if the same SSD were in the host. | Same: SSDs sit behind the storage controller and network. | Same: SSDs sit behind the storage controller and network. | The SSD is in each ESXi host, connected to the host CPU over a dedicated (non-shared) SATA/SAS/PCIe bus. PCIe/NVMe SSDs are especially low latency, with bus speeds of 128Gbps. |
| Accelerates writes | No. | Depends on the array. Even when the appliance caches writes, it does so only to a small amount of NVRAM (4GB/8GB). | Yes. | Yes. Much larger volumes of writes can be cached with VirtuCache than with a hybrid storage array. |
| Accelerates reads | Yes, though tiering is not real-time: frequently accessed data is moved to the SSD tier over time. Tiering suits predictable, repetitive workload patterns, not random, bursty workloads. | Some arrays cache in real time, others don't. | Yes. | Yes. If a block is accessed once, VirtuCache moves it to cache immediately (in real time), on the assumption that it is more likely to be accessed again than other blocks. |
| Support for the bursty, high-volume, random workloads typical in VMware | Tiering is not real-time, so for workloads where new blocks are suddenly accessed in millisecond bursts, data may still be on HDDs before the tiering algorithm has a chance to move it to SSDs. | Yes, though lower-end hybrid appliances might get CPU constrained: in this type of workload the storage controller CPUs need to churn through data on the SSDs at a fast clip, and an appliance with only two low-core-count CPUs, for instance, can become CPU bound. | Yes, though lower-end all-flash appliances can likewise become CPU bound at the storage controllers. | Yes. VirtuCache distributes cache processing across hosts and uses the CPUs of the ESXi hosts, so it has access to far more CPU than a storage appliance and does not become CPU constrained. |
| Administrator overhead | The admin needs to set the tiering schedule and aggressiveness, which requires some understanding of the workload pattern. | No. | No. | No. |
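
To make the real-time caching behavior described in the table concrete, here is a minimal, hypothetical sketch (not VirtuCache's actual implementation) of a read path that admits any block into an LRU cache on its first access, rather than migrating hot data to a faster tier later on a schedule, the way tiering does.

```python
from collections import OrderedDict

class RealTimeReadCache:
    """Toy LRU read cache: a block is admitted on its first access (real time),
    instead of being migrated to a faster tier later on a schedule."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()  # block_id -> data, kept in LRU order

    def read(self, block_id, read_from_backend):
        if block_id in self.cache:              # hit: serve from the cache device
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = read_from_backend(block_id)      # miss: go to the backend array/HDD
        self.cache[block_id] = data             # admit immediately on first access
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict the least recently used block
        return data

# Usage: the second read of block 42 is served from cache, not the backend.
backend = {42: b"hot block", 43: b"cold block"}
cache = RealTimeReadCache(capacity_blocks=2)
cache.read(42, backend.get)   # miss -> fetched from backend, then cached
cache.read(42, backend.get)   # hit  -> served from cache
```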

$/GB and $/IOPS comparison with Hybrid and All-Flash Storage Arrays.

Prices from 2018.

| Parameter | Hybrid Array (100TB usable HDD storage with 4 controllers, each controller with 4TB of tiering/caching SSDs) | All-Flash Array (100TB usable SSD storage) | VirtuCache (100TB usable HDD storage with VirtuCache and a 2TB NVMe/PCIe SSD in each of 8 ESXi hosts) |
|---|---|---|---|
| Cost | $150,000 | $250,000 | $75,000 |
| $/GB of storage capacity | $1.5/GB | $2.5/GB | $0.8/GB |
| IOPS | 200,000 | 600,000 | 1,200,000 (each NVMe SSD does 150K IOPS and there are 8 SSDs spread across 8 hosts, so 8 x 150K IOPS = 1.2M IOPS) |
| $/IOPS | $0.75 | $0.40 | $0.06 |
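
As a quick sanity check, the snippet below recomputes $/GB and $/IOPS from the 2018 prices and IOPS figures quoted above; the results match the table to within rounding.

```python
# Recompute $/GB and $/IOPS from the 2018 prices and IOPS figures quoted above.
configs = {
    "Hybrid array":    {"cost_usd": 150_000, "usable_tb": 100, "iops": 200_000},
    "All-flash array": {"cost_usd": 250_000, "usable_tb": 100, "iops": 600_000},
    "VirtuCache":      {"cost_usd": 75_000,  "usable_tb": 100, "iops": 8 * 150_000},  # 8 hosts x 150K IOPS per NVMe SSD
}

for name, c in configs.items():
    usd_per_gb = c["cost_usd"] / (c["usable_tb"] * 1000)   # 1 TB = 1000 GB
    usd_per_iops = c["cost_usd"] / c["iops"]
    print(f"{name}: ${usd_per_gb:.2f}/GB, ${usd_per_iops:.2f}/IOPS")
```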

How does VirtuCache compete with other host-side caching software vendors?

We primarily compete with PernixData FVP and Infinio Accelerator.

Versus PernixData FVP. We are most similar to PernixData. Both products are kernel-mode software; both cache writes in addition to reads; both have data protection strategies in place to protect against data loss in the case of multiple simultaneous hardware failures; and neither requires networking or storage to be reconfigured.
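
The data protection point matters because caching writes means a write is acknowledged before it reaches the array. One common way to protect such writes, shown in the hypothetical sketch below (a conceptual illustration, not the actual VirtuCache or FVP design), is to replicate each cached write to one or more peer hosts before acknowledging it, so that losing a host or its SSD does not lose the data.

```python
REPLICA_COUNT = 1  # peer copies required before a write is acknowledged

class Peer:
    """Stand-in for a peer host's cache reached over the network."""
    def __init__(self):
        self.store = {}
    def replicate(self, block_id, data):
        self.store[block_id] = data   # in reality this is a network copy to another host
        return True

def cached_write(block_id, data, local_cache, peers):
    local_cache[block_id] = data      # stage the write in the local cache (RAM/SSD)
    acks = sum(peer.replicate(block_id, data) for peer in peers[:REPLICA_COUNT])
    if acks < REPLICA_COUNT:
        raise IOError("replication failed; fall back to writing through to the array")
    return "ack"                      # data now exists on 1 + REPLICA_COUNT hosts

# Usage: after the write, block 7 exists both locally and on the peer host.
local, peers = {}, [Peer()]
cached_write(7, b"payload", local, peers)
assert 7 in local and 7 in peers[0].store
```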

Versus Infinio Accelerator. We are very different from Infinio. Infinio accelerates only reads; we accelerate both reads and writes. Infinio runs in ESXi userspace, whereas we run in the ESXi kernel, so we can accelerate both VM and kernel IO while Infinio can accelerate only VM IO. Because we are in the VMware kernel, we also have lower latency for the same caching media.
