We compete primarily with all-flash arrays and with storage-controller-based caching in hybrid arrays (essentially, SSDs in storage arrays). We also compete with other host-side caching software vendors.
How does VirtuCache compete with SSD-based storage arrays?
We deliver lower latency for the same SSD. VirtuCache caches to an SSD installed in the ESXi host. In our case the SSD is connected to the host CPU that consumes the data, over a dedicated SATA, SAS, or PCIe bus, so latency is lower than if the same SSD were in the storage appliance, where it sits behind the storage network and the storage controllers.
Enterprise-grade SSDs bought retail for the VMware host are newer, cheaper, and carry a longer warranty than SSDs bought from the storage OEM for the array. Since most of our customers buy enterprise-grade SSDs from retailers like Amazon.com, these SSDs cost less than the same SSDs bought from the storage OEM. Retailers also pass through the SSD OEM's five-year warranty to end users, whereas the same SSD is warranted for only three years when bought from the storage OEM. Lastly, retailers sell the latest SSDs from the SSD OEM, whereas SSDs sold by storage OEMs are 12-18 months old, because qualification cycles at storage OEMs take that long. Storage array SSDs are therefore older, and hence may be slower, than the SSDs you would buy from retailers.
We throw more CPUs at the caching problem than a storage appliance can, distributing the caching workload across all the host CPUs in the ESXi cluster. Our caching is therefore more effective than storage-appliance-based caching, which uses only the two or four storage controllers (CPUs) in the appliance to process the caching workload.
Price/performance comparison. The chart below shows how we compare against SSDs in hybrid appliances and in all-flash arrays.
| Storage Type | Features | Dollar per GB | Dollar per IOPS | Latencies |
|---|---|---|---|---|
| EMC FastCache controller-based caching* | Accelerates only reads. Max usable SSD capacity is 1 TB (raw) on the high-end VNX 7X, and a few hundred GB on the VNX 5X. Costs $8K for a 100 GB* SSD. | $80 | $1 | Determined by network and controller workload. |
| VirtuCache with SSD | Caches both reads and writes. Max 6 TB per host. $5,000 for VirtuCache on 1 host with a 1.8 TB (raw) SSD. | $3 | 5 cents | Consistently < 5 ms, since the SSD is in the ESXi host, closer to the CPU. |
| All-flash arrays | Accelerates both reads and writes. $60,000 for a 4 TB (raw) all-flash array. | $15 | $1 | If the total workload from all hosts connected to the array is less than the storage network bandwidth and the array's MBps specs, then VM-level latency < 10 ms. |
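The dollar-per-GB column follows directly from the list prices and raw capacities quoted in the table. As an illustrative sketch (prices and capacities taken from the table above, rounded to whole dollars):

```python
# Derive $/GB from the list price and raw capacity quoted in the table.
options = {
    "EMC FastCache (100 GB SSD)": (8_000, 100),
    "VirtuCache + 1.8 TB SSD":    (5_000, 1_800),
    "All-flash array (4 TB)":     (60_000, 4_000),
}

for name, (price_usd, capacity_gb) in options.items():
    print(f"{name}: ${price_usd / capacity_gb:,.2f}/GB")
```

Running this reproduces the table's figures: $80/GB for FastCache, roughly $3/GB for VirtuCache with a retail SSD, and $15/GB for the all-flash array.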
How does VirtuCache compete with other host-side caching software vendors?
We compete with PernixData FVP, Infinio Accelerator, and VMware VFRC.
Versus PernixData FVP. We are most similar to PernixData. We are both kernel-mode software; both of us cache writes in addition to reads; we both have data protection strategies in place to guard against data loss in the event of multiple simultaneous hardware failures; and neither of us requires networking or storage to be reconfigured. This link has more details on how we compare with PernixData.
Versus Infinio Accelerator. We are very different from Infinio. Infinio accelerates only reads; we accelerate both reads and writes. Infinio runs in ESXi userspace whereas we run in the ESXi kernel, so we can accelerate both VM and kernel IO while Infinio can accelerate only VM IO, and we deliver lower latency for the same caching media. This link has more details on how we compare with Infinio.
Versus VMware VFRC. VFRC is the weakest offering in this space. VFRC accelerates only reads; we accelerate both reads and writes. We can cache to RAM and/or SSD; VFRC uses only SSD. VFRC also carries considerable administrative overhead: SSD capacity has to be manually assigned to each VM, vMotion is restricted, and an SSD failure interrupts IO. This link has more details on how we compare with VMware VFRC.