The Virtunet Difference
The customer selected VirtuCache over other solutions because Dell's PowerEdge VRTX hyper-converged appliance can have either all hard drive datastores or all SSD datastores, but it cannot use SSDs as tiering or caching media for VRTX volumes. That's where VirtuCache comes in.
Storage Bottleneck in VRTX
Dell VRTX is a reasonably priced hyper-converged box from Dell, designed specifically for small and medium-sized businesses. Each VRTX chassis holds up to four blade modules, and all blades share locally attached storage over an internal shared SAS bus. The customer has one Dell VRTX in each of their remote clinics, with multiple SAS HDD-backed datastores shared by ESXi 6.7 installed on each VRTX blade. They ran their Electronic Medical Records (EMR) application in VMs on the Dell VRTX. The EMR application required consistently low latencies, since it was in continuous use by their doctors and nurses. Because the datastores were backed by hard drives rather than SSDs, storage latencies were consistently high, which in turn caused high latencies in the EMR application.
VirtuCache Deployed with Intel NVMe SSDs
Each VRTX blade has two half-height PCIe slots. We decided to install a 2TB Intel P4600 NVMe SSD, which comes in a PCIe add-in-card form factor, in each blade. These SSDs are rated at 200K random write IOPS and 600K random read IOPS, so they are extremely fast. VirtuCache was installed on each host and configured to cache to this PCIe SSD.
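To put those IOPS figures in perspective, here is a quick back-of-the-envelope conversion to throughput at a 4KB block size (the IOPS numbers are from the text above; the 4KB block size and the helper function are our own illustration, not a vendor figure):

```python
# Rough conversion of the P4600's quoted 4KB random IOPS to throughput.
BLOCK_SIZE_BYTES = 4 * 1024  # 4KB, matching the Iometer tests later in this post

def iops_to_mb_per_s(iops: int, block_size: int = BLOCK_SIZE_BYTES) -> float:
    """Convert IOPS at a given block size to MB/s (1 MB = 10^6 bytes)."""
    return iops * block_size / 1e6

print(f"Random read:  {iops_to_mb_per_s(600_000):.0f} MB/s")  # ~2458 MB/s
print(f"Random write: {iops_to_mb_per_s(200_000):.0f} MB/s")  # ~819 MB/s
```

Even at a small 4KB block size, a single one of these SSDs can move well over 2GB/s of random reads, which is far beyond what the HDD-backed datastores could sustain.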
Cost to Customer
VirtuCache costs $3000/host for a perpetual license; add another $1000 for the 2TB Intel P4600 NVMe SSD. All prices are as of 2019. The total cost thus worked out to $16000 for 8TB of SSD cache capacity and four VirtuCache licenses. 8TB of cache capacity was sufficient to cache 80TB of VRTX shared storage at CVHN.
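The arithmetic behind that total is straightforward (a simple sketch using the 2019 per-host prices quoted above):

```python
# 2019 per-host prices from the text above.
LICENSE_PER_HOST = 3000   # VirtuCache perpetual license, USD
SSD_PER_HOST = 1000       # 2TB Intel P4600 NVMe SSD, USD
HOSTS = 4                 # one blade per slot in the VRTX chassis
SSD_TB_PER_HOST = 2       # cache capacity contributed by each blade's SSD

total_cost = HOSTS * (LICENSE_PER_HOST + SSD_PER_HOST)
total_cache_tb = HOSTS * SSD_TB_PER_HOST

print(total_cost)      # 16000 (USD)
print(total_cache_tb)  # 8 (TB of cache)
```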
Before/After Tests with VirtuCache
The images below show before/after Iometer results: first without VirtuCache, then with VirtuCache caching to an Intel P4600 NVMe SSD, and a third run with VirtuCache caching to host RAM.
Iometer test specs: 100% random IO, 75/25 read/write ratio, 4KB block size, 128 simultaneous IO requests hitting storage, with the Iometer test file residing on a hard drive backed VRTX datastore.
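For readers who prefer fio over Iometer, roughly the same workload can be expressed as a fio job file. This is our own approximation of the spec above, not the file used in the tests; the filename path is a placeholder, and size/runtime are assumed values:

```ini
; Approximate fio equivalent of the Iometer spec above.
; rw=randrw with rwmixread=75 gives 100% random IO at a 75/25 read/write mix.
; iodepth=128 keeps 128 IO requests outstanding against storage.
[emr-like-test]
rw=randrw
rwmixread=75
bs=4k
iodepth=128
ioengine=libaio
direct=1
; Placeholder path; point this at a file on the HDD-backed datastore under test.
filename=/path/to/testfile
size=10g
runtime=120
time_based
```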