Dell's PowerEdge VRTX hyper-converged appliance can have either all-hard-drive datastores or all-SSD datastores, but SSDs cannot act as tiering or caching media for VRTX volumes. That's where VirtuCache comes in.
The Creation Museum in Kentucky, USA is a museum about Bible history and creationism. Its storage needs were typical of a museum: large amounts of capacity for the digital multimedia content tied to its various exhibits.
The Ark Encounter, in Williamstown, Kentucky, features a full-size Noah’s Ark built to the dimensions given in the Bible. Answers in Genesis (AiG) is the Christian ministry responsible for The Ark Encounter.
AiG's IT department ran a few ESXi hosts connected to their HP Store VSA. As attendance at the Ark increased, their VMware workload grew dramatically, which in turn caused performance issues within VMs.
AiG turned to VirtuCache to mitigate these storage latency issues. By caching frequently and recently used data (both reads and writes) to in-host SSDs and RAM, Virtunet resolved their storage performance issues. We competed with HP Store VSA's Adaptive Optimization (AO) feature, which is HP's tiering functionality for the VSA.
Here is how VirtuCache competes with the Store VSA's tiering functionality.
Ceph storage from Virtunet has all the features of a traditional iSCSI SAN, yet is reasonably priced because it uses commodity servers with all off-the-shelf hardware. That makes it ideally suited for backup and DR storage, which needs to be cheap above all else.
By caching hot data to NVMe SSD and RAM in the VMware host, VirtuCache was able to improve the performance of a Jenkins-based Continuous Integration (CI) process, which in turn resulted in quicker build-test cycles.
Microsoft Dynamics AX transaction processing is write intensive and its reporting is read intensive, which puts pressure on a customer's existing storage infrastructure. By caching frequently used data from the customer's existing storage to in-host SSDs, VirtuCache was able to reduce read and write latencies for AX running within VMware VMs.
EMC FastCache Storage-Controller-Based Caching for VNX Appliances versus VirtuCache Host-Side Caching for Any SAN-Based Appliance
With FastCache, or any storage-appliance-based caching, the SSD is more expensive than the same SSD bought retail. The SSD also performs better in the host than in the storage appliance, since an SSD in the appliance is constrained by the network and the storage controller.
EMC FastCache is SSD-based caching functionality sold by EMC for their VNX appliances. It tiers frequently used data, mainly reads, from the appliance's slower HDDs to SSDs in the same appliance.
VirtuCache is software sold by Virtunet (competing with EMC's FastCache) that is installed in the VMware kernel along with an SSD in the same VMware host. It caches frequently and recently used data, both reads and writes, from any SAN-based storage appliance to the in-host SSD.
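The read/write caching described above can be illustrated with a toy model. This is a conceptual sketch only, not VirtuCache's actual implementation (which runs inside the VMware kernel): hot blocks live on a fast in-host device, reads that miss fall through to the backing appliance, and writes are absorbed at SSD speed and flushed to the appliance later (write-back).

```python
from collections import OrderedDict

class HostSideCache:
    """Toy LRU write-back cache illustrating host-side caching.
    The dict models the in-host SSD/RAM; `backing` models the SAN
    appliance. Not VirtuCache's real implementation."""

    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store   # slow SAN appliance (dict-like)
        self.cache = OrderedDict()     # fast in-host media
        self.dirty = set()             # written blocks not yet flushed
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)  # mark as recently used
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]         # slow path: fetch from SAN
        self._insert(block, data)
        return data

    def write(self, block, data):
        self._insert(block, data)          # absorbed at SSD speed
        self.dirty.add(block)              # flushed to SAN later

    def flush(self):
        for block in self.dirty:
            self.backing[block] = self.cache[block]
        self.dirty.clear()

    def _insert(self, block, data):
        self.cache[block] = data
        self.cache.move_to_end(block)
        if len(self.cache) > self.capacity:
            old, old_data = self.cache.popitem(last=False)  # evict LRU
            if old in self.dirty:          # write back before evicting
                self.backing[old] = old_data
                self.dirty.discard(old)
```

The key point the sketch captures is that repeat reads and all recent writes are served from fast local media, and only cold data round-trips to the appliance.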
This blog article lists the differences between VirtuCache and EMC's FastCache.
VirtuCache accelerates storage appliances to the same extent regardless of whether they use faster SAS hard drives or slower SATA drives, and regardless of the age and speed of the SAN. Thus we help postpone a SAN upgrade, or, if the customer is looking at a capacity upgrade, an appliance with cheaper SATA hard drives will suffice. We save them capex dollars either way.
By improving storage performance for VMs, host side caching facilitates P2V of IO intensive bare-metal servers. And it saves capex because there is no storage upgrade involved.
If you have not yet virtualized your physical servers due only to perceived storage performance issues in VMs, then deploying VirtuCache will help. Since VirtuCache caches frequently used reads and all recent writes to in-host SSD and/or in-host RAM, from any back-end storage appliance, the storage performance from within a VM will now be considerably higher than from within your existing physical Linux or Windows server. As a result, P2V of database servers and other storage IO intensive applications is a big use case for VirtuCache.
This blog article explains how to assure yourself, BEFORE you do the P2V, that the VirtuCache-accelerated storage + VMware infrastructure will perform better than your existing bare-metal servers.
A customer use case also illustrates that a VirtuCache-accelerated bare-metal server (with VirtuCache deployed in bare-metal Linux) performs at the same level as a VirtuCache-accelerated VMware VM (with VirtuCache installed in the VMware kernel), proving that virtualization in itself does not reduce application performance.
Cloud.com provides cloud and infrastructure orchestration software for service providers. They have an extensive internal dev/ops environment.
Cloud.com wanted to repurpose a few of their expensive Dell C6220 servers to run additional applications, which meant increasing the number of VMs deployed on each host. As is often the case, there was plenty of CPU, memory, and networking capacity available on each server; it was only storage latencies that began to increase disproportionately at higher VM densities.
Cloud.com decided to look for the cheapest possible solution that would improve storage throughput and latencies, which in turn would facilitate the migration of additional VMs to each VMware host.

IT Infrastructure
- VMs and Physical Servers - Cloud.com’s IT had four Dell 4-node C6220 servers running Windows Server VMs on VMware 5.5. Before VirtuCache, there were about 80 VMs running MS Exchange, MS Dynamics, and other enterprise applications in this cluster.
- Storage - 18 TB of storage on a gigabit iSCSI Hitachi storage appliance was used by these servers.
- Workload Characteristics - On average, less than 4 TB of data changed every day, and the read-write mix varied widely, from a 40:60 to an 80:20 read-write ratio.
- VMware’s Distributed Resource Scheduler (DRS) functionality was configured to be automatic and aggressive, which ensured that workloads were equally distributed at all times across these 4 physical hosts.
| GAVG as measured using ESXTOP | Before VirtuCache | After VirtuCache |
|---|---|---|
| Read GAVG | 35 – 1500 ms | 0.1 – 6 ms |
| Write GAVG | 20 – 600 ms | 0.1 – 6 ms |
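GAVG numbers like the ones above can be captured with esxtop in batch mode (for example `esxtop -b -d 5 -n 12 > perf.csv`) and then summarized offline. The sketch below extracts the min/max of the guest-latency columns from such a capture; the `Guest MilliSec/<Read|Write>` substring is an assumption about the counter naming in esxtop's CSV header, so verify it against your own capture before relying on it.

```python
import csv
import io

def gavg_range(csv_text, direction="Read"):
    """Scan an esxtop batch-mode CSV capture for guest-latency (GAVG)
    columns and return their (min, max) in milliseconds, or None if no
    matching column is found. The column-name substring below is an
    assumption; check it against your capture's header row."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    cols = [i for i, name in enumerate(header)
            if f"Guest MilliSec/{direction}" in name]
    values = [float(row[i]) for row in reader for i in cols if row[i]]
    return (min(values), max(values)) if values else None
```

Running this over captures taken before and after enabling VirtuCache gives the kind of before/after ranges shown in the table.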