The Virtunet Difference
Cloud.com compared VirtuCache with VMware's vSphere Flash Read Cache (vFRC). Unlike vFRC, VirtuCache supported fully automated, aggressive DRS, and it also doubled VM densities relative to vFRC. These were the two main reasons Cloud.com decided to deploy VirtuCache.
Cloud.com provides cloud and infrastructure orchestration software for service providers. They have an extensive internal dev/ops environment.
Cloud.com wanted to repurpose a few of their expensive Dell C6220 servers to run additional applications, which meant increasing the number of VMs deployed on each remaining host. As is often the case, each server had plenty of spare CPU, memory, and networking capacity; it was only storage latencies that increased disproportionately at higher VM densities.
Cloud.com decided to look for the cheapest possible solution that would improve storage throughput and latencies, which in turn would facilitate the migration of additional VMs to each VMware host.
- VMs and Physical Servers – Cloud.com's IT had four Dell 4-node C6220 servers running Windows Server VMs on VMware vSphere 5.5. Before VirtuCache, about 80 VMs ran MS Exchange, MS Dynamics, and other enterprise applications in this cluster.
- Storage – These servers used 18 TB of storage on a Hitachi appliance connected over gigabit iSCSI.
- Workload Characteristics – On average, less than 4 TB of data changed every day, and the read-write mix varied widely, from a 40:60 to an 80:20 read-write ratio.
- VMware's Distributed Resource Scheduler (DRS) was set to fully automated with an aggressive migration threshold, which ensured that workloads were evenly distributed across the four physical hosts at all times.
Cloud.com decided to deploy VirtuCache on two of the four physical servers in the cluster. A 430 GB Virident Flashmax II PCIe Flash card was used by VirtuCache to cache data from LUNs.
VirtuCache along with the PCIe Flash card was installed in the ESXi host in under 30 minutes.
Steady-state cache hit ratio (the fraction of total IO served from the in-server SSD rather than the backend LUNs) was 75-80%, with a warm-up time of about 10 minutes.
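As a rough illustration of the metric (this is not VirtuCache's own reporting code, and the function name and counters are hypothetical), the hit ratio can be computed from cumulative IO counters like so:

```python
def cache_hit_ratio(ssd_ios: int, backend_ios: int) -> float:
    """Fraction of total IO served from the in-server SSD cache.

    ssd_ios     -- IOs satisfied from the local flash cache (hits)
    backend_ios -- IOs that missed the cache and went to the backend LUNs
    """
    total = ssd_ios + backend_ios
    if total == 0:
        return 0.0
    return ssd_ios / total

# e.g. 8,000 cache hits vs 2,000 backend IOs gives an 80% hit ratio
print(f"{cache_hit_ratio(8000, 2000):.0%}")
```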
Guest Average Latency (GAVG) was measured before and after VirtuCache using VMware's standard esxtop utility. The table below shows the reduced GAVG after deploying VirtuCache, which in turn enabled higher VM densities: since automated DRS was enabled on the cluster, VMware sensed the improved storage performance on the VirtuCache-accelerated servers and migrated VMs to them from the other hosts, increasing the VM count per accelerated server from 20 before VirtuCache to 42 after.
| GAVG as measured using esxtop | Before VirtuCache | After VirtuCache |
| --- | --- | --- |
| Read GAVG | 35-1500 ms | 0.1-6 ms |
| Write GAVG | 20-600 ms | 0.1-6 ms |
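These figures can be gathered with esxtop's batch mode (e.g. `esxtop -b -d 2 -n 30 > stats.csv`) and averaged offline. Below is a minimal sketch of that post-processing, assuming GAVG columns can be identified by the substring "Guest MilliSec/Command" in the CSV header; the exact counter names vary by ESXi build, so treat the header text here as illustrative:

```python
import csv
import io

def average_gavg(csv_text: str) -> dict:
    """Return the mean of each GAVG-like column in an esxtop batch CSV.

    Columns whose header mentions 'Guest MilliSec/Command' are assumed to
    hold guest average latency samples (header naming varies by ESXi build).
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    sums: dict = {}
    counts: dict = {}
    for row in reader:
        for col, val in row.items():
            if col and "Guest MilliSec/Command" in col and val:
                sums[col] = sums.get(col, 0.0) + float(val)
                counts[col] = counts.get(col, 0) + 1
    return {col: sums[col] / counts[col] for col in sums}

# Tiny made-up sample, shaped like esxtop batch output (not real data):
sample = (
    '"Time","\\\\host\\Disk(naa.1)\\Average Guest MilliSec/Command"\n'
    '"t1","35.0"\n'
    '"t2","5.0"\n'
)
print(average_gavg(sample))  # mean of 35.0 and 5.0 ms for that column
```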
Benefit to Cloud.com
Using VirtuCache, Cloud.com was able to reduce the number of physical servers in their VMware cluster from four to two, cutting both VMware licensing costs and hardware costs.