Improving Jenkins Performance

November 22, 2017:

By caching hot data to NVMe SSD and RAM inside the VMware host, VirtuCache was able to improve the performance of a Jenkins-based Continuous Integration (CI) process, which in turn resulted in quicker build-test cycles.

Improving Performance of Microsoft Dynamics AX ERP

January 13, 2011:

Microsoft Dynamics AX transaction processing is write intensive and its reporting is read intensive, putting pressure on the customer's existing storage infrastructure. By caching frequently used data from the customer's existing storage to in-host SSDs, VirtuCache was able to reduce read and write latencies for AX running within VMware VMs.

EMC FastCache Storage-Controller-Based Caching for VNX Appliances versus VirtuCache Host-Side Caching for Any SAN-Based Appliance

In the case of FastCache, or any storage-appliance-based caching, the SSD is more expensive than the same SSD bought retail. Also, the SSD performs better in the host than in the storage appliance, since an SSD in the appliance is constrained by the network and the storage controller.

EMC FastCache is SSD-based caching functionality sold by EMC for its VNX appliances. It tiers frequently used data, mainly reads, from the slower HDDs in the appliance to SSDs in the same appliance.

VirtuCache is software sold by Virtunet (competing with EMC's FastCache) that is installed in the VMware kernel along with an SSD in the same VMware host. It caches frequently and recently used data, both reads and writes, from any SAN-based storage appliance to the in-host SSD.

This blog article lists the differences between VirtuCache and EMC's FastCache.

Caching VDI VM Reads and Writes to In-Host Intel Optane SSD Improves VM Latencies

April 29, 2020:

By caching all VM read and write operations to Intel Optane NVMe SSDs installed in the VMware hosts, latencies of VDI VMs were greatly reduced. This resulted in improved VM performance and quicker VM boot times.

VirtuCache Helped Reduce Storage Capex at a SaaS Provider

December 2, 2012:

VirtuCache accelerates storage appliances to the same extent regardless of whether they have faster SAS hard drives or slower SATA drives, and regardless of the age and speed of the SAN. Thus we help postpone a SAN upgrade; or, if the customer is looking at a capacity upgrade, an appliance with cheaper SATA hard drives would suffice. We save them capex dollars either way.

How To Address Storage Performance Concerns Before Migrating Physical Servers To Virtual

May 24, 2013:

By improving storage performance for VMs, host-side caching facilitates P2V of IO-intensive bare-metal servers. And it saves capex because no storage upgrade is involved.

If you have not yet virtualized your physical servers due only to perceived storage performance issues in VMs, then deploying VirtuCache will help. Since VirtuCache caches frequently used reads and all recent writes from any back-end storage appliance to in-host SSD and/or in-host RAM, storage performance from within a VM will be considerably higher than from within your existing physical Linux or Windows server. As a result, P2V of database servers and other storage-IO-intensive applications is a big use case for VirtuCache.

This blog article talks about how to assure yourself, BEFORE you do the P2V, that the VirtuCache-accelerated storage + VMware infrastructure will perform better than your existing bare-metal servers.

Also, a customer use case illustrates that a VirtuCache-accelerated bare-metal server (VirtuCache deployed in bare-metal Linux) performs at the same level as a VirtuCache-accelerated VMware VM (VirtuCache installed in the VMware kernel), proving that virtualization in itself does not reduce application performance.

VirtuCache Doubles VM Density at a Cloud Orchestration Software Provider

January 23, 2014: The customer provides cloud and infrastructure orchestration software for service providers and runs an extensive internal dev/ops environment. They wanted to re-purpose a few of their expensive Dell C6220 servers to run additional applications, which meant increasing the number of VMs deployed on each remaining host. As is often the case, there was plenty of CPU, memory, and networking capacity available on each server; it was only storage latencies that increased disproportionately at higher VM densities. The customer decided to look for the cheapest possible solution that would improve storage throughput and latencies, which in turn would facilitate migrating additional VMs to each VMware host.

IT Infrastructure
  • VMs and Physical Servers - The customer's IT had four 4-node Dell C6220 servers running Windows Server VMs on VMware 5.5. Before VirtuCache, there were about 80 VMs running MS Exchange, MS Dynamics, and other enterprise applications in this cluster.
  • Storage - 18 TB of storage on a gigabit iSCSI Hitachi storage appliance was used by these servers.
  • Workload Characteristics - On average, less than 4 TB of data changed every day, and the read-write mix varied widely, between a 40-60 and an 80-20 read-write ratio.
  • DRS - VMware's Distributed Resource Scheduler (DRS) was configured as automatic and aggressive, which ensured that workloads were evenly distributed at all times across these 4 physical hosts.
VirtuCache Deployment

The customer decided to deploy VirtuCache on two of the four physical servers in the cluster. A 430 GB Virident FlashMAX II PCIe flash card was used by VirtuCache to cache data from the LUNs. VirtuCache, along with the PCIe flash card, was installed in each ESXi host in under 30 minutes.

Steady-state Cache Hit Ratio (the share of total IO served from the in-server SSD rather than from the backend LUNs) was 75-80%, with a warm-up time of 10 minutes. Guest Average Latency (GAVG) was measured before and after VirtuCache using the standard VMware utility esxtop. The table below shows the reduced GAVG after deploying VirtuCache, which enabled higher VM densities. Since automatic DRS was enabled on the VMware cluster, VMware sensed the improved storage performance on the two VirtuCache-accelerated servers and moved VMs from the other servers to them, increasing the number of VMs per host from 20 before VirtuCache to 42 after VirtuCache.
GAVG as measured using esxtop:
  • Read GAVG: 35 - 1500 ms before VirtuCache; 0.1 - 6 ms after
  • Write GAVG: 20 - 600 ms before VirtuCache; 0.1 - 6 ms after
Benefit of Using VirtuCache

The customer was able to reduce the number of physical servers in their VMware cluster from four to two, reducing both VMware licensing costs and hardware costs.
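The Cache Hit Ratio quoted above is straightforward to compute from IO counters. A minimal sketch (the function and sample counts are illustrative, not from any VirtuCache API):

```python
def cache_hit_ratio(cache_ios, backend_ios):
    """Fraction of total IOs served from the in-host cache (SSD/RAM)
    rather than from the backend LUNs."""
    total = cache_ios + backend_ios
    return cache_ios / total if total else 0.0

# Example: 8,000 IOs served from the in-host SSD, 2,000 from backend LUNs
ratio = cache_hit_ratio(8000, 2000)
print(f"{ratio:.0%}")  # 80%
```

At the 75-80% steady state reported above, only one IO in four or five ever touches the backend appliance, which is why GAVG drops so sharply.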

Virtunet’s Write-Back (Read+Write) Caching Competing with Write-Through (Read-Only) Caching at a Large School District

Host-side caching software needs to accelerate both reads and writes, especially in light of increasing competition from all-flash arrays. Caching writes is important even for workloads that are read intensive: if writes are not accelerated, then reads queued behind those writes on the same thread are not accelerated either, slowing reads as well.
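The effect described above can be illustrated with a toy single-thread IO timeline (the latency figures are illustrative assumptions, not measurements): with write-through caching, every write still waits on the backend appliance, and the cached reads behind it wait too.

```python
# Toy model: on a serial IO thread, a read that follows a write cannot
# start until that write completes, so uncached writes stall even the
# reads that would hit the cache. Latencies in ms are illustrative.
CACHED_MS = 0.5     # IO served from the in-host SSD
BACKEND_MS = 20.0   # IO that must go to the backend appliance

def thread_time(ops, write_back):
    """Total elapsed time for a serial stream of 'r'/'w' operations."""
    total = 0.0
    for op in ops:
        if op == 'r':
            total += CACHED_MS                          # reads hit the cache either way
        else:
            total += CACHED_MS if write_back else BACKEND_MS
    return total

ops = ['w', 'r', 'r', 'w', 'r', 'r']                    # read-heavy, but writes interleave
print(thread_time(ops, write_back=False))               # 42.0 ms: two slow writes dominate
print(thread_time(ops, write_back=True))                # 3.0 ms: every IO served from cache
```

Even though two-thirds of the operations are reads, write-through total time is dominated by the two uncached writes, which is why write-back (read+write) caching matters for read-intensive workloads as well.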

Using the TPC-C benchmark, we showed a 2x performance improvement versus a read-only caching software vendor at a large school district.

VirtuCache Improves Hadoop Performance within VMs at Stanford

November 11, 2012:

Typically, Hadoop workloads are run on bare-metal servers. However, since Stanford's School of Medicine was 100% virtualized, and because their security, monitoring & management tools were integrated with VMware, it was easier for them to deploy Hadoop within VMs instead of provisioning new physical servers.

The biggest challenge Stanford faced with Hadoop VMs was low throughput and high latencies for writes to disk.

Deploying VirtuCache reduced Hadoop write latencies to an eighth of what they were before.

VirtuCache Improves Performance of Healthcare Imaging Application Deployed in VMware VMs

EHR, imaging, analytics, and desktop & application virtualization applications are prone to storage performance issues in VMware that are easily solved with VirtuCache.
