Blog

Improving Performance of Log Management Application at a Service Provider

Business Intelligence, Log Management, Security Information & Event Management (SIEM), and search and analytics software like Splunk, Elasticsearch, Cognos, HP Vertica, and HP Autonomy need to provide real-time visibility into large volumes of fast-changing data. When these applications are deployed in traditional VMware VMs connected to centralized storage, the large volume of write and read operations puts pressure on the existing storage infrastructure, resulting in ingest and analysis speeds much slower than the real-time performance expected of such applications.

GUI Comparison: PernixData FVP vs. Us – VMware Write-Back Caching Software

For the most part, our GUIs and workflows are similar. This article compares, with screenshots, the steps to install and configure VirtuCache and PernixData FVP.

The only differences between us and Pernix stem from the fact that we leverage VMware's own capabilities for clustering, license management, and logging, whereas Pernix programmed these features separately within their software. Overall, these additional screens add a few clicks and pages to Pernix's workflow versus ours, but again, we are more similar than different in terms of GUI and workflow.

Replaced PernixData for vGPU based VDI

First of all, PernixData FVP was good host-side caching software. Unfortunately for its customers, after PernixData was acquired by Nutanix, Nutanix end-of-lifed the software. Our software, called VirtuCache, directly competes with PernixData FVP.

PernixData’s FVP vs. Virtunet – Both VMware Kernel Write Caching Software

More similar than different

Both we and PernixData differentiate ourselves from the rest of the host-side caching vendors in similar ways: we are both kernel-mode software; both of us cache writes in addition to reads; both have data protection strategies in place to guard against data loss in the case of multiple simultaneous hardware failures; neither requires networking or storage to be reconfigured; and neither requires an agent in each VM or an appliance VM on each host.

This article is the first in a series of two articles comparing our software with PernixData FVP. The second article compares, with screenshots, the GUI and configuration steps for PernixData FVP and us.

Below is how we compare on important criteria.

Infinio’s Read Caching versus Virtunet’s Read+Write Caching Software

The biggest difference is that we accelerate both reads and writes, while Infinio accelerates only reads. A few other differences: with us you can apply a caching policy at the datastore and/or VM level, versus only at the VM level with Infinio; and we accelerate creation of VMs, snapshots, and other VMware kernel operations, which they don't. More details in the table below.

Virtunet Systems vs. Infinio:

Virtunet: Accelerates both reads and writes.1
Infinio: Accelerates only reads. By not caching writes, not only are writes not accelerated, but reads queued behind writes on the same thread are slowed down as well.1

Virtunet: Caching policy can be applied at the VM and/or datastore level. Since the number of datastores is typically much smaller than the number of VMs, it is quicker to configure caching for all the VMs in the cluster by assigning the caching policy at the datastore level. All VMs within a datastore inherit the datastore-wide caching policy by default; you can of course apply a different caching policy at the VM level (a minimal sketch of this inheritance logic follows the table).2
Infinio: Caching policy can be applied only at the VM level, so if you have a large number of VMs, configuring caching for all of them becomes onerous.2

Virtunet: When new VMs (server or desktop VMs) are created, they automatically inherit the datastore caching policy.
Infinio: For all new server or desktop VMs, the policy has to be applied manually. This is especially problematic in non-persistent VDI, where new VMs are continuously created without admin notification.

Virtunet: All storage IO, whether it originates within VMs or the VMware kernel, is accelerated.3
Infinio: Storage IO originating in the VMware kernel is not accelerated, so creation/deletion of VMs, snapshots, and linked clone operations are not accelerated, since these operations are initiated from within the VMware kernel and not from within any VM.3

Virtunet: Supports all ESXi editions - Essentials, Essentials Plus, Standard, Enterprise, and Enterprise Plus, across all ESXi 5.x and 6.x versions.
Infinio: Does not support Essentials or Essentials Plus.
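
To make the datastore-versus-VM policy point concrete, here is a minimal sketch of datastore-level defaults with per-VM overrides, as referenced in the table above. All names and policy values here are hypothetical; they illustrate the workflow difference only, not either product's actual configuration objects.

```python
# Illustrative only: a minimal model of datastore-level policy inheritance
# with per-VM overrides. Names and policy values are hypothetical.

WRITE_BACK = "write-back"
WRITE_THROUGH = "write-through"
NO_CACHE = "none"

# Policy assigned once per datastore.
datastore_policy = {
    "datastore-ssd01": WRITE_BACK,
    "datastore-sas02": WRITE_THROUGH,
}

# Optional per-VM overrides; most VMs never need an entry here.
vm_override = {
    "sql-prod-01": NO_CACHE,
}

def effective_policy(vm_name: str, datastore: str) -> str:
    """A VM inherits its datastore's policy unless explicitly overridden."""
    return vm_override.get(vm_name, datastore_policy.get(datastore, NO_CACHE))

# A newly created VM picks up the datastore policy with no extra steps.
print(effective_policy("new-desktop-17", "datastore-ssd01"))  # write-back
# A per-VM override takes precedence over the datastore default.
print(effective_policy("sql-prod-01", "datastore-ssd01"))     # none
```

With per-VM-only policies, the equivalent of the first print would require an explicit entry for every VM, which is the administrative burden the table describes.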

Improving the performance of Dell Compellent SAN for VMware VMs

VirtuCache is software that improves the performance of Dell Compellent appliances without requiring you to upgrade the appliance or the SAN network. The performance improvement you will get from your Compellent appliance will rival an upgrade to an all-flash array.

Compellent appliances were the workhorses of the enterprise storage market a few years ago. They were cost-effective at high capacities. The only drawback is that they are slower, since they are primarily hard-drive based, and when connected to VMware they exhibit all the 'IO blender' symptoms, resulting in high VM-level storage latencies.

Reducing Write Latencies in CEPH Storage

CEPH is popular open-source storage software; however, its write latencies are high. VirtuCache caches CEPH volumes to in-host SSDs, and by doing so reduces VM-level latencies considerably.
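
If you want to quantify write latency on your own cluster before and after caching, here is a minimal sketch using the librados Python bindings (the python3-rados package). The pool name "rbd", the config file path, and the sample count are assumptions; adjust them for your environment.

```python
# A minimal sketch for measuring raw Ceph write latency with librados.
# Assumes a reachable cluster, a pool named "rbd", and credentials in
# /etc/ceph/ceph.conf -- all of these are environment-specific assumptions.
import time
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")  # pool name is an assumption

payload = b"x" * 4096  # one 4 KB synchronous write per sample
samples = []
for i in range(100):
    start = time.perf_counter()
    ioctx.write_full(f"latency-test-{i}", payload)
    samples.append((time.perf_counter() - start) * 1000.0)  # milliseconds

print(f"avg write latency: {sum(samples) / len(samples):.2f} ms")
print(f"max write latency: {max(samples):.2f} ms")

# Clean up the test objects and the connection.
for i in range(100):
    ioctx.remove_object(f"latency-test-{i}")
ioctx.close()
cluster.shutdown()
```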

Improving the performance of Equallogic SAN for VMware VMs

VirtuCache is software that improves the performance of Equallogic appliances without requiring you to upgrade the appliance or the SAN network. The performance improvement you will get from your Equallogic appliance will rival an upgrade to an all-flash array.

Equallogic appliances were the workhorses of the enterprise storage market a few years ago. They were cost-effective at high capacities. The only drawback is that they are slower, since they are primarily hard-drive based, and when connected to VMware they exhibit all the 'IO blender' symptoms, resulting in high VM-level storage latencies.

Reducing latencies in vGPU assisted VDI

VirtuCache is installed in the VMware vSphere kernel. It then automatically caches frequently and recently used data from any backend storage to any high-speed media (RAM/SSD) in the VMware host. By bringing large amounts of 'hot' data closer to the VMware host's GPU and CPU, VirtuCache improves the performance of all applications running within VMs, including GPU-assisted operations.
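
The "frequently and recently used" idea is essentially least-recently-used (LRU) caching. Below is a minimal, self-contained sketch of that general technique; it illustrates the principle only and is not VirtuCache's actual implementation.

```python
# A minimal sketch of "hot data" caching: keep recently read blocks on fast
# local media and evict the least recently used block when the cache fills.
from collections import OrderedDict

class LRUBlockCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # block address -> data on fast media

    def read(self, addr, backend_read):
        if addr in self.blocks:
            self.blocks.move_to_end(addr)    # hit: served from RAM/SSD
            return self.blocks[addr]
        data = backend_read(addr)            # miss: fetch from the SAN
        self.blocks[addr] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
        return data

# Usage: after the first miss, reads of hot blocks are served locally.
cache = LRUBlockCache(capacity_blocks=2)
fetch = lambda addr: f"data@{addr}"  # stand-in for a slow SAN read
cache.read(0, fetch)                 # miss, fetched from backend
cache.read(0, fetch)                 # hit, served from local cache
```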

VMware’s vFlash Read Cache (VFRC) versus Virtunet’s Read+Write Caching

The biggest difference is that we cache both reads and writes, while VMware's VFRC caches only reads. Caching writes improves the performance of not only writes but also reads, even in read-dominated workloads.1
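
To see why, consider a single thread issuing a mix of reads and writes: each read queued behind a write cannot start until that write completes. The sketch below runs that arithmetic with assumed latencies; the figures are illustrative, not measurements of any particular array or SSD.

```python
# Back-of-envelope illustration of why caching writes also helps reads.
# Latency figures below are assumptions chosen for the arithmetic.

ARRAY_WRITE_MS = 5.0  # assumed write latency to a hard-drive-backed array
SSD_WRITE_MS = 0.2    # assumed write latency to an in-host SSD cache
READ_MS = 0.3         # assumed read latency (served from cache either way)

ops = (["write"] + ["read"] * 4) * 20  # read-dominated: 80% reads

def total_latency(write_ms):
    # Ops are serialized on one thread: each waits for the previous one.
    return sum(write_ms if op == "write" else READ_MS for op in ops)

print(f"writes to the array: {total_latency(ARRAY_WRITE_MS):.0f} ms")  # 124 ms
print(f"writes to local SSD: {total_latency(SSD_WRITE_MS):.0f} ms")    # 28 ms
```

Even with 80% reads, the slow writes dominate the thread's total latency in this example, which is the point footnote 1 makes.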

Then there are other differences as well –

  1. With VFRC, the in-host SSD has to be carved out amongst VMs, with cache capacity manually assigned to each VMDK.2 Such manual assignment of cache capacity on a per-VM basis is not required with VirtuCache;

  2. With VFRC, if a host fails, or even during a manual vMotion, VMs don't migrate unless the target hosts have adequate spare SSD capacity to honor the SSD reservations of the incoming VMs.4,5 We support VMware HA and DRS without any restrictions;

  3. The biggest reason you might not see a performance improvement with VFRC is that it only caches blocks aligned at 4KB boundaries.7 Most storage IO in Windows and Linux VMs is not aligned (a quick alignment check is sketched after this list);

  4. VMware View and XenDesktop do not support VFRC.6 We support both;

  5. With VFRC, an SSD failure results in VM failure, because all storage IO is interrupted.3 Not so with VirtuCache.
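
On point 3, the alignment test itself is simple arithmetic. Below is a minimal sketch of how a 4KB-granular cache decides whether an IO is cacheable; the sample offsets are made up for illustration.

```python
# A minimal sketch of the 4 KB alignment test from point 3: an IO is
# cacheable by a 4 KB-granular read cache only if both its offset and its
# length fall on 4 KB boundaries. Sample values are illustrative.
BLOCK = 4096

def is_4k_aligned(offset_bytes: int, length_bytes: int) -> bool:
    return offset_bytes % BLOCK == 0 and length_bytes % BLOCK == 0

print(is_4k_aligned(8192, 4096))  # True  -> cacheable
# A 512-byte-aligned IO, common from guest filesystems, is passed over.
print(is_4k_aligned(512, 4096))   # False -> bypasses the cache
```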

For a complete list of differences, please review the table below with its accompanying cross-references:
