
GUI comparison, PernixData FVP vs Us – VMware Write Back Caching Software.

PernixData FVP was end-of-lifed in 2019.

For the most part, both our GUI and workflows are similar. This article compares, with screenshots, steps to install and configure VirtuCache and PernixData FVP.

The only differences between us and Pernix stem from the fact that we leverage VMware's own capabilities for clustering, license management, and logging, whereas Pernix built these features separately within their software. Overall, these additional screens add a few clicks and pages in Pernix versus us, but again I want to emphasize that we are more similar than different in terms of GUI and workflow.

More details, with screenshots, are below. The reader does need to be familiar with the concept of write-back caching and with PernixData FVP.

This is the second article in the series that compares our software versus Pernix. The first article explained the similarities and differences in functionality.

Similar Step 1 – Install VIBs and Import Management Appliance.

With both PernixData FVP and us, this step involves installing the caching driver (.vib file) on each host and deploying the management VM that lets you manage the caching drivers across the cluster. The caching driver installed on the host is a PSP that plugs into VMware's MPP and is the only piece of software in the storage IO path. The management VM is not in the storage IO path. So the installation process is similar with both Pernix and us.
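Though both products provide an installer, the host-side part of this step can also be scripted. Below is a minimal sketch that pushes a VIB to a list of hosts over SSH; the host names, credentials, and VIB path are placeholders, and the esxcli invocation is VMware's standard command for installing any VIB, not a Pernix- or VirtuCache-specific tool.

# Minimal sketch: install a caching VIB on several ESXi hosts over SSH.
# Host names, credentials, and the VIB path below are placeholders.
import paramiko

HOSTS = ["esxi01.lab.local", "esxi02.lab.local"]          # hypothetical hosts
VIB_PATH = "/vmfs/volumes/datastore1/caching-driver.vib"  # hypothetical path

def install_vib(host, user, password, vib_path):
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username=user, password=password)
    # Standard ESXi command for installing a VIB file.
    _, stdout, stderr = ssh.exec_command("esxcli software vib install -v " + vib_path)
    print(host, stdout.read().decode(), stderr.read().decode())
    ssh.close()

for h in HOSTS:
    install_vib(h, "root", "password", VIB_PATH)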

Diverging Step 2 – Cluster Definition. VirtuCache automatically maps to the VMware cluster without any user intervention. With PernixData FVP, you need to explicitly create a cluster within FVP.

A cluster (pool) of in-VMware host caching media spanning multiple hosts is required for write-back caching (not for write-through). With write-back caching, writes are accelerated by caching them to the in-VMware host caching media without synchronously writing to the backend SAN. If the local host were to fail, these writes would be lost, which is not acceptable. To prevent such data loss, writes are mirrored to another SSD on another host. When the local host fails, the mirrored copy of its writes on the peer host is immediately synced to the backend SAN. This requires the ability to group caching media across hosts into a single entity, so that mirrored writes can be automatically distributed across SSDs on different hosts and corrective action can be taken in failure situations. That is the definition of clustering, and it is implemented in both FVP and VirtuCache in a similar fashion.
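To make the write path concrete, here is a deliberately simplified sketch of the write-back idea described above. It is not code from either product; the class name, the single-replica assumption, and the background flusher are illustrative only.

# Illustrative-only sketch of write-back caching with one write replica.
# Not code from VirtuCache or FVP; names and structure are hypothetical.
import queue
import threading

class WriteBackCache:
    def __init__(self, local_ssd, peer_ssd, backend_san):
        self.local_ssd = local_ssd      # cache device on this host
        self.peer_ssd = peer_ssd        # replica device on another host
        self.backend_san = backend_san  # shared backend storage, written later
        self.dirty = queue.Queue()      # blocks not yet flushed to the SAN
        threading.Thread(target=self._flusher, daemon=True).start()

    def write(self, block, data):
        # The write is acknowledged once both cache copies exist,
        # without waiting for the backend SAN.
        self.local_ssd[block] = data
        self.peer_ssd[block] = data     # mirrored over the replication network
        self.dirty.put((block, data))
        return "ack"

    def _flusher(self):
        # Dirty blocks drain to the SAN in the background. If this host fails
        # before a block is flushed, the peer's copy is flushed instead.
        while True:
            block, data = self.dirty.get()
            self.backend_san[block] = data

cache = WriteBackCache(local_ssd={}, peer_ssd={}, backend_san={})
print(cache.write(42, b"payload"))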

Now with PernixData FVP, the end-user has to specifically create a cluster and add resources to it. With VirtuCache, this step is eliminated. Our thinking was that VMware already does a great job of clustering hosts, with storage, compute, and networking already added, so why should we create a new cluster in our software? We decided to simply leverage VMware clustering and map to it automatically. So in our case, the VirtuCache cluster is the VMware cluster. All the end-user has to do is add in-VMware host caching media (SSD and/or DRAM) to the existing VMware cluster.
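As an illustration of why no separate cluster definition is needed, the VMware cluster membership is already available through the vSphere API. The sketch below simply reads it with pyVmomi; the vCenter address and credentials are placeholders, and this is only an example of how such automatic mapping could be done, not our actual code.

# Minimal pyVmomi sketch: read the hosts that already belong to each VMware
# cluster, which is all the cluster information caching software needs.
# The vCenter address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    print("Cluster:", cluster.name)
    for host in cluster.host:
        print("  Host:", host.name)
Disconnect(si)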

Pernix Screenshot 1: Creating a cluster in FVP.

Pernix Screenshot 2: Add resources to FVP cluster.

Virtunet Screenshot 1: VirtuCache maps to the VMware cluster by default. No need to create a separate cluster in VirtuCache.

Similar Step 3 – Assign In-VMware Host Caching Media.

Assign the cache device on a per-host basis. This step remains the same with Pernix and us. In this step, you assign in-VMware host RAM and/or SSD as the caching media. Pernix calls these Acceleration Resources; we call them Caching Media. With VirtuCache, you can choose an in-host SSD, some amount of in-host RAM, or a combination of the two.
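Before assigning caching media, it helps to know which local devices are flash and how much RAM each host has. The sketch below lists both per host with pyVmomi; it is just one way to enumerate candidate caching media, with placeholder vCenter credentials.

# Minimal sketch: list per-host RAM and SSD devices, i.e. the candidate
# caching media for this step. vCenter address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())

hosts = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in hosts.view:
    print("Host:", host.name, "RAM (GB):", host.hardware.memorySize // 1024 ** 3)
    for lun in host.config.storageDevice.scsiLun:
        # HostScsiDisk objects carry an 'ssd' flag on vSphere 5.5 and later.
        if isinstance(lun, vim.host.ScsiDisk) and getattr(lun, "ssd", False):
            size_gb = lun.capacity.block * lun.capacity.blockSize // 1024 ** 3
            print("  SSD:", lun.canonicalName, size_gb, "GB")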

Pernix Screenshot 3: In FVP, add Acceleration Resources.

Virtunet Screenshot 2: In VirtuCache, add in-VMware host SSD and/or RAM as caching media.

Similar Step 4 – Select Network To Mirror Writes. VirtuCache Write Replication = PernixData Peering.

As explained above, in the case of write-back caching, writes need to be mirrored to another SSD on another host. There is no way around this step if you want to avoid data loss in case of host failure. Such write replication needs to happen over a network. Both Pernix and we use the vMotion network by default, with the option of using another network for this purpose.
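Since the vMotion network is the default write-replication path for both products, it is worth confirming which vmkernel interface carries vMotion on each host before changing anything. The sketch below reads this with the standard vSphere API call QueryNetConfig("vmotion") via pyVmomi; the vCenter credentials are placeholders and no product-specific API is implied.

# Minimal sketch: show which vmkernel NICs are tagged for vMotion, i.e. the
# network both products use for write replication by default.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())

hosts = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in hosts.view:
    netcfg = host.configManager.virtualNicManager.QueryNetConfig("vmotion")
    selected = set(netcfg.selectedVnic or [])
    for vnic in netcfg.candidateVnic:
        state = "selected" if vnic.key in selected else "candidate"
        print(host.name, vnic.device, vnic.spec.ip.ipAddress, state)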

Pernix Screenshot 4: In FVP, select the write replication network (Pernix calls it peering).

Virtunet Screenshot 3: In VirtuCache, select the write replication network.

Similar Step 5 – Select Datastores To Cache.

This is the same with us and PernixData. Both of us cache at the datastore level. By default, all VMs in a datastore inherit the caching policy of that datastore. There is also the ability to assign a different caching policy to individual VMs if required.

The caching policy can be either write-back or write-through. Only we further qualify the write-back policy with the number of write replicas. In our case, write-back policy with 1 replica means that one copy of the write cache is kept on another host, and write-back policy with 2 replicas means that two copies of writes are kept on two different hosts.
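The policy model described above is simple enough to capture in a few lines. The sketch below is only an illustration of datastore-level policies, per-VM overrides, and replica counts; the names and structure are hypothetical and not the actual configuration format of either product.

# Illustrative data model for this step: datastore-level caching policies
# that VMs inherit unless explicitly overridden. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class CachePolicy:
    mode: str            # "write-through" or "write-back"
    write_replicas: int  # 0 for write-through; 1 or 2 for write-back

datastore_policy = {
    "Datastore-SQL":  CachePolicy("write-back", 1),
    "Datastore-VDI":  CachePolicy("write-back", 2),
    "Datastore-Test": CachePolicy("write-through", 0),
}

vm_override = {
    # A single VM can deviate from its datastore's policy.
    "sql-reporting-vm": CachePolicy("write-through", 0),
}

def effective_policy(vm_name, datastore_name):
    return vm_override.get(vm_name, datastore_policy[datastore_name])

print(effective_policy("sql-prod-vm", "Datastore-SQL"))       # inherited
print(effective_policy("sql-reporting-vm", "Datastore-SQL"))  # overridden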

Pernix Screenshot 5: In FVP, assign caching policy to Datastores.

Virtunet Screenshot 4: In VirtuCache, assign caching policy to Datastores.

Up to this point, other than the cluster that Pernix requires you to create in FVP separately from the VMware cluster, and that we don't, the steps are more or less the same.

What we don't do that Pernix did, and why.

These are relatively minor differences, but in the interest of completeness I thought they were worth mentioning.

Treating backup traffic differently.

Backup traffic generated by backup VMs (Veeam, Commvault, Avamar, Nakivo, etc.) should be excluded from caching, or else this traffic will use up all the cache media on the host. Both Pernix and VirtuCache exclude backup traffic from being cached. However, VirtuCache has a separate caching policy called the 'backup-vm' policy that needs to be applied to VMs running backup software that uses VMware snapshots as a first step to copy production VM data to the backup storage target. VirtuCache accelerates the snapshot creation/deletion process by creating/deleting these snapshots in the host cache media, but takes care not to cache the backup traffic that is sent to the backup storage target.
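Continuing the illustrative policy model from step 5, the backup case amounts to one more policy value and one extra check in the IO path. This is purely a sketch of the behaviour described above, not VirtuCache's actual implementation.

# Illustrative only: a 'backup-vm' policy value means IO issued by that VM
# is streamed to the backup target without populating the cache.
BACKUP_POLICY = "backup-vm"

def should_cache(source_vm_policy_mode):
    # Snapshot create/delete is still accelerated elsewhere; only the bulk
    # backup reads/writes issued by the backup VM bypass the cache.
    return source_vm_policy_mode != BACKUP_POLICY

print(should_cache("write-back"))  # True  - normal VM IO is cached
print(should_cache("backup-vm"))   # False - backup traffic is not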

Doing away with Hub.

The Pernix Hub functionality lets you manage licensing and logs, and download support bundles. We did away with this because we integrate with VMware's support bundles, and our logs are written to VMware's own kernel logs, so we rely on VMware's support bundles and logging to troubleshoot issues in our software.
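As an example of what this means in practice, caching-driver messages can be pulled straight out of the standard VMware kernel log on a host. The sketch below greps vmkernel.log over SSH; the host name, credentials, and the 'virtucache' log tag are assumptions for illustration.

# Sketch: read caching-driver messages from VMware's own kernel log.
# The host name, credentials, and the 'virtucache' tag are placeholders.
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("esxi01.lab.local", username="root", password="password")
_, stdout, _ = ssh.exec_command("grep -i virtucache /var/log/vmkernel.log | tail -n 20")
print(stdout.read().decode())
ssh.close()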

Doing away with license management.

We have kept our licensing to a simple per-host model. Also, our license is tied to the VMware-assigned unique ID for the host, the same ID that VMware's own license key is tied to. So again, we have tied ourselves to VMware's own license management.
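The VMware-assigned unique host ID referred to above can be read directly from the vSphere API. The sketch below prints the hardware UUID per host as an example of such an identifier; which exact ID our license binds to is simplified here for illustration, and the vCenter credentials are placeholders.

# Minimal sketch: read VMware's per-host hardware UUID, the kind of unique
# ID that a per-host license can be tied to. Credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())

hosts = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in hosts.view:
    print(host.name, host.hardware.systemInfo.uuid)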

Doing away with a few stats pages.

If the information is not actionable, it shouldn’t be presented.

The only charts that matter are latency and throughput charts broken out by reads and writes at the VM, host, and cluster level.

Pernix has reports on the amount of storage IO and bandwidth offloaded from the SAN, the 'acceleration rate', and the 'population vs eviction' rate. These charts need explanation for the end user and the information is not actionable, so we decided not to display them.
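As an example of the charts we do consider actionable, per-VM read and write latency can be pulled from vCenter's standard performance counters. The sketch below queries the virtualDisk latency counters with pyVmomi; the vCenter credentials and VM name are placeholders, and this uses VMware's PerformanceManager, not a product-specific API.

# Minimal sketch: query per-VM read/write latency (ms) from vCenter's
# PerformanceManager. vCenter credentials and the VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
perf = si.content.perfManager

wanted = {"virtualDisk.totalReadLatency.average",
          "virtualDisk.totalWriteLatency.average"}
ids = {}
for c in perf.perfCounter:
    name = "{}.{}.{}".format(c.groupInfo.key, c.nameInfo.key, c.rollupType)
    if name in wanted:
        ids[c.key] = name

vms = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in vms.view if v.name == "sql-prod-vm")  # placeholder VM name

spec = vim.PerformanceManager.QuerySpec(
    entity=vm,
    metricId=[vim.PerformanceManager.MetricId(counterId=k, instance="*") for k in ids],
    intervalId=20,   # 20-second real-time samples
    maxSample=3)
for result in perf.QueryPerf(querySpec=[spec]):
    for series in result.value:
        print(ids[series.id.counterId], series.id.instance, series.value)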

What we do that Pernix doesn’t.

As it relates to functionality, we do a few more things than Pernix in the areas of supporting newer ESXi releases, handling the all-hosts-down situation, and supporting 3rd party MPPs. The first article in this series elaborates on these differences in more detail.

Conclusion.

On the whole, the configuration steps and GUI for us and Pernix are similar. Since we leverage VMware for some functionality and decided not to display stats that are not actionable by the end user, our GUI is a bit simpler.
