Blog

VMware’s vFlash Read Cache (VFRC) versus Virtunet’s Read+Write Caching

VMware will discontinue VFRC starting with ESXi 7.0, to be released in Q4 2019.

Despite the end-of-life announcement for VFRC, if you still want to review the differences between VFRC and VirtuCache, below are the three most important ones.

  1. We cache reads and writes; VMware's VFRC caches only reads. Caching writes improves the performance of not only writes, but also of reads.[1] (See the sketch after this list.)

  2. We require no ongoing administration. Caching in our case is fully automated, and all VMware features are seamlessly supported. VFRC, by contrast, requires administrator intervention for vMotion, creating a new VM, entering maintenance mode, and restoring a VM from backup; it also requires knowledge of the application's block size and manual SSD capacity assignment per vdisk, and many other tasks need admin oversight as well.

  3. We provide easy-to-understand VM, cache, network, and storage appliance level metrics for throughput, IOPS, and latency, along with alerting to forewarn of failure events. VFRC doesn't.
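
To make point 1 concrete, below is a minimal, purely conceptual sketch of a write-back cache, written for illustration only (it is not VirtuCache's implementation). A block that was just written is already sitting on the cache device, so a subsequent read of that block is a cache hit and never touches backend storage; a read-only cache would miss on that first read.

```python
# Toy write-back cache, for illustration only (not VirtuCache's implementation).
class WriteBackCache:
    def __init__(self):
        self.cache = {}       # block_id -> data held on the cache device
        self.dirty = set()    # blocks written but not yet flushed to the backend
        self.backend = {}     # stand-in for the SAN / backend array

    def write(self, block_id, data):
        self.cache[block_id] = data   # acknowledged at cache-device speed
        self.dirty.add(block_id)      # flushed to the backend later

    def read(self, block_id):
        if block_id in self.cache:
            return self.cache[block_id], "cache hit"
        data = self.backend.get(block_id)   # slow path: go to backend storage
        self.cache[block_id] = data         # populate cache for future reads
        return data, "cache miss"

    def flush(self):
        for block_id in self.dirty:
            self.backend[block_id] = self.cache[block_id]
        self.dirty.clear()

cache = WriteBackCache()
cache.write("blk-42", b"hot data")   # absorbed by the cache, no backend IO
print(cache.read("blk-42"))          # cache hit: the write warmed the cache for the read
```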

Below is a longer list of differences, cross-referenced with VMware authored content:

View Storage Accelerator vs. VirtuCache – VMware Host Cache

The big difference between the two is that VSA caches only 2GB of reads from the Master VM.[1,2] VirtuCache caches reads + writes from all server & desktop VMs, and it can cache to TBs of in-host SSD/RAM, so all storage IO is serviced from in-host cache.

More details in the table below.

To Improve CEPH Performance for VMware, Install SSDs in VMware Hosts, NOT OSD Hosts.

SSDs deployed for caching in CEPH OSD servers are not very effective. The problem lies not with the SSDs themselves, but with where they sit in the IO path: downstream (relative to the VMs that run end-user applications) of the IO bottleneck. This post looks at this performance shortcoming of CEPH and its solution.

There are two options for improving the performance of CEPH.

Option 1 is to deploy SSDs in CEPH OSD servers for journaling (write caching) and read caching.

Option 2 is to deploy SSDs in the VMware hosts (that connect to CEPH over iSCSI) along with host side caching software, that then automatically caches reads and writes to the in-VMware host SSD from VMware Datastores created on CEPH volumes.

Below are reasons for why we recommend that you go with Option 2.
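
One of those reasons can be shown with back-of-the-envelope math. With an SSD inside the OSD server (Option 1), even a cache hit still pays the iSCSI network round trip from the VMware host to CEPH; with the SSD inside the VMware host (Option 2), a hit never leaves the host. The latency figures and hit ratio below are illustrative assumptions, not benchmarks.

```python
# Rough average-latency comparison with assumed numbers (not measurements).
def avg_latency_ms(hit_rate, hit_latency_ms, miss_latency_ms):
    """Weighted average latency as seen by the VM."""
    return hit_rate * hit_latency_ms + (1 - hit_rate) * miss_latency_ms

HIT_RATE = 0.9          # assumed cache hit ratio
NETWORK_RTT_MS = 0.5    # assumed iSCSI round trip between ESXi host and CEPH
OSD_SSD_MS = 0.2        # assumed SSD service time inside the OSD server
OSD_HDD_MS = 8.0        # assumed HDD service time inside the OSD server
HOST_SSD_MS = 0.1       # assumed in-host NVMe service time

# Option 1: SSD cache in the OSD server. Every hit still crosses the network.
option1 = avg_latency_ms(HIT_RATE, NETWORK_RTT_MS + OSD_SSD_MS,
                         NETWORK_RTT_MS + OSD_HDD_MS)

# Option 2: SSD cache in the VMware host. A hit is serviced locally.
option2 = avg_latency_ms(HIT_RATE, HOST_SSD_MS,
                         NETWORK_RTT_MS + OSD_HDD_MS)

print(f"Option 1 (SSD in OSD server):  {option1:.2f} ms average")
print(f"Option 2 (SSD in VMware host): {option2:.2f} ms average")
```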

How to Select SSDs for Host Side Caching for VMware – Interface, Model, Size, Source, and RAID Level?

In terms of price/performance, enterprise NVMe SSDs have now become the best choice of in-VMware-host caching media. They perform better and cost only a little more than their lower performing SATA counterparts. The Intel P4600/P4610 NVMe SSDs are my favorites; the Samsung PM1725a is my second choice. If you don't have a spare 2.5" NVMe bay or PCIe slot in your ESXi host, which precludes you from using NVMe SSDs, you could use enterprise SATA SSDs instead. If you go with SATA SSDs, you will also need a high queue depth RAID controller in the ESXi host. In the enterprise SATA SSD category, the Intel S4600/S4610 or Samsung SM863a are good choices. If you don't have a spare PCIe, NVMe, SATA, or SAS slot in the host at all, then the only choice is to use the much more expensive but higher performing host RAM as cache media.
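
If you want a quick sanity check on endurance before you buy, the sketch below applies the standard relationship TBW = DWPD x capacity (TB) x 365 x warranty years to an assumed drive and an assumed daily write volume. Every number in it is a placeholder, not a vendor spec; substitute your SSD's datasheet values and the write rate you actually observe on the host.

```python
# Endurance sanity check with placeholder numbers (swap in your own values).
def rated_tbw(dwpd, capacity_tb, warranty_years):
    """Terabytes written the drive is rated for over its warranty."""
    return dwpd * capacity_tb * 365 * warranty_years

def years_of_life(tbw, tb_written_per_day):
    """How long the drive lasts at the observed daily write volume."""
    return tbw / (tb_written_per_day * 365)

CAPACITY_TB = 1.6        # assumed SSD capacity
DWPD = 3.0               # assumed drive-writes-per-day rating
WARRANTY_YEARS = 5       # assumed warranty period
DAILY_WRITES_TB = 2.0    # assumed TB written to the cache per day on this host

tbw = rated_tbw(DWPD, CAPACITY_TB, WARRANTY_YEARS)
print(f"Rated endurance: {tbw:.0f} TBW")
print(f"Expected life at {DAILY_WRITES_TB} TB/day: {years_of_life(tbw, DAILY_WRITES_TB):.1f} years")
```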

This blog article will cover the below topics.

- A few good SSDs and their performance characteristics.

- Write IOPS rating and lifetime endurance of SSDs.

- Sizing the SSD.

- How many SSDs are needed in a VMware host and across the VMware cluster?

- In the case of SATA SSDs, the need to configure the SSD as RAID0.

- Queue Depths.

- Where to buy SSDs?

CEPH Storage for VMware vSphere

CEPH is a great choice for deploying large amounts of storage. Its biggest drawbacks are high storage latencies and the difficulty of making it work with VMware hosts.

The Advantages of CEPH.

CEPH can be installed on ordinary servers. It clusters these servers together and presents the cluster as an iSCSI target. Clustering is a key feature: it lets CEPH sustain component failures without causing a storage outage, and it lets capacity scale linearly by simply hot-adding servers to the cluster. You can build CEPH storage with off-the-shelf components - servers, SSDs, HDDs, NICs - essentially any commodity server or server component. There is no vendor lock-in for hardware, so hardware costs are low. All in all, it offers better reliability and deployment flexibility at a lower cost than big-brand storage appliances.
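
To put "scale capacity linearly" in concrete terms, here is a rough capacity sketch. CEPH's default replicated pools keep three copies of every object, so usable capacity is roughly raw capacity divided by the replication factor, and each server you hot-add contributes its share. The server count, drive sizes, and the fill ceiling left as rebalancing headroom are assumptions for illustration.

```python
# Rough usable-capacity estimate for a replicated CEPH cluster (assumed inputs).
def usable_tb(num_servers, drives_per_server, drive_tb,
              replication=3, fill_ceiling=0.8):
    raw_tb = num_servers * drives_per_server * drive_tb
    return raw_tb / replication * fill_ceiling

print(usable_tb(num_servers=4, drives_per_server=12, drive_tb=8))  # ~102 TB usable
print(usable_tb(num_servers=5, drives_per_server=12, drive_tb=8))  # ~128 TB after adding one server
```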

CEPH has Two Drawbacks - High Storage Latencies and Difficulty Connecting to VMware.

Improving Storage Performance of Dell VRTX

Dell's PowerEdge VRTX hyper-converged appliance can have either all hard drive datastores or all SSD datastores, but you can't have SSDs act as tiering or caching media for HDD volumes. That's where VirtuCache comes in.

Infinio’s Read Caching versus Virtunet’s Read+Write Caching Software

The biggest difference is that we accelerate both reads and writes, while Infinio accelerates only reads. A few others: with us you can apply a caching policy at the datastore and/or VM level, versus only at the VM level with Infinio; and we accelerate the creation of VMs, snapshots, and other VMware kernel operations, which they don't.

For existing Infinio customers, we are offering VirtuCache for just the price of Infinio's annual support payment. Offer valid till December 31, 2019.

Citrix MCS IO vs. VirtuCache – Server Side Storage IO Caching

Both cache 'hot' data to in-host RAM, but the differences between Citrix MCS Storage Optimization and VirtuCache are many. The top three are:

- MCSIO works only for non-persistent XenApp / XenDesktop VMs.[1] VirtuCache works for all VMs on the ESXi host;

- Citrix MCSIO can cache only VM writes, and only to VM RAM.[1] VirtuCache can cache VM reads and writes to in-host SSD or RAM;

- With MCSIO there will be VM data loss or instability if the host fails or the RAM cache fills up;[1] not so with VirtuCache.

For detailed differences, please review the table below.

Improving Performance of Log Management Application at a Service Provider

Business Intelligence, Log Management, Security Information & Event Management (SIEM), Search, and Analytics software like Splunk, Elasticsearch, Cognos, HP Vertica, and HP Autonomy need to provide real-time visibility into large volumes of fast-changing data. When these applications are deployed in traditional VMware VMs connected to centralized storage, the large volume of write and read operations puts pressure on the existing storage infrastructure, resulting in ingest and analysis speeds much slower than the real-time performance expected of such applications.

GUI Comparison, PernixData FVP vs Us – VMware Write Back Caching Software

For the most part both our GUI and workflows are similar. This article compares, with screenshots, steps to install and configure VirtuCache and PernixData FVP.

The only differences between us and Pernix stem from the fact that we leverage VMware's own capabilities for clustering, license management, and logging, whereas Pernix programmed these features separately within their software. These additional screens add a few clicks and pages to Pernix versus us, but again, I want to emphasize that in terms of GUI and workflow we are more similar than different.
