Blog

High Speed Storage is Back in the VMware Host with Hyper-Converged Infrastructure and Host Side Caching, but the similarities end there…

The main advantage that hyper-converged infrastructure (HCI) has over traditional converged infrastructure (separate hardware for compute and storage) is that HCI has put high speed storage back in the compute nodes. This is also true of Host Side Caching software, with the added benefit that host side caching retains the flexibility that Converged Infrastructure (CI) has always had over HCI: the ability to scale and perform maintenance on compute and storage hardware independently of each other.

Other pros of host side caching plus converged infrastructure, versus hyper-converged infrastructure, are listed below.

Storage IO path in VSAN versus VirtuCache Host Side Caching.

The storage IO paths in VSAN and VirtuCache are similar to a large extent, since both service storage IO from media inside the VMware host. Still, storage latencies are lower with VirtuCache than with VSAN, for four reasons (a simplified sketch of the cached read/write path follows this list):

  1. Reads are almost always serviced from the local cache media in VirtuCache. In VSAN, there is a high chance that reads are serviced over the network from another host;

  2. In addition to SSD, with VirtuCache you can cache to RAM, which is the highest performing media there is, something that's not possible in VSAN;

  3. The write cache flush rate to a backend storage array is typically higher than to locally attached storage. As a result, write latencies will be lower with VirtuCache, because it flushes writes to the SAN array;

  4. VirtuCache is block based; VSAN is object based.
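To make the read and write paths discussed above concrete, here is a minimal, purely illustrative Python sketch of host-side write-back caching: reads are served from local media on a cache hit, and writes are acknowledged as soon as they land in the local cache, then flushed to the backend array later. The names (WriteBackCache, backend, capacity_blocks) are invented for illustration; this is not VirtuCache's or VSAN's actual implementation.

```python
# Illustrative host-side write-back cache (not VirtuCache's or VSAN's code).
# "backend" stands in for the SAN array, "cache" for in-host SSD/RAM.
from collections import OrderedDict

class WriteBackCache:
    def __init__(self, backend, capacity_blocks):
        self.backend = backend          # slower backend storage (dict-like)
        self.cache = OrderedDict()      # in-host media: block -> data
        self.capacity = capacity_blocks
        self.dirty = set()              # blocks written but not yet flushed

    def read(self, block):
        if block in self.cache:         # cache hit: served from local media
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.backend[block]      # cache miss: one trip to the array
        self._insert(block, data)
        return data

    def write(self, block, data):
        self._insert(block, data)       # acknowledged once on local media
        self.dirty.add(block)           # flushed to the array asynchronously

    def flush(self):
        for block in list(self.dirty):  # background destaging to the backend
            self.backend[block] = self.cache[block]
            self.dirty.discard(block)

    def _insert(self, block, data):
        self.cache[block] = data
        self.cache.move_to_end(block)
        while len(self.cache) > self.capacity:
            victim, vdata = self.cache.popitem(last=False)  # evict LRU block
            if victim in self.dirty:    # never drop an unflushed write
                self.backend[victim] = vdata
                self.dirty.discard(victim)
```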

VMware’s vFlash Read Cache (VFRC) versus Virtunet’s Read+Write Caching

VMware will discontinue VFRC starting with ESXi 7.0, to be released in Q4 2019.

Despite the end-of-life announcement for VFRC, if you still want to review the differences between VFRC and VirtuCache, below are the three most important ones.

  1. We cache reads and writes; VMware's VFRC caches only reads. Caching writes improves the performance of not only writes, but also of reads.1

  2. We require no ongoing administration. Caching in our case is fully automated, and all VMware features are seamlessly supported. VFRC, by contrast, requires administrator intervention when doing vMotion, creating a new VM, entering maintenance mode, or restoring a VM from backup; it also requires knowledge of the application block size and SSD capacity assignment per vdisk. Many other tasks require admin oversight as well.

  3. We provide easy-to-understand VM, cache, network, and storage appliance level metrics for throughput, IOPS, and latencies, along with alerting to forewarn of failure events. VFRC doesn't.

Below is a longer list of differences, cross-referenced with VMware-authored content:

View Storage Accelerator vs. VirtuCache – VMware Host Side Storage Caching

The big difference between the two is that VSA caches only 2GB of reads from the Master VM1,2. VirtuCache caches reads + writes from all server & desktop VMs, and it can cache to TBs of in-host SSD/RAM, so that all storage IO is serviced from in-host cache.

More details in the table below.

To Improve CEPH performance for VMware, Install SSDs in VMware hosts, NOT OSD hosts.

SSDs deployed for caching in CEPH OSD servers are not very effective. The problem lies not in the SSDs themselves, but in the fact that they sit at a point in the IO path that is downstream (relative to the VMs that run user applications) of where the IO bottleneck is. This post looks at this performance shortcoming of CEPH and its solution.

There are two options for improving the performance of CEPH.

Option 1 is to deploy SSDs in CEPH OSD servers for journaling (write caching) and read caching.

Option 2 is to deploy SSDs and host side caching software in the VMware hosts (which connect to CEPH over iSCSI). The host side caching software then automatically caches reads and writes from VMware datastores created on CEPH volumes to the in-host SSD.
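To give a sense of what Option 2 involves on the VMware side, below is a rough sketch of pointing an ESXi host at a CEPH iSCSI gateway with esxcli (wrapped in Python's subprocess purely for readability). It assumes the CEPH cluster already exposes an iSCSI target; the adapter name vmhba65 and the gateway address are placeholders.

```python
# Rough sketch: attach an ESXi host to a CEPH iSCSI gateway via esxcli.
# The adapter name and gateway address below are placeholders.
import subprocess

GATEWAY = "192.168.10.21:3260"   # CEPH iSCSI gateway (placeholder)
ADAPTER = "vmhba65"              # software iSCSI adapter (placeholder)

def esxcli(*args):
    # Run an esxcli command and raise if it fails.
    subprocess.run(["esxcli", *args], check=True)

# Enable the software iSCSI initiator on the host.
esxcli("iscsi", "software", "set", "--enabled=true")

# Point dynamic discovery at the CEPH iSCSI gateway.
esxcli("iscsi", "adapter", "discovery", "sendtarget", "add",
       "--adapter", ADAPTER, "--address", GATEWAY)

# Rescan so the CEPH-backed LUNs show up and can be formatted as VMFS datastores.
esxcli("storage", "core", "adapter", "rescan", "--adapter", ADAPTER)
```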

Below are the reasons why we recommend that you go with Option 2.

How to Select SSDs for Host Side Caching for VMware – Interface, Model, Size, Source and RAID Level?

In terms of price/performance, enterprise NVMe SSDs have now become the best choice of in-VMware-host caching media. They are higher performing and cost only a little more than their lower performing SATA counterparts. The Intel P4600/P4610 NVMe SSDs are my favorites. If you don't have a spare 2.5” NVMe or PCIe slot in your ESXi host, which precludes you from using NVMe SSDs, you could use enterprise SATA SSDs instead. If you go with SATA SSDs, you will also need a high queue depth RAID controller in the ESXi host. In the enterprise SATA SSD category, the Intel S4600/S4610 or Samsung SM863a are good choices. If you don't have a spare PCIe, NVMe, SATA, or SAS slot in the host, then the only option is host RAM as cache media, which is much more expensive but higher performing.
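As a preview of the sizing and endurance discussion, here is a back-of-the-envelope calculation using the standard DWPD-to-TBW relationship. The capacity, DWPD rating, and host write rate below are placeholder numbers, not a recommendation for any particular drive.

```python
# Back-of-the-envelope SSD endurance math (placeholder numbers throughout).
capacity_tb = 1.6      # usable SSD capacity in TB (placeholder)
dwpd = 3.0             # drive-writes-per-day rating (placeholder)
warranty_years = 5

# Total data the vendor warrants you can write over the drive's life.
tbw = dwpd * capacity_tb * 365 * warranty_years
print(f"Rated endurance: {tbw:.0f} TB written")

# Expected lifetime at a sustained cache write rate measured from your workload.
host_writes_tb_per_day = 2.0   # placeholder write rate
years_of_life = tbw / (host_writes_tb_per_day * 365)
print(f"At {host_writes_tb_per_day} TB/day of writes: {years_of_life:.1f} years of life")
```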

This blog article will cover the below topics.

- A few good SSDs and their performance characteristics.

- Write IOPS rating and lifetime endurance of SSDs.

- Sizing the SSD.

- How many SSDs are needed in a VMware host and across the VMware cluster?

- In the case of SATA SSDs, the need to RAID0 the SSD.

- Queue Depths.

- Where to buy SSDs?

CEPH Storage for VMware vSphere

CEPH is a great choice for deploying large amounts of storage. Its biggest drawbacks are high storage latencies and the difficulty of making it work for VMware hosts.

The Advantages of CEPH.

CEPH can be installed on ordinary servers. It clusters these servers together and presents the cluster as an iSCSI target. Clustering is a key feature: it lets CEPH sustain component failures without causing a storage outage, and it lets capacity scale linearly by simply hot adding servers to the cluster. You can build CEPH storage with off-the-shelf components - servers, SSDs, HDDs, NICs - essentially any commodity server or server components. There is no hardware vendor lock-in, so hardware costs are low. All in all, CEPH offers better reliability and deployment flexibility at a lower cost than big brand storage appliances.

CEPH has Two Drawbacks - High Storage Latencies and Difficulty Connecting to VMware.

Improving Storage Performance of Dell VRTX

Dell's PowerEdge VRTX hyper-converged appliance can have either all hard drive datastores or all SSD datastores, but it can't use SSDs as tiering or caching media for VRTX volumes / virtual disks. That's where VirtuCache comes in.

Infinio’s Read Caching versus Virtunet’s Read+Write Caching Software

The biggest differences are:

  1. We accelerate both reads and writes, Infinio accelerates only reads.

  2. Infinio doesn't support Linked Clones or Non Persistent VDI. We support all VDI features.

  3. With us, you can apply the caching policy at the datastore and/or VM level, versus only at the VM level with Infinio.

  4. We accelerate IO originating in both the VMware kernel and VMs, whereas Infinio accelerates only VM-generated IO.

Citrix MCS IO vs. VirtuCache – Server Side Storage IO Caching

Both cache 'hot' data to in-host RAM, but the differences between Citrix MCS Storage Optimization and VirtuCache are many. The top three are:

- MCSIO works only for non-persistent XenApp / XenDesktop VMs.1 VirtuCache works for all VMs on the ESXi host;

- Citrix MCSIO can cache only VM writes, and only to VM RAM.1 VirtuCache can cache VM reads and writes to in-host SSD or RAM;

- With MCSIO, there will be VM data loss / instability if the host fails or the RAM cache fills up;1 not so with VirtuCache.

For detailed differences, please review the table below.
