Infinio’s Read Caching versus Virtunet’s Read+Write Caching Software

The biggest difference is that we accelerate both reads and writes, while Infinio accelerates only reads. A few other differences: with us you can apply caching policy at the Datastore and/or VM level, versus only at the VM level with Infinio; and we accelerate creation of VMs, snapshots, and other VMware kernel operations, which Infinio doesn't. More details in the table below.

| Virtunet Systems | Infinio |
| --- | --- |
| Accelerates both reads and writes.¹ | Accelerates only reads. By not caching writes, not only are writes left unaccelerated, but reads queued behind writes on the same thread are slowed down as well.¹ |
| Caching policy can be applied at the VM and/or Datastore level. Since there are typically far fewer Datastores than VMs, it's quicker to configure caching for all the VMs in the cluster by assigning policy at the Datastore level. All VMs within a Datastore inherit the Datastore-wide caching policy by default, and you can of course apply a different policy at the VM level.² | Caching policy can be applied only at the VM level, so with a large number of VMs, configuring caching for the whole cluster becomes onerous.² |
| When new VMs (server or desktop) are created, they automatically inherit the Datastore caching policy. | Policy has to be applied manually to every new server or desktop VM. This is especially problematic in non-persistent VDI, where new VMs are continuously created without notifying the admin. |
| All storage IO, whether it originates within VMs or in the VMware kernel, is accelerated.³ | Storage IO originating in the VMware kernel is not accelerated. So creation/deletion of VMs, snapshots, and linked clone operations are not accelerated, since they are initiated from within the VMware kernel rather than from within any VM.³ |

  1. Why is caching writes necessary?

    The main arguments from read-caching vendors like Infinio, Atlantis Computing, and Datrium for why read caching suffices are, first, that most traditional IT workloads are read dominated, and second, that by caching only reads, the storage appliance is freed up to better service writes. Both statements are true, but neither is a strong enough argument for not caching writes. In fact, not caching writes slows down reads as well. Here is why: in all real-world applications, reads and writes are interspersed on the same thread. Since Infinio does not cache writes, pending writes in the queue hold up even the read requests behind them on the same thread. So even if those reads could by themselves be accelerated by Infinio, they are slowed down because they sit behind writes on the same thread. Keep in mind that a synthetic testing tool like IOmeter can be configured to issue reads and writes on separate threads, which results in reads being accelerated independent of writes, but this is not a real-life use case. The toy simulation at the end of this note illustrates the effect.

    Secondly, many storage appliances, even older ones, do OK on reads but not on writes, so accelerating even a small amount of writes disproportionately improves VM performance.

    Thirdly, write-intensive applications are often the ones of higher value to the customer, as is the case with transaction processing, ERP, healthcare EMR, order processing, and other such applications. For instance, entering purchase orders (a write-intensive job) is likely of higher value than running a report on historical purchase orders (a read-intensive job).

    Hence both reads and writes need to be accelerated.
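
    To make the head-of-line blocking argument concrete, here is a toy simulation (plain Python, with made-up latency numbers chosen purely for illustration) of a single thread issuing interleaved, synchronous reads and writes:

    ```python
    # Toy model: one application thread issuing interleaved, synchronous
    # reads and writes. Latency numbers are hypothetical, chosen only to
    # illustrate the head-of-line blocking effect.

    CACHED_READ_MS = 0.2      # read served from host-side cache
    BACKEND_WRITE_MS = 5.0    # uncached write that must go to the storage appliance
    CACHED_WRITE_MS = 0.3     # write absorbed by a host-side write cache

    def run_thread(ops, read_ms, write_ms):
        """Total elapsed time for one thread; each op waits for the previous one."""
        elapsed = 0.0
        for op in ops:
            elapsed += read_ms if op == "R" else write_ms
        return elapsed

    # A realistic pattern: reads and writes interspersed on the same thread.
    workload = ["R", "W", "R", "R", "W", "R", "W", "R"] * 100

    read_only_cache = run_thread(workload, CACHED_READ_MS, BACKEND_WRITE_MS)
    read_write_cache = run_thread(workload, CACHED_READ_MS, CACHED_WRITE_MS)

    print(f"read-only caching : {read_only_cache:.0f} ms")   # ~1600 ms
    print(f"read+write caching: {read_write_cache:.0f} ms")  # ~190 ms
    # Even though every read is a cache hit in both cases, total thread
    # time with read-only caching is dominated by the slow writes that
    # the reads must queue behind.
    ```

    If the same reads and writes were issued on separate threads, as IOmeter can be configured to do, the read total would shrink independently of the writes; on a single thread it cannot.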

  2. Infinio applies caching at the VM level; we do so at the Datastore level. It's quicker for the administrator to enable datacenter-wide caching by selecting Datastores rather than VMs, since Datastores are fewer in number.

    With Infinio, the administrator configures caching on a per-VM basis, so with hundreds of VMs that is a lot of work. As shown in the first screenshot below, by default we cache at the Datastore level. Since the number of Datastores is much smaller than the number of VMs, it is quicker to configure datacenter-wide caching at the Datastore level. With VirtuCache, all the VMs in a Datastore inherit the Datastore-wide caching policy. You can then exclude individual VMs from being cached, or assign a VM a different caching policy than its Datastore's, as shown in the second screenshot below. The sketch after the screenshots shows the inheritance model.

    Virtunet Screenshot 1: In VirtuCache, assign caching policy at the Datastore level.

    Virtunet Screenshot 2: In VirtuCache, ability to override Datastore level caching policy at the VM level.
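
    As a rough mental model of that inheritance (hypothetical class and policy names, not VirtuCache's actual API), policy resolution might look like this:

    ```python
    # Hypothetical sketch of Datastore-level policy inheritance with
    # per-VM overrides; names and policies are illustrative only.
    from typing import Optional

    class Datastore:
        def __init__(self, name: str, policy: str):
            self.name = name
            self.policy = policy  # e.g. "write-back", "write-through", "none"

    class VM:
        def __init__(self, name: str, datastore: Datastore,
                     policy_override: Optional[str] = None):
            self.name = name
            self.datastore = datastore
            self.policy_override = policy_override

        def effective_policy(self) -> str:
            # A VM inherits its Datastore's policy unless explicitly overridden.
            return self.policy_override or self.datastore.policy

    ds = Datastore("SAN-LUN-01", policy="write-back")
    vms = [VM(f"vm-{i:03d}", ds) for i in range(200)]      # one assignment covers all
    vms.append(VM("vm-logs", ds, policy_override="none"))  # exclude one VM

    assert vms[0].effective_policy() == "write-back"
    assert vms[-1].effective_policy() == "none"
    ```

    The point of the model: one Datastore-level assignment covers every VM on it, including VMs created later, while per-VM configuration scales with the number of VMs.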

  3. We accelerate the creation/deletion of VMs, VDI linked clones, and VMware snapshots, which Infinio doesn't. This is because Infinio accelerates only storage IO that originates in VMs; we accelerate IO that originates in both VMs and the VMware kernel.

    Since Infinio caches on a per-VM basis, it can only accelerate storage IO originating within VMs. Some important storage operations in VMware, like creation and deletion of VMs, VDI linked clones, VDI App Volumes, and snapshots, do not originate in VMs; they originate in the VMware kernel, so Infinio cannot accelerate them. Also, in the case of VDI or snapshots, storage performance gets worse as the snapshot or linked clone hierarchy gets deeper, so it is beneficial for snapshot and linked clone operations to be accelerated. The sketch below shows why chain depth matters for reads.
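
    To see why deep chains hurt, here is a simplified model (illustrative only, not VMware's actual on-disk format) of how a block read resolves against a chain of delta disks:

    ```python
    # Simplified model of reading from a snapshot / linked-clone chain.
    # Each delta disk holds only the blocks written since it was taken;
    # a read walks the chain from the newest delta down to the base disk
    # until it finds the block.

    def read_block(chain, block_id):
        """chain[0] is the newest delta, chain[-1] the base disk."""
        lookups = 0
        for disk in chain:
            lookups += 1                 # one lookup per chain level
            if block_id in disk:
                return disk[block_id], lookups
        raise KeyError(block_id)

    base = {1: "A", 2: "B", 3: "C"}
    snap1 = {2: "B'"}          # only block 2 was rewritten after snapshot 1
    snap2 = {}                 # nothing rewritten after snapshot 2
    chain = [snap2, snap1, base]

    value, lookups = read_block(chain, 3)
    print(value, lookups)      # "C", 3 -- an unmodified block walks the whole chain
    ```

    Every read of a block that was never rewritten walks the entire chain, so each additional snapshot or clone level adds another lookup that, without caching, goes to the backend storage.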