Blog

To Improve CEPH Performance for VMware, Install SSDs in VMware Hosts, NOT OSD Hosts.

SSDs deployed for caching in CEPH OSD servers are not very effective. The problem lies not in the SSDs themselves, but in where they sit: they are deployed at a point in the IO path that is downstream (relative to the VMs that run user applications) of where the IO bottleneck is. This post looks at this performance shortcoming of CEPH and its solution.

There are two options for improving the performance of CEPH.

Option 1 is to deploy SSDs in CEPH OSD servers for journaling (write caching) and read caching.
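For readers who want to see what Option 1 entails, below is a minimal sketch (Python wrapping the standard ceph-volume CLI) of creating a BlueStore OSD whose data sits on an HDD while its RocksDB/WAL sits on an SSD partition. The device paths are placeholders, not values from any particular deployment.

```python
# Hypothetical sketch of Option 1: placing the OSD's write-ahead metadata on an SSD.
# Device paths (/dev/sdb, /dev/nvme0n1p1) are placeholders for this example.
import subprocess

HDD_DATA_DEV = "/dev/sdb"       # capacity HDD that stores the object data
SSD_DB_DEV = "/dev/nvme0n1p1"   # SSD partition for BlueStore RocksDB/WAL

def create_osd_with_ssd_db(data_dev: str, db_dev: str) -> None:
    """Create a BlueStore OSD whose RocksDB/WAL lives on an SSD partition."""
    subprocess.run(
        ["ceph-volume", "lvm", "create",
         "--bluestore",
         "--data", data_dev,
         "--block.db", db_dev],
        check=True,
    )

if __name__ == "__main__":
    create_osd_with_ssd_db(HDD_DATA_DEV, SSD_DB_DEV)
```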

Option 2 is to deploy SSDs and host side caching software in the VMware hosts (which connect to CEPH over iSCSI). The host side caching software then automatically caches reads and writes from VMware Datastores created on CEPH volumes to the SSD inside the VMware host.
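And here is a minimal sketch of the iSCSI plumbing that Option 2 assumes, using standard esxcli commands (wrapped in Python) to point an ESXi host at a CEPH iSCSI gateway. The adapter name and gateway address are assumptions for illustration; configuring the host side caching software itself is done separately through its own interface.

```python
# Hypothetical sketch: connect an ESXi host to a CEPH iSCSI gateway so that datastores
# carved from CEPH volumes can then be cached by host side caching software.
# The adapter name and gateway portal address are placeholders.
import subprocess

ISCSI_ADAPTER = "vmhba64"              # software iSCSI adapter on this host (assumed)
CEPH_ISCSI_GATEWAY = "10.0.0.10:3260"  # CEPH iSCSI gateway portal (assumed)

def run(args):
    subprocess.run(args, check=True)

# Enable the ESXi software iSCSI initiator.
run(["esxcli", "iscsi", "software", "set", "--enabled=true"])

# Add the CEPH iSCSI gateway as a dynamic discovery (send targets) address.
run(["esxcli", "iscsi", "adapter", "discovery", "sendtarget", "add",
     "--adapter", ISCSI_ADAPTER, "--address", CEPH_ISCSI_GATEWAY])

# Rescan so the CEPH-backed LUNs appear and can be formatted as VMFS datastores.
run(["esxcli", "storage", "core", "adapter", "rescan", "--adapter", ISCSI_ADAPTER])
```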

Below are the reasons why we recommend that you go with Option 2.

CEPH Storage for VMware vSphere

CEPH is a great choice for deploying large amounts of storage. Its biggest drawbacks are high storage latencies and the difficulty of making it work for VMware hosts.

The Advantages of CEPH.

CEPH can be installed on ordinary servers. It clusters these servers together and presents the cluster as an iSCSI target. Clustering (of servers) is a key feature: it lets CEPH sustain component failures without causing a storage outage, and it lets capacity scale linearly by simply hot-adding servers to the cluster. You can build CEPH storage with off-the-shelf components - servers, SSDs, HDDs, NICs, essentially any commodity server or server components. There is no vendor lock-in for hardware, so hardware costs are low. All in all, CEPH offers better reliability and deployment flexibility at a lower cost than big brand storage appliances.
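As a hypothetical illustration of how a CEPH cluster is provisioned for VMware, the sketch below (Python wrapping the standard ceph and rbd CLIs) carves out an RBD pool and image that a CEPH iSCSI gateway could then export to ESXi hosts. The pool name, image name, placement-group count, and size are assumptions for this example.

```python
# Hypothetical sketch: create a CEPH RBD pool and image that a CEPH iSCSI gateway
# could later export to VMware hosts. Names and sizes are placeholders.
# Assumes a recent ceph/rbd release (application tagging, size suffixes).
import subprocess

POOL = "vmware-pool"    # assumed pool name
IMAGE = "datastore01"   # assumed RBD image name
PG_NUM = "128"          # placement-group count; size per your cluster's guidance

def run(args):
    subprocess.run(args, check=True)

run(["ceph", "osd", "pool", "create", POOL, PG_NUM])                 # create the pool
run(["ceph", "osd", "pool", "application", "enable", POOL, "rbd"])   # tag it for RBD use
run(["rbd", "create", f"{POOL}/{IMAGE}", "--size", "4T"])            # 4 TiB image for a VMFS datastore
```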

CEPH has Two Drawbacks - High Storage Latencies and Difficulty Connecting to VMware.

Improving Storage Performance of Dell VRTX

Dell's PowerEdge VRTX hyper-converged appliance can be configured with either all-HDD datastores or all-SSD datastores, but SSDs cannot act as tiering or caching media for VRTX volumes / virtual disks. That's where VirtuCache comes in.

Infinio’s Read Caching versus Virtunet’s Read+Write Caching Software

The biggest differences are:

  1. We accelerate both reads and writes; Infinio accelerates only reads.

  2. Infinio doesn't support Linked Clones or Non-Persistent VDI. We support all VDI features.

  3. With us you can apply caching policy at the datastore and/or VM level, versus only at the VM level with Infinio.

  4. We accelerate IO originating in both the VMware kernel and VMs, whereas Infinio accelerates only VM-generated IO.

Citrix MCS IO vs. VirtuCache – Server Side Storage IO Caching

Both cache 'hot' data to in-host RAM, but the differences between Citrix MCS Storage Optimization and VirtuCache are many. The top three are:

- MCSIO works only for non-persistent XenApp / XenDesktop VMs; VirtuCache works for all VMs on the ESXi host;

- Citrix MCSIO can cache only VM writes, and only to VM RAM; VirtuCache can cache VM reads and writes to in-host SSD or RAM;

- With MCSIO there will be VM data loss / instability if the host fails or the RAM cache is full; not so with VirtuCache.

For detailed differences, please review the table below.

Improving Performance of Log Management Application at a Service Provider

Business Intelligence, Log Management, Security Information & Event Management (SIEM), Search, and Analytics software like Splunk, Elasticsearch, Cognos, HP Vertica, and HP Autonomy need to provide real-time visibility into large volumes of fast-changing data. When these applications are deployed in traditional VMware VMs connected to centralized storage, the large volume of write and read operations puts pressure on the existing storage infrastructure, resulting in ingest and analysis speeds much slower than the real-time performance expected of such applications.
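To check whether storage is the bottleneck for such a workload, a write-heavy ingest pattern can be approximated with the common fio benchmark and the measured write latency compared before and after adding host side caching. Below is a minimal sketch (Python invoking fio); the target path, block size, queue depth, and runtime are assumptions for illustration.

```python
# Hypothetical sketch: approximate an append-heavy log-ingest workload with fio so that
# storage write latency can be compared before and after adding host side caching.
# Target file, block size, queue depth, and runtime are placeholders.
import subprocess

TARGET = "/mnt/logvol/fio-testfile"   # file on the datastore-backed disk under test (assumed)

subprocess.run(
    ["fio",
     "--name=log-ingest",
     f"--filename={TARGET}",
     "--rw=write",             # sequential writes, as in append-heavy log ingestion
     "--bs=64k",               # medium block size typical of bulk ingest
     "--iodepth=32",
     "--ioengine=libaio",
     "--direct=1",             # bypass the guest page cache to measure storage latency
     "--size=10G",
     "--runtime=300", "--time_based",
     "--group_reporting"],
    check=True,
)
```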

GUI Comparison, PernixData FVP vs. Us – VMware Write Back Caching Software.

For the most part, our GUIs and workflows are similar. This article compares, with screenshots, the steps to install and configure VirtuCache and PernixData FVP.

The only differences between us and Pernix stem from the fact that we leverage VMware's own capabilities for clustering, license management, and logging, whereas Pernix programmed these features separately within their software. Overall, these additional screens add a few clicks and pages in Pernix versus us, but again I want to emphasize that we are more similar than different in terms of GUI and workflow.

Virtunet Read + Write Caching Versus Datrium Read Only Caching, Part II.

In our first article, we explained the differences between our host side caching software (VirtuCache) and Datrium's (DVX DiESL). To summarize the first article, VirtuCache differs from DVX DiESL in 3 ways - (1) VirtuCache caches reads and writes to in-host cache media, Datrium caches only reads; (2) DVX DiESL only works for Datrium's own appliance, VirtuCache works for any appliance; and (3) VirtuCache can cache to in-host RAM and SSD, DVX DiESL can cache to in-host SSD only.

Datrium, in a post on their website, puts forth arguments against caching writes to in-host cache media. In this post we counter Datrium's arguments and argue for caching both reads and writes to in-host cache media.

Replaced PernixData for vGPU-based VDI

First of all, PernixData was good host side caching software. Unfortunately for its customers, after PernixData was acquired by Nutanix, Nutanix end-of-lifed the software. Our software, called VirtuCache, directly competes with PernixData FVP.

PernixData’s FVP vs. Virtunet – Both VMware Kernel Write Caching Software

More similar than different

Both we and PernixData differentiate ourselves from the rest of the host side caching vendors in similar ways: both are kernel mode software; both cache writes in addition to reads; both have data protection strategies in place to protect against data loss in case of multiple simultaneous hardware failures; neither requires networking or storage to be reconfigured; and neither requires an agent per VM or a VM per host.

This article is the first in a series of two articles that compare our software with PernixData FVP. The second article compares (with screenshots) the GUI and configuration steps for PernixData FVP and us.

Below is how we compare on important criteria.
