
CEPH Storage for VMware vSphere

CEPH is a great choice for deploying large amounts of storage. Its biggest drawbacks are high storage latencies and the difficulty of making it work with VMware hosts.

The Advantages of CEPH.

CEPH can be installed on ordinary commodity servers. It clusters these servers together and presents the cluster as an iSCSI target. Clustering is a key feature: it lets CEPH sustain component failures without causing a storage outage, and it lets capacity scale linearly by simply hot-adding servers to the cluster. You can build CEPH storage with off-the-shelf components – servers, SSDs, HDDs, NICs – essentially any commodity server or server components, so there is no hardware vendor lock-in and hardware costs are low. All in all, CEPH offers better reliability and deployment flexibility at a lower cost than big brand storage appliances.
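For context, exposing a CEPH RBD image as an iSCSI target for ESXi hosts is typically configured through the ceph-iscsi gateway's gwcli shell, roughly as below. This is a sketch following the upstream ceph-iscsi workflow; the target IQNs, gateway hostname, IP address, and pool/image names are all placeholders.

```
# Inside the interactive gwcli shell on a gateway node (ceph-iscsi package).
# Create the iSCSI target and register a gateway node:
/> cd /iscsi-targets
/iscsi-targets> create iqn.2003-01.com.example.iscsi-gw:ceph-igw
/iscsi-targets> cd iqn.2003-01.com.example.iscsi-gw:ceph-igw/gateways
/iscsi-target...-igw/gateways> create gw-node1 192.168.1.101

# Back an iSCSI LUN with an RBD image from a CEPH pool:
/> cd /disks
/disks> create pool=rbd image=vmware-ds1 size=2T

# Allow the ESXi host's initiator IQN to see the LUN:
/> cd /iscsi-targets/iqn.2003-01.com.example.iscsi-gw:ceph-igw/hosts
/iscsi-target...-igw/hosts> create iqn.1998-01.com.vmware:esxi-host1
/iscsi-target...-igw/hosts> disk add rbd/vmware-ds1
```

On the VMware side, the gateway IPs are then added as targets under the ESXi software iSCSI adapter, and the LUN appears as a datastore candidate.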

CEPH has Two Drawbacks – High Storage Latencies and Difficulty Connecting to VMware.

CEPH is cheap because you can build it with high capacity hard drives. Higher capacity HDDs, though cheaper on a per-GB basis, are slow, and so CEPH built on them performs poorly. CEPH's replication logic also adds significant overhead, so even if you build CEPH storage with only SSDs, storage latencies are much higher than those of big brand SSD arrays.
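To see why replication hurts latency, consider that a synchronously replicated write is acknowledged only after the slowest replica completes, plus network round trips. The toy model below illustrates this; the latency numbers are hypothetical and purely illustrative, not measured CEPH figures.

```python
def replicated_write_latency(replica_latencies_ms, network_hop_ms=0.5):
    """A synchronously replicated write completes only when the slowest
    replica has committed, so latency = max(replicas) + round-trip hops."""
    return max(replica_latencies_ms) + 2 * network_hop_ms

# Hypothetical per-device write latencies in milliseconds (3x replication).
hdd_replicas = [8.0, 11.0, 9.5]    # HDD-backed OSDs: devices dominate
ssd_replicas = [0.3, 0.4, 0.35]    # SSD-backed OSDs: replication overhead dominates

print(replicated_write_latency(hdd_replicas))  # 12.0 ms
print(replicated_write_latency(ssd_replicas))  # 1.4 ms, vs ~0.4 ms for the raw SSD
```

Note how with SSD-backed replicas the network and replication overhead, not the device, becomes the dominant cost, which is why even all-SSD CEPH lags behind local flash.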

A lesser drawback is its limited ability to interface with VMware. Linux vendors like Red Hat and SUSE, who promote CEPH, compete with VMware in the operating systems space, so it may not be in their interest to promote connectivity between CEPH and VMware.

Our primary differentiator versus Red Hat and SUSE comes into play when CEPH storage is connected to VMware vSphere using iSCSI: we make CEPH perform at the same low latencies and high throughput for VMware VMs as major brand all-flash arrays.

We do this by installing our host-side caching software, called VirtuCache, in the VMware host along with any SSD in the same host. VirtuCache improves the performance of iSCSI-based CEPH by automatically caching frequently used data (both reads and writes) from CEPH to any in-VMware-host SSD (or in-VMware-host RAM). For more details on VirtuCache, please refer to this link.
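VirtuCache's internals aren't public, so as a rough sketch of what host-side read/write caching means, the toy write-back block cache below serves hot blocks from a fast local device (standing in for the in-host SSD or RAM) and flushes dirty blocks to the slow backend (standing in for iSCSI-attached CEPH) later. All class and method names here are hypothetical.

```python
from collections import OrderedDict

class HostSideCache:
    """Toy write-back block cache: an LRU map stands in for a local
    SSD/RAM device in front of a slow backend (e.g. iSCSI-attached CEPH)."""

    def __init__(self, backend, capacity_blocks):
        self.backend = backend            # dict-like slow store
        self.capacity = capacity_blocks
        self.cache = OrderedDict()        # block_id -> (data, dirty flag)

    def read(self, block_id):
        if block_id in self.cache:        # hit: served at SSD/RAM latency
            self.cache.move_to_end(block_id)
            return self.cache[block_id][0]
        data = self.backend.get(block_id) # miss: fetch from backend, then cache
        self._insert(block_id, data, dirty=False)
        return data

    def write(self, block_id, data):
        # Write-back: acknowledge once the local copy is stored; the
        # backend is updated later by flush() or on eviction.
        self._insert(block_id, data, dirty=True)

    def flush(self):
        for block_id, (data, dirty) in self.cache.items():
            if dirty:
                self.backend[block_id] = data
                self.cache[block_id] = (data, False)

    def _insert(self, block_id, data, dirty):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)
        elif len(self.cache) >= self.capacity:
            evicted_id, (evicted, was_dirty) = self.cache.popitem(last=False)
            if was_dirty:                 # evicted dirty block must reach backend
                self.backend[evicted_id] = evicted
        self.cache[block_id] = (data, dirty)

# Minimal usage sketch:
backend = {7: b"cold"}
cache = HostSideCache(backend, capacity_blocks=128)
cache.write(7, b"hot")   # acknowledged from the host-side cache
cache.flush()            # dirty block reaches the backend later
```

A real write-back cache must also make the local copy durable (hence SSD rather than only RAM for write caching) so that a host crash doesn't lose acknowledged writes; the sketch omits that.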

Here is another blog post that's a must-read if you have connected CEPH to VMware hosts. It explains why SSDs installed in VMware hosts for use as cache media with VirtuCache work better than the same SSDs deployed in CEPH OSD nodes for caching.

Our second differentiator is that we were first to market in making CEPH work with VMware. This required us to write a VAAI (VMware API for Array Integration)2 plugin and an iSCSI initiator for CEPH. This is less of a differentiator now, since VAAI and iSCSI components are available in open source as well.


CEPH is ideally suited for large amounts of enterprise grade storage. If you also want low latencies and high throughput from CEPH for VMware VMs, you are well served by deploying VirtuCache to cache to in-VMware-host SSDs (or in-VMware-host RAM).

Customer Case Studies for CEPH and VMware.

Primary Production Storage: At Klickitat Valley Hospital, a 72TB CEPH cluster is connected over iSCSI to 3 ESXi hosts. VirtuCache is caching to 3TB in-host SSDs.

Backup and Replication Target1: At St. James Hospital, a 24TB CEPH cluster is a backup and DR target for Veeam.

Video Surveillance1: Each video surveillance ‘pod’ has 2 hosts connected to 200TB CEPH, with 9TB SSDs in each host serving as VirtuCache caching media.

1 – Video is already compressed on the camera, and dedupe is not very effective with video. Backup data, too, is already compressed and deduped by Veeam or similar backup software. So big brand storage vendors who tout dedupe and compression don't add much value here. In both cases you need large amounts of raw storage, for which CEPH is very cost effective.

2 – VAAI integration reduces the storage-related burden on VMware host CPUs by offloading tasks such as copying and zeroing blocks to the storage array's CPUs, and is required of any storage appliance vendor that connects to VMware.