CEPH Storage for VMware vSphere

CEPH is a great choice for deploying large amounts of storage. Its biggest drawback is high storage latency.

The Advantages of CEPH.

CEPH can be installed on ordinary servers. It clusters these servers together and presents the cluster as an iSCSI target. Clustering is a key feature: it lets CEPH sustain component failures without causing a storage outage, and it lets you scale capacity linearly by simply hot-adding servers to the cluster. You can build CEPH storage with off-the-shelf components – servers, SSDs, HDDs, NICs – essentially any commodity server or server components. There is no hardware vendor lock-in, so hardware costs are low. All in all, CEPH offers better reliability and deployment flexibility at a lower cost than big-brand storage appliances.

The Main Problem with CEPH – High Storage Latencies.

The reason CEPH is cheap is that you can build your storage with high-capacity hard drives. Higher-capacity HDDs, though cheaper on a per-GB basis, are slow, so CEPH built on them performs poorly. CEPH also carries high overhead in its replication logic (replication is the process by which CEPH keeps copies of data on different physical servers, to guard against data loss if a server fails), so even if you build CEPH storage with only SSDs, storage latencies are much higher than on big-brand SSD arrays.
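As a rough illustration of the replication overhead described above: in a replicated CEPH pool, usable capacity is raw capacity divided by the replica count. This is a simplified sketch assuming full-copy replication; erasure-coded pools follow different math.

```python
# Illustrative sketch: usable capacity of a replicated CEPH pool.
# Assumes simple full-copy replication (replica count of 2 or 3);
# erasure coding changes this arithmetic.

def usable_capacity_tb(raw_tb: float, replicas: int) -> float:
    """Raw cluster capacity divided by the replica count."""
    if replicas < 1:
        raise ValueError("replica count must be >= 1")
    return raw_tb / replicas

# A 72TB raw cluster keeping 2 copies of all data yields 36TB usable.
```

The cost advantage of commodity hardware has to be weighed against this overhead: a 2x or 3x replica count multiplies the raw capacity you must buy.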

Our primary differentiator is in deployments where CEPH is connected to VMware vSphere using iSCSI: we make CEPH perform at the same low latencies and high throughput for VMware VMs as major-brand all-flash arrays.

We do this by installing our host-side caching software, VirtuCache, in the VMware host, along with an SSD in that same host. VirtuCache improves the performance of iSCSI-based CEPH by automatically caching frequently used data (both reads and writes) from CEPH to any in-VMware-host SSD (or in-VMware-host RAM).
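To see why host-side caching helps, here is a back-of-the-envelope model: expected read latency is the cache hit rate times the local SSD latency, plus the miss rate times the backend CEPH latency. The numbers below are illustrative assumptions, not measured VirtuCache or CEPH figures.

```python
# Illustrative model of host-side caching: effective latency is a
# weighted average of local-cache latency and backend (CEPH) latency.
# All latency figures here are assumptions for illustration.

def effective_latency_ms(hit_rate: float,
                         cache_latency_ms: float,
                         backend_latency_ms: float) -> float:
    """Expected latency given a cache hit rate between 0 and 1."""
    if not 0.0 <= hit_rate <= 1.0:
        raise ValueError("hit_rate must be between 0 and 1")
    return hit_rate * cache_latency_ms + (1.0 - hit_rate) * backend_latency_ms

# With a 90% hit rate, a 0.1ms local SSD, and a 10ms CEPH backend,
# expected latency drops from 10ms to roughly 1.1ms.
```

The takeaway from the model: because frequently used data is served from the local SSD, the slow CEPH backend is only in the path for cache misses, so even a modest hit rate sharply reduces average latency.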

Here is another blog post that is a must-read if you have connected CEPH to VMware hosts. It explains why SSDs installed in VMware hosts as cache media for VirtuCache work better than the same SSDs deployed in CEPH OSD nodes for caching or storage.


CEPH is ideally suited for large amounts of enterprise-grade storage. If you also want low latencies and high throughput from CEPH for VMware VMs, you are well served by deploying VirtuCache to cache to in-VMware-host SSDs (or in-VMware-host RAM).

Customer Case Studies for CEPH and VMware.
Primary Production Storage: At Klickitat Valley Hospital, a 72TB CEPH cluster is connected over iSCSI to 3 ESXi hosts. VirtuCache caches to 3TB in-host SSDs.

Backup and Replication Target¹: At St. James Hospital, a 24TB CEPH cluster serves as a backup and DR target for Veeam.

Video Surveillance¹: Each video surveillance 'pod' has 2 hosts connected to 200TB of CEPH, with 9TB SSDs in each host serving as VirtuCache caching media.

1 – Video is already compressed by the camera, and dedupe is not very effective on video. Backup data is likewise already compressed and deduped by Veeam or similar backup software. So big-brand storage vendors who tout dedupe and compression don't add much value in these cases. Both use cases instead need large amounts of raw storage, for which CEPH is very cost-effective.

