Blog

How to Select SSDs for Host Side Caching for VMware – Interface, Model, Size, Source, and RAID Level?

In terms of price/performance, enterprise NVMe SSDs have become the best choice for in-VMware-host caching media. They perform considerably better than their SATA counterparts and cost only a little more. The Intel P4600/P4610 NVMe SSDs are my favorites; the Samsung PM1725a is my second choice. If your ESXi host has no spare 2.5” NVMe bay or PCIe slot, which precludes NVMe SSDs, you can use enterprise SATA SSDs instead, in which case you will also need a high queue depth RAID controller in the host. In the enterprise SATA SSD category, the Intel S4600/S4610 and Samsung SM863a are good choices. If the host has no spare PCIe, NVMe, SATA, or SAS slot at all, the only remaining option is host RAM as cache media, which is much more expensive but also higher performing.
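
A quick way to compare candidate drives on endurance is to multiply capacity by the DWPD (drive writes per day) rating over the warranty period. Below is a minimal Python sketch; the capacity and DWPD figures are illustrative placeholders that you should replace with the numbers from your vendor's datasheet.

```python
# Back-of-envelope endurance comparison for candidate cache SSDs.
# Capacities, DWPD ratings, and warranty periods below are illustrative
# assumptions -- check the vendor datasheet for your exact model.

CANDIDATES = {
    # name: (capacity_tb, dwpd, warranty_years)
    "Intel P4610 (NVMe)":     (1.6,  3.0, 5),
    "Samsung PM1725a (NVMe)": (1.6,  5.0, 5),
    "Intel S4610 (SATA)":     (1.92, 3.0, 5),
}

def lifetime_writes_pb(capacity_tb: float, dwpd: float, years: int) -> float:
    """Total data the drive is rated to absorb over its warranty, in PB."""
    return capacity_tb * dwpd * 365 * years / 1000.0

for name, (cap, dwpd, yrs) in CANDIDATES.items():
    print(f"{name}: ~{lifetime_writes_pb(cap, dwpd, yrs):.1f} PB written over {yrs} years")
```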

This blog article covers the following topics:

- Write IOPS rating and lifetime endurance of SSDs.

- Sizing the SSD.

- How many SSDs are needed in a VMware host and across the VMware cluster?

- For SATA SSDs, the need to configure the SSD as RAID0.

- Queue Depths.

- Where to buy SSDs?

CEPH Storage for VMware vSphere

CEPH is a great choice for deploying large amounts of storage. Its biggest drawback is high latency.

The advantages of CEPH.

CEPH can be installed on ordinary servers. It clusters these servers together and presents the cluster as an iSCSI target. You can build CEPH storage with off-the-shelf components - servers, SSDs, HDDs, NICs, essentially any server or server part - so there is no hardware vendor lock-in and hardware costs are low. Storage capacity scales linearly as you add servers. All in all, it offers better reliability and deployment flexibility at a lower cost than big brand storage appliances.

CEPH has two drawbacks.

High storage latencies. The reason CEPH is cheap is that you can build your storage with high capacity (e.g. 12TB) hard drives. Higher capacity HDDs, though cheaper on a per GB basis, are slow, and so CEPH performs poorly. Even if you build CEPH storage with only SSDs, its latencies are much higher than those of a big brand SSD based array.
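
A back-of-envelope model shows why. Assuming roughly 120 random IOPS per 7.2K RPM drive and CEPH's default 3x replication, even a sizeable HDD cluster tops out at a few thousand IOPS. The numbers below are illustrative assumptions, not benchmarks.

```python
# Rough model of why HDD-backed CEPH is slow: a 7.2K RPM drive delivers
# on the order of ~100-150 random IOPS, and with 3x replication each
# client write becomes three backend writes.

hdd_count = 36            # e.g. 3 nodes x 12 drives (assumption)
iops_per_hdd = 120        # typical 7.2K RPM random IOPS (assumption)
replication_factor = 3    # CEPH's default replicated pool size

raw_read_iops = hdd_count * iops_per_hdd
effective_write_iops = raw_read_iops / replication_factor

print(f"Aggregate random read IOPS:  ~{raw_read_iops}")
print(f"Effective client write IOPS: ~{effective_write_iops:.0f}")
# ~4,300 reads/s and ~1,400 writes/s across the whole cluster -- far
# below what an all-flash array serves, hence the high latencies.
```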

A lesser drawback is its poor out-of-the-box connectivity to VMware, a consequence of it being promoted by the Linux community, which is not particularly VMware friendly. Linux vendors like RedHat and SUSE, who promote CEPH, compete with VMware in the operating system space, so it may not be in their interest to promote connectivity between CEPH and VMware.

Our primary differentiator from RedHat and SUSE in the world of CEPH+VMware is that we make hard drive based CEPH perform at the same low latencies and/or high throughput as major brand all-flash arrays.

We do this by installing our host side caching software, VirtuCache, in the VMware host along with any SSD in that same host. VirtuCache improves the performance of iSCSI based CEPH by caching frequently used data (both reads and writes) from CEPH to any in-host SSD (or RAM). VirtuCache is signed and certified by VMware.
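
Conceptually, host side read+write caching works as sketched below. This is an illustrative toy model in Python, not VirtuCache's actual implementation.

```python
from collections import OrderedDict

class WriteBackCache:
    """Minimal sketch of host side read+write caching: serve hot blocks
    from a fast local device, acknowledge writes once they hit the local
    device, and flush dirty data to the slow backend later."""

    def __init__(self, backend, capacity_blocks):
        self.backend = backend           # slow iSCSI/CEPH target
        self.capacity = capacity_blocks  # local SSD/RAM cache size
        self.cache = OrderedDict()       # block -> (data, dirty), LRU order

    def read(self, block):
        if block in self.cache:              # cache hit: served locally
            self.cache.move_to_end(block)
            return self.cache[block][0]
        data = self.backend.read(block)      # miss: fetch from backend
        self._insert(block, data, dirty=False)
        return data

    def write(self, block, data):
        # Write-back: acknowledge once the block is in the local cache;
        # flushing to the backend happens asynchronously (omitted here).
        self._insert(block, data, dirty=True)

    def _insert(self, block, data, dirty):
        self.cache[block] = (data, dirty)
        self.cache.move_to_end(block)
        while len(self.cache) > self.capacity:
            old, (old_data, old_dirty) = self.cache.popitem(last=False)
            if old_dirty:                            # evicted dirty data
                self.backend.write(old, old_data)    # must reach the backend
```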

For more details on VirtuCache, please refer to this link.

Our second differentiator is that we were first to market in making CEPH work with VMware. This required us to write a VAAI (vStorage APIs for Array Integration) plugin and an iSCSI initiator for CEPH. VAAI integration reduces the storage burden on the VMware host's CPUs by offloading storage tasks to the array's CPUs, and it is required for any storage appliance vendor to connect to VMware.
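
To see why offloading matters, compare a host-driven copy with an offloaded one. The sketch below is conceptual; `full_copy` is a hypothetical stand-in for the array-side Full Copy (XCOPY) primitive, which is one of the real VAAI primitives.

```python
# Conceptual contrast of a host-side clone vs. a VAAI-offloaded clone.
# Function and object names here are hypothetical illustrations.

def clone_without_vaai(src, dst, nblocks, read, write):
    # Every block traverses the host: read over the SAN, write back out.
    # Host CPU and SAN bandwidth are consumed for the entire copy.
    for b in range(nblocks):
        write(dst, b, read(src, b))

def clone_with_vaai(array, src, dst, nblocks):
    # One command; the array's own CPUs move the data internally,
    # freeing the VMware host's CPUs and the SAN links.
    array.full_copy(src, dst, nblocks)
```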

Summary.

CEPH is ideally suited for large amounts of enterprise grade storage. If you then want to deploy workloads that require low latencies or high throughput, you are well served by VirtuCache caching to in-VMware-host SSDs.

Customer Case Studies for CEPH and VMware.

Primary Production Storage: At Klickitat Valley Hospital, a 72TB CEPH cluster is connected over iSCSI to 3 ESXi hosts, and VirtuCache caches to a 3TB in-host SSD in each host.

Backup and Replication Target: At St. James Hospital, a 24TB CEPH cluster is the backup and DR target for Veeam.

Video Surveillance: Each video surveillance 'pod' has 2 hosts connected to 200TB of CEPH storage, with 9TB SSDs in each host serving as VirtuCache caching media.

Video is already compressed on the camera, and dedupe is not very effective on video. Backup data, too, is already compressed and deduped by Veeam or similar software. So big brand storage vendors who tout dedupe and compression don't add much value here. In both these cases you simply need large amounts of raw storage, for which CEPH is very cost effective.
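
A quick way to convince yourself of this: chunk a data stream, hash the chunks, and count duplicates. Compressed data yields almost none. A small illustrative Python demo:

```python
import hashlib
import zlib

# Why dedupe adds little on compressed data: fixed-size chunks of a
# compressed stream almost never repeat, so there is nothing to dedupe.

def duplicate_chunks(data: bytes, chunk: int = 4096) -> int:
    """Count fixed-size chunks whose hash was already seen."""
    seen, dups = set(), 0
    for i in range(0, len(data), chunk):
        h = hashlib.sha256(data[i:i + chunk]).digest()
        dups += h in seen
        seen.add(h)
    return dups

repetitive = b"frame" * 2_000_000        # stand-in for raw, repetitive data
compressed = zlib.compress(repetitive)   # what a camera or Veeam already emits

print("raw:       ", duplicate_chunks(repetitive), "duplicate chunks")
print("compressed:", duplicate_chunks(compressed), "duplicate chunks")
```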

Improving Performance of Log Management Application at a Service Provider

Business Intelligence, Log Management, Security Information & Event Management (SIEM), and Search & Analytics software like Splunk, Elasticsearch, Cognos, HP Vertica, and HP Autonomy need to provide real-time visibility into large volumes of fast changing data. When these applications are deployed in traditional VMware VMs connected to centralized storage, the large volume of write and read operations puts pressure on the existing storage infrastructure, resulting in ingest and analysis speeds far slower than the real-time performance expected of such applications.
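
A rough sizing exercise illustrates the pressure such a pipeline puts on shared storage. All figures below are illustrative assumptions, not measurements from the deployment.

```python
# Back-of-envelope: storage load generated by a log-analytics ingest
# pipeline. All inputs are illustrative assumptions.

ingest_gb_per_day = 500      # raw log volume (assumption)
write_amplification = 3      # indexing, replication, journaling (assumption)
io_size_kb = 8               # typical small random IO for indexing (assumption)

mb_per_sec = ingest_gb_per_day * 1024 * write_amplification / 86_400
write_iops = mb_per_sec * 1024 / io_size_kb

print(f"Sustained write throughput: ~{mb_per_sec:.0f} MB/s")
print(f"Sustained write IOPS:       ~{write_iops:.0f}")
# Add concurrent search reads on top of this, and a shared HDD-backed
# array quickly becomes the bottleneck.
```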

GUI comparison, PernixData FVP vs Us – VMware Write Back Caching Software.

For the most part, our GUIs and workflows are similar. This article compares, with screenshots, the steps to install and configure VirtuCache and PernixData FVP.

The only differences between us and Pernix stem from the fact that we leverage VMware's own capabilities for clustering, license management, and logging, whereas Pernix built these features separately within their software. These additional screens add a few clicks and pages in Pernix versus us, but again, we are more similar than different in terms of GUI and workflow.

Replaced PernixData for vGPU based VDI

First of all, PernixData FVP was good host side caching software. Unfortunately for its customers, Nutanix end-of-lifed FVP after acquiring PernixData. Our software, VirtuCache, competes directly with PernixData FVP.

PernixData’s FVP vs. Virtunet – Both VMware Kernel Write Caching Software

More similar than different

Both we and PernixData differentiate from the rest of the host side caching vendors in similar ways: both are kernel mode software; both cache writes in addition to reads; both have data protection strategies in place to prevent data loss in case of multiple simultaneous hardware failures; neither requires networking or storage to be reconfigured; and neither requires an agent per VM or a dedicated VM per host.
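
The shared data protection strategy is conceptually simple: a cached write is acknowledged only after it exists on more than one host, so losing any single host loses no data. Below is a minimal sketch of the idea, not either product's actual code.

```python
# Conceptual sketch of how a write-back cache survives host failure:
# a dirty write is acknowledged only after it also lands in a peer
# host's cache. Illustrative only.

class ReplicatedWriteCache:
    def __init__(self, local_ssd, peer_caches):
        self.local = local_ssd     # this host's cache device
        self.peers = peer_caches   # caches on other hosts in the cluster

    def write(self, block, data):
        self.local.put(block, data)    # copy 1: this host's SSD
        for peer in self.peers:        # copies 2..n: peer hosts' SSDs
            peer.put(block, data)
        return "ACK"                   # data survives if this host dies
```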

This article is the first in a series of two articles that compares our software versus PernixData FVP. The second article compares (with screenshots) GUI and configuration steps for PernixData FVP and us.

Below is how we compare on important criteria.

Infinio’s Read Caching versus Virtunet’s Read+Write Caching Software

The biggest difference is that we accelerate both reads and writes, while Infinio accelerates only reads. A few other differences: with us you can apply caching policy at the datastore and/or VM level, versus only at the VM level with Infinio; and we accelerate creation of VMs, snapshots, and other VMware kernel operations, which they don't. More details in the table below.

| Virtunet Systems | Infinio |
| --- | --- |
| Accelerates both reads and writes.1 | Accelerates only reads. By not caching writes, not only are writes not accelerated, but reads queued behind writes on the same thread are not accelerated either, so reads slow down as well.1 |
| Caching policy can be applied at the VM and/or datastore level. Since there are typically far fewer datastores than VMs, it is quicker to configure caching for all VMs in the cluster by assigning the policy at the datastore level. All VMs within the datastore inherit the datastore-wide caching policy by default, and you can still apply a different policy to an individual VM.2 | Caching policy can be applied only at the VM level, so with a large number of VMs, configuring caching for the whole cluster becomes onerous.2 |
| When new VMs (server or desktop) are created, they automatically inherit the datastore caching policy. | Policy has to be applied manually to every new server or desktop VM. This is especially problematic in non-persistent VDI, where new VMs are continuously created without admin notification. |
| All storage IO, whether it originates within VMs or in the VMware kernel, is accelerated.3 | Storage IO originating in the VMware kernel is not accelerated, so creation/deletion of VMs, snapshots, and linked clone operations are not accelerated, since these operations are initiated by the VMware kernel rather than by a VM.3 |
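
The inheritance model in the left column boils down to a two-level lookup, sketched below with hypothetical VM, datastore, and policy names.

```python
# Sketch of datastore-level caching policy with per-VM override, as
# described in the table above. All names here are hypothetical.

datastore_policy = {"DS-Prod": "write-back", "DS-Test": "no-cache"}
vm_override = {"sql01": "write-through"}   # explicit per-VM exception

def effective_policy(vm: str, datastore: str) -> str:
    # A VM-level policy wins; otherwise the VM inherits its datastore's
    # policy, so new VMs are covered the moment they are created.
    return vm_override.get(vm, datastore_policy.get(datastore, "no-cache"))

print(effective_policy("sql01", "DS-Prod"))   # write-through (override)
print(effective_policy("web07", "DS-Prod"))   # write-back (inherited)
```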

Improving the performance of Dell Compellent SAN for VMware VMs

VirtuCache is software that improves the performance of Dell Compellent appliances without requiring you to upgrade the appliance or the SAN network. The performance improvement you will get from your Compellent appliance will rival an upgrade to an all-flash array.

Compellent appliances were the workhorses of the enterprise storage market a few years ago. They were cost effective at high capacities. Their only drawback is that they are slow, since they are primarily hard drive based, and when connected to VMware they exhibit all the 'IO blender' symptoms, resulting in high VM-level storage latencies.
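
The 'IO blender' effect is easy to illustrate: each VM issues sequential IO, but the hypervisor interleaves the streams, so the array sees a near-random pattern, which is the worst case for hard drives. A small Python sketch:

```python
# The 'IO blender': per-VM sequential block addresses, interleaved by
# the hypervisor into a stream that looks random to the array.

vm_streams = {
    "vm1": range(1000, 1006),   # each VM reads sequentially...
    "vm2": range(5000, 5006),
    "vm3": range(9000, 9006),
}

# ...but the hypervisor round-robins across VMs on the way to the array.
blended = [lba for trio in zip(*vm_streams.values()) for lba in trio]
print(blended)
# [1000, 5000, 9000, 1001, 5001, 9001, ...] -- sequential per VM,
# effectively random at the array.
```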

Reducing Write Latencies in CEPH Storage

CEPH is popular open source storage software. However, its write latencies are high. VirtuCache caches CEPH volumes to in-host SSDs, and by doing so reduces VM-level latencies considerably.

Improving the performance of Equallogic SAN for VMware VMs

VirtuCache is software that improves the performance of Equallogic appliances without requiring you to upgrade the appliance or the SAN network. The performance improvement you will get from your Equallogic appliance will rival an upgrade to an all-flash array.

Equallogic appliances were the workhorses of the enterprise storage market a few years ago. They were cost effective at high capacities. Their only drawback is that they are slow, since they are primarily hard drive based, and when connected to VMware they exhibit all the 'IO blender' symptoms, resulting in high VM-level storage latencies.
