Archive: Posts

How to Select SSDs for Host Side Caching for VMware – Type, Model, Size, Source and RAID Level?

Go with enterprise-grade, not consumer-grade, SSDs, and deploy the SSD behind a good RAID controller (queue depth > 128). The Samsung SM863 and Intel S3710 are good choices; the SM863 is the better value for money. Also, don’t skimp on the RAID controller, since you don’t want it to choke before the SSD does.

Though this article is in the context of using SSDs with VirtuCache (our Host Side Caching software for VMware that improves the performance of any SAN based storage appliance), these principles apply to the larger category of host side storage software.

This blog article covers the following topics:

– Which SSD type: SATA, SAS, or PCIe?

– What size SSD?

– How many SSDs are needed per VMware Host and across the VMware cluster?

– Do you need to locally RAID the SSD?

– Where should you buy the SSD?

Question – What problem are you trying to solve?

Are you constrained by throughput, or are you seeing high latencies even at low throughput? And are you having these issues with reads only, or with both reads and writes?

Most VMware installations have poor latencies even at low throughput. In these situations, it is more than likely that a cheap SATA SSD will suffice. Even the cheapest enterprise SATA SSD outperforms an entire mid-range storage appliance. Say you are trying to improve the performance of a mid-range storage array, such as a Dell EqualLogic, Dell Compellent, HP MSA, HP EVA, HP LeftHand Networks, HP 3PAR, EMC VNX, or NetApp FAS. Such an array, containing 100-odd hard disks, is rated at roughly 40,000 IOPS. We arrive at this number by multiplying the number of disks by 400 IOPS per disk, 400 IOPS being what a high-end enterprise hard drive (a 15k RPM SAS or Fibre Channel drive) can do. Now, even a cheap enterprise-grade SATA SSD (costing < $1/GB) does around 50,000 IOPS. So a single enterprise-grade SATA SSD installed in the VMware Host, used to cache data from such a mid-range storage array, is sure to boost the storage performance of the array many times over.
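The back-of-envelope arithmetic above can be sketched as follows. The numbers are the illustrative figures from the text (400 IOPS per enterprise hard disk, ~100 disks per mid-range array, ~50,000 IOPS for a cheap enterprise SATA SSD), not measurements:

```python
# Illustrative comparison: a mid-range array of ~100 enterprise hard disks
# vs. a single cheap enterprise SATA SSD. Figures are the rough numbers
# quoted in the text, not benchmarks.

HDD_IOPS = 400       # one high-end 15k RPM enterprise hard disk
NUM_HDDS = 100       # disks in a typical mid-range array
SSD_IOPS = 50_000    # one cheap enterprise-grade SATA SSD (< $1/GB)

array_iops = NUM_HDDS * HDD_IOPS
print(f"Mid-range array: ~{array_iops:,} IOPS")   # ~40,000 IOPS
print(f"Single SATA SSD: ~{SSD_IOPS:,} IOPS")     # ~50,000 IOPS
print(f"One SSD delivers {SSD_IOPS / array_iops:.2f}x the array's IOPS")
```

The point of the sketch is simply that a single SSD in the Host can out-IOPS the entire array it is caching for.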

To select an appropriate SSD, the spec for random IOPS at 4KB block size is the most relevant parameter, because most storage IO from within VMware is random and at small block sizes.

The next most relevant parameter is the endurance of the SSD. Endurance is measured as the total amount of lifetime writes to the SSD warranted by the SSD OEM. Whether we are accelerating reads or both reads and writes, Host Side Caching involves a large volume of writes to the SSD, since older, less frequently used data is continuously replaced with newer data. You can find out how much you are writing to the SSD on a daily basis using the VirtuCache statistics screen, which will give you a good idea of the SSD’s life expectancy for your workload.

My favorite SSD for boosting both the read and write performance of mid-range arrays with VirtuCache is the new (as of January 2016) Samsung SM863. The SM863 is cheap (70 US cents/GB) and does 97,000 read IOPS and 29,000 write IOPS at random 4KB block size. The highest-capacity SM863 is 1.92 TB. Samsung warrants it for 5 years or 12 petabytes of lifetime writes, whichever comes first. A comparable SSD from Intel is the S3710: it does 85,000 read IOPS and 45,000 write IOPS for random 4KB blocks, is warranted for 5 years or 24 petabytes of lifetime writes, whichever comes first, and tops out at 1.2 TB. Of the two, I prefer the Samsung SM863 because it is half the price of the Intel S3710 for comparable performance, and its endurance of 12 PB of lifetime writes, though half that of the S3710, is high enough for the SSD to last the full 5 years for typical IT workloads deployed in VMware.
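The endurance claim can be checked with simple arithmetic: divide the warranted lifetime writes by your daily write volume. The 2 TB/day figure below is purely an assumption; read your actual daily writes off your caching statistics (e.g. the VirtuCache stats screen) and substitute:

```python
# Rough SSD life expectancy from the OEM's warranted lifetime writes.
# daily_writes_tb is an assumed workload, not a measured one.

def years_until_worn_out(endurance_tb: float, writes_tb_per_day: float) -> float:
    """Years until the warranted lifetime-write budget is exhausted."""
    return endurance_tb / writes_tb_per_day / 365

daily_writes_tb = 2.0  # assumption: 2 TB written to the cache SSD per day

for model, endurance_tb in [("Samsung SM863", 12_000), ("Intel S3710", 24_000)]:
    years = years_until_worn_out(endurance_tb, daily_writes_tb)
    print(f"{model}: ~{years:.0f} years at {daily_writes_tb} TB/day")
```

At 2 TB/day, even the SM863’s 12 PB budget lasts roughly 16 years, far beyond the 5-year warranty, which is why its lower endurance relative to the S3710 is still sufficient for typical workloads.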

What about other SSD types – SAS or PCIe based SSDs?

SAS SSDs tout lower failure rates versus SATA SSDs. For the purposes of VirtuCache where we implement techniques within our software to ensure lower SSD failure rates, SAS SSDs do not add additional value versus the much cheaper Samsung or Intel SATA SSDs. A year or two ago, SAS SSDs were also higher performing than SATA, but that’s not true anymore.

PCIe SSDs are the highest-performing SSDs on the market. However, for most VMware customers, we recommend PCIe SSDs not for performance reasons, but for cases where either there are no SATA slots available on the Host, or the Host has a RAID controller that cannot keep up with high-speed SSDs like the Samsung 845 DC Pro. This latter problem is more common than you would think; we have encountered RAID controller bottlenecks in older IBM and Fujitsu servers.

PCIe SSDs are best known for supporting hundreds of thousands of IOPS per SSD, but most VMware deployments do not benefit from such performance. One reason is that there is only so much IO you can do from a VM or a Host: VMware’s flow control mechanisms ensure that a single VM or Host does not consume large amounts of storage bandwidth. Queue depths at the VM, Host, and adapter level adjust across the VMware cluster and storage appliance to ensure that storage bandwidth is shared equally between VMs and Hosts. Even if you were to bypass VMware’s flow control by improving storage performance to the extent that a PCIe SSD allows, you would have shifted the bottleneck to another infrastructure component, most likely memory, which in turn would prevent you from getting the 200,000+ IOPS out of the PCIe SSD. And even if you did get 200,000+ IOPS, which would let you deploy 300+ VMs on each Host, such high VM densities are not practical, since you would run into network issues on Host failure, when all 300+ VMs move to other Hosts at the same time.

To summarize, go with PCIe SSDs only if you don’t have a SATA slot, or if the RAID controller prevents you from getting the most out of your Intel or Samsung SATA SSDs.

Consumer SSDs are even cheaper and perform well, so why not consumer SSDs?

Consumer SSDs have low endurance. Most are warranted for 3 years and for lifetime writes of less than 100 TB.
Also, comparing spec sheets might leave you with the impression that some consumer SSDs outperform enterprise SSDs. For instance, the consumer-grade Samsung 850 Pro data sheet touts higher IOPS than the enterprise-grade Samsung SM863. However, keep in mind that in a VMware environment you are better served by low and consistent latencies, regardless of how random or high the IOPS get. Unfortunately, these latency characteristics cannot be gleaned from SSD data sheets, from any SSD OEM. The Samsung SM863 has half the latency of the Samsung 850 Pro and is far more consistent as well. Also, in TPC-C tests (I like these tests because they mimic a real-life transaction processing workload using an actual database, as opposed to synthetic load-generation utilities like fio or Iometer), the Samsung SM863 shows 2x the transactions per minute of the Samsung 850 Pro.

If we must use a PCIe SSD, which one?

The PCIe SSDs that are reasonably priced and have drivers for VMware 5.x are the Micron P420m ($2/GB) and the Fusion-io ioDrive ($4/GB). If you are on a blade server without a SATA slot, a PCIe SSD in mezzanine form factor, though expensive ($6/GB), is the only choice.

What about NVMe SSDs?

NVMe SSDs are PCIe SSDs that support the new NVMe standard.

With SATA SSDs, any SSD works with VMware, since VMware has a generic in-box SATA driver that supports any SATA SSD, and the server’s RAID controller supports any SATA device. Current-generation PCIe SSDs, by contrast, need a custom software driver developed by the SSD vendor. The NVMe standard solves this inconvenience for PCIe SSDs: going forward, a single in-box driver within VMware will support all NVMe SSDs. The new NVMe SSDs from tier-1 OEMs like Samsung and Intel are both cheap (< $2/GB) and high performing, but they are supported only starting with VMware 5.5 and on Hosts that have PCIe Gen3 slots.

Where should I buy SSDs from?

You can buy Host-side SSDs from your server vendor or from retailers like CDW, Microland, and Ingram Micro. The advantages of buying retail are, first, that the SSD costs about a fourth of the price of the same SSD bought from the server vendor, and second, that retailers pass through the SSD OEM’s 5-year warranty, whereas the same SSD branded by the server vendor is warranted by the server vendor for only 3 years. Retailers also sell the latest, highest-performing SSDs, like the Samsung SM863 and the Intel S3710, while server vendors sell SSDs that are 12–18 months old, since qualification cycles at the server vendor run 12–18 months. And since SSD technology is evolving at a rapid clip, SSDs that are 18 months old perform substantially worse than more recent models. The one advantage of a server-OEM-branded SSD is that it makes the server management console light go green, whereas the same SSD bought retail might not.

What size SSD?

The rule of thumb is that, if we do a good job of caching, 20% of the cache media should serve 80% of storage requests. So 20% of the storage used by all VMs on a Host is the ideal SSD capacity. If, while evaluating VirtuCache in your production environment, you notice (on the VirtuCache stats screen) that the cache hit ratio is low and the SSD is full, you would need to bump up the SSD capacity to get to a > 80% cache hit ratio.
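The sizing rule above reduces to one multiplication. A minimal sketch, where the per-VM used-storage figures are hypothetical placeholders for what you would tally up on your own Host:

```python
# Rule-of-thumb cache sizing: provision ~20% of the storage actually
# used by the VMs on a Host. Per-VM capacities below are assumptions.

def suggested_cache_gb(used_storage_gb: float, hot_fraction: float = 0.20) -> float:
    """SSD capacity sized so the 'hot' 20% working set fits in cache."""
    return used_storage_gb * hot_fraction

vm_used_gb = [500, 250, 800, 1200]   # hypothetical used storage per VM on one Host
total_used = sum(vm_used_gb)         # 2750 GB
print(f"Used storage on Host: {total_used} GB")
print(f"Suggested SSD size:  ~{suggested_cache_gb(total_used):.0f} GB")  # ~550 GB
```

If the observed cache hit ratio stays below ~80% with the SSD full, raise `hot_fraction` (i.e. buy a bigger SSD) and re-check.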

How many SSDs do you need?

If you are caching only reads, you need one SSD per Host, and only in those Hosts that need to be accelerated. If you are caching writes as well, you need one SSD in every Host in the VMware cluster. This is because, with write caching, VirtuCache commits each write to SSDs in multiple Hosts; writes are mirrored in this fashion to protect them against Host failure. So you need an SSD in every Host to be assured that all copies of a write are committed at the same speed, keeping write latencies low.

Do I need to RAID SSDs?

Yes, ideally you would RAID the SSD in each Host, but not for the conventional reason of protecting data. You need only RAID-0 the SSD, and only to assign it a higher queue depth than the default VMware SATA driver is capable of assigning. With a higher queue depth, the SSD can process a larger number of outstanding requests, improving throughput and reducing latencies; going through the RAID controller is the only way to get a queue depth higher than the default VMware SATA driver allows.
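Why the queue depth matters so much can be seen from Little’s Law: sustained IOPS are bounded by the number of outstanding requests divided by per-IO latency. A small sketch with an assumed 0.5 ms per-IO latency:

```python
# Why queue depth caps throughput: by Little's Law, sustained IOPS can be
# at most queue_depth / per-IO latency. The latency value is an assumption.

def max_iops(queue_depth: int, latency_s: float) -> float:
    """Upper bound on IOPS for a given device queue depth and IO latency."""
    return queue_depth / latency_s

latency = 0.0005  # assumed 0.5 ms per IO at the SSD
for qd in (32, 128, 256):
    print(f"QD {qd:>3}: at most {max_iops(qd, latency):>9,.0f} IOPS")
```

At QD 32 (typical of a generic SATA path) the ceiling is 64,000 IOPS even at 0.5 ms latency, below what a good enterprise SATA SSD can deliver, which is why presenting the SSD through a RAID controller with QD > 128 matters.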

You don’t need to protect data on the SSD using RAID 1+. In the case of read caching, this is because the cache is always just a copy of data that is kept in sync with the storage array. And in the case of write caching, VirtuCache protects the writes on the SSD by mirroring them over the network to two more SSDs on two more Hosts, much like RAID 1 but over the network. So even if SSDs were to fail, you will not lose any data, though storage performance for the affected Hosts reverts to what it was before VirtuCache.


To summarize:

-Use enterprise SSDs, not consumer.

-Use Samsung SM863 SATA SSDs – they do well for both read- and write-intensive workloads, and will more than likely last you the full 5 years when used with our Host Side Caching software.

-Use PCIe SSD only if no SATA slot on Host or Host RAID controller is a bottleneck.

-For accelerating reads + writes, you need one SSD per Host for every Host in cluster.

-For accelerating only reads, only one SSD needed per Host for only those Hosts needing acceleration.

-RAID the SSD as RAID 0 using the local RAID controller, then check the queue depth of the resulting device. The QD of the RAIDed device should be > 128, else the RAID controller becomes a bottleneck.

-RAID 1+ for SSD not required.

SATA SSDs from tier-1 OEMs are the best bang for the buck and provide sufficiently high performance that storage infrastructure is no longer the bottleneck. Put differently, they are fast enough to shift the infrastructure bottleneck to another hardware component (memory, network, CPU), which means an even higher-performing SSD will not necessarily yield better application performance or higher VM densities, since some other hardware component now becomes the bottleneck.

Disclaimer: The author has no affiliation with Samsung, Intel, or any other SSD OEM. No monetary compensation or free SSD samples were provided to the author or to Virtunet Systems by Samsung or Intel.