Best Price to Performance Ratio for Storage IO
The standard industry practice for solving storage bottlenecks in VMs is to serve storage IO from solid state drives (SSDs) instead of hard drives, because SSDs deliver much higher throughput at consistently lower latencies than hard disks. For instance, a single high-end HDD does about 400 IOPS, with latencies varying from 10 ms for sequential IO to hundreds of milliseconds for random IO. In comparison, a single SATA SSD is capable of 100,000 IOPS, for both sequential and random IO, at consistently less than 10 millisecond latencies. And since IO from within VMware is mostly random, SSDs are especially well suited to VMware workloads. However, even cheap enterprise SATA SSDs (at about $1/GB) cost 20X as much per GB as HDDs, so smaller amounts of SSD capacity need to be deployed to keep costs down.
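The cost argument above can be made concrete with back-of-the-envelope arithmetic. The IOPS figures and the ~$1/GB SSD price come from the text; the HDD price below is an illustrative assumption (roughly 1/20th of the SSD price, per the 20X multiple).

```python
# Back-of-the-envelope cost-per-IOPS comparison using the figures above.
HDD_IOPS = 400          # high-end HDD, per the article
SSD_IOPS = 100_000      # SATA SSD, per the article

hdd_cost_per_gb = 0.05  # assumed (~1/20th of the SSD price)
ssd_cost_per_gb = 1.00  # enterprise SATA SSD, from the article

capacity_gb = 2_000     # compare equal capacities

hdd_cost_per_iops = hdd_cost_per_gb * capacity_gb / HDD_IOPS  # $0.25
ssd_cost_per_iops = ssd_cost_per_gb * capacity_gb / SSD_IOPS  # $0.02

print(f"HDD: ${hdd_cost_per_iops:.2f}/IOPS  SSD: ${ssd_cost_per_iops:.2f}/IOPS")
```

So while SSDs are far more expensive per GB, they are an order of magnitude cheaper per IOPS, which is why a small amount of SSD used as cache is attractive.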
SAN vendors advocate installing SSDs in the array. We think, however, that caching software along with the SSD should instead be deployed in the ESXi host, for two reasons. First, an SSD deployed in the array shows considerably higher latencies, since it sits behind the network and, typically, behind a maximum of two storage controllers. With server-side caching, by comparison, the SSD is on a dedicated PCIe, SAS, or SATA bus close to the CPU, and each ESXi host processor acts as a storage controller for the local caching workload. By distributing the caching workload across ESXi hosts, near-raw SSD latency and throughput can be achieved. Second, server-side SSDs can be bought directly from SSD manufacturers like Samsung, Toshiba, Micron, or Seagate, instead of buying the same SSD at much higher prices from server or storage vendors like EMC, NetApp, Cisco, HP, or Dell.
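The latency argument can be sketched as a simple additive model. Every number here is an assumption chosen for illustration, not a measurement from the evaluation:

```python
# Illustrative read-latency model: array-side vs server-side SSD cache hit.
# All values are assumptions for illustration, not measurements.
ssd_read_us = 100          # raw SSD read latency, assumed
fabric_rtt_us = 200        # FC/FCoE fabric round trip, assumed
controller_queue_us = 300  # queueing at the shared array controllers, assumed

# Array-side: the IO crosses the fabric and queues at the controllers.
array_side_us = ssd_read_us + fabric_rtt_us + controller_queue_us

# Server-side: the SSD is on a local PCIe/SAS/SATA bus.
server_side_us = ssd_read_us

print(f"array-side hit: ~{array_side_us} us, server-side hit: ~{server_side_us} us")
```

Whatever the exact figures, the network and controller terms apply only to the array-side path, which is why server-side caching approaches raw SSD latency.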
Below is a use case where Global Foundries compared costs and performance gains across three options:
Controller-based caching (EMC's Fast Cache) for an EMC CX4 appliance,
Controller-based tiering from HP 3PAR for the StoreServ 7200 appliance, and
ESXi host based VirtuCache with Micron SAS SSDs installed in the ESXi servers.
Global Foundries’ IT was looking at moving their dev/ops workload from physical servers to VMs. For such a migration to be successful, they wanted assured latencies and IOPS from within VMs.
Global Foundries’ Server and Storage Infrastructure
Physical Servers – Global Foundries provisioned six HP BL460c G7 blades with 144 GB RAM running VMware ESXi 5.1.
Storage – A total of 36 TB of storage on LUNs was provisioned across an 8 Gbps Fibre Channel EMC Clariion CX4 appliance and a 10 Gbps FCoE HP 3PAR 7200 appliance.
On average, less than 8 TB of data changed every day. Global Foundries' dev/ops application used Oracle 11g for the underlying database and had a 60:40 read-write ratio.
Comparing VirtuCache with EMC Fast Cache and HP 3PAR Tiering
The selection process involved comparing price per IOPS, price per GB, and latencies across the three competing approaches:
EMC proposed deploying their Fast Cache functionality with SAS SSDs within the CX4 appliance. 2 TB of Fast Cache SSDs were deployed for the evaluation.
HP proposed deploying SSDs within the StoreServ appliance and tiering data from disks to SSDs. Again, 2 TB of SSDs were deployed for the evaluation.
VirtuCache, along with a 400 GB Micron SAS SSD, was installed in each of the six ESXi blade servers.
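The capacities above also frame how much of the daily working set each option could hold. The figures below are all from this write-up; the "coverage" ratio is a simple derived number, not a measured cache hit rate:

```python
# Cache capacity deployed in each evaluated option, from the figures above.
hosts = 6
virtucache_ssd_gb = 400                                  # one Micron SAS SSD per host
server_side_cache_tb = hosts * virtucache_ssd_gb / 1000  # 2.4 TB in total
array_cache_tb = 2.0                                     # EMC Fast Cache / HP 3PAR trials

daily_change_tb = 8.0  # upper bound on data changed per day, from the article
coverage = server_side_cache_tb / daily_change_tb

print(f"{server_side_cache_tb} TB of server-side cache covers about "
      f"{coverage:.0%} of the daily changed data")
```

So the server-side deployment put slightly more total SSD capacity into play than either array-side option, while spreading it across six hosts.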
*Cost calculated based on publicly available info listed below.
Benefit to Global Foundries
Using VirtuCache, Global Foundries was able to ensure over 250 MBps of storage throughput to each VMware host, with consistently less than 10 millisecond latencies, at a price considerably lower than the storage vendors' caching or tiering modules.
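For readers who think in IOPS rather than throughput, the 250 MBps figure can be converted with an assumed average IO size. The 8 KB block size below is an illustrative assumption, not a figure from the evaluation:

```python
# Rough per-host IOPS implied by the 250 MBps throughput figure above.
throughput_mb_s = 250  # MB/s per ESXi host, from the article
io_size_kb = 8         # assumed average IO size (typical VMware IO is 4-64 KB)

iops_per_host = throughput_mb_s * 1024 // io_size_kb
print(iops_per_host)
```

At an 8 KB average IO size this works out to roughly 32,000 IOPS per host; a larger average IO size would imply proportionally fewer IOPS for the same throughput.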