
Seagate Chooses VirtuCache for its 1200 Accelerator Product Line to Increase VM Densities for VDI Deployments

Use Case:
VDI Performance

Minneapolis, MN and Cupertino, CA


  • Seagate manufactures SSDs and competes in a crowded market against larger SSD OEMs such as Intel and Samsung. With enterprise SSDs increasingly becoming a commodity, Seagate was interested in taking a higher-value solution to market.


  • By bundling VirtuCache with its SSDs, Seagate was able to add more value to its SSDs than its larger competitors in the SSD space could.

  • Seagate was able to sell the combined solution directly to IT organizations, instead of selling its SSDs only to storage and server OEMs.


The Virtunet Difference

Seagate's SSD business unit sells enterprise-grade SSDs to server and storage OEMs. By bundling VirtuCache with its SSDs, Seagate was now able to sell the combined solution to end-user IT organizations, giving them a way to improve the performance of their existing storage appliances at a fraction of the cost of a storage appliance upgrade.

Seagate’s Enterprise SSD business unit manufactures and sells high-performance, enterprise-grade Solid State Drives (SSDs). Seagate realized that enterprise SSDs were well suited for the fast-growing Virtual Desktop Infrastructure (VDI) space. Since most VDI deployments use VMware vSphere, Seagate was interested in partnering with a software vendor that could complement its SSDs to address storage IO issues in VMware-based VDI deployments.

Listed below are the specific criteria that Seagate was looking for in a software solution.

  1. Since their SAS or SATA SSDs could be readily deployed in any server, Seagate was looking for a server side software solution that could be bundled with their SSDs.
  2. The software had to be as easy to install in the server as the SSD itself.
  3. The overall cost of the solution needed to be substantially cheaper than the alternatives.
  4. Since latencies are a big issue in VDI deployments, the solution needed to achieve, at a minimum, the same latencies as a VMware server connected to an all-flash array, even at high (2 Gbps) throughput.
  5. This server-side solution had to eke out throughput and latencies from the Seagate SSD that were closer to raw SSD throughput and latencies than the rest of the competition could achieve. This was especially important because Seagate had recently announced 12 Gbps SAS SSDs whose latencies were comparable to more expensive PCIe flash cards. If a high-performance server-side SAN acceleration software could be paired with these new Seagate SSDs, the combination could effectively compete with all-flash storage appliances on the one hand and in-server PCIe flash cards on the other.
  6. Lastly, increased VM densities had to be demonstrated versus the competition.
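As a rough illustration of why the 12 Gbps SAS interface matters here, the sketch below estimates the theoretical ceiling of such a link at a 4KB transfer size. The 8b/10b line-encoding assumption and the decision to ignore SAS protocol overhead are ours, not figures from Seagate, so real-world numbers will be somewhat lower.

```python
# Back-of-envelope ceiling for a 12 Gbps SAS link (illustrative only).
# Assumes 8b/10b line encoding (10 line bits per data byte) and ignores
# SAS/SCSI protocol overhead.

LINK_GBPS = 12                         # 12 Gbps SAS, single lane
BYTES_PER_SEC = LINK_GBPS * 1e9 / 10   # 8b/10b: 10 line bits per byte
IO_SIZE = 4096                         # 4KB transfer size

max_mbps = BYTES_PER_SEC / 1e6
max_iops = BYTES_PER_SEC / IO_SIZE

print(f"Usable bandwidth: ~{max_mbps:.0f} MB/s")       # ~1200 MB/s
print(f"4KB IOPS ceiling: ~{max_iops / 1000:.0f}K")    # ~293K IOPS
```

Even with these idealized assumptions, the dedicated per-server link has ample headroom for the per-host workloads described below, which is the point of pairing a fast SSD with a fast bus.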

Below is a point-by-point summary of how Virtunet Systems’ VirtuCache Server Side Caching Software was able to satisfy each of Seagate’s requirements.

  • VirtuCache is installed in the ESXi host. Much like the Seagate SSD, it can be installed in any server running ESXi. There are no other pre-conditions.
  • VirtuCache is a kernel-mode driver and, like any other driver software, can be installed in under 15 minutes. Most of Virtunet Systems’ competition, like EMC’s XtremSW Cache, NetApp’s Flash Accel, or Fusion-io’s ioTurbine, requires kernel-mode software plus agents installed in each guest VM. Virtual Storage Appliance (VSA) software from VMware, NetApp, DataCore, FalconStor, and StorMagic, to name a few, requires installing a VM on each VMware server. Local and shared storage is pooled to this VM, which in turn acts as an iSCSI target for the rest of the VMs on the host. Such a solution is disruptive to the existing storage architecture, since storage must be re-provisioned through the Virtual Storage Appliance, and the VSA itself becomes a single point of failure. Lastly, because storage processing in a VSA happens inside a VM, the storage IO path that normally goes from the VMs on the host to the VMware kernel is redirected to the VSA VM; since this path crosses the user space / kernel space boundary, latencies from the in-host SSD degrade considerably. In comparison, since VirtuCache is a kernel-mode driver only, with no VM-level components, it provides the closest to raw throughput and latencies among server-side caching and VSA vendors.
  • Below, the price of VirtuCache + Seagate SSDs is compared with publicly available pricing for other alternatives that improve storage performance to the same extent that VirtuCache does.
    • One option is deploying an all-flash array. All-flash arrays do provide high throughput and low latencies, but they are expensive. Also, in many AFA deployments, the SAN switch and the HBA on each server need to be upgraded as well. The smallest all-flash array units are from Solidfire (starting price $60K for a 3TB array that does 50,000 IOPS), and one of the highest-performance arrays is from Violin Memory (starting price $250K for 15TB raw storage that does 250,000 IOPS). Pricing for these arrays works out to $15-20/GB and $1-1.50/IOPS. All-flash arrays provide higher throughput and lower latencies than other types of storage appliances; however, their latencies are still higher than those achievable with in-server SSDs, for a few reasons. First, the in-server SSD is in the host on a dedicated SAS or SATA bus to the CPU, whereas the SSDs on the all-flash array sit behind the network and the storage controller. Second, with server-side caching, the caching workload is distributed across ESXi hosts, with each ESXi host CPU acting like a storage controller for its local caching workload; this results in higher caching efficiency. Now, all-flash array vendors argue that latencies across workloads remain consistently low since all the data is on flash. Our argument is that for enterprise IT workloads, having your most recently or frequently used data on flash that is closer to the CPU is as effective as having all the data on flash, and it is a lot cheaper as well.
    • Another option is to augment the existing disk-based storage appliance with storage-vendor-provided SSDs and caching functionality deployed on the array. EMC’s FAST Cache, NetApp’s PAM cards, and HP 3PAR’s SSD tiering functionality are the most common examples of such array-based caching or tiering. SSD prices from these storage vendors: HP 3PAR SSD = $20/GB; EMC FAST Cache SSD = $80/GB. Now compare these prices to the price of VirtuCache bundled with 800 GB 12 Gbps SAS SSDs at $6/GB. Also, the Seagate SAS SSD installed in the server will have lower latency and higher throughput than the HP or EMC SSD, because the in-server SSD is on a dedicated 6 or 12 Gbps SAS bus to the CPU and is inherently a lower-latency SSD than the 3PAR and EMC SSDs listed above.
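The $/GB and $/IOPS figures above follow directly from the quoted list prices. As a quick sanity check, the sketch below recomputes them using only the numbers cited in this section:

```python
# Recompute $/GB and $/IOPS from the array prices quoted in the text.

arrays = {
    # name: (price_usd, capacity_gb, iops)
    "Solidfire (entry)": (60_000, 3_000, 50_000),
    "Violin Memory":     (250_000, 15_000, 250_000),
}

for name, (price, gb, iops) in arrays.items():
    print(f"{name}: ${price / gb:.2f}/GB, ${price / iops:.2f}/IOPS")
# Solidfire works out to $20.00/GB and $1.20/IOPS; Violin to $16.67/GB
# and $1.00/IOPS -- consistent with the $15-20/GB and $1-1.50/IOPS ranges.

# VirtuCache bundled with an 800 GB SAS SSD at the quoted $6/GB:
print(f"VirtuCache + 800GB SSD: ${6 * 800:,} total")   # $4,800
```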

Testing done by Seagate to show that VirtuCache latencies were closest to raw SSD latencies

VirtuCache is a kernel-mode driver only, so it provides the closest to raw throughput and latencies among server-side caching vendors whose products are VM-based. Seagate tested this by first baselining raw throughput and latencies using IOmeter tests on a VM whose VMDK was on the SSD. Then the same IOmeter tests were run with the VMDK now on a LUN on a hard-disk-based EMC storage appliance and VirtuCache caching the LUN data to the SSD.

The IOmeter test involved running 10 workers (IOmeter terminology for threads) simultaneously generating a 100% random read workload at a 4KB transfer size. The VM had 10 cores and 8 GB RAM. IOmeter was configured to push 256 simultaneous requests at a time.

Below are the results of those tests.

    • Raw SAS SSD throughput was 90K IOPS; the average response time as recorded in IOmeter was 30 ms, and the max response time varied between 50 and 180 ms.
    • With VirtuCache turned on, caching now happened from the LUN on the EMC CX3-10 appliance to the in-server SAS SSD. After 10 minutes of running the test, the VirtuCache GUI showed a cache hit ratio of 95%. At this point the IOPS and latency numbers were recorded: IOPS was 85K, the average response time was 32 ms, and the max response times varied between 40 and 250 ms.

These tests showed that, provided the cache hit ratio was high, VirtuCache degraded raw SSD throughput and latency by only a few percent (roughly 5-7%).
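For reference, the overhead can be computed directly from the recorded numbers. This is our arithmetic on the figures above, not additional test data:

```python
# Percentage overhead of VirtuCache versus the raw SSD, computed from the
# IOPS and average response times recorded in the tests above.

raw_iops, cached_iops = 90_000, 85_000   # raw SSD vs VirtuCache-cached LUN
raw_lat_ms, cached_lat_ms = 30, 32       # average response times

iops_drop_pct = (raw_iops - cached_iops) / raw_iops * 100
lat_rise_pct = (cached_lat_ms - raw_lat_ms) / raw_lat_ms * 100

print(f"Throughput degradation: {iops_drop_pct:.1f}%")   # 5.6%
print(f"Latency increase:       {lat_rise_pct:.1f}%")    # 6.7%
```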

Testing done by Seagate to measure improved VM densities as a result of improved storage performance

The ability to increase the number of VMs on an ESXi host is a direct benefit of reduced storage latencies.

To test for increased VM densities, Seagate ran a well-defined 80% read / 20% write workload that did about 200 IOPS from within a Windows 7 virtual desktop VM.

VMware View 5.1 was used to spin up virtual desktops (linked clones).

The test involved noting the number of VMs that could be successfully deployed on the server while VMware server latency stayed under 40 ms, first without VirtuCache and then again after deploying VirtuCache.

As shown below, the number of VMs quadrupled.

At 80 VMs, the bottleneck had shifted from storage IO to the 48 GB of server DRAM.
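The aggregate storage demand implied by the density test can be sketched from the figures above. The pre-VirtuCache count of 20 VMs is our inference from "quadrupled" at 80 VMs, not a number stated directly in the test writeup:

```python
# Aggregate storage demand implied by the VDI density test.
# 200 IOPS per desktop and 80 VMs are from the text; the "before"
# count of 20 is inferred from the reported quadrupling.

iops_per_vm = 200            # per-desktop workload used in the test
vms_after = 80               # VMs per host with VirtuCache
vms_before = vms_after // 4  # quadrupled -> 20 VMs before VirtuCache

print(f"Before: {vms_before} VMs -> {vms_before * iops_per_vm:,} IOPS")   # 4,000 IOPS
print(f"After:  {vms_after} VMs -> {vms_after * iops_per_vm:,} IOPS")     # 16,000 IOPS
```

The jump from roughly 4,000 to 16,000 sustained IOPS per host is well beyond what the hard-disk-backed Clariion LUNs alone could serve at under 40 ms, which is why the SSD cache, not the array, absorbs the difference.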


Chart legend: 1200 Accelerator Bundle – Seagate 12 Gbps Pulsar SAS SSD with VirtuCache software; FC SAN – the VMDKs on LUNs on a Clariion CX3-10 without any SSD boost.

In the words of Rich Vignes, Director of Enterprise SSD Product Management, “VirtuCache is arguably the best caching solution on the market optimized for the VMware environment. There are many things which make VirtuCache better than other products in the market, but here are just a few. First, VirtuCache is a hypervisor-kernel-based implementation, making all caching decisions within the VMware kernel AND without modifying the native VMware kernel. This is unlike competitive approaches that utilize agent software that must be loaded into each Guest VM, or others that modify the native VMware kernel. Second, VirtuCache only caches one copy of an IO block, even if that block is being called by many VMs at different times. Third, in the case of linked clones, where many linked clones use the same parent image, Virtunet ensures that parent images are cached only once. In comparison, some of the other server-side caching vendors copy the parent image to the local SSD as many times as there are linked clones created from it, thus filling up the SSD with large amounts of redundant data. These last two features are especially useful for VDI, which has a large percentage of duplicate data. Fourth, VirtuCache fully supports VMware features – vMotion, High Availability, Failover, DRS, Snapshots, storage-vendor-supported Multi-Pathing or native VMware Multi-Pathing, X-vMotion, and Storage vMotion.”


The Seagate 1200 Accelerator for VMware that bundles VirtuCache with Seagate’s 12gbps SAS SSD is an outstanding, cost effective way to increase the performance and scalability of enterprise VMware environments, at a fraction of the cost of high-end enterprise SAN equipment, more expensive PCIe Flash products, or All-Flash arrays.  The Seagate 1200 Accelerator improves VM response times and increases the VM to host ratio.
