VirtuCache Performance Benchmarks Using Iometer

A performance benchmark using Iometer, comparing storage I/O with and without VirtuCache under a multi-threaded 67% read / 33% write workload that saturated the disk.

Multi-threaded, mixed read-write Iometer tests are a good way to simulate real-life OLTP applications, testing both the in-server flash card and our software.

I ran Iometer on a relatively small server (4-core Xeon 3400 with 8 GB RAM). My workload involved simultaneous reads and writes, in a constant proportion of 67% reads and 33% writes, spread across 6 threads on the 4-core server. The VM was assigned 4 vCPUs and 4 GB RAM. I used queue depths of 128 and 256 to generate a large amount of load. Windows Server 2008 R2 was installed in a VMware 5.0 VM.
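Iometer expresses this mix as an access specification with a read percentage. As a rough illustration (not Iometer itself), the 67/33 split can be sketched as a weighted coin flip per I/O, which is essentially how a fixed read percentage plays out over a long run:

```python
import random
from collections import Counter

def generate_ops(n_ops, read_pct=67, seed=42):
    """Generate a stream of read/write operations in a fixed ratio,
    mimicking how a 67% read / 33% write access spec behaves over many I/Os.
    generate_ops is an illustrative helper, not part of Iometer."""
    rng = random.Random(seed)
    return ["read" if rng.randrange(100) < read_pct else "write"
            for _ in range(n_ops)]

ops = generate_ops(100_000)
counts = Counter(ops)
print(counts["read"] / len(ops))  # close to 0.67 over a long run
```

Over 100,000 operations the observed read fraction converges tightly on the configured 67%, which is why short runs can look noisier than the steady-state charts below.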

The caching device was a PCIe-based flash card (costing about $8,000 per TB), and the data to be cached was on an inexpensive SCSI disk.

Highlights of the test results:

  1. Throughput improvements range from 2X to 8X for both reads and writes.
  2. Application response times are reduced by 30% to 85%.
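For clarity on how these two figures of merit are computed: throughput improvement is a simple ratio, while response-time improvement is a percentage reduction. A minimal sketch, using hypothetical numbers rather than values from the charts:

```python
def speedup(baseline_mbps, cached_mbps):
    """Throughput improvement as a multiple (e.g. 4.0 means '4X')."""
    return cached_mbps / baseline_mbps

def latency_reduction_pct(baseline_ms, cached_ms):
    """Response-time improvement as a percentage reduction."""
    return 100.0 * (baseline_ms - cached_ms) / baseline_ms

# Hypothetical illustrative numbers, not taken from the test results:
print(speedup(20.0, 80.0))               # 4.0  -> a "4X" improvement
print(latency_reduction_pct(10.0, 3.0))  # 70.0 -> a 70% reduction
```

Note the asymmetry: an 8X throughput gain corresponds to an 87.5% latency reduction at fixed concurrency, which is why the two highlight ranges above (2X-8X and 30%-85%) track each other.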

Comparing these Iometer results to the SQLIO results in my last blog article, our caching solution with a high-IOPS PCIe flash card improves read performance by 40-50X. That gain drops to 2-4X when I increase the write share to 33% and saturate the disk with a combination of high queue depth (256) and more threads than cores (6 threads on a 4-core CPU).
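The interaction between queue depth, thread count, and latency can be reasoned about with Little's Law (concurrency = throughput × latency). A minimal sketch with hypothetical latency, only to show the arithmetic behind the saturation argument:

```python
def littles_law_iops(outstanding_ios, latency_s):
    """Little's Law rearranged: throughput (IOPS) = concurrency / latency.
    Illustrative helper; the latency value below is hypothetical."""
    return outstanding_ios / latency_s

# 6 worker threads x queue depth 256 = 1536 outstanding I/Os, as in the test.
outstanding = 6 * 256

# Assuming a hypothetical 10 ms average latency under saturation:
print(littles_law_iops(outstanding, 0.010))  # 153600.0 IOPS demanded
```

With 1,536 I/Os outstanding, even modest per-I/O latency implies a very high demanded IOPS rate, so both the disk and the cache spend the whole test at their throughput limits; that is why the advantage narrows from 40-50X toward 2-4X.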

Iometer charts follow.

Figure 1: Total read + write throughput (MBps)

Figure 2: Read throughput in my mixed 67% read / 33% write tests (MBps)

Figure 3: Write throughput in my mixed 67% read / 33% write tests (MBps)

Figure 4: Average response times for both reads and writes (ms)

Figure 5: Average read response times in my 67% read / 33% write tests (ms)

Figure 6: Average write response times in my 67% read / 33% write tests (ms)