Category Archive for: ‘Performance Benchmarks’

How to simulate production workloads running in VMware VMs using FIO or Iometer?

Here is a quick way to reproduce your entire ESXi cluster-wide production workload using only one VM running a storage IO testing tool like FIO or Iometer. This exercise is useful when you are evaluating new storage technologies and want to see how they might perform with your real-life workload, without actually deploying them in production. The focus of this post is to do this in under 30 minutes using the freely available Iometer or FIO tools. Step 1: If most of your workload is running in Linux...
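As a sketch of where these steps end up, below is a hypothetical FIO job file that approximates an aggregate production profile. The read/write mix, block size, queue depth, and target device are placeholders; substitute the values you measure from your own cluster.

    [global]
    # Linux asynchronous IO engine; bypass the guest page cache
    ioengine=libaio
    direct=1
    # run for a fixed 5 minutes and report all jobs as one total
    time_based=1
    runtime=300
    group_reporting=1

    [production-mix]
    # dedicated test disk inside the load-generator VM (placeholder)
    filename=/dev/sdb
    # random mixed workload; assumed 70% reads / 30% writes
    rw=randrw
    rwmixread=70
    # assumed dominant IO size and per-job queue depth
    bs=8k
    iodepth=16
    # several parallel workers to stand in for many VMs
    numjobs=4

Save this as production-mix.fio and run it with 'fio production-mix.fio' inside the test VM.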

Steps to Run Iometer in a VMware VM

Iometer is a great storage IO testing tool. It is easy to use, flexible, accurate, and free. Below are the steps to run Iometer from within a Windows VM running on VMware. Step 1: Install the older 2006.07.27 release. Don’t use the newer 1.1.0 release, which has bugs. Download link is here. Change only the parameters listed below and keep everything else at its defaults. Step 2: After you install Iometer, run it with the Windows ‘Run as administrator’ option. Each ‘Worker’ (screenshot below) is a process (or...
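Once a test is configured in the GUI, it can also be saved as an .icf file and replayed unattended from a command prompt opened with ‘Run as administrator’. The install path and file names below are placeholders; a rough sketch:

    rem Replay a saved Iometer configuration in batch mode and write results to a CSV file.
    cd "C:\Program Files (x86)\Iometer.org\Iometer 2006.07.27"
    Iometer.exe /c C:\tests\myworkload.icf /r C:\tests\results.csv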

Virtunet’s Write-Back (Read+Write) Caching Competing with Write-Through (Read-Only) Caching at a Large School District

Host-side caching software needs to accelerate both reads and writes, especially in light of increasing competition from all-flash arrays. Caching writes is important even for workloads that are read intensive. If we did not accelerate writes, reads queued behind those writes on the same thread would not be accelerated either, thus slowing reads as well. Using the TPC-C benchmark, we showed a 2x improvement in performance versus a read-caching software vendor at a large...
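This effect is easy to reproduce with FIO. The sketch below, with assumed block size, mix, and target device, runs a single thread at queue depth 1, so every IO must complete before the next one is issued; with read-only (write-through) caching, the 10% of IOs that are writes still go to the slow backend array, and all the reads queued behind them on that thread wait:

    # Single thread, queue depth 1, 90% reads / 10% writes (illustrative values).
    fio --name=read-behind-write --filename=/dev/sdb --rw=randrw --rwmixread=90 \
        --bs=8k --iodepth=1 --numjobs=1 --direct=1 --ioengine=libaio \
        --time_based --runtime=120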

An Order of Magnitude Improvement in IOPS and Latencies using SQLIO Benchmark

Performance Benchmark using SQLIO that compares Storage I/O with and without VirtuCache
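As a point of reference, a typical SQLIO invocation for a test of this kind looks like the sketch below; the drive letter, thread count, block size, and duration are illustrative, not the exact parameters used in this benchmark:

    rem 8 KB random reads, 4 threads, 8 outstanding IOs per thread, 120 seconds,
    rem unbuffered (-BN), with latency histograms (-LS), against a test file on drive E:
    sqlio -kR -frandom -b8 -t4 -o8 -s120 -BN -LS -dE testfile.dat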

VirtuCache Performance Benchmarks Using Iometer

Performance benchmark using Iometer comparing storage I/O with and without VirtuCache, with a multi-threaded 67% read / 33% write workload that saturated the disk.
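For readers who prefer FIO over Iometer, a roughly equivalent workload definition might look like the following; the 67/33 mix matches the benchmark description, while the block size, queue depth, and thread count are assumptions:

    # Multi-threaded 67% read / 33% write random workload intended to saturate the disk.
    fio --name=virtucache-mix --filename=/dev/sdb --rw=randrw --rwmixread=67 \
        --bs=4k --iodepth=32 --numjobs=8 --direct=1 --ioengine=libaio \
        --time_based --runtime=300 --group_reporting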