Archive: Posts

View Storage Accelerator vs. VirtuCache – VMware Host Cache

The big difference between the two is that VSA caches only 2GB of reads from the Master VM [1, 2]. VirtuCache caches reads + writes from all server & desktop VMs, and it can cache to TBs of in-host SSD/RAM, so all storage IO is serviced from in-host cache.

More details in the table below.

To Improve CEPH Performance for VMware, Install SSDs in VMware Hosts, NOT OSD Hosts.

SSDs deployed for caching in CEPH OSD servers are not very effective. The problem lies not with the SSDs themselves, but with the fact that they are deployed at a point in the IO path that is downstream (relative to the VMs that run end-user applications) of where the IO bottleneck is. This post looks at the performance shortcoming of CEPH and its solution. There are two options for improving the performance of CEPH. Option 1 is to deploy SSDs in CEPH OSD servers for journaling (write caching) and read...
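
To make the "downstream of the bottleneck" point concrete, here is a back-of-the-envelope sketch with assumed, illustrative latencies (not measurements from the post): an SSD cache in the OSD host only shortens the last leg of the IO path, while the VM still pays the full network/iSCSI round trip, whereas an in-host cache removes that round trip entirely.

```python
# Illustrative only: all latency figures below are assumptions, not measured values.
iscsi_network_path_ms = 2.0   # VM -> ESXi -> network/iSCSI -> CEPH front end (assumed)
osd_hdd_read_ms       = 8.0   # read serviced from a spinning disk in the OSD host (assumed)
osd_ssd_read_ms       = 0.3   # read serviced from an SSD cache in the OSD host (assumed)
in_host_ssd_read_ms   = 0.2   # read serviced from an SSD inside the ESXi host (assumed)

print(f"No cache           : {iscsi_network_path_ms + osd_hdd_read_ms:.1f} ms seen by the VM")
print(f"SSD in OSD host    : {iscsi_network_path_ms + osd_ssd_read_ms:.1f} ms seen by the VM")
print(f"SSD in VMware host : {in_host_ssd_read_ms:.1f} ms seen by the VM")
```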

How to Select SSDs for Host Side Caching for VMware – Interface, Model, Size, Source and RAID Level?

In terms of price/performance, enterprise NVMe SSDs have now become the best choice for in-VMware-host caching media. They are higher performing and cost just a little more than their lower-performing SATA counterparts. The Intel P4600/P4610 NVMe SSDs are my favorites. The Samsung PM1725a is my second choice. If you don’t have a spare 2.5” NVMe bay or PCIe slot in your ESXi host, which precludes you from using NVMe SSDs, you could use enterprise SATA SSDs. If you choose to go with SATA SSDs,...

Infinio’s Read Caching versus Virtunet’s Read+Write Caching Software

The biggest difference is that we accelerate both reads and writes, whereas Infinio accelerates only reads. A few other differences: with us you can apply the caching policy at the datastore and/or VM level, versus only at the VM level with Infinio; and we accelerate creation of VMs, snapshots, and other VMware kernel operations, which they don't. For existing Infinio customers, we are offering VirtuCache for just the price of Infinio's annual support payment. Offer valid till December 31, 2019...

Reducing Build Times for DevOps

ServiceNow's Itapp Dev/Ops team wanted to improve storage performance from their existing HP 3PAR storage appliance and iSCSI storage network without requiring a hardware refresh. VirtuCache Deployment: VirtuCache was installed on 3 ESXi hosts, caching to 1.6TB PM1725 PCIe flash cards. In our tests, the PM1725 SSD did 250MBps at 1ms VM-level latencies. VirtuCache was configured to cache both reads and writes for all VMs (Write-Back caching). Writes were replicated to another SSD on another...

PetaByte Scale Enterprise Grade Server SAN Storage for The Creation Museum

The Creation Museum in Kentucky, USA is a museum about Bible history and creationism. Their storage needs were typical of a museum, requiring large amounts of storage for digital multimedia content related to its various exhibits.

VirtuCache Doubles VM Density at Cloud.com

Cloud.com provides cloud and infrastructure orchestration software for service providers. They have an extensive internal dev/ops environment. Cloud.com wanted to re-purpose a few of their expensive Dell C6220 servers to run additional applications, which meant that they needed to increase the number of existing VMs deployed on each host. As is often the case, there was plenty of CPU, memory, and networking capacity available on each one of the servers, and it was only storage latencies...

Virtunet’s Write-Back (Read+Write) Caching Competing with Write-Through (Read-Only) Caching at a Large School District

Host side caching software needs to accelerate both reads and writes, especially in light of increasing competition from all-flash arrays. Caching writes is important even for workloads that are read intensive: if we were not to accelerate writes, the reads behind the writes on the same thread would not be accelerated, thus slowing reads as well. Using the TPC-C benchmark, we showed a 2x improvement in performance versus a read caching software vendor at a large...
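
As a rough illustration of why an uncached write holds up the read behind it on the same application thread, here is a minimal sketch with assumed latencies (hypothetical numbers, not figures from the TPC-C tests referenced above):

```python
import time

# All timings are assumed, illustrative values.
WRITE_TO_ARRAY_S    = 0.010   # 10 ms synchronous write to the backend array (assumed)
WRITE_TO_HOST_SSD_S = 0.0002  # 0.2 ms write absorbed by an in-host SSD (assumed)
CACHED_READ_S       = 0.0002  # 0.2 ms read served from the in-host read cache (assumed)

def write_then_read(write_latency_s):
    """One application thread: a synchronous write, then a dependent read."""
    start = time.perf_counter()
    time.sleep(write_latency_s)   # the write must complete first...
    time.sleep(CACHED_READ_S)     # ...so even a cached read waits behind it
    return (time.perf_counter() - start) * 1000

print(f"Read-only caching : {write_then_read(WRITE_TO_ARRAY_S):.1f} ms per write+read pair")
print(f"Write-back caching: {write_then_read(WRITE_TO_HOST_SSD_S):.1f} ms per write+read pair")
```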

VirtuCache Improves Hadoop Performance within VMs at Stanford

Typically, Hadoop workloads are run on bare-metal servers. However, since Stanford’s School of Medicine was 100% virtualized, and because their security, monitoring & management tools were integrated with VMware, it was easier for them to deploy Hadoop within VMs instead of provisioning new physical servers. The biggest challenge Stanford faced with Hadoop VMs was low throughput and high latencies for writes to disk. Deploying VirtuCache resulted in write latencies in Hadoop reducing...

VirtuCache Improves Performance of Healthcare Imaging Application deployed in VMware VMs

EHR, imaging, analytics, and desktop & application virtualization applications are prone to storage performance issues in VMware that are easily solved with VirtuCache.

Page 1 of 3