Archive: Posts

View Storage Accelerator vs. VirtuCache – VMware Host Side Storage Caching

The big difference between the two is that VSA caches only 2GB of reads from the Master VM. VirtuCache caches reads + writes from all server & desktop VMs, and it can cache to TBs of in-host SSD/RAM, so that almost all storage IO is serviced from in-host cache.

More details in the table below.
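As a rough illustration of why that capacity difference matters, the sketch below caps the hit ratio by how much of the active working set fits in cache. It assumes nothing about VSA or VirtuCache internals, and the 500GB working set is a made-up number.

```python
# Rough illustration: cache hit ratio is capped by how much of the
# working set fits in cache. All numbers below are hypothetical.

def max_hit_ratio(cache_gb: float, working_set_gb: float) -> float:
    """Upper bound on hit ratio if accesses were spread uniformly over
    the working set (real workloads skew hotter, but the cap applies)."""
    return min(1.0, cache_gb / working_set_gb)

working_set = 500.0  # GB of hot data across all VMs (hypothetical)

print(f"2 GB cache: at most {max_hit_ratio(2, working_set):.1%} hit ratio")
print(f"2 TB cache: at most {max_hit_ratio(2000, working_set):.1%} hit ratio")
```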

To Improve CEPH Performance for VMware, Install SSDs in VMware Hosts, NOT OSD Hosts.

SSDs deployed for caching in CEPH OSD servers are not very effective. The problem lies not with the SSDs themselves, but with where they sit in the IO path: downstream (relative to the VMs that run user applications) of the IO bottleneck. This post looks at this performance shortcoming of CEPH and its solution. There are two options for improving the performance of CEPH. Option 1 is to deploy SSDs in CEPH OSD servers for journaling (write caching) and read...
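To make the "downstream of the bottleneck" argument concrete, here is a minimal latency sketch with hypothetical numbers: an SSD cache in the OSD server still sits behind the storage network, so every VM read pays the network round trip, whereas a host-side cache only pays it on a miss.

```python
# Hypothetical per-read latencies in milliseconds; not measured values.
NETWORK_RTT_MS = 0.5   # VM host <-> OSD server round trip
SSD_READ_MS    = 0.1   # read served from an SSD cache
HDD_READ_MS    = 8.0   # read served from an OSD spinning disk

def read_latency_osd_cache(hit_ratio: float) -> float:
    # Cache hit or miss, the IO still traverses the storage network.
    return NETWORK_RTT_MS + hit_ratio * SSD_READ_MS + (1 - hit_ratio) * HDD_READ_MS

def read_latency_host_cache(hit_ratio: float) -> float:
    # Cache hits are served inside the ESXi host; only misses go to CEPH.
    return hit_ratio * SSD_READ_MS + (1 - hit_ratio) * (NETWORK_RTT_MS + HDD_READ_MS)

for h in (0.5, 0.9):
    print(f"hit ratio {h:.0%}: OSD-side cache {read_latency_osd_cache(h):.2f} ms, "
          f"host-side cache {read_latency_host_cache(h):.2f} ms")
```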

How to Select SSDs for Host Side Caching for VMware – Interface, Model, Size, Source and RAID Level?

In terms of price/performance, enterprise NVMe SSDs have now become the best choice of in-VMware-host caching media. They are higher performing and cost only a little more than their lower performing SATA counterparts. The Intel P4600/P4610 NVMe SSDs are my favorites. If you don’t have a spare 2.5” NVMe bay or PCIe slot in your ESXi host, which precludes you from using NVMe SSDs, you could use enterprise SATA SSDs. If you choose to go with SATA SSDs, you will also need a high queue depth RAID...
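Why the excerpt insists on a high queue depth controller: by Little's Law, the IOPS a device can sustain is capped at outstanding IOs divided by per-IO latency, so a shallow controller queue throttles even a fast SSD. A quick sketch with hypothetical numbers:

```python
# Little's Law for storage: achievable IOPS <= outstanding_ios / latency.
# Numbers are hypothetical, for illustration only.

def max_iops(queue_depth: int, latency_ms: float) -> float:
    return queue_depth / (latency_ms / 1000.0)

SSD_LATENCY_MS = 0.2  # hypothetical enterprise SATA SSD read latency

for qd in (32, 256):
    print(f"queue depth {qd:>3}: at most {max_iops(qd, SSD_LATENCY_MS):,.0f} IOPS")
# A qd=32 controller caps the SSD at ~160k IOPS; qd=256 allows ~1.28M,
# so the controller, not the SSD, often becomes the bottleneck.
```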

Infinio’s Read Caching versus Virtunet’s Read+Write Caching Software

The biggest differences are: we accelerate both reads and writes, while Infinio accelerates only reads. Infinio doesn't support Linked Clones or Non-Persistent VDI; we support all VDI features. With us you can apply caching policy at the datastore and/or VM level, versus only at the VM level with Infinio. We accelerate IO originating in both the VMware kernel and VMs, versus only VM-generated IO being accelerated by...

Reducing Build Times for DevOps

ServiceNow's Itapp Dev/Ops team wanted to improve storage performance from their existing HP 3PAR storage appliance and iSCSI storage network without requiring a hardware refresh. VirtuCache Deployment: VirtuCache was installed on 3 ESXi hosts, caching to 1.6TB PM1725 PCIe flash cards. In our tests the PM1725 SSD did 250MBps at 1ms VM-level latencies. VirtuCache was configured to cache both reads and writes for all VMs (Write-Back caching). Writes were replicated to another SSD on another...

Reducing Write Latencies in a VMware Stretched SAN Cluster

There are only a few applications, financial trading software being one example, that require very low latencies, lower even than what’s possible with an all-flash array (AFA). VirtuCache caching to in-host RAM results in lower VM latencies than an AFA. This is because RAM latencies are an order of magnitude lower than NVMe SSDs, and in the case of VirtuCache the cache media (RAM) is connected to the host CPU through a high speed memory bus, versus in the case of an AFA where the NVMe SSDs...
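A minimal sketch of the effective-latency arithmetic behind that claim, using hypothetical device latencies (RAM in single-digit microseconds, an NVMe-based AFA in hundreds of microseconds once fabric hops are included):

```python
# Effective VM read latency = hit-ratio-weighted blend of cache and array.
# All latencies are hypothetical, in microseconds.
RAM_CACHE_US = 1.0     # in-host RAM access (order of magnitude below NVMe)
AFA_US       = 300.0   # all-flash array over the SAN, including fabric hops

def effective_latency(hit_ratio: float, cache_us: float, miss_us: float) -> float:
    return hit_ratio * cache_us + (1 - hit_ratio) * miss_us

for h in (0.95, 0.99):
    print(f"{h:.0%} RAM hit ratio: "
          f"{effective_latency(h, RAM_CACHE_US, AFA_US):.1f} us effective, "
          f"vs {AFA_US:.0f} us going straight to the AFA")
```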

Petabyte-Scale Enterprise-Grade Server SAN Storage for the Creation Museum

The Creation Museum in Kentucky, USA is a museum about Bible history and creationism. Its storage needs were typical of a museum: large amounts of capacity for the digital multimedia content behind its various exhibits.

VirtuCache Doubles VM Density at Cloud.com

Cloud.com provides cloud and infrastructure orchestration software for service providers. They have an extensive internal dev/ops environment. Cloud.com wanted to repurpose a few of their expensive Dell C6220 servers to run additional applications, which meant they needed to increase the number of VMs deployed on each of the remaining hosts. As is often the case, there was plenty of CPU, memory, and networking capacity available on each one of the servers, and it was only storage latencies...

Virtunet’s Write-Back (Read+Write) Caching Competing with Write-Through (Read-Only) Caching at a Large School District

Host side caching software needs to accelerate both reads and writes, especially in light of increasing competition from all-flash arrays. Caching writes is important even for workloads that are read intensive: if we were not to accelerate writes, the reads queued behind those writes on the same thread would not be accelerated either, thus slowing reads as well. Using the TPC-C benchmark, we showed a 2x improvement in performance versus a read caching software vendor at a large...
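The read-behind-write argument can be made concrete with a small back-of-the-envelope model: a synchronous IO thread issues IOs one at a time, so if only reads are cached, every slow write delays the reads queued behind it. A sketch with hypothetical latencies, not a claim about any vendor's measured numbers:

```python
# A synchronous IO thread issues IOs one at a time, so a slow write
# delays every read queued behind it. Latencies are hypothetical (ms).
READ_CACHED_MS  = 0.1
WRITE_ARRAY_MS  = 5.0   # write-through: writes still go to the backend array
WRITE_CACHED_MS = 0.1   # write-back: writes absorbed by the host-side cache

def thread_runtime(n_pairs: int, write_ms: float) -> float:
    """Total time for n alternating (write, read) pairs on one thread."""
    return n_pairs * (write_ms + READ_CACHED_MS)

pairs = 1000
wt = thread_runtime(pairs, WRITE_ARRAY_MS)   # read-only caching
wb = thread_runtime(pairs, WRITE_CACHED_MS)  # read+write caching
print(f"write-through: {wt:.0f} ms, write-back: {wb:.0f} ms "
      f"({wt / wb:.1f}x faster with writes cached)")
```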

VirtuCache Improves Hadoop Performance within VMs at Stanford

Typically, Hadoop workloads are run on bare-metal servers. However, since Stanford’s School of Medicine was 100% virtualized, and because their security, monitoring & management tools were integrated with VMware, it was easier for them to deploy Hadoop within VMs than to provision new physical servers. The biggest challenge Stanford faced with Hadoop VMs was low throughput and high latencies for writes to disk. Deploying VirtuCache resulted in write latencies in Hadoop reducing...