Blog

Improving the performance of Dell Compellent SAN for VMware VMs

VirtuCache is software that improves the performance of Dell Compellent appliances without requiring you to upgrade the appliance or the SAN network. The performance improvement you will get from your Compellent appliance will rival an upgrade to an all-flash array.

Compellent appliances were the workhorses of the enterprise storage market a few years ago. They were cost effective at high capacities. The only drawback is that they are slow, since they are primarily hard drive based, and when connected to VMware they exhibit all the 'IO blender' symptoms, resulting in high VM-level storage latencies.

Reducing Write Latencies in CEPH Storage

CEPH is a popular open source storage software. However, its write latencies are high. VirtuCache caches CEPH volumes to in-host SSDs, and by doing so reduces VM-level latencies considerably.

Improving the performance of Equallogic SAN for VMware VMs

VirtuCache is software that improves the performance of Equallogic appliances without requiring you to upgrade the appliance or the SAN network. The performance improvement you will get from your Equallogic appliance will rival an upgrade to an all-flash array.

Equallogic appliances were the workhorses of the enterprise storage market a few years ago. They were cost effective at high capacities. The only drawback is that they are slow, since they are primarily hard drive based, and when connected to VMware they exhibit all the 'IO blender' symptoms, resulting in high VM-level storage latencies.

Reducing Build Times for DevOps

Problem definition: ServiceNow's Itapp Dev/Ops team wanted to improve storage performance from their existing storage appliance and storage network without requiring a hardware refresh.

Solution: Virtunet’s VirtuCache software improves the performance of any SAN based storage appliance and storage network, no matter how old or slow. With VirtuCache, the performance improvement that customers get from their existing storage infrastructure rivals the performance of all-SSD arrays.

VirtuCache details: VirtuCache is installed in the VMware physical server along with any SSD in the same host. It then automatically caches frequently used data (both reads and writes) from any SAN based storage appliance to this in-host SSD. By automatically serving more and more data from this much faster in-host SSD (instead of the backend storage appliance), VirtuCache improves storage performance considerably, allowing higher VM consolidation ratios and improving application performance from within VMs.
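The "frequently used data" behavior described above can be sketched as a simple LRU block cache. This is an illustrative model only, not VirtuCache's actual implementation; the class name, block granularity, and capacity are hypothetical:

```python
from collections import OrderedDict

class LRUBlockCache:
    """Illustrative LRU cache of storage blocks on a fast local device.
    Reads hit the local copy when present; misses fetch from the backend
    (the SAN) and populate the cache, evicting the least recently used
    block once capacity is reached."""

    def __init__(self, capacity_blocks, backend):
        self.capacity = capacity_blocks
        self.backend = backend          # dict standing in for the SAN
        self.cache = OrderedDict()      # block_id -> data (the "SSD")
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)   # mark most recently used
            return self.cache[block_id]
        self.misses += 1
        data = self.backend[block_id]          # slow path: SAN read
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict LRU block
        return data

san = {i: f"block-{i}" for i in range(10)}
cache = LRUBlockCache(capacity_blocks=4, backend=san)
for b in [1, 2, 3, 1, 1, 2, 9]:        # a read stream with reuse
    cache.read(b)
print(cache.hits, cache.misses)         # prints "3 4": reused blocks hit
```

The point of the sketch is that repeat reads of 'hot' blocks are served from the fast local tier, so only the first touch of a block pays the backend latency.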

VirtuCache competes with SSD based storage appliances. Since the SSD is closer to the CPU with VirtuCache, storage latencies are lower than with any SSD based storage appliance. It is also a cheaper alternative to upgrading to an all-flash array.

Virtunet solution: VirtuCache was installed on 3 ESXi hosts, caching to 1.6TB PM1725 PCIe flash cards. In our tests, the PM1725 SSD does 250MBps at 1ms VM-level latencies.

VirtuCache was configured to cache both reads and writes for all VMs. Writes were replicated to another SSD on another host (the Write Back One Replica caching policy). All caching and replication operations in VirtuCache are automatic. Write replication prevents data loss in case of host failure: if a host fails, VirtuCache immediately syncs the SAN from the backup copy of writes on another host.
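A Write Back One Replica policy can be sketched roughly as follows. This is a simplified model with hypothetical names, not Virtunet's code: a write is acknowledged once it is on the local SSD and a peer host's SSD, is destaged to the SAN in the background, and if the caching host dies before destaging, the peer's copy is flushed to the SAN.

```python
class WriteBackCache:
    """Simplified write-back cache with one replica: a write is
    acknowledged after it lands on the local SSD and is replicated
    to a peer host; the SAN is updated asynchronously."""

    def __init__(self, san):
        self.san = san                 # dict standing in for the SAN LUN
        self.local_ssd = {}            # dirty blocks on this host's SSD
        self.peer_ssd = {}             # replica copies on another host

    def write(self, block_id, data):
        self.local_ssd[block_id] = data    # 1. write to local SSD
        self.peer_ssd[block_id] = data     # 2. replicate to peer host
        # 3. acknowledge to the VM here; the SAN write happens later

    def destage(self):
        """Background flush of dirty blocks to the SAN."""
        for block_id, data in self.local_ssd.items():
            self.san[block_id] = data
        self.local_ssd.clear()
        self.peer_ssd.clear()

    def recover_from_host_failure(self):
        """If the caching host fails, sync the SAN from the peer's copies."""
        for block_id, data in self.peer_ssd.items():
            self.san[block_id] = data
        self.peer_ssd.clear()

san = {}
cache = WriteBackCache(san)
cache.write(7, "new-data")
assert 7 not in san                    # acknowledged, but not yet destaged
cache.recover_from_host_failure()      # host dies before destaging
assert san[7] == "new-data"            # no data loss: peer copy synced
```

The design point the sketch illustrates is why the replica exists: without it, a host failure between acknowledgment and destage would lose the write.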

Benefits: Using VirtuCache, ServiceNow was successfully able to reduce code compile times to a third of what they were experiencing before.

Reducing latencies in vGPU assisted VDI

VirtuCache is installed in the VMware vSphere kernel. It then automatically caches frequently and recently used data from any backend storage to any high speed media (RAM/SSD) in the VMware host. By bringing large amounts of 'hot' data closer to the VMware host's GPU and CPU, VirtuCache improves the performance of all applications running within VMs, including GPU assisted operations.

VMware’s vFlash Read Cache (VFRC) versus Virtunet’s Read+Write Caching

Here are the three biggest differences.

  1. We cache reads and writes; VMware's VFRC caches only reads. Caching writes improves the performance not only of writes but also of reads, even in a read-dominated workload.

  2. We require no ongoing administration; caching in our case is fully automated, and all VMware features are seamlessly supported. VFRC, by contrast, requires administrator intervention for vMotion, for creating a new VM, for maintenance mode, and for VM restore from backup; it also requires application block size assignment and SSD capacity assignment per vdisk. Many other tasks require admin oversight as well.

  3. We provide easy to understand VM-to-appliance metrics for read/write throughput, IOPS, and latencies, with associated alerting to forewarn of failure events. Monitoring vendors charge more for just this set of features than we charge for the combined functionality of improved performance plus storage IO path monitoring. VFRC provides none of this.

Below is a longer list of differences, cross-referenced with VMware authored content:

A Counter Intuitive Approach to Solve the High Capacity, Low Latency Requirements for Video Storage

By not using dedupe, compression, or RAID, by using slow HDDs in centralized storage, and by moving SSDs to the compute hosts, we arrived at a low price per unit of capacity and performance for video storage.

Here are the unique requirements of video storage, some obvious and others less so, that inspired us to put together a different architecture than the conventional storage OEM design.

PetaByte Scale Enterprise Grade Server SAN Storage for The Creation Museum

Creation Museum in Kentucky, USA is a museum about Bible history and creationism. Their storage needs were typical of a museum, requiring large amounts of storage for digital multimedia content related to the various exhibits at the museum.

They were looking for the below list of features from their new storage:

  1. The ability to sustain a loss of any one component – server, HDD, SSD, NIC card, software instance etc;

  2. The ability for this storage to interface with VMware over iSCSI;

  3. They wanted to reuse their HP and SuperMicro servers, each of these servers was of a different vintage and configuration;

  4. Ability to quickly search and present content that was stored on their fileserver and content management system;

  5. It needed to be low cost, since they already had server hardware with all the hard drives and SSDs needed to meet their raw storage needs.

Virtunet solution: Virtunet’s VirtuStor software was installed on each of Creation Museum’s servers, which were then clustered together with VirtuStor’s clustering functionality. In all, 9 HP and SuperMicro servers were part of the VirtuStor cluster, with a total raw capacity of 1.2PetaBytes.

VirtuStor was configured to replicate data between servers; as a result, usable capacity was reduced to 600TB. VirtuStor replicates data between servers so that it can sustain the loss of any one component, all the way up to the loss of an entire server. VirtuStor is also typically configured without RAID, since RAID rebuild times are very long for high capacity drives. Instead of RAID, VirtuStor replicates data to two different hard drives in two different servers.

These servers had roughly a 90/10 HDD/SSD capacity mix. The SATA SSDs were used as a write journal (cache) for VirtuStor. Data was stored on enterprise grade 8TB SATA HDDs.

Three networks were configured on each VirtuStor server, one each for iSCSI, VirtuStor replication, and management. Both iSCSI and replication were on teamed 10Gbps ports. The replication network carries data replication between servers and the clustering traffic for those servers.

The VirtuStor cluster was connected to ESXi hosts over iSCSI. The VirtuStor iSCSI gateway ran on three of the VirtuStor servers. Since each of the three gateway servers had two teamed iSCSI ports, the cluster presented 6 paths to each ESXi host, with the ability to sustain a loss of 4 paths.

To improve VirtuStor's performance when end users were working in the content management and search applications that managed their digital content, VirtuCache was installed in each of their 6 ESXi hosts along with a 2TB Intel P4600 PCIe SSD. By caching frequently used reads and all recent writes to and from the VirtuStor cluster on this PCIe SSD, we were able to deliver great performance for their search and CMS applications.

In summary, we assembled 600TB of usable storage capacity with 12TB of caching SSDs, with all the high availability and performance features expected of enterprise grade storage, for less than a fourth of the cost of OEM branded iSCSI storage appliances.
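The capacity figures in this case study follow directly from the replication factor: with every block stored on two hard drives in two different servers, usable capacity is raw capacity divided by two. A quick check of the arithmetic (illustrative only):

```python
raw_capacity_tb = 1200   # 1.2PB of raw capacity across the 9-server cluster
replicas = 2             # each block kept on 2 drives in 2 different servers
usable_tb = raw_capacity_tb / replicas
print(usable_tb)         # prints 600.0, the usable TB quoted above
```
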

Improving Storage Performance at The Ark

The Ark Encounter, in Williamstown, Kentucky, features a full-size Noah’s Ark built according to the dimensions of the Bible. Answers in Genesis (AiG) is the Christian ministry responsible for The Ark Encounter.

AiG's IT department had a few ESXi hosts connected to their HP Store VSA. As a result of increased attendance at the Ark, their VMware workload increased dramatically, which in turn resulted in performance issues within VMs.

AiG turned to VirtuCache to mitigate their storage latency issues. By caching frequently and recently used data (both reads and writes) to in-host SSDs+RAM, Virtunet resolved their storage performance issues. We competed with HP Store VSA's Adaptive Optimization (AO) feature, which is HP's tiering functionality for the VSA.

Here is how VirtuCache competes with the Store VSA's tiering functionality.

CEPH Storage is Ideally Suited for Disaster Recovery (DR) Infrastructure

CEPH storage from Virtunet has all the features of a traditional iSCSI SAN, except that it is reasonably priced, because it uses commodity servers with all off-the-shelf hardware. This makes it ideally suited for backup and DR storage, which needs to be cheap above all else.

Page 2 of 5