
How To Address Storage Performance Concerns Before Migrating Physical Servers To Virtual

By improving storage performance for VMs, host-side caching facilitates P2V of IO intensive bare-metal servers. It also saves capex, since no storage appliance upgrade is required.

If you have not yet virtualized your physical servers, and the only reason is perceived storage performance issues in VMs, then deploying VirtuCache will help. Since VirtuCache caches frequently accessed reads and all recent writes to in-host SSD and/or in-host RAM, from any back-end storage appliance, storage performance from within a VM will now be considerably higher than from within your existing physical Linux or Windows server. As a result, P2V of database servers and other storage IO intensive applications is a big use case for VirtuCache.

This blog article describes how to assure yourself, BEFORE you do the P2V, that VirtuCache-accelerated storage plus VMware infrastructure will perform better than your existing bare-metal servers.

A customer use case also illustrates that a VirtuCache-accelerated bare-metal server (with VirtuCache deployed in bare-metal Linux) performs at the same level as a VirtuCache-accelerated VMware VM (with VirtuCache installed in the VMware kernel), proving that virtualization by itself does not reduce application performance.

Reasons why bare-metal servers are not virtualized:

Even in the most virtualized enterprises, there are almost always a few bare-metal servers. These physical servers run either mainstream server OSes like Linux or Windows that can easily run in VMs, or OSes like HP-UX that cannot be easily virtualized. We won't dwell on storage performance issues for legacy OSes, since we don't address those. For bare-metal Linux or Windows servers, the most common reason they are not virtualized is the perceived storage performance penalty if they were P2Ved. The second reason servers stay bare-metal is expensive core-based licensing for databases (especially Oracle), where the database must be licensed for all cores in the VMware cluster, regardless of the number of cores the database actually uses. A third and relatively rare reason is active-active clustering software (from Oracle, Microsoft, and Veritas) that is not supported by VMware.

Host side caching facilitates virtualizing bare-metal servers:

Because VirtuCache caches frequently accessed reads and all recent writes to in-host SSD and/or in-host RAM, storage performance from within a VirtuCache-accelerated VM is considerably higher than from within a physical Linux or Windows server. This removes the main objection to P2V of database servers and other storage IO intensive applications.

However, if you prefer not to virtualize your bare-metal servers, VirtuCache can be deployed on bare-metal Linux servers as well.

Steps to P2V applications where storage performance concerns are preventing you from virtualizing physical servers:

The single most important step is to ensure that the increased pressure on storage from within VMs, resulting from the P2V of storage IO intensive applications, does not degrade storage performance and, in turn, application performance. Equally important is to verify this before the P2V actually happens, rather than being surprised after the fact.

To do this, we first profile the customer's production workload running on their physical servers, then simulate that workload from within VMs running on VirtuCache-accelerated hosts, and finally show the customer before-and-after latency numbers to assure them that application-level latencies are lower at all times in VMs than on their physical servers.

Step 1: Find out how often the workload repeats on the bare-metal servers the customer plans to virtualize. Does it repeat weekly, bi-weekly, etc.? On physical Linux servers, run iostat (from the sysstat package) periodically, for example via cron; on physical Windows servers, configure a Perfmon data collector set. In both cases, collect read and write throughput and latencies, ideally for the entire duration over which the workload is unique and non-repeating.
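As a sketch of this Linux profiling step, the script below samples /proc/diskstats to derive read/write throughput and average latency over an interval. The device name ("sda"), the sampling interval, and the logging approach are assumptions; adapt them to your environment and schedule the script from cron for the full duration of one workload cycle.

```python
import time

SECTOR_BYTES = 512  # /proc/diskstats counts sectors of 512 bytes

def parse_diskstats(text, device):
    """Extract cumulative I/O counters for one device from /proc/diskstats text.

    Field layout per the kernel's Documentation/iostats.txt:
    reads completed, reads merged, sectors read, ms spent reading,
    writes completed, writes merged, sectors written, ms spent writing.
    """
    for line in text.splitlines():
        parts = line.split()
        if len(parts) > 10 and parts[2] == device:
            v = [int(x) for x in parts[3:11]]
            return {"r_ios": v[0], "r_sec": v[2], "r_ms": v[3],
                    "w_ios": v[4], "w_sec": v[6], "w_ms": v[7]}
    raise ValueError(f"device {device!r} not found")

def delta_metrics(before, after, interval_s):
    """Turn two counter snapshots into throughput (MB/s) and avg latency (ms)."""
    d = {k: after[k] - before[k] for k in before}
    return {
        "read_MBps": d["r_sec"] * SECTOR_BYTES / interval_s / 1e6,
        "write_MBps": d["w_sec"] * SECTOR_BYTES / interval_s / 1e6,
        # Average latency = ms spent on I/O in the window / I/Os completed
        "read_lat_ms": d["r_ms"] / d["r_ios"] if d["r_ios"] else 0.0,
        "write_lat_ms": d["w_ms"] / d["w_ios"] if d["w_ios"] else 0.0,
    }

def sample(device="sda", interval_s=10):
    """Take two snapshots, interval_s apart, and report the deltas."""
    with open("/proc/diskstats") as f:
        before = parse_diskstats(f.read(), device)
    time.sleep(interval_s)
    with open("/proc/diskstats") as f:
        after = parse_diskstats(f.read(), device)
    return delta_metrics(before, after, interval_s)
```

Appending each sample to a log file over a full workload cycle gives you the baseline latency and throughput profile that the simulated workload in Step 3 needs to beat.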

Step 2: Next, identify the VMware hosts to P2V to. Install VirtuCache in the VMware kernel along with an SSD in those hosts, and configure VirtuCache to cache Datastores (both reads and writes will be cached) from the customer's storage appliance to the in-host SSD.

Step 3: Simulate the customer's production workload within VMs using an IO generator: fio in Linux VMs and Iometer in Windows VMs. This exercise should show that read and write latencies are consistently lower in VirtuCache-accelerated VMware VMs than on the customer's physical Linux or Windows servers.
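As an illustration of how such a simulation might be parameterized, the snippet below renders an fio job file whose read/write mix, block size, queue depth, and target path are placeholders to be replaced with the values measured in Step 1. All numbers and the device path shown are illustrative assumptions, not recommendations.

```python
# Template for an fio job approximating the profiled workload.
# Every value below is a placeholder -- substitute what Step 1 measured.
FIO_JOB = """\
[global]
ioengine=libaio
direct=1
time_based=1
runtime={runtime_s}
group_reporting=1

[simulated-workload]
filename={target}
rw=randrw
rwmixread={read_pct}
bs={block_size}
iodepth={queue_depth}
numjobs={jobs}
"""

def render_job(target="/dev/sdb", read_pct=70, block_size="8k",
               queue_depth=16, jobs=4, runtime_s=600):
    """Fill the template with measured workload parameters."""
    return FIO_JOB.format(target=target, read_pct=read_pct,
                          block_size=block_size, queue_depth=queue_depth,
                          jobs=jobs, runtime_s=runtime_s)

def write_job(path, **kw):
    """Write the rendered job file to disk for fio to consume."""
    job = render_job(**kw)
    with open(path, "w") as f:
        f.write(job)
    return job

# Then run inside the VM, e.g.:
#   fio simulated.fio --output=vm-results.json --output-format=json
```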

Step 4: Migrate the applications from physical servers to virtual once the customer is assured that latencies in VMs will be lower than what they were experiencing on their physical servers.
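One way to make "latencies are lower" concrete before signing off on the migration is to compare the two latency logs at several percentiles. The helper below is a hedged sketch; the sample lists in the usage comment are hypothetical stand-ins for the latency logs collected in Steps 1 and 3.

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    s = sorted(samples)
    idx = max(0, math.ceil(pct / 100.0 * len(s)) - 1)
    return s[idx]

def vm_is_faster(physical_ms, vm_ms, pcts=(50, 95, 99)):
    """True if VM latency beats physical-server latency at every
    checked percentile -- a practical proxy for 'lower at all times'."""
    return all(percentile(vm_ms, p) < percentile(physical_ms, p)
               for p in pcts)

# Hypothetical usage with latency logs loaded from Steps 1 and 3:
#   vm_is_faster(physical_latencies_ms, vm_latencies_ms)
```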

Customer use case – Accelerating Oracle DB in bare-metal Linux and Linux VM

At Dun & Bradstreet, we used these same steps to assure D&B's system administrators that P2Ving their IO intensive applications onto VirtuCache-accelerated VMware hosts would result in far better throughput and much lower latencies than they were experiencing on their physical servers.

In addition, we deployed VirtuCache in the bare-metal Linux servers running Oracle that they did not want to virtualize, for Oracle licensing cost reasons.

The charts below show Linux-level throughput and latencies in four configurations: the application running in a Linux VM with and without VirtuCache accelerating the VMware host, and the application running on a bare-metal Linux server first without VirtuCache and then with VirtuCache installed in bare-metal Linux.

Throughput of VM vs Baremetal Server vs both accelerated by VirtuCache

Latencies in VM vs Baremetal Server vs both accelerated by VirtuCache

Summary: As the charts above show, latencies and throughput for the application are about the same in bare-metal Linux and in a Linux VM on VMware. However, when VirtuCache accelerates the underlying storage, whether in bare-metal Linux or in the VMware host running the Linux VM, performance improves considerably: roughly 4x higher throughput and 3x lower latencies.

PS: As of June 2015, depending on workload characteristics (read/write mix, sequential/random IO mix, application block size, etc.) and storage architecture (SAN, NAS, local HDD, etc.), we can also deploy and support open source Linux caching software such as bcache, flashcache, or dm-cache as alternatives to VirtuCache, for customers who are comfortable running open source software; we contribute the workload-specific modifications and bug fixes we make back to the open source projects. Keep in mind that these packages work only in bare-metal Linux, not in Linux VMs. If deployed inside Linux VMs they will corrupt data, since they don't support features like live migration, storage offload from host CPU to storage controllers, linked clones, VM based snapshots, clustering, and high availability.