- Overcoming poor performance in Objectivity's storage IO intensive dev/ops environment.
- Upgrading to an All-Flash array did not fit within Objectivity's budget.
- VirtuCache drastically increased throughput and reduced latencies, cutting build-test cycle times by 75%.
- Objectivity achieved better performance than an All-Flash array without the hefty cost of upgrading to one.
The Virtunet Difference
Objectivity develops purpose-built databases and graphing applications for geo-spatial data. As a result, their dev/ops infrastructure is storage intensive. By caching frequently used data from their HP appliance to large amounts of in-host SSDs, we ensured low latencies even at gigabyte-per-second throughput. VirtuCache was also much cheaper than the alternative of buying terabytes of all-flash HP storage.
By improving storage performance, VirtuCache improved the performance of Objectivity's Jenkins-based Continuous Integration (CI) process, which in turn resulted in quicker build-test cycles.
Objectivity develops an object database to store and analyze large amounts of geo-spatial image data. They had recently moved to a Jenkins-based CI process.
CI tools like Jenkins and Electric Cloud allow software development teams to run tests automatically whenever new code is pushed to the source repository. Developers get quick feedback on smaller increments of code, so problems in new code can be detected and corrected faster than with the older practice of pushing large amounts of code to the repository and testing infrequently.
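To make the mechanism concrete, here is a minimal sketch of the feedback loop a CI server automates. This is illustrative Python, not Jenkins' implementation; the repository URL, `make` targets, and helper names are all assumptions.

```python
# Minimal sketch of a CI loop: watch the repository, and on every new
# commit, build the code and run the test suite. Assumes the workspace
# has already been cloned from the (hypothetical) repository URL.

import subprocess
import time

REPO_URL = "https://example.com/objectivity/repo.git"  # hypothetical
WORKDIR = "/var/ci/workspace"                          # pre-cloned checkout

def head_commit() -> str:
    """Return the commit hash currently checked out in the workspace."""
    return subprocess.check_output(
        ["git", "-C", WORKDIR, "rev-parse", "HEAD"], text=True
    ).strip()

def run_stage(args: list) -> bool:
    """Run one pipeline stage; a non-zero exit code fails the build."""
    return subprocess.run(args, cwd=WORKDIR).returncode == 0

last_built = None
while True:
    subprocess.run(["git", "-C", WORKDIR, "pull", "--ff-only"], check=True)
    commit = head_commit()
    if commit != last_built:                 # new code was pushed
        ok = run_stage(["make", "build"]) and run_stage(["make", "test"])
        print(f"{commit[:8]}: {'PASS' if ok else 'FAIL'}")  # quick feedback
        last_built = commit
    time.sleep(60)                           # poll once a minute
```

Each build and test run reads and writes the workspace on shared storage, which is why a busy CI server becomes IO bound.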
Because of the increased build frequency and the larger number of jobs running simultaneously, such a continuous build process is storage IO intensive, especially on writes. And high write latencies are harder to solve than read latency issues, even with a faster SAN and storage appliance.
Choosing a Solution
When Objectivity started to experience poor storage performance, they looked at two options: upgrading to an All-Flash array (AFA), or moving their datastores to host-based SSDs. An AFA with a SAN upgrade was cost prohibitive. So they decided to use host-based SSDs for configuration, build records, and artifact storage. However, that reduced manageability, because VMware vMotion and High Availability were no longer possible, and a host failure would result in data loss.
VirtuCache was deployed with 1.2TB of write-optimized Intel S3710 SSDs in each host. VirtuCache then cached both reads and writes to these local in-host SSDs from Objectivity's back-end HP 3PAR appliance. VirtuCache caches writes by writing to the local SSD first and then syncing the local writes to the backend SAN over time. To protect against data loss if the local host were to fail, VirtuCache mirrors writes to another SSD in another host. This caching policy within VirtuCache is called "Write-Back Policy with One Replica".
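As a rough illustration of that caching policy (a simplified sketch, not VirtuCache's actual implementation): a write is acknowledged once it lands on the local SSD and a peer host's SSD, while a background thread flushes dirty blocks to the SAN later. In the sketch below, Python dicts stand in for the block devices.

```python
# Simplified sketch of "write-back with one replica": writes complete at
# SSD speed (local copy plus one remote replica), and a background
# flusher drains dirty blocks to the slow backend SAN over time.

import threading
import queue

class WriteBackCache:
    def __init__(self, local_ssd, replica_ssd, san):
        self.local = local_ssd        # fast in-host SSD
        self.replica = replica_ssd    # SSD in another host, for failure protection
        self.san = san                # slow backend appliance (e.g. the 3PAR)
        self.dirty = queue.Queue()    # blocks on SSD but not yet on the SAN
        threading.Thread(target=self._flusher, daemon=True).start()

    def write(self, block_id, data):
        # Acknowledge the write once local and replica copies exist.
        self.local[block_id] = data
        self.replica[block_id] = data     # a host failure loses no dirty data
        self.dirty.put((block_id, data))  # schedule the slow SAN write for later

    def read(self, block_id):
        # Serve hits from the local SSD; fall back to the SAN on a miss.
        if block_id in self.local:
            return self.local[block_id]
        data = self.san[block_id]
        self.local[block_id] = data       # populate the cache for next time
        return data

    def _flusher(self):
        # Background thread: sync dirty blocks to the backend SAN over time.
        while True:
            block_id, data = self.dirty.get()
            self.san[block_id] = data

# Dicts stand in for block devices in this sketch.
cache = WriteBackCache(local_ssd={}, replica_ssd={}, san={})
cache.write("blk-42", b"geo-spatial tile")
print(cache.read("blk-42"))  # served from the local SSD, not the SAN
```

The point of the design is that the write path never waits on the SAN, while the replica copy restores the data-loss protection that plain host-based SSDs gave up.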
A build-and-test process this write intensive, with large volumes of geo-spatial image data being imported and analyzed quickly, would have been a difficult workload even for an all-flash array.
Benefit to Objectivity
With VirtuCache caching reads and writes to the local in-host SSDs, Objectivity was able to sustain over 500 MB/s of throughput per host while keeping latencies under 10ms at the VM level for their Jenkins-based VMs.
The time it took for Objectivity’s developers to run their build-test cycles was reduced to a fourth of what it was before.