I’m often asked a question along the lines of “I’m on Release X and considering upgrading to Release Y. Is the new release faster? How much faster?” Maybe you are planning a hardware upgrade and have similar questions.
I understand your interest. But here’s my problem. While we do run in-house benchmarks when we are preparing a new release of OpenVOS, our benchmarks simply reflect how well the system runs the benchmark! To the extent that the activity of a benchmark resembles your application, you can expect somewhat similar results. But we have many OpenVOS customers, and they run many different applications. So while we can make some general statements about what you might expect, we always have to couch them in careful language. Even if we saw a 20% performance improvement on our benchmark, your gain will probably be smaller, though it could be larger. The result is that our estimate is often not very useful.
There can be no doubt that understanding the performance characteristics of an application is an important step in qualifying that application on a new release of the operating system, or on a newer hardware platform. Most OpenVOS customers run mission-critical applications on their systems; the last thing you need is to perform an upgrade and be hit with a performance surprise.
So I’d like to propose a different approach. Instead of asking me for general statements about the performance of a new release, I’d like to suggest that you prepare a subset of your application, perhaps the most performance-sensitive parts of it, to be run in a controlled, simulated environment. Make up fictional data that retains the breadth and depth of the actual data. If you handle transactions for 3 million customers in the real world, then populate 3 million simulated customers in your test environment. If you handle 1000 stores in the real world, then populate 1000 stores for testing purposes. The reason you need to take this measure is simple: you want the memory and storage footprint of the test environment to accurately reproduce what happens in production.
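Here is a minimal sketch, in C, of the kind of data generator I have in mind. The record layout, the field widths, and the output file name are all assumptions made for illustration; substitute the shapes of your own production records. What matters is that the record count and record size match production, so that the memory and storage footprint is realistic.

```c
/* Fictional-data generator: writes N_CUSTOMERS fixed-width records,
 * spread across 1000 stores, so that the test database has the same
 * breadth and depth as production. */
#include <stdio.h>
#include <stdlib.h>

#define N_CUSTOMERS 3000000L   /* match your production population */

int main(void)
{
    FILE *out = fopen("test_customers.dat", "w");
    if (out == NULL) {
        perror("fopen");
        return EXIT_FAILURE;
    }
    for (long i = 0; i < N_CUSTOMERS; i++) {
        /* Fixed-width fields sized like the real records: a 10-digit
           account number, a 30-character name, a 4-digit store id. */
        fprintf(out, "%010ld|CUSTOMER %-21ld|%04ld\n",
                i, i, i % 1000);
    }
    fclose(out);
    return EXIT_SUCCESS;
}
```

The generated values do not need to be meaningful; they need to be numerous and the right size.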
You can use this test environment to establish baseline performance on your current setup, and then when it is time to upgrade to a new release of OpenVOS, or to a new hardware platform, you can use your own test environment as the yardstick. If you want to find out the benefits of a hardware upgrade before you purchase the equipment, talk to us. We maintain a benchmark lab where you can come and run your tests on any of our current products. Often, you don’t even have to travel; we can make the equipment available over the Internet.
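To make the yardstick concrete, here is a minimal baseline-measurement sketch in C. The function run_one_transaction() is a hypothetical placeholder, stubbed out here only so the program compiles; in your test environment it would call into the performance-sensitive subset of your application. Run it on your current setup to record the baseline, then rerun it unchanged on the new release or hardware and compare.

```c
/* Baseline harness: time a fixed batch of transactions and report
 * throughput, so before-and-after runs can be compared directly. */
#include <stdio.h>
#include <time.h>

#define N_TRANSACTIONS 100000L   /* an assumption; size to your workload */

/* Hypothetical placeholder, stubbed so the sketch compiles; replace it
   with a call into the performance-sensitive part of your application. */
static void run_one_transaction(void)
{
    volatile long sink = 0;
    for (int i = 0; i < 1000; i++)
        sink += i;
}

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (long i = 0; i < N_TRANSACTIONS; i++)
        run_one_transaction();
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("%ld transactions in %.2f s = %.0f tx/s\n",
           N_TRANSACTIONS, elapsed, N_TRANSACTIONS / elapsed);
    return 0;
}
```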
Once you have a realistic, reproducible software test environment, you can easily answer several crucial questions: Where is the top end of my application’s performance? How many transactions can I drive through this system? What bottlenecks do I hit when I try? In my experience, there are always bottlenecks. Far better to find them on the test system than in production.
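One way to find that top end is to ramp the offered load and watch where throughput stops scaling. The sketch below, again in C and again using the hypothetical run_one_transaction() stub from the baseline harness, doubles the number of concurrent workers at each step; the step where aggregate throughput flattens is where to start hunting for the bottleneck.

```c
/* Load ramp: run the same workload with 1, 2, 4, ... 32 concurrent
 * workers and report aggregate throughput at each step. */
#include <stdio.h>
#include <time.h>
#include <pthread.h>

#define TX_PER_WORKER 100000L   /* an assumption; size to your workload */

/* Hypothetical placeholder, stubbed so the sketch compiles. */
static void run_one_transaction(void)
{
    volatile long sink = 0;
    for (int i = 0; i < 1000; i++)
        sink += i;
}

static void *worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < TX_PER_WORKER; i++)
        run_one_transaction();
    return NULL;
}

int main(void)
{
    for (int n = 1; n <= 32; n *= 2) {
        pthread_t tids[32];
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < n; i++)
            pthread_create(&tids[i], NULL, worker, NULL);
        for (int i = 0; i < n; i++)
            pthread_join(tids[i], NULL);
        clock_gettime(CLOCK_MONOTONIC, &end);

        double elapsed = (end.tv_sec - start.tv_sec)
                       + (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("%2d workers: %.0f tx/s\n", n, n * TX_PER_WORKER / elapsed);
    }
    return 0;
}
```

On a real system you would run the ramp against the populated test data and watch CPU, disk, and locking statistics at each step to see which resource saturates first.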
Stratus Professional Services has lots of experience helping customers measure and optimize the performance of their applications on our products. So if you need a little help with this exercise, please give your Account Executive a phone call.