What if you had the choice between your applications being available 99% of the time and 99.9995% of the time – would you really notice a difference?
What is your typical morning like? Perhaps it begins with breakfast, followed by the morning news and a 30-minute workout. But not everything goes as planned. Sometimes the cereal you were hoping for, or the web site you frequent for current events, isn't available. For these daily decisions, the answers are easy – eat something else or try another URL. Honestly, if your favorite cereal were only available 90% of the time, you'd be fine.
A typical day at the office generally starts out with a similar pattern – turn on the computer, log on and begin using the applications essential to your job and your company's success. Most days go as planned. However, what happens when your routine goes awry? What's the effect on your company's productivity when the applications you and your colleagues depend upon go down and everything comes to a screeching halt? What if the application is outward facing and affects customers trying to do business with you? What if it happens at a peak time? These are all questions someone considered when deciding what type of availability solution was required for the application (or at least you hope they did). The effects are as varied as the potential costs – but that is a story for another post.
This Availability Journey Infographic does a great job of representing almost every factor you should consider and classifies the probable solutions by their average yearly downtime. This average is translated into a "Downtime Index Multiplier" that can be used to calculate your company's "Yearly Downtime Risk". The Downtime Index Multiplier is shown at each stage in the infographic. It is derived from the average downtime for the given solution by converting hours, minutes and seconds into a decimal number of hours. So, a solution with 99% availability has about 87 hours and 36 minutes of yearly downtime – converting to a Downtime Index Multiplier of 87.6 (87 + 36/60). You multiply this index by your hourly cost of downtime to calculate your yearly downtime risk for the solution, as shown on the Availability Journey Infographic. For example, if you calculated your application's hourly cost of downtime at only $10,000, your yearly downtime risk at a 99% availability rate would be $876,000 ($10,000 x 87.6). In comparison, a 99.9995% solution has only 2 minutes and 38 seconds of yearly downtime – an index of only 0.04. Using the example above, the yearly downtime risk would be $400 ($10,000 x 0.04). Thus, at an hourly cost of downtime of only $10,000, the difference in yearly risk between the lowest and highest availability solutions would be $875,600.
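The arithmetic above is easy to reproduce. Here is a minimal sketch in Python, assuming the 8,760-hour year that the 99% → 87.6-hour figure implies (the function names are illustrative, not part of the infographic):

```python
# Convert an availability percentage into the "Downtime Index Multiplier"
# (yearly downtime expressed in decimal hours), then multiply by an hourly
# cost of downtime to get the yearly downtime risk.
# Assumes an 8,760-hour year, consistent with 99% -> 87.6 hours above.

HOURS_PER_YEAR = 8760

def downtime_index_multiplier(availability_pct: float) -> float:
    """Yearly downtime, in decimal hours, for a given availability percentage."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

def yearly_downtime_risk(availability_pct: float, hourly_cost: float) -> float:
    """Expected yearly downtime cost at the given availability level."""
    return downtime_index_multiplier(availability_pct) * hourly_cost

print(round(downtime_index_multiplier(99.0), 2))     # 87.6 hours/year
print(round(yearly_downtime_risk(99.0, 10_000)))     # 876000
print(round(downtime_index_multiplier(99.9995), 4))  # 0.0438 hours (~2 min 38 s)
print(round(yearly_downtime_risk(99.9995, 10_000)))  # 438
```

Note that the exact index for 99.9995% is 0.0438 hours; the infographic rounds it to 0.04, which is where the $400 figure comes from.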
Today's top-of-the-line availability solutions are not the purpose-built, exorbitantly priced mainframes of yesterday. They are industry-standard, plug-and-play solutions that fit into almost any infrastructure, including virtualized and cloud environments. One thing I can guarantee: unless you're a credit card company, fault-tolerant hardware or software will cost you only a fraction of the $876K at risk in the example above. Then again, if you were a credit card company, you'd already be using fault tolerance, because your risk is probably in the billions even before considering the hidden costs of downtime like damaged reputation, regulatory impact and lost customers.
So, what are the cost of downtime and the availability goal for your company's applications in this always-on world? Well, 67% of best-in-class organizations use fault-tolerant servers for high availability [6]. Be careful if you're part of the 66% who still rely on traditional backup for availability [8]… because you are taking a huge gamble.