There was a point in our history when experiencing unexpected downtime was considered unavoidable. Back in 1996, AOL, then the primary internet provider, went offline for 19 hours. The outage inconvenienced more than 6 million subscribers and was a wake-up call for organizations that had come to depend on connectivity to earn money.
And yet AOL's Chairman and CEO, Steve Case, said that the outage, caused by a maintenance error, might very well happen again. He offered no promises that the service would continue to run 24/7 and, in fact, cited the outrage as evidence of just how important AOL was to people.
Imagine that same scenario playing out today.
Imagine if, after Bank of America was down for six days in 2011, their response had been something akin to, “Yeah, that might happen again.” Or if, after Delta Airlines’ service was disrupted in 2016 as a result of aging technology, they had addressed the downtime that cost them $150 million by stating, “The good news is that now you understand how important Delta is to you.”
Northrop Grumman, a defense technology company, best expressed how organizations feel about downtime after a technological meltdown left many Virginia-based government agencies frozen. The failure was eventually traced back to three year-old memory cards and ultimately cost the company $5 million in penalties. Chief Information Officer Sam Nixon said, “The thing that is never supposed to happen, happened.”
So how do you make sure that downtime, the thing that is never supposed to happen, never does happen? Start by asking the right questions before making any initial purchases or upgrades to your system. The Downtime Prevention Buyer’s Guide states that the first, and most important, question you should be asking is, “What level of uninterrupted application processing can your solution guarantee?”
“There are a variety of availability solutions on the market today, each of which delivers a different level of application uptime. When evaluating solutions, it is helpful to ask vendors how many ‘nines’ of availability their offerings provide.
If your availability requirements are relatively low, you may be able to get by using a standard server with duplicate internal components. These servers typically deliver two nines — 99% — or more of availability for the applications running on them, which can result in as much as 87.6 hours of unplanned downtime per year.
Continuous data replication delivers three nines — 99.9% availability — which equates to 8 hours and 45 minutes of downtime annually.
For those with more rigorous availability requirements, traditional high-availability clusters, which link two or more physical servers in a single, fault-resilient network, get you to 99.95% availability or 4.38 hours of downtime per year.
Virtualized high availability software solutions deliver four nines of availability — 99.99% — which reduces unplanned downtime to 53 minutes per annum.
Fault-tolerant solutions are often described as providing continuous availability because they are designed to prevent downtime from happening in the first place. Fault-tolerant software and hardware solutions provide at least five nines of availability — 99.999+% — for minimal unplanned downtime of between two and a half and five and a quarter minutes per year.
While fault-tolerant hardware and software solutions both provide extremely high levels of availability, there is a trade-off: fault-tolerant servers achieve high availability with a minimal amount of system overhead to deliver a superior level of performance while fault-tolerant software can be run on industry-standard servers your organization may already have in place.”
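The downtime figures quoted above follow directly from the availability percentage: whatever fraction of the year a system is unavailable is its expected annual downtime. As a quick sanity check on those numbers, here is a minimal sketch (the function name is illustrative, not from any vendor tool) that converts an availability percentage into expected unplanned downtime, assuming a 365-day year:

```python
# Convert an availability percentage ("nines") into expected annual
# unplanned downtime. Assumes a non-leap 365-day year, as the quoted
# figures (87.6 hours at 99%, etc.) appear to.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_downtime_minutes(availability_pct: float) -> float:
    """Return expected unplanned downtime in minutes per year."""
    unavailable_fraction = 1 - availability_pct / 100
    return unavailable_fraction * HOURS_PER_YEAR * 60

if __name__ == "__main__":
    for pct in (99.0, 99.9, 99.95, 99.99, 99.999):
        mins = annual_downtime_minutes(pct)
        print(f"{pct}% availability -> {mins / 60:.2f} hours "
              f"({mins:.1f} minutes) of downtime per year")
```

Running this reproduces the guide's figures: two nines allows 87.6 hours of downtime a year, four nines about 53 minutes, and five nines roughly 5.3 minutes.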
Download the entire Downtime Prevention Buyer’s Guide to learn the remaining five questions you should be asking to prevent downtime.