Previous articles in our virtualization series looked at the benefits of virtualization and how it can reduce IT’s reliance on server farms and data centers.

Every business seeks perfection when it comes to application availability, yet few ever achieve it. In fact, most availability solutions deliver 99 percent — which may sound pretty good to most organizations until you realize 99 percent means 87.6 hours of unplanned downtime per year. According to research from the Aberdeen Group, the average cost of downtime due to data loss can amount to more than $163,000 an hour for companies. Thus, the Rule of Nines: each additional “9” of availability an IT team achieves further reduces downtime and increases system profitability. Let’s look at how each additional “9” is being achieved today, and how it impacts business performance.
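The downtime figures quoted throughout this article fall straight out of simple arithmetic: unplanned downtime is just the unavailable fraction of the year. A quick sketch in Python (the function name is illustrative, not from any library):

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

def downtime_per_year(availability: float) -> float:
    """Unplanned downtime, in hours per year, for a given availability fraction."""
    return (1 - availability) * HOURS_PER_YEAR

for label, availability in [("99%", 0.99), ("99.9%", 0.999),
                            ("99.99%", 0.9999), ("99.999%", 0.99999)]:
    hours = downtime_per_year(availability)
    print(f"{label}: {hours:.2f} hours (~{hours * 60:.0f} minutes) per year")
```

Running this reproduces the numbers in the article: 99 percent allows 87.6 hours of downtime per year, 99.9 percent allows 8.76 hours, and 99.999 percent allows only about five minutes.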

99.9 percent is good business practice

There are plenty of availability solutions delivering average results — for example, an x86 server can be counted on to deliver 99 percent availability if that’s all your business is looking for. But with today’s availability solutions, 99.9 percent is very attainable. Affordable servers more powerful than the average x86 can be combined with redundant power supplies, fans, a RAID array, and of course, good business practices to maintain and protect your system.

The result? 99.9 percent translates to around 8.76 hours of unplanned downtime per year. That’s a massive improvement over nearly 90 hours of downtime at 99 percent, but for many companies, losing a business day in productivity per year is still too much for their bottom line to bear.

High availability at 99.99 percent

The secret to achieving the next “9” for 99.99 percent is cluster technology. Often referred to as high availability solutions, clusters are essentially two or more physical servers connected in a single network. If one server fails, application support resumes on a second server.

Clusters can range from 99.95 to 99.99 percent availability, depending on how well the cluster is built and how quickly failover can be achieved. Some clustered applications, such as databases, can’t fail over quickly enough because they must check file integrity and replay transaction logs after a failure, which delays application start-up.

Fault tolerance delivers 99.999 percent

Now imagine your business was able to add the most elusive “9” to achieve 99.999 percent availability. What would that take? Fault-tolerant systems are delivering the “Holy Grail” of availability today by working through faults and continuing to run without disrupting applications at all — preventing any downtime resulting from a system failure.

Fault-tolerant hardware solutions deliver 99.999 percent availability or better, translating to less than five minutes of unplanned downtime per year. Software fault tolerance delivers similar results using industry-standard servers running in parallel, enabling a single application to live on two virtual machines (VMs) simultaneously. If one VM fails, the application continues to run on the other VM with no interruptions or data loss. Thus, virtualization delivers the fifth “9.”

All that being said, not all fault-tolerant solutions are created equal. Some emulate fault tolerance but end up creating lots of overhead, which drags down performance. You need true fault tolerance to avoid performance problems and meet all your application requirements.

Can we continue the conversation?

Tell us what you think by sharing your thoughts about this post on Twitter or LinkedIn.