In today’s always-on, increasingly complex IT world, CIOs face an unprecedented set of challenges: Big Data, virtualization and cloud computing, the consumerization of IT applications and devices, mobility, overburdened IT staffs, social networking, and the list goes on. Never before has this confluence of factors created such a complex mashup of digital demands. Layered on top of that complexity are the demands from employees, customers and partners for the information they need to do their jobs.

These developments are fundamentally changing the scope and nature of what is considered mission critical for IT. In the past, the definition of mission critical was relatively well understood: financial, supply chain and telecommunications applications were the ones that couldn’t go down. Today, that definition is expanding quickly. Areas such as analytics, sales force automation, CRM, web content, social applications and logistics, all of which impact the customer experience, have pushed their way into the mission-critical category.

Broadening the mission-critical category brings many more users into the fold, which expands the demand for access to information and, in turn, raises expectations for application availability. Applications that are unresponsive or unavailable altogether translate into lost revenue in the moment and take a lasting toll on brand reputation. Here are a few recent high-profile examples of how application downtime has hurt organizations:

  • RIM essentially revolutionized mobile, but massive outages contributed to its demise – an epic fall from grace.
  • The weekend before Thanksgiving, a glitch in the software that controls United’s ground operations caused a two-hour outage and passenger outrage as travelers missed flights nationwide.
  • On Christmas Eve, Netflix users could not stream to their devices due to a malfunction in Amazon’s AWS cloud infrastructure, leaving millions unable to watch their favorite movies.

According to Stratus, CIOs on average put up with three to five hours of downtime per year, with most servers experiencing 44 hours of downtime over their lifetime. But the factors above are redefining how much downtime is acceptable, especially as costs rise. Consider this: the Aberdeen Group recently estimated the average cost of downtime at $138,888 per hour, 38% higher than the cost of downtime in 2010. That per-hour cost ranges from $181,770 for average companies down to $101,600 for best-in-class companies.

So why do businesses essentially whistle past the graveyard when it comes to downtime? Why do they treat downtime as acceptable, when it isn’t, especially as the data center goes through this transition? This notion should be rejected as a business norm. Technology exists to prevent downtime, delivering six 9s, or 99.9999%, of availability. With this technology, proper procedures and monitoring in place, downtime can be practically eliminated.

To understand downtime and how costly it can be, you should first understand the three classifications of availability:

  • Conventional availability equates to 99% uptime, or an average of just shy of 88 hours of downtime a year.
  • High availability equates to roughly 99.9% to 99.95% uptime, or between four and eight hours of downtime a year.
  • Continuous availability equates to five 9s (99.999%) of uptime, or just over five minutes of downtime a year.

Now, multiply those hours of downtime by your cost of downtime per hour to see how much your company stands to lose in revenue alone.
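To make that multiplication concrete, here is a minimal sketch in Python. The tier percentages and the $138,888-per-hour Aberdeen average are the figures cited above; picking 99.95% to represent the high-availability band is an assumption, and you should substitute your own per-hour cost for a company-specific estimate.

```python
# Rough annual downtime-cost estimate per availability tier:
# downtime hours implied by each uptime percentage, multiplied by
# an hourly cost of downtime.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

availability_tiers = {
    "Conventional (99%)": 0.99,
    "High availability (99.95%)": 0.9995,   # assumed point within the 99.9%-99.95% band
    "Continuous (99.999%)": 0.99999,
}

cost_per_hour = 138_888  # Aberdeen Group average cost of downtime per hour

for name, availability in availability_tiers.items():
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    annual_cost = downtime_hours * cost_per_hour
    print(f"{name}: {downtime_hours:.1f} hours/year "
          f"-> ${annual_cost:,.0f}/year in lost revenue")
```

Run against the average per-hour figure, conventional availability works out to roughly 87.6 hours and over $12 million a year, while continuous availability shrinks that to minutes and a few thousand dollars.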

As mentioned before, downtime doesn’t have to occur; much of the reason it does is that everyone simply assumes servers will go down at some point and applications will follow suit. To combat downtime, you have to approach the challenge from the point of view of what it takes to keep applications available all the time. That means combining smart technology, such as fault-tolerant, high-availability servers that deliver 99.9999% uptime, with smart management practices, including proactive monitoring of all mission-critical systems so that potential outages are detected before they occur. This holds true across both the data center and cloud infrastructure.
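As one illustration of what proactive monitoring can look like, the sketch below polls a hypothetical health endpoint and raises an alert when responses slow down or fail repeatedly, before users experience a full outage. The URL, thresholds and alert hook are assumptions for the example, not features of any particular product.

```python
import time
import urllib.request

# Hypothetical health endpoint and thresholds -- tune for your environment.
HEALTH_URL = "http://app.example.com/health"
LATENCY_WARN_SECONDS = 2.0        # warn before users notice slow responses
CONSECUTIVE_FAILURES_ALERT = 3    # alert after this many failed checks in a row


def check_once(url, timeout=5.0):
    """Return response time in seconds, or None if the check failed."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            response.read()
        return time.monotonic() - start
    except OSError:
        return None


def alert(message):
    # Placeholder: in practice this would page an operator or open a ticket.
    print(f"ALERT: {message}")


def monitor(url, interval_seconds=30.0):
    failures = 0
    while True:
        latency = check_once(url)
        if latency is None:
            failures += 1
            if failures >= CONSECUTIVE_FAILURES_ALERT:
                alert(f"{url} unreachable {failures} times in a row")
        else:
            failures = 0
            if latency > LATENCY_WARN_SECONDS:
                alert(f"{url} responding slowly ({latency:.1f}s)")
        time.sleep(interval_seconds)


if __name__ == "__main__":
    monitor(HEALTH_URL)
```

The point of the sketch is the practice, not the code: watch leading indicators such as latency and intermittent errors, and act on them before they become the two-hour outages described above.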

As people continue to rely on applications for a myriad of purposes, businesses will need to make sure those applications are responsive and always available. Accepting any downtime is simply not an acceptable expectation.