• If you built it from ‘the ground up’ with no single point of failure – “You might be fault tolerant”

• If the tens of thousands of machines in production at customer sites are monitored daily, and you post an uptime of 99.9999% on your company home page – “You might be fault tolerant”

• If you understand that being fault tolerant is more than a piece of hardware or software, but an entire infrastructure including services – “You might be fault tolerant”

• If customers around the world have been trusting you with their most mission critical applications for almost 30 years – “You might be fault tolerant”

First, I want to go on record and apologize to Jeff Foxworthy for butchering his tag line, but I thought it was an interesting way to get a couple of points across.

Having been in this industry for going on 30 years now, I have seen many things “recycle”; sometimes the terminology changes, sometimes it stays the same. What I find interesting is how, when something ‘comes back around again’, in many ways it’s thought of as “new and innovative”. The first thing that comes to mind for me is virtualization. Over the last 18 to 24 months, this has been the hottest topic in the IT industry, and for good reason. But in reality it’s not new. Virtualization was new in the late 1960s, when IBM put it on the System 360.

Most recently, another 35-year-old technology is making a comeback. OK, let me rephrase that: it never really went away; it sort of went to the outer rings of the radar screen. I’m talking about fault tolerance. Fault tolerant machines began to make their mark on the IT industry in the late 1970s. These machines were large, proprietary and very expensive, but then again, so was every other computer of the late ’70s! Check out this commentary on how and why fault tolerance is back, by Director of Product Management Denny Lane – Rediscovering FT