As virtualization gains more and more ground in IT centers around the world, “old bit heads” like me know that this is nothing new. Heck, virtualization has been around since the late 1960s; the big difference is that in the old glass-house days there were lots and lots of people to support those installations, and the vendors “owned” everything, soup to nuts.

The hardware, the OS, the applications, the services and the virtualization layer all came from one source. That made life easy because: A) it was all designed to work together, and B) if it didn’t, you had one throat to choke.

Things are very different today. The hardware comes from one vendor (or a few, in many cases), most IT centers run multiple operating systems, the applications come from many vendors, and the virtualization software from yet another. As virtualization has taken hold in the industry-standard world, it has gone through a few phases. Five years ago you might find a few machines running VMs in test and development. VMs are easy to deploy, and if you’re testing something that’s going to crash, it’s nice not to have to bring down the whole machine. As the toehold got larger, VMs became a great way to consolidate: taking those 12 or 15 older x86 boxes and putting those print and file server apps on a single new x64 machine. Far less footprint, much less power and cooling needed, and easier to manage and maintain than multiple boxes.

Now we enter the next phase. It’s pretty obvious virtualization is here to stay and the ROI is clear, so the next step is virtualizing those production applications, you know, the ones that actually run your business.

Here’s where things can get a bit sticky. The benefits of consolidating and running VMs are clear. But the fact that multiple OSs and applications, the applications you can’t afford to lose, are all running on one piece of hardware, that can be an issue. Losing one of those applications for a short period might have a small impact, and when it was on its own server you could deal with a short outage or a restart/reboot. But losing 8, 10 or more of those production apps all at once, even for only a short stretch, that’s a whole different issue.

So what if there were a way you could rely on your server, be assured it won’t fail and won’t have to restart, and where you wouldn’t lose the data in memory or what was being written out to disk? Let’s take it a step further: what if the machine “told you” it was having issues, what if it were preemptive and proactive in its diagnoses? What if, even when there was an issue, even a hardware failure, the server kept on chugging along, users were completely unaffected, and repairs could take place without the server ever going down? Impossible? Well, this is what Stratus Technologies has been doing for over 31 years. But don’t take our word for it.

When Pinellas County wanted to consolidate some of their most mission-critical applications, applications responsible for delivering 71 million gallons of clean water to over a million residents and another 4.2 million visitors, and treating more than 30 million gallons of wastewater, every day, they came to Stratus to run their VMware environment. Having been a longtime Stratus customer, they found the choice an easy one. Ken Osborne, SCADA Supervisor, said it best: “Our operation has relied on Stratus systems for about ten years with no unscheduled downtime caused by a server failure. Keeping the water on is a public health and safety issue. We can’t tolerate any downtime. Replacing clustered failover servers with ftServers saved us a lot of money and simplified the entire operation. I’ve never looked back.”

To see the full customer case study, click here.
