When it comes to protecting critical production applications, most manufacturers are not taking advantage of proactive high availability solutions to prevent downtime. The majority of businesses choose reactive backup solutions, leaving applications unprotected from downtime, which can cause significant damage to a manufacturer’s reputation and bottom line. In partnership with IndustryWeek, Stratus conducted a survey of manufacturers to determine what type of solution or high availability strategy they are using to protect against downtime.
Findings from the survey include:
“As for all businesses, maintaining continuous server uptime is critical for manufacturers. Even a few minutes of downtime can result in significant financial loss, and Aberdeen Group estimates that one hour of downtime costs businesses an average of $110,000,” said Dave LeClair, director of product management and marketing, Stratus. “From this survey, it is obvious that businesses need to be doing more than anticipating failure – they need a high availability plan in place that proactively prevents downtime. Otherwise, businesses put themselves at risk of the short-term and long-term damage that any type of downtime can cause.”
More than 500 IndustryWeek readers responded to the “Manufacturer IT Applications Survey,” representing a broad range of company sizes and products produced. The magazine tabulated results by annual revenue categories – less than $100 million, $100-$999 million and above $1 billion – and by the average of all respondents.
The full survey results were presented during a webinar hosted by IndustryWeek on May 31, 2012. Featured speakers included NetSuite’s GM of Manufacturing/Wholesale & Distribution, Roman Bukary, and Stratus’ Director of Global Alliances, Peter Cook, who offered insights into what manufacturers are currently experiencing with regard to downtime, as well as some best practices to prevent it. You can read about additional survey results on virtualization and downtime.
Downtime for manufacturing applications is getting costlier and costlier. Efficiency improvements in an increasingly competitive landscape center on resource consolidation, information technology, and automation. Manufacturers are deploying more business-critical applications on the production floor to increase and optimize product quality and output, without sacrificing the ability to respond to changing raw material quality, market conditions, and customer demands.
These changes are good because they allow manufacturers access to better information on improvements, infrastructure costs, and resource availability. They also allow some manufacturers to run continuously.
However, the consolidation of computer resources and server virtualization pushes more and more applications onto fewer pieces of infrastructure, creating a single point of failure for the plant. Even a minor loss of uptime can be catastrophic for productivity, and in some cases, entire batches of products are ruined.
Manufacturers already face plenty of pressures, including intense global competition, government regulations, and a shortage of skilled workers. The last thing they need is a breakdown of processes due to a faulted server.
ARC Advisory Group recently conducted a survey on application downtime specific to manufacturing. Their webcast, “Application downtime, your productivity killer,” discusses the critical nature of downtime and how best in class manufacturing organizations are addressing this issue.
Watch the webcast, “Application downtime, your productivity killer,” to hear John Blanchard, a principal analyst at ARC Advisory Group, explain how manufacturing trends are making uptime assurance more important than ever, and how to protect your own plant from the consequences of downtime.
Recently, pharmaceutical giant Pfizer recalled over a million birth control pills due to packaging and visual inspection errors. The media coverage of the incident was crucial to getting the word out about the possibly-erroneous packs, but it also serves to draw attention to the costly fragility of pharmaceutical processes.
In the time it took to package just 30 birth control packs, Pfizer created a reputation disaster large enough to shake public confidence in the company’s products. Due to the nature of the industry, reputation is critical: patients with heart conditions or blood disorders won’t trust a company to put together the complex chemical configurations behind life-saving drugs if it can’t even count out 28 pills without a problem.
This particular accident wasn’t technology-related, but you can see how a similarly small fault in a pharmaceutical manufacturing server could prove disastrous. Downtime for traditional high-availability solutions – including Microsoft clusters that fail over to another machine but completely lose the data from whatever processes were running at the time of the fault – averages around 8 hours and 46 minutes annually.
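That 8-hours-and-46-minutes figure corresponds to roughly 99.9% (“three nines”) availability. As a quick back-of-the-envelope check (assuming a 365-day year; the function name here is illustrative, not from any particular tool):

```python
# Convert an availability percentage into expected annual downtime.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours; ignores leap years

def annual_downtime_hours(availability_pct):
    """Hours of downtime per year at a given availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

hours = annual_downtime_hours(99.9)
h = int(hours)
m = round((hours - h) * 60)
print(f"99.9% availability -> ~{h}h {m}m of downtime per year")
```

At 99.9% availability this works out to about 8.76 hours, i.e. 8 hours and 46 minutes, per year.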
This raises the question: how many birth control pills do you think Pfizer packages in a full workday? That is the number of pills that could be affected under a “traditional” high availability solution.
Understanding your plant’s annual downtime and its total cost is critical to finding a solution that best fits your needs. The goal should be to minimize downtime by implementing high availability technologies that work with your existing applications. The simplicity of adaptable high availability solutions adds value without increasing your servers’ total cost of ownership or the IT headaches that accompany managing new technologies and machines. Protect your company, products, and brand reputation simply and effectively by doing the research upfront and implementing tools before a fault breaks down plant production. To learn how to calculate the cost of downtime, read our whitepaper or watch the webinar.
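The basic arithmetic behind a downtime-cost estimate can be sketched as follows. This is a simplified illustration, not the whitepaper’s method: it multiplies annual downtime hours by an hourly impact figure, using the Aberdeen Group’s $110,000-per-hour average quoted earlier as an example input (your own hourly figure will differ):

```python
def annual_downtime_cost(downtime_hours_per_year, cost_per_hour):
    """Estimated yearly cost of downtime: hours lost times hourly impact."""
    return downtime_hours_per_year * cost_per_hour

# Example: ~8.76 hours/year (99.9% availability) at the $110,000/hour average
cost = annual_downtime_cost(8.76, 110_000)
print(f"Estimated annual downtime cost: ${cost:,.0f}")
```

Even at “three nines” availability, the cited average implies nearly a million dollars of exposure per year, which is why comparing that figure against the cost of a high availability solution is the core of the calculation.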