In our 35-plus years of providing continuous availability solutions for enterprises, we’ve seen only a handful of technology shifts that you could call “seismic.” The globalization of eCommerce was a big one that was transformational for mission critical infrastructures. At Stratus, we believe that the next big transformation – the Industrial Internet of Things (IIoT) – has the potential to be even more seismic.
Over the past year or so we have become more and more engaged in pilots and solutions for IIoT. We see this as a major shift in how industries such as utilities, food and beverage, pharma and others will deliver products to market and become more efficient. On some level, IIoT is on the minds of just about everyone working in these and other industries today. But there's a lot of information out there to digest, and the technology is still very much in its infancy.
Based on our experiences to date, here are our predictions for how IIoT will shape up in 2017.
1. Companies will need to get educated about IIoT
IT vendors have started to create awareness of the value of IIoT. This has helped users begin to run pilot projects and experience the benefits of IIoT first hand. In response, more companies will need to get educated. This will lead to frank assessments of infrastructure, which will often reveal that most operations lack the secure connectivity, virtualization, and reliability needed for IIoT. Addressing those infrastructure weaknesses will be a top agenda item for 2017.
2. Small wins will yield bigger investments in IIoT
We expect many companies to start their IIoT journey by pursuing short-term projects that have big efficiency or cost impact. For example, one of our customers has been doing some cool work piloting analytics technology to diagnose and troubleshoot issues in a production line. This produced valuable intelligence that has made the production line more efficient. Small victories such as these will become more common and will give industrial automation decision makers more confidence to support bolder investments.
3. Reliability and security are the hot buttons for companies considering IIoT
Success with IIoT requires starting with an infrastructure you trust, so 2017 will be the year to lock down reliability and security. Then you can focus solely on identifying those IIoT investments that will return powerful benefits to your business. In manufacturing and energy, the primary benefits will be efficiency and productivity gains. Building security and management teams should look for cost savings by shrinking the technology footprint. Financial services can expect improved performance, data integrity and business agility.
At Stratus, we believe the potential impact of IIoT on an array of industries is huge. A number of early adopters have already proved this. However, the best approach is to start with a reliable infrastructure to support your IIoT vision. If you agree, keep in mind that Stratus has been driving availability in the Industrial Automation market for decades. In fact, our first customer was in the food and beverage industry. But beyond just our zero-downtime advantage, industrial customers really value how Stratus solutions dramatically simplify their unique operational requirements.
The state of the art in building automation and security is evolving with incredible speed. But one thing is certain: Construction companies and building owners will become increasingly reliant on digital systems to keep their buildings safe, secure, comfortable and energy efficient. Focusing on the issue of fault tolerance right from the blueprint stage of any new construction or major renovation project is becoming increasingly important.
But how do you design an approach that rationalizes the infrastructure and management of all this disparate technology coming from numerous vendors in a streamlined, consistent way? When developing an availability approach, consider the following key questions.
1. Is it simple?
Automated building systems may be expanding, but building management budgets are not. An availability solution should be easy to deploy without any specialized development skills. And it should be easy to manage and easy to service in the event of a failure. Avoiding large, multi-component systems in favor of an all-in-one “appliance-like” solution reduces complexity, as well as physical footprint.
In addition, the availability solution should provide a single, end-to-end view of the entire building automation and security infrastructure. This simplifies management and makes it easier for building staff to proactively identify potential issues before they become problems.
2. Does it leverage industry standards?
Sophisticated building automation and security systems may involve literally dozens of applications from an array of vendors in a virtualized environment. This requires an underlying availability infrastructure that is based on industry standards, with the flexibility to support a wide range of applications and vendors. Standards-based solutions also allow the use of lower-cost off-the-shelf servers, further reducing total cost of ownership.
3. Is it optimized for smart building deployment?
A solution with little or no track record in building automation and security may not deliver on its promises. Look for technologies from vendors that have experience in the field and deep relationships with building automation and security application vendors. That’s a good sign of an ecosystem that’s been proven in many different building deployments. Don’t be shy—ask about their experience and connections within the industry.
You can read more about the growing need for fault tolerance as buildings become more automated in my article published recently in Construction Executive.
There’s so much happening today with the Industrial Internet of Things (IIoT), it’s important to understand where Stratus fits. For one, we’re proven as we’ve played a role in supporting mission-critical industrial automation for decades. This includes supervisory control and data acquisition (SCADA), human machine interface (HMI), and historian database solutions. We agree with many industry analysts like ARC who see the evolution of these technologies naturally supporting the adoption of IIoT.
IIoT offers exciting opportunities for improving efficiency and productivity. But there are many components to consider beyond machine-integrated sensors. Networking and communications, data collection and analytics, automated controls, and decision support are the connective tissue of IIoT. Stratus is an important part of this big picture because our hardware and software protects this connective tissue in thousands of facilities today. We believe that our existing deployments will help those customers deploy IIoT more quickly. However, the benefits provided by an Always-On infrastructure in an IIoT environment go beyond preventing unplanned downtime.
For starters, the evolution toward IIoT enables industrial automation technologies to be deployed into new industries and places. For instance, in many process industries, endpoints and stations, say along an oil pipeline, have traditionally needed to be manned. New technologies enable more and more of these remote sites to be remotely monitored and run completely unmanned. But this remote visibility comes with a price. If the system that provides remote monitoring goes down, nobody knows what is happening. In the natural gas industry, this is called a “blind moment” and it’s a BIG deal. This situation is not limited to oil and gas pipelines. As factories in the semiconductor and other industries get larger and more automated, the goal is to get better productivity with fewer people. Always-On visibility will be a requirement to achieve that goal.
Additionally, compliance comes into play. While data generated by IIoT is critical for production efficiency and productivity, in some industries this proliferation of data will require oversight and reporting. A good example is the food and beverage industry. If you’re subject to regulations, you can’t afford to lose data, as doing so could result in expensive recalls, audits or even fines. If your solution runs on hardware or software infrastructure from Stratus, data availability and integrity won’t be a concern because our servers are always on.
Lastly, the transition to IIoT will come with implementation costs. Many organizations are taking their first steps toward IIoT by deploying virtualization to reduce costs. However, the combination of Always-On requirements with virtualization in a non-data center environment can actually add costs and complexity. Stratus builds fault tolerance, virtualization, monitoring, and downtime prevention features into a single solution. That gives you a smaller technology footprint that doesn’t require the platoon of people that many clustered environments do.
Ultimately, this means that Stratus can provide an easy on-ramp to a fortified IIoT solution.
When it comes to the Industrial Internet of Things (IIoT), there is a general feeling that operations technology (OT) and information technology (IT) organizations are at odds. To a certain extent, that’s true.
In the data center, IT is largely concerned with reducing costs through consolidation and standardization. On the production line, OT also wants to decrease costs but strives to keep productivity as high as possible to drive revenue.
IT also is generally more comfortable with change. Servers and software are updated all the time. New, more cost-effective IT solutions seem to emerge daily. For OT, stability and reliability are most important. The “If it ain’t broke, don’t fix it” thinking often dominates. As a result, OT wants systems that chug along for 10, 20, 30 years or more.
Examples of these differences in thinking are quite common. For instance, one practice in the gas delivery industry is to maintain proprietary networks to connect pumping stations more securely. For OT, this is brilliant because it protects the network from hackers. But IT sees solutions like these as costly and antiquated, and believes they should be replaced with less expensive, standards-based equipment secured with software. Both approaches achieve network security; they just come at it from different angles.
Now let’s look at IIoT. Here is an opportunity for OT and IT to overcome their differences and achieve common goals.
Things like predictive maintenance, enabled by IIoT, help OT run production lines more efficiently and with less unplanned downtime. This is certainly good for productivity and revenue. But predictive maintenance requires continuously monitoring and analyzing key system data. Enter IT.
IT staff are experts at installing the software, servers, and networking needed to deliver predictive maintenance analytics. But they also must work with OT to understand the parameters to be measured and the key performance indicators that will drive operations and maintenance decisions.
For OT, the IIoT solution must be absolutely reliable and available 24×7. IT will want the solution to be efficient, secure, and cost-effective. What they both want is simply the right tool for the right job. The good news is there are solutions available today that accomplish the goals of both OT and IT.
For example, fault-tolerant servers provide stability and longevity required by OT, and they meet IT’s need for standardization, security, and ease of management. These perceived differences between OT and IT are simply alternative approaches to reaching a common goal, which is to streamline business operations through IIoT. And everybody wins. OT drives increased productivity and revenue while IT keeps costs in check.
The buildings we sit in or public spaces we visit (like airports) today are getting smarter all the time. A simple case in point is the lights that automatically turn on when you enter your office. A more advanced example is when your badge reader is tied to your company’s HR database and provides secure access to a room. A future example is when you can access a room with your badge (or phone) and that room’s lighting and climate is automatically set to meet your preferences. This future is real and a lot of technology is beginning to converge to usher it in. These advancements are all very exciting, but for those directly involved in creating smarter buildings, we should not underestimate the complexity involved. Here are some key considerations when charting your course towards a smarter building.
- Plan to consolidate your building technology – Right now, every different building control (heating, power monitoring, video, access control) runs on a separate application, which is likely deployed on its own servers. This leads to a heavy footprint that is hard to manage and is likely costing you too much money. So, often the first step towards a smarter building is to virtualize your building’s software infrastructure. Stratus and our partners can provide you with the reliable foundation required for this with our recently announced Stratus Always-On Infrastructure for Smart Buildings.
- Take a close look at your needs for availability and fault tolerance – Once you have consolidated your solutions, you’ll invariably be forced to decide how and where to virtualize these applications. The easy answer is to just add the VMs into your existing data center. That’s a pretty good idea if your needs for availability and compliance are fairly basic (say, in an office campus). But if you have critical areas to serve (such as access controls into a clinical environment or runway lighting controls at an airport) where no amount of downtime is acceptable, you may need a specialized solution deployed on site that ensures failures of service won’t happen. And remember: the more applications or building services you consolidate onto an infrastructure, the more likely it is to need fault tolerance.
- Understand that the smart building infrastructure is pervasive and expanding – The internet of things is enabling the deployment of cheaper devices to help build smarter buildings. However, all of those devices need some degree of monitoring and visibility. This is why we have built everRun® Monitor powered by Sightline Assure® into our Always-On Infrastructure for Smart Buildings. It goes beyond standard server-based infrastructure and can monitor the entire gamut of smart building technology, giving building managers the insights they need to secure and operate their buildings more effectively.
- Get ready for analytics and compliance – A big part of the business case for smart buildings is that the new intelligence, driven by the data produced by endpoint devices (sensors, cameras, badge readers), will help reduce costs and/or make buildings more secure. The application of analytics to these new building services will deliver those efficiencies and improvements, provided that the data produced is consistent and available.
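To make the “single end-to-end view” idea concrete, here is a minimal sketch of rolling status from heterogeneous building devices into one health summary. The device names, types, and statuses are hypothetical values for illustration, not a real product API.

```python
# Minimal sketch: aggregate heterogeneous smart-building device statuses
# into one end-to-end health view. All device data here is hypothetical.

def building_health(devices):
    """Summarize device statuses so staff can spot issues proactively."""
    down = [d["id"] for d in devices if d["status"] != "ok"]
    return {"total": len(devices), "down": down, "healthy": not down}

devices = [
    {"id": "hvac-01",  "type": "hvac",         "status": "ok"},
    {"id": "cam-12",   "type": "camera",       "status": "ok"},
    {"id": "badge-03", "type": "badge-reader", "status": "fault"},
]
print(building_health(devices))
# → {'total': 3, 'down': ['badge-03'], 'healthy': False}
```

A real deployment would poll each device over its native protocol (BACnet, ONVIF, etc.) rather than reading a static list, but the aggregation step looks the same.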
Learn what you can do to eliminate downtime with Application Availability Solutions from Stratus.
The smart buildings of the future are both realistic and beneficial. There are a lot of cost efficiencies to be gained, as well as safer spaces for people to work and visit. However, like many things, it needs to start with a reliable technical foundation to build upon.
The Industrial Internet of Things (IIoT) holds huge rewards for manufacturing companies from consumer goods makers to petrochemical firms to utilities. Companies, large and small, already are crediting IIoT with hard cost savings and advances in operational efficiency and product quality. This blog will answer frequent questions about IIoT we get from our industrial customers that you also might have.
What Is IIoT Anyway?
Sensor data, machine-to-machine communication, and automation systems have existed in industrial environments for years. IIoT builds on these technologies and bakes smart devices, machine learning, big data, and analytics into the mix.
With additional data sources and better intelligence and analytics embedded into the supply chain, you can adjust your industrial processes in real time. From there, you can expect tangible progress toward improved operational efficiency, return on assets, and profitability. That is the heart and soul of IIoT.
My Production Line Is Working Fine. Why Would I Change Things?
One of the biggest drags on inventory and order flow is unplanned downtime. For example, downtime for a large turbine powering a production line can cost a company up to $10,000 an hour. To avoid outages, manufacturers take production systems offline for periodic maintenance—needed or not. Not only does this get costly, but even planned downtime is disruptive.
Alternatively, some manufacturers are using IIoT for predictive maintenance of factory line equipment. In these situations, a smart sensor attached to an assembly line motor monitors performance and reports on changes, such as temperature or vibration, which may signal failing parts. A proactive repair of the motor could avoid a complete failure and potentially weeks of downtime, costing millions of dollars in lost revenue. Or, it could shave seconds from the assembly line process and help the business fulfill orders and recognize revenue faster.
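The predictive-maintenance pattern described above can be sketched in a few lines: compare each sensor reading against an alarm threshold and flag the motor for proactive repair before it fails outright. This is a minimal illustration under assumed thresholds, not production monitoring code; the limit values and readings are hypothetical.

```python
# Minimal sketch of threshold-based predictive maintenance.
# Thresholds and sensor readings are hypothetical values for illustration.

VIBRATION_LIMIT_MM_S = 7.1   # assumed alarm level for vibration velocity
TEMPERATURE_LIMIT_C = 85.0   # assumed alarm level for winding temperature

def assess_motor(readings):
    """Flag a motor for proactive repair if any reading drifts past a limit."""
    alerts = []
    for sample in readings:
        if sample["vibration_mm_s"] > VIBRATION_LIMIT_MM_S:
            alerts.append(("vibration", sample["timestamp"]))
        if sample["temperature_c"] > TEMPERATURE_LIMIT_C:
            alerts.append(("temperature", sample["timestamp"]))
    return alerts

readings = [
    {"timestamp": "08:00", "vibration_mm_s": 2.3, "temperature_c": 61.0},
    {"timestamp": "09:00", "vibration_mm_s": 7.8, "temperature_c": 64.5},
]
print(assess_motor(readings))
# → [('vibration', '09:00')]
```

Real IIoT deployments typically go further, feeding the same stream into trend analysis or machine-learning models, but even a simple limit check like this can trigger a repair before a multi-week outage.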
Such improvements translate into a compelling competitive advantage since the firms embracing IIoT turn out products faster and at a lower cost. That alone is a viable reason to embrace IIoT.
I’m Ready. How Do I Get Started?
Before getting started you need to ask yourself if your infrastructure is ready for IIoT.
Our recommended first step is to look at virtualization technologies to reduce your infrastructure and maintenance costs. The work effort involved in securing virtualized environments is less intensive than with existing approaches, and virtualized environments are far easier to update and scale.
The good news is that by virtualizing you can continue running your existing automation systems to minimize your upfront investment. To ensure uninterrupted uptime, a fault-tolerant server that will keep other connected virtual servers running in the presence of a hardware problem is essential. Unlike clustered solutions, fault-tolerant systems are easier to manage and not subject to downtime when failover occurs.
Once you have your IIoT infrastructure in place, you can begin to enjoy the rewards of manufacturing processes that run faster, more cost-efficiently, and reliably than ever before.
The value of IIoT is in connecting systems for a more fluid flow of information. But some basic challenges are keeping industrial companies from adopting IIoT, all of which can be traced back to a fear of moving on from legacy systems. The most basic is just that: people know their legacy systems work, so why switch to something they’re unsure of? When companies set a high bar for the level of service they deliver, the risk of switching systems can often seem like it might not be worth the reward.

In the same vein, how exactly can we get industrial operators to really understand what that reward even is, in terms of cost savings? Efficiencies and cost savings are often touted as among the greatest benefits of IIoT, but there’s often a large gap between understanding that on a broad level and understanding where exactly those savings and efficiencies will be realized in an actual operation.

And finally, many operators can simply be intimidated by opening up their operation to enable this more fluid flow of information. A single major security breach, or a wrong decision based on incomplete data, can sink a company and have very real consequences. In industrial automation, these are high-stakes risks.
Follow us on Twitter to learn more at @StratusAlwaysOn.
At Stratus, our solutions are very often deployed at what we call “the Edge”. This is a very interesting place that is sometimes also referred to as the Remote Office / Branch Office, “ROBO” or even lately “the Fog”. We are often asked for our definition so here it is in three versions.
The Analyst Version
We largely align our vision and definition of the Edge with what IDC calls “Departmental” Deployment Locations: technology deployed outside the datacenter, where a datacenter is defined as a facility with 1,000 square feet or more of floor space. The datacenter itself can be either traditional or hyper-scale.
The Short Version
Our short version is that the Edge is a server deployed outside the datacenter where there is no dedicated IT staff to manage it.
The Long Version
The Edge is the place where technology is deployed without the standard trappings of a traditional datacenter in terms of human and technology resources. Often treated like “an orphan out in the wild,” computing at the Edge is giving rise to the internet of things (IoT) as well as localized data and telemetry services. At Stratus we believe there are three categories of Edge technologies. These categories are likely to co-exist with each other but can also stand alone depending upon the situation.
- Fixed Remote Edge – Fixed remote Edge is very common today and is a superset of ROBO. Fixed Remote Edge goes beyond the idea of end user computing and expands into new areas such as Industrial Automation (Manufacturing), Energy Production and Retail. Fixed Remote Edge solutions typically have some level of IT support (albeit not likely dedicated or specialized). A major trend in this area includes server consolidation via virtualization due to the proliferation of software defined automation technology.
- Operational Mobile Edge – Operational Mobile Edge differs from Fixed Remote in that it is often placed alone and somewhat autonomous in a wide range of environments. This technology is also often not permanently deployed and may move about. It’s generally remotely operated and support is often not immediately available – it may take days to get someone out there. Examples include wind and solar farms, container ships, airplanes and highway systems.
- Mobile End Points (aka Things) – When people think about the IoT they often default to the Things. This is common since many systems are defined by the parts that have the most user impact. The nature of these things is rapidly evolving and you can look to your home to see that thermostats, security systems and even your phones are all in this category. In the enterprise examples include next generation PLC equipment, supply chain tracking and cars with electronic toll payments.
So, the Edge is a very diverse place where reliability and serviceability are mandatory. This is why you will see Stratus hardware and software technologies deployed globally in all three categories of Edge applications.
I’d promised myself that I’d take a break from discussing cloud and NFV for a while, but against my better judgement, here’s one more…
For the past year or so we have been working on our new cloud product designed to solve the problem of resiliency for NFV-based networks. NFV is a daunting task, but the potential is great and it could transform the daily lives of everyone.
Without NFV, the operators really can’t move forward quickly with next generation networks. Without next generation networks, innovations in IoT and even ideas that seem so far out like driverless cars will be at best stymied and at worst impossible. So everyone is in agreement that we need NFV, but it’s a daunting task and it will be really hard to get there.
- The technology isn’t there yet – Today’s VNFs are not carrier grade and neither is the underlying NFV infrastructure (something Stratus and others are trying to fix)
- The vendor incentives aren’t there – If you are Cisco or Alcatel and see what just happened to EMC, you have to consider how fast you want to change your business model from appliances to software
- The operators are still getting ready for this change – There is a lot of legacy and history there, and changing people’s mindsets is probably harder than addressing the first two points above
This is why I’m glad to see AT&T acting like an industry titan and making change happen. They have been getting a lot of press lately, but this transformation doesn’t happen overnight. It has been a while in the making, and if you have been keeping an eye on OpenStack, OP-NFV and other NFV communities, it should be no surprise. So, what makes AT&T different?
- AT&T are working with those who have incentive to move slowly – They have a vision and a supporting program called Domain 2.0 which is an ecosystem of traditional and disruptive technology companies. They are running PoCs and testing new technologies now to see what works and what doesn’t. And in some instances, they are directly involved in the design of these technologies. Of course the incentive for the vendors involved is a crack at being part of the future vs the past.
- AT&T are defining the requirements in an open source way – The biggest roadblock to NFV adoption will be standards. The natural tendency for a VNF provider is to build everything into the VNF just like they did in the old days. Although that maximizes the vendor’s flexibility, it doesn’t help the operators who want to build as much into the infrastructure as possible to maximize their flexibility. This is why communities such as OP-NFV allow operators to define the infrastructure as a template that everyone can work towards. Who founded OP-NFV? Well it was AT&T.
- Lastly, AT&T are getting out there and being up front about what’s working and what’s not. All of the interviews, keynotes and public-facing activity are a means to communicate and demonstrate to other operators what is working. AT&T know NFV will not reach its full potential if other operators don’t buy into it. It’s not about competitive advantage. It’s about creating a next generation platform upon which competitive advantage will be built.
When I think about the enterprise and my experience in open source, it was always the industry titans that drove everyone forward. Open source middleware would not be what it is today without companies like Geico and NYSE. Linux would not be what it is today without the U.S. Government and a host of others, like Apple, who just announced they were adopting KVM in a big way. All were game changers, and someday, I think AT&T will be a game changer for NFV, to the benefit of everyone.
Managed cloud providers should be doing great. After all, they appear to offer an ideal middle ground between the costly prospect of enterprises building their own private clouds and the public cloud options that lack enterprise-class security, availability and manageability. In reality, however, managed private cloud providers are struggling to win that enterprise business.
What’s going on? I believe there are three key factors preventing managed cloud services from delivering on their business goals.
Factor 1: Flawed service strategy
To justify their cost premium over Amazon, Google and other “commodity” offerings, some managed cloud providers are going beyond infrastructure services, bundling in “value added” services. Sounds like a good idea. But every customer has unique needs, making it difficult to offer services everyone will want. Add a database? Great idea. Except that one customer might want PostgreSQL while another wants MySQL. There just is no single “killer app” that will justify a higher price for all customers.
Perhaps there’s a better way to add value. Instead of adding application services, perhaps managed cloud providers should be focusing on delivering more advanced infrastructure capabilities—like true, fault-tolerant availability or guaranteed SLAs. That would address the real-world needs of enterprise customers, while making a managed cloud offering really stand out from the pack.
Factor 2: Weak orchestration capabilities
Having the ability to orchestrate services is crucial for delivering an enterprise-class managed cloud service that is scalable and, therefore, profitable. Public clouds are built with strong orchestration capabilities to allow massive scaling. But private managed clouds are usually built with off-the-shelf solutions from a variety of vendors, making it difficult to achieve robust orchestration at scale. What can they do?
Managed cloud providers need to avail themselves of a new class of automation and management tools that work in concert with the available orchestration tools to deliver the functionality they need, at scale. This automation enables them to orchestrate resources dynamically, when and where they need it.
With dynamic orchestration, managed cloud providers can do amazing things. For example, they could deploy fault tolerance for an application only when it is needed. When fault-tolerance isn’t required, the application can be automatically moved back to a non-FT infrastructure, with no interruption in service. That saves money while delivering the availability customers need, when they need it.
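The on-demand fault-tolerance idea above can be expressed as a simple placement policy: put an application on FT infrastructure only during windows where it is critical, and move it back to cheaper standard infrastructure otherwise. The snippet below is a sketch of that decision logic; the function, policy names, and tier labels are invented for illustration and are not a real orchestration API.

```python
# Hypothetical sketch of policy-driven FT placement. The "ft-on-demand"
# policy and tier names are invented for illustration only.

def select_placement(app, window_is_critical):
    """Return the infrastructure tier an app should run on right now."""
    if app["policy"] == "ft-on-demand" and window_is_critical:
        return "fault-tolerant"   # e.g. during month-end batch processing
    return "standard"             # cheaper, non-FT infrastructure otherwise

app = {"name": "billing", "policy": "ft-on-demand"}
print(select_placement(app, window_is_critical=True))   # fault-tolerant
print(select_placement(app, window_is_critical=False))  # standard
```

In practice the orchestration layer would live-migrate the workload between tiers without interrupting service; the policy check shown here is just the trigger for that move.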
Factor 3: Costly proprietary systems
Many of these struggling managed cloud providers have infrastructures built on older, managed hosting business models. So they face the challenge of balancing newer cloud technologies with legacy business structures, tooling, processes, and skillsets that are both costly and inflexible. As the marketplace advances, the high cost of proprietary system licenses and support fees will become a huge competitive disadvantage.
Adopting open source cloud technologies like OpenStack, Linux and KVM (Kernel-based Virtual Machine) offers an escape from the “proprietary software tax,” while providing greater flexibility than traditional approaches. That’s just part of the solution. It’s also important to adopt solutions that leverage the flexibility of open source frameworks in order to maximize their efficiency, reliability and automation. As open source cloud frameworks continue to mature, these additional technologies enable managed service providers to deliver a premium, enterprise-class service today at a competitive cost.
Change is never easy. But for private managed cloud providers looking to grow their businesses by attracting and retaining enterprise customers, adopting new ways of thinking and working is essential. By focusing their service strategy around enterprise-class infrastructure services, strengthening their orchestration capabilities, and migrating away from costly proprietary infrastructure, managed service providers will finally position themselves to hit the “sweet spot” in cloud offerings for the enterprise.