Stratus Blog


NFV PoC#35 at SDN & OpenFlow World Congress in Dusseldorf, Germany

10.13.2015 | Cloud, Fault Tolerance, High Availability, NFV/SDN, Virtualization

This week I am in Dusseldorf, Germany, showing our ETSI PoC#35, titled "Availability Management with Stateful Fault Tolerance." This Proof of Concept demonstrates how virtualized network functions (VNFs) from multiple vendors can be easily deployed in a highly resilient software infrastructure environment that provides complete and seamless fault management to achieve fault tolerance: continuous availability with state protection (remembering the preceding events in a given sequence of interactions) in the event of a system fault or failure.

The results were compelling: for the first time, we have been able to prove a number of things:

  • OpenStack based VIM mechanisms alone are insufficient for supporting carrier grade availability objectives. Baseline functionality is only adequate for supporting development scenarios and non-resilient workloads.
  • All phases of the fault management cycle (fault detection, fault localization, fault isolation, fault recovery and fault repair) can be provided as infrastructure services, using a combination of NFVI and MANO level mechanisms to deploy VNFs with varying availability and latency requirements – all without any application (i.e. VNF) level support mechanisms (see the sketch after this list).
  • We also demonstrated that NFVI services can offer a sophisticated VM based state replication mechanism (CheckPointing and I/O StateStepping) to ensure globally consistent state for stateful applications, maintaining both high service accessibility and service availability without application awareness.
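To make that fault management cycle concrete, here is a minimal sketch of how an infrastructure-level service might walk a failed VM through all five phases without any help from the VNF itself. This is purely illustrative – the `infra` handle and every method on it are hypothetical placeholders, not the PoC code.

```python
def fault_management_cycle(vm, infra):
    """Drive all five phases as infrastructure services; the VNF
    running inside the VM is never modified or even consulted."""
    if infra.heartbeat_ok(vm):                  # fault detection: nothing wrong
        return
    host = infra.locate_fault(vm)               # fault localization
    infra.fence(host)                           # fault isolation
    infra.fail_over(vm, target=infra.spare())   # fault recovery
    infra.schedule_repair(host)                 # fault repair
```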

We believe that this is a major step forward in proving that the vision of a carrier grade cloud is viable and a software infrastructure solution is beneficial to both VNF providers and network operators/service providers.

  • For network operators/service providers, it enables the deployment of any KVM/OpenStack application with transparent and instantaneous fault tolerance for service accessibility and service continuity, without requiring code changes in the VNFs.
  • For VNF providers, it reduces the time, complexity and risk associated with adding high availability and resiliency to every VNF.

While there is still much more progress to be made, the very possibility that reliable carrier grade workloads can be maintained will help accelerate the adoption of NFV worldwide. If you’d like to see the details of our PoC, click here. Non-ETSI NFV members can download PDF versions of the PoC Proposal that describes the testing we performed, as well as the PoC Report that describes the findings and results of the testing. If you’d like to know more about the technology Stratus provides to enable these results, check out our Cloud Solution Brief and contact Ali_Kafel@Stratus.com for a white paper with more details.

Brief overview of the Stratus Fault Tolerant Cloud Infrastructure

The Stratus Fault Tolerant Cloud Infrastructure provides seamless fault management and automatic failover for all applications, without requiring code changes. Applications do not need to be modified to become redundant and resilient, because the software infrastructure enables every virtual machine (including its application) to automatically live on two virtual machines simultaneously – generally on two physical servers. If one VM fails, processing automatically switches to the other VM, and the application continues to run with no interruption or data loss.
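Conceptually, the CheckPointing replication mode mentioned above works along the lines of the sketch below: the primary VM's outputs are buffered until the secondary has absorbed the latest state, so a failover is never externally visible. This is a generic illustration of epoch-based VM checkpointing (in the spirit of approaches like Remus), not Stratus's actual implementation; all method names are placeholders.

```python
import time

CHECKPOINT_INTERVAL = 0.025   # seconds; real systems tune this dynamically

def replicate(primary, secondary):
    while primary.alive():
        primary.buffer_outbound_io()             # hold network/disk output
        primary.pause()
        secondary.apply(primary.dirty_pages())   # copy state changed this epoch
        primary.resume()
        primary.release_outbound_io()            # safe: secondary is now consistent
        time.sleep(CHECKPOINT_INTERVAL)
    secondary.take_over()                        # failover with no visible state loss
```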

Two Key Benefits

Reduce the time, complexity and risk of achieving instantaneous resiliency

  • Seamless and instantaneous fault management and continuous availability for any application, without code changes – includes fault detection, localization, isolation, recovery and repair

Flexibility to deploy multiple levels of availability to suit the application

  • Dynamically specify availability level at deployment time based on application type – for example some applications may require globally consistent state at all times, while others may only require an immediate and automatic restart
  • Enables mixed deployments of decomposed control plane elements (CE), which may be state-protected, and forwarding plane elements (FE), which may be stateless, leveraging DPDK and SR-IOV for higher performance and lower latency processing (a deployment sketch follows this list)
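As a sketch of what deployment-time availability selection could look like with stock OpenStack tooling, the snippet below tags each server with a desired level via instance metadata using openstacksdk. The `availability_level` key and its values are assumptions for illustration – not a documented Stratus or OpenStack interface – and the image, flavor and network IDs are elided placeholders.

```python
import openstack

conn = openstack.connect(cloud="mycloud")   # assumes a configured clouds.yaml entry

# A state-protected control-plane element and a stateless forwarding element
for name, level in [("vnf-control", "FT"), ("vnf-forwarding", "HA")]:
    conn.compute.create_server(
        name=name,
        image_id="...",                          # elided
        flavor_id="...",                         # elided
        networks=[{"uuid": "..."}],              # elided
        metadata={"availability_level": level},  # hypothetical key
    )
```

An availability manager watching that metadata could then place FT instances on checkpointed pairs and HA instances on restart-on-failure hosts.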

What and how we tested

  • The Stratus Fault Tolerant Cloud Infrastructure conforms to the blue elements in the ETSI NFV reference architecture below

[Figure: ETSI NFV reference architecture (NFV-MGT)]

We showed three configurations:

  1. Unprotected server – upon a system failure, the applications will go down until manually restarted
  2. Highly Available (HA) servers – stateless protection – upon a system failure, the service will go down for a short period but will automatically and immediately be restarted by the software infrastructure
  3. Fault Tolerant (FT) server – stateful protection – upon a system failure, the applications will continue to run without any interruption or loss of state, because the software infrastructure will perform all fault management, state protection (on another server) and automatic failover

One of the VNFs deployed was the Cobham Wireless TeraVM virtualized IP tester, which generated and measured traffic. In this case, the traffic we showed was a streaming video, because a failure is easy to see.

The TeraVM is a fully virtualized IP test and measurement solution that can emulate and measure millions of unique application flows. TeraVM provides comprehensive measurement and performance analysis on each and every application flow, with the ability to easily pinpoint and isolate problem flows.

[Figure: PoC test configuration (External-Openstack)]

While video traffic was streaming through the system (which includes the firewall and QoS servers) and visible on each of the three laptops, we simulated a failure for each of the three sets of systems. As expected, the video stream coming from the unprotected server stopped and never recovered. The HA system stopped and restarted after a few seconds. As for the FT system, it continued without any loss of traffic!
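If you'd like to reproduce the spirit of this demo, a small probe like the one below makes the three behaviors measurable rather than just visible: it polls the stream endpoint and logs how long it stays unreachable. The URL is a placeholder for whatever the VNF chain is serving.

```python
import time
import urllib.request

STREAM_URL = "http://192.0.2.10/stream"    # placeholder endpoint
outage_started = None

while True:
    try:
        urllib.request.urlopen(STREAM_URL, timeout=1).read(1024)
        if outage_started is not None:
            print(f"recovered after {time.time() - outage_started:.1f}s")
            outage_started = None
    except OSError:                         # covers URLError and timeouts
        if outage_started is None:
            outage_started = time.time()
            print("stream down")
    time.sleep(0.5)
```

Against the three configurations you would expect a "stream down" with no recovery for the unprotected server, a recovery after a few seconds for HA, and no output at all for FT.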

Why private managed cloud providers are stalled. And what they can do about it.

9.9.2015 | Cloud, High Availability

Managed cloud providers should be doing great. After all, they appear to offer an ideal middle ground between the costly prospect of enterprises building their own private clouds and the public cloud options that lack enterprise-class security, availability and manageability. In reality, however, managed private cloud providers are struggling to win that enterprise business.

What’s going on? I believe there are three key factors preventing managed cloud services from delivering on their business goals.

Factor 1: Flawed service strategy

To justify their cost premium over Amazon, Google and other “commodity” offerings, some managed cloud providers are going beyond infrastructure services, bundling in “value added” services. Sounds like a good idea. But every customer has unique needs, making it difficult to offer services everyone will want. Add a database? Great idea. Except that one customer might want PostgreSQL while another wants MySQL. There just is no single “killer app” that will justify a higher price for all customers.

Perhaps there’s a better way to add value. Instead of adding application services, perhaps managed cloud providers should be focusing on delivering more advanced infrastructure capabilities—like true, fault-tolerant availability or guaranteed SLAs. That would address the real-world needs of enterprise customers, while making a managed cloud offering really stand out from the pack.

Factor 2: Weak orchestration capabilities

Having the ability to orchestrate services is crucial for delivering an enterprise-class managed cloud service that is scalable and, therefore, profitable. Public clouds are built with strong orchestration capabilities to allow massive scaling. But private managed clouds are usually built with off-the-shelf solutions from a variety of vendors, making it difficult to achieve robust orchestration at scale. What can they do?

Managed cloud providers need to avail themselves of a new class of automation and management tools that work in concert with the available orchestration tools to deliver the functionality they need, at scale. This automation enables them to orchestrate resources dynamically, when and where they need it.

With dynamic orchestration, managed cloud providers can do amazing things. For example, they could deploy fault tolerance for an application only when it is needed. When fault-tolerance isn’t required, the application can be automatically moved back to a non-FT infrastructure, with no interruption in service. That saves money while delivering the availability customers need, when they need it.
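Here is a sketch of that reconcile-on-demand logic. The `cloud` handle and its methods are hypothetical orchestration hooks, not a specific product's API; the point is that moving between tiers is a policy decision executed by live migration, not a redeployment.

```python
def reconcile(cloud, apps):
    """Keep each app on the cheapest infrastructure that meets its
    current availability needs."""
    for app in apps:
        target = "FT" if cloud.needs_ft(app) else "standard"  # policy signal
        if app.tier != target:
            # live migration in either direction: no service interruption
            cloud.live_migrate(app.vm, host_aggregate=target)
            app.tier = target
```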

Factor 3: Costly proprietary systems

Many of these struggling managed cloud providers have infrastructures built on older, managed hosting business models. So they face the challenge of balancing newer cloud technologies with legacy business structures, tooling, processes, and skillsets that are both costly and inflexible. As the marketplace advances, the high cost of proprietary system licenses and support fees will become a huge competitive disadvantage.

Adopting open source cloud technologies like OpenStack, Linux and KVM (Kernel-based Virtual Machine) offers an escape from the “proprietary software tax,” while providing greater flexibility than traditional approaches. That’s just part of the solution. It’s also important to adopt solutions that leverage the flexibility of open source frameworks in order to maximize their efficiency, reliability and automation. As open source cloud frameworks continue to mature, these additional technologies enable managed service providers to deliver a premium, enterprise-class service today at a competitive cost.

Change is never easy. But for private managed cloud providers looking to grow their businesses by attracting and retaining enterprise customers, adopting new ways of thinking and working is essential. By focusing their service strategy around enterprise-class infrastructure services, strengthening their orchestration capabilities, and migrating away from costly proprietary infrastructure, managed service providers will finally position themselves to hit the “sweet spot” in cloud offerings for the enterprise.

The Consumerization of the Data Center – Thoughts on VMworld – Day 2

9.2.2015 | Cloud, Data Center, VMware

After taking in some EVO on Monday, let’s change gears to cloud, which is another big reason I came out to VMworld this week. And while we at Stratus have been heavily focused on delivering resiliency to OpenStack workloads, it’s always good to check in on what VMware is doing, especially because it’s pretty easy to add one of our VMware-based ftServer systems into a VMware cloud. Again, keeping with the theme of evolutionary technology innovation to support a robust strategy, you have to admit that VMware has a very robust cloud story. That said, here are some thoughts that once again reinforce that VMware is consumerizing the data center.

  1. VMware’s offering is a very complete hybrid cloud solution. Yes, it all hangs together and you can build a very complete cloud, but it requires all VMware products. There is not much of an ecosystem or choice in place. But if you are OK with a single-vendor solution, it’s worth a look.
  2. I was especially impressed with the vRealize automation capabilities. It’s a great toolset for making sense of and simplifying all of the pieces you would use. Of course, it’s not perfect but compared with a lot of other cloud managers and orchestrators it’s very good.
  3. I can see a future where HA operations span on-prem and cloud. There has been a lot of emphasis on extending HA outside of the rack or server, and if you leverage vCloud, many of the technical limitations (with the possible exception of latency to the cloud) can be overcome.

So, even though I am an open source believer (and ex-Red Hatter), I have to admit VMware is the first cloud I have seen that is ready for mainstream IT. But remember that this comes with the usual gotchas of a consumerized approach. You will give up some flexibility, since VMware’s cloud does not have much of an ecosystem. It’s also expensive. Lastly, it’s a cloud, so all statements about simplicity are relative to other clouds, not to a rack full of virtualized servers.

Calculating Your Cloud ROI

8.27.2015 | Cloud, NFV/SDN, Virtualization

So you’re sold on the advantages of cloud services—the flexibility, agility and “always on” business models they enable. But what’s the return on investment?

The fact is, calculating ROI on cloud services is challenging. The abstract nature of the cloud doesn’t lend itself to a simple matter of addition or subtraction. It’s not like the business case for virtualization, where you simply add up all the servers you didn’t have to buy, operate and maintain.

On the investment side of the ledger, there are costs associated with moving to the cloud, especially if you decide to build your own. The good news is that you can mitigate these costs by using open source technologies like OpenStack and KVM (kernel-based virtual machine) to eliminate the expense of software licenses. Or you can dramatically reduce up-front costs by going with a subscription cloud model, paying as you go.

Adding Up the Returns

So what are the potential returns from moving to the cloud?

For starters, productivity in the cloud is tremendous. It’s much easier and faster to build and deploy apps in the cloud. Deploying more apps in less time means you can either clear your development backlog faster or reduce your overall development costs.

The cloud also improves automation, leading to even greater density of your virtualized environment. The current global density of server virtualization is around 9x or 10x. The additional automation delivered by cloud services could take that density to 12x or 13x or even more. For larger environments, the financial impact of this could be significant.
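A quick back-of-the-envelope calculation shows why that extra density matters; the workload count and per-server cost below are illustrative assumptions, not industry figures.

```python
import math

workloads = 1200            # assumed VM count for a larger environment
cost_per_server = 8000      # assumed loaded cost per physical server (USD)

for density in (10, 13):    # today's ~10x vs. a cloud-automated ~13x
    servers = math.ceil(workloads / density)
    print(f"{density}x -> {servers} servers, ${servers * cost_per_server:,}")

# 10x -> 120 servers ($960,000); 13x -> 93 servers ($744,000):
# 27 fewer servers (~22%) from the added automation alone.
```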

The advantages go beyond computing. The next big cloud opportunities are in networking and storage—two areas dominated by systems built on proprietary hardware. Technologies like software-defined networking (SDN) and network function virtualization (NFV) let you run enterprise-class (and telco-grade) workloads on low-cost commodity hardware. That is a real game changer. While SDN and NFV are not, strictly speaking, “cloud” technologies, they are often employed as part of a cloud migration strategy—leading to some compelling financial benefits.

The Value of Competitive Advantage

But reducing costs is just part of the equation. Perhaps the greatest potential return lies in what the cloud enables your business to do that it couldn’t do before.

The cloud’s agility and productivity allows you to respond faster to market opportunities. You can create new services and business models and extend your reach to attract and retain customers more effectively—and more cost-effectively, because of the cloud’s ability to deliver services at scale.

Say your enterprise delivers a service at different levels—Bronze, Silver and Gold—reflecting the added cost of delivering the higher-level service. What if the cloud allowed you to offer all of your customers a higher level of service at a lower cost? What impact could that have on your business?

Cloud’s “Killer App”

Perhaps the final entry in the “return” column has to do with the cloud’s potential as an enabling technology. Every IT paradigm shift has had its “killer app”—for the cloud, I believe it is Big Data. Imagine you’re a retailer and you want to make sure the right product is on the right shelf at the right time. The ability to deploy Big Data analytics on cloud platforms is the key to solving those kinds of complicated problems, driving real business advantage and ROI.

Moving to the cloud requires a different approach to calculating return on investment. For enterprises focusing only on short-term costs and traditional metrics, deploying cloud apps may or may not add up. But for organizations that value things like business agility, development productivity, customer retention, and market leadership, the business case becomes far more compelling.

Carrier Grade Clouds

8.5.2015 | Cloud, Fault Tolerance, NFV/SDN, Telco

When someone in the technology industry throws around the term “carrier grade” it suggests the highest bar when it comes to reliability, availability and resiliency. A carrier grade network is such a high bar that it’s actually enforced by laws in some countries. So, it’s more than interesting that telcos are looking to provide network functions virtualization (NFV) in OpenStack based clouds. This means these telcos envision a future where the cloud itself will be carrier grade and ready for the job done by physical equipment today. This is still in the early semi-visionary stage but there is continued progress on many fronts.

ETSI (the European Telecommunications Standards Institute) is a telecommunication industry standards body at the forefront of defining and standardizing this vision of carrier grade clouds. Members of this body nominate various Proof of Concept (PoC) demonstrations to investigate, conceptualize and ultimately provide standards for achieving that vision. In May, Stratus, along with our partners, embarked upon a PoC to address what may be the single biggest barrier to making a carrier grade cloud a reality. And today we are happy to announce the results of our PoC.

Our PoC, titled "Availability Management with Stateful Fault Tolerance," demonstrates how virtualized network functions (VNFs) from multiple vendors can be easily deployed in a highly resilient software infrastructure environment that provides complete and seamless fault management to achieve high availability: the VNFs keep running and keep state (by remembering the preceding events in a given sequence of interactions with a user) in the event of a system fault or failure.

The results were compelling: for the first time, we have been able to prove a number of things:

  • OpenStack based VIM mechanisms alone are insufficient for supporting carrier grade availability objectives. Baseline functionality is only adequate for supporting development scenarios and non-resilient workloads.
  • All phases of the fault management cycle (fault detection, fault localization, fault isolation, fault recovery and fault repair) can be provided as infrastructure services, using a combination of NFVI and MANO level mechanisms to deploy VNFs with varying availability and latency requirements – all without any application (i.e. VNF) level support mechanisms.
  • We also demonstrated that NFVI services can offer a sophisticated VM based state replication mechanism (CheckPointing and I/O StateStepping) to ensure globally consistent state for stateful applications, maintaining both high service accessibility and service availability without application awareness.

We believe that this is a major step forward in proving that the vision of a carrier grade cloud is viable and a software infrastructure solution is beneficial to both VNF providers and network operators/service providers.

  • For network operators/service providers, it enables the deployment of any KVM/OpenStack application with transparent and instantaneous fault tolerance for service accessibility and service continuity, without requiring code changes in the VNFs.
  • For VNF providers, it reduces the time, complexity and risk associated with adding high availability and resiliency to every VNF.

While there is still much more progress to be made, the very possibility that reliable carrier grade workloads can be maintained will help accelerate the adoption of NFV worldwide. If you’d like to see the details of our PoC, click here. Non-ETSI NFV members can download PDF versions of the PoC Proposal that describes the testing we performed, as well as the PoC Report that describes the findings and results of the testing. If you’d like to know more about the technology Stratus provides to enable these results, check out our Cloud Solution Brief and contact Ali_Kafel@Stratus.com for a white paper with more details.

Lastly, we did not do all of this work ourselves; many partners were involved, as well as our industry sponsors. We’d like to extend our thanks to them for helping us achieve this great result.

When Everything is a Container

7.2.2015 | Cloud, High Availability

Is it too early to start planning for when everything is a container? Many technology trends start out strong only to soon fade away. But with large scale deployments of containers by high tech behemoths such as Google, Microsoft and Amazon and with the success of organizations like Docker bringing unique container technology to market, it is very likely that containers will be a part of your future. Nowhere was the container “infiltration” more visible than at the annual Red Hat Summit that was held last week in Boston. A year ago, Red Hat used the forum to educate attendees on basic container technology concepts. Fast forward one year and Red Hat used the Summit to communicate their container strategy which forms the backbone of their vision to enable applications to seamlessly operate across physical, virtualized and hybrid cloud infrastructures.

But First, Understand the Differences

Linux Containers (LXC) is an operating-system level virtualization environment for running multiple isolated Linux systems (containers) on a single host. The environment allows limitation and prioritization of resources (CPU, memory, block I/O, network, etc.) and namespace isolation functionality that segregates an application’s view of the operating environment. While I won’t get into a detailed comparison of utilizing hypervisors versus containers for the isolation of your application deployments, you need to understand the potential changes containers will have on your development and operational model. These changes will span packaging and container formats, security best practices, build and deployment models, and application lifecycle management. Equally important will be an understanding of the changes to your resiliency requirements. Containers will change the way application health management is performed and how fault detection, isolation and recovery are realized when “everything is a container”.
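To ground the resource-limitation and namespace points, here is a minimal sketch using the python3-lxc bindings (assuming they are installed; the container name, image and cgroup-v1 limits are arbitrary examples):

```python
import lxc

c = lxc.Container("demo")
if not c.defined:
    # fetch a minimal root filesystem; template arguments vary by distro
    c.create("download", 0, {"dist": "ubuntu", "release": "trusty", "arch": "amd64"})

# cgroup limits must be set before start: cap memory, deprioritize CPU
c.set_config_item("lxc.cgroup.memory.limit_in_bytes", "256M")
c.set_config_item("lxc.cgroup.cpu.shares", "512")

c.start()
print(c.state)   # RUNNING, inside its own PID/network/mount namespaces
c.stop()
```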

Will Containers Need SDA?

Absolutely! Whether your applications run on a physical host, a virtual machine or in a container, there will be unique resiliency and fault management capabilities for each infrastructure environment. In order for applications to be truly portable, the availability methods realized in the infrastructure must be transparent to the application. This is the primary objective of the Stratus vision for Software Defined Availability. The ability for the application to dynamically declare its availability requirements and drive infrastructure automation in a software controlled fashion will enable the next wave of carrier-grade, enterprise-ready production environments in the cloud.
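What might "dynamically declaring availability requirements" look like in practice? Purely as a thought experiment – none of this is a real Stratus API – the declaration could be structured data that travels with the application, and the infrastructure maps it onto whatever mechanism the environment offers:

```python
# Hypothetical availability declaration, identical whether the app lands
# on a physical host, a VM, or a container.
availability_spec = {
    "service": "session-manager",
    "stateful": True,    # state must be replicated
    "rto_ms": 0,         # no visible interruption allowed
    "rpo_ms": 0,         # no state loss allowed
}

def apply_sda(infra, spec):
    """Map the declaration to the environment's native mechanism."""
    if spec["stateful"] and spec["rto_ms"] == 0:
        infra.enable_checkpointed_pair(spec["service"])   # placeholder call
    else:
        infra.enable_auto_restart(spec["service"])        # placeholder call
```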

Please stay tuned for future posts…

Is OpenStack Ready for Prime Time in the Enterprise?

6.5.2015 | Cloud

Open source software has earned a solid place in the modern enterprise. The advantage of avoiding the software license “technology tax” is just too attractive to ignore. So the increased interest in open source cloud technologies like OpenStack and KVM (kernel-based virtual machine) is not surprising.

OpenStack offers some very compelling benefits around cost and flexibility. But is it ready to handle enterprise applications? The short answer: It depends.

To understand what I mean, it’s important to recognize that OpenStack is not a single, monolithic “thing.” In reality, it’s a collection of projects divided into three main functional groups: Compute, Storage, and Networking. And frankly, some of these projects are more mature than others.

The most mature is OpenStack’s compute project, dubbed “Nova.” This is where it all started, so Nova is actually quite far along the maturity curve. In fact, for new, cloud-native applications, OpenStack’s compute capabilities are robust and ready. On the other hand, migrating existing enterprise applications over to OpenStack presents some major challenges around application management and availability.

To handle storage, OpenStack has two projects: Cinder (for block storage) and Swift (for object storage). Swift is the more mature of the two; however, both are pretty solid, including strong support from major storage solution vendors.

The third leg of the OpenStack stool, networking, is still a bit wobbly. Called Neutron, this project is the least mature of all. Key capabilities like quality of service (QoS) or virtualized firewalls are still being developed.

Why isn’t networking further along? One reason is that the companies best positioned to add value—networking vendors—have been hesitant to contribute to the development of open source technologies that undermine their expensive, proprietary systems. That’s changing, however, as the market demands alternatives. A key factor is the current movement by leading telco players toward “cloudy” technologies and NFV (network function virtualization).

The final piece of the OpenStack puzzle, Heat, is the project devoted to orchestrating all the other pieces. This orchestration piece is pretty solid for many application types, though for more advanced scenarios testing is recommended.
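For a feel of what Heat orchestrates, here is a sketch that submits a minimal HOT template (as a Python dict) through openstacksdk. This assumes a cloud with the orchestration service enabled, and the image and network values are elided placeholders.

```python
import openstack

conn = openstack.connect(cloud="mycloud")

# Minimal HOT template: one Nova server
template = {
    "heat_template_version": "2015-04-30",
    "resources": {
        "web_server": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": "...",                    # elided
                "flavor": "m1.small",
                "networks": [{"network": "..."}],  # elided
            },
        }
    },
}

stack = conn.orchestration.create_stack(name="demo-stack", template=template)
print(stack.id, stack.status)
```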

So what does all this mean for enterprises assessing the feasibility of using OpenStack? As I said earlier, it depends on what you want to do with it.

Building a new cloud application? OpenStack can dramatically reduce the cost and complexity of building many types of cloud-native apps. On the other hand, for apps that require high levels of automation or complex change management, OpenStack may not have the support you need—at least not yet.

In addition, the rapid pace of OpenStack upgrades presents its own challenges, with new releases occurring twice a year and an upgrade process that can be disruptive. And availability is not yet up to enterprise standards.

Will this change? Absolutely. And the driving force will be the entire ecosystem that develops around OpenStack. Like all open source technologies, everyone—developers, users and vendors—will play a role in OpenStack’s continued development and maturity. Of course, the OpenStack community will continue its work and vendors will add innovations to meet the needs of their customers. But just as important are the contributions made by the enterprises themselves as they build their first cloud solutions on OpenStack. That’s the way open source development works.

At the same time, enterprises embracing the cloud will evolve their own ideas about how they do things. OpenStack represents a golden opportunity to rethink how to deliver applications and services in ways that can transform how they do business. So think about which cloud applications could be implemented with OpenStack today. Because that could form a foundation for business growth tomorrow.

Why ETSI and OP-NFV Impact Everyone in Cloud (Not Just Telcos)

5.6.2015 | Cloud, Telco

Yesterday we announced that Stratus is in the process of delivering a proof of concept (PoC) for the European Telecommunication Standards Institute (ETSI) – you can read the press release here. This is a pretty big deal in the telco and NFV world, but I think ETSI PoCs like this have a much more horizontal impact than you might think.

Naturally, we and our sponsors are excited and see a major opportunity for our solutions for telco operators. But in this post, I’m going to address the big picture as to why ETSI (and, in some ways by extension, OP-NFV) is very important to anyone interested in clouds.

  1. ETSI and OP-NFV are user led – This is a very important distinction in ecosystems. And before people misconstrue what I am saying, let me clear this up: I believe most ecosystems in IT have a user element and often even user members and contributors. But to be user led is different. Let me give an example – the OpenStack Summit is about to kick off in the next couple of weeks, and while the effort, progress and enthusiasm are all great, frankly, it’s taking too long to get to maturity. I believe there is a direct correlation between the progress of OpenStack and the fact that the foundation that governs it is almost 100% vendor led. Vendors are incented to monetize the output of OpenStack, and that leads to a broadly scoped solution and inevitably lower standards for enterprise readiness (otherwise, what would the vendors sell to the users?). A user led community is better equipped to drive interoperability standards and prioritize real world adoption over who claims what revenues.
  2. Telcos care less about compute and more about networking – This one is BIG. Technically, the computing aspects of cloud are pretty well worked out, but outside of the mega public cloud players, the networking parts are not as mature yet. This is coming along really quickly, but ETSI and OP-NFV are going to push very hard on the industry to get it done well, and to support the most demanding use cases. By extension, these innovations and learnings will trickle down into industries where networking may not have to be carrier grade.
  3. Solid standards are the key to adoption – One of the myths of cloud is that any one organization will build one cloud and/or manage multiple clouds with one orchestrator. That’s just fiction. There never has been one tool to solve any problem, and the notion that one tool can do everything this time around is silly. At the end of the day, the management of however many clouds you have will be a more layered approach. There may be one master console, but different underlying services or users will automate and drive the cloud in different ways. The layering of management services is interesting and different because it enables more flexibility. In fact, Cisco’s Intercloud offering has a good view on this. But for that approach to work, good interoperability standards are mandatory. ETSI gets this and sees it as a gap. I’m hoping to see more ETSI PoCs focus on this area so that at least one or two verticals can standardize and everyone can move forward.

Truthfully, until you have good interoperability standards and a set of users being open about what they really need, no technology crosses over from early adoption to mainstream use. This is not to belittle what’s been done so far. In fact, it’s a reflection that good things have been done, and now cloud technology is viable enough for users and user groups to invest in helping take it to the next level. Which is good for everyone.

Exciting times ahead.

Three Steps for Moving Business Critical Apps to the Cloud

4.30.2015 | Cloud, Mission Critical

The trend toward cloud-based applications and services is well underway as enterprises see the advantages in cost, efficiency, and agility. Largely absent from this march to the cloud have been mission-critical applications, which remain locked within legacy systems in the data center. Understandably, IT leaders want to make sure they can meet the security and availability requirements of mission-critical apps in a cloud environment before making that leap.

But as cloud technologies mature, this is starting to change. New approaches are emerging that offer the potential to meet the demands of business-critical applications in private cloud environments. At the heart of these new approaches lies a new mindset. IT leaders need to adopt an application-centric approach to cloud services rather than an infrastructure-centric approach. That means building cloud environments that go beyond “commodity services” and deliver robust capabilities to meet the needs of mission-critical apps. I believe there are three steps to achieving this successfully.

Step 1: Rethink your approach to availability

It goes without saying that availability is non-negotiable for business-critical apps. And until recently, “cloud” and “high availability” were terms not normally used together. That’s because traditional hardware-based fault tolerance approaches don’t lend themselves to the elastic, virtualized nature of cloud computing environments. That’s where software-defined availability (SDA) comes to the rescue. With the new generation of SDA solutions, failure recovery is abstracted from the application, enabling mainframe-like availability levels in a cloud environment running on low-cost commodity hardware.

This abstracted approach also means you can achieve business-critical availability without completely re-engineering the application. In essence, you are deploying availability as a cloud service—dramatically reducing cost, complexity and risk.

Step 2: Focus on orchestration

Orchestration means making sure every bit of data moving around in the cloud ends up exactly where it’s supposed to be, when it’s supposed to. This requires sophisticated solutions with the intelligence to dedicate the right resources, when and where they are needed.

The fact is, many applications are only “mission-critical” at certain times. For example, you might have a financial application that has high availability requirements during specific times in the accounting cycle. Today’s advanced cloud orchestration solutions allocate the appropriate resources to support this availability requirement—dynamically and automatically. When this level of availability is no longer required, resources are redeployed seamlessly. The result: availability when you need it and optimized computing resources at all times.
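As a sketch of that calendar-driven allocation (the policy format and date windows are invented for illustration):

```python
from datetime import date

# When each application actually needs fault-tolerant resources
ft_windows = {
    "general-ledger": lambda d: d.day >= 25,            # month-end close
    "payroll":        lambda d: d.day in (14, 15, 30, 31),
}

def desired_tier(app, today=None):
    today = today or date.today()
    in_window = ft_windows.get(app, lambda d: False)(today)
    return "fault-tolerant" if in_window else "standard"

print(desired_tier("general-ledger", date(2015, 4, 28)))  # fault-tolerant
print(desired_tier("payroll", date(2015, 4, 10)))         # standard
```

An orchestrator evaluating a policy like this on each cycle could move the application onto FT infrastructure just before the window opens and release those resources when it closes.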

Step 3: Leverage open source technologies

What’s the point of embracing the flexibility and cost efficiencies of the cloud if you’re going to lock yourself in with expensive, proprietary technologies? Taking advantage of open source technologies and architectures like OpenStack, Linux, and KVM (Kernel-based Virtual Machine) enables you to avoid costly license fees while allowing the flexibility and interoperability to create cloud environments using best-of-breed solutions—like the SDA and orchestration solutions discussed above.

The open source cloud ecosystem is growing and maturing rapidly, and fostering tremendous innovation as it goes. I believe building on this evolving open source foundation will pay huge dividends in agility down the road.

There you have it: Three critical keys for moving business-critical apps to the cloud. Embracing these crucial success factors, and the innovative technologies behind them, just might be the bridge to the cloud your IT organization has been looking for.

 

State of the Cloud Part Two: Lessons from the Trailblazers

4.22.2015 | Cloud

In my last blog post, I examined some of the barriers creating “cloud migration inertia” for many enterprises—challenges concerning SLAs, calculating ROI and the cloud talent gap. Yet there are a growing number of enterprises that have overcome these barriers and embraced the cloud as a core element in their IT strategy—and their business. What sets these trailblazers apart? How have they successfully navigated their movement of key business applications to the cloud?

In this second installment, I’d like to share my observations of the key strategies of successful cloud trailblazers that may help illustrate how others can chart their cloud course.

Strategy #1: One step at a time

Most organizations moving to the cloud are taking an incremental approach, rather than a dramatic “forklift” shift of their applications. They often start with just one app, focusing their efforts to iron out any wrinkles and make sure it is successful before moving on to others.

Once they have several successful cloud implementations under their belt, an interesting thing happens: other groups across the enterprise begin to notice. Once the word gets out and others see the potential, this may spark an upswell in cloud projects. But getting that first cloud project or two right is critical. So careful planning and alignment with key business priorities are absolutely essential.

Strategy #2: Pick the right apps

The most successful cloud projects are those that focus on apps that truly leverage the cloud’s potential to change and improve how companies do business—apps that do things organizations could never do in their legacy infrastructure. From customer engagement, enterprise collaboration and sales automation to supply chain and inventory management, trailblazers are focusing on apps that leverage the dynamic agility of the cloud to transform a variety of distributed processes.

Ultimately, the key to success for any cloud app is its ability to scale across the enterprise and impact the bottom line. For this reason, smart trailblazers are targeting cloud apps with the potential to add to the revenue stream.

Strategy #3: Change the culture

Moving to the cloud requires a cultural shift. It means breaking down barriers that separate IT and the business. It means adopting new approaches to technology standardization and governance. And it means recruiting and cultivating the next generation of technology talent to fuel your competitive advantage into the future.

Most importantly, it means embracing the cloud as a strategy to protect your assets by enabling progressive new business models, without introducing risk by resorting to shadow IT. Enterprise IT leaders must play a central role in helping drive that cultural transformation, in partnership with their business counterparts.

What will the cloud landscape look like a year from now? The tools, technologies and talent are maturing rapidly. The rise of open source solutions like OpenStack and KVM (Kernel-based Virtual Machine) and the availability of software-defined availability and dynamic cloud orchestration solutions are setting the stage for tomorrow’s mission-critical cloud strategies. All that’s needed is the right leadership to take those first incremental steps toward the cloud. Those who overcome inertia today will be at a distinct advantage tomorrow.
