Achieving 100 percent uptime? Here’s how you do it

100% uptime seems to have become an illusion: a marketing myth rather than reality. The recent technical problems at Tax-on-Web, and the ICT problems at Argenta last year that shut down its internet banking, show as much. And yet, organizations today are expected to deliver 100% availability to their partners. Fortunately, technological solutions exist to help ensure this.

When the corporate world felt the need to be available 24/7 for vendors, partners and customers, many organizations – initially mostly in the finance industry – focused on redundant infrastructure (through server and storage virtualization). Most organizations have not yet moved on to the next step: high availability. As a result, quite a few companies have suffered outages, leading to damaged reputations and financial loss caused by data loss. Moving to that next step is critical to avoid such problems, whether they are caused by technical failures or by cyberattacks.

There are several ways to improve the uptime of your IT infrastructure. Datacenter redundancy, for instance, allows you, in case of problems, to restart a server or application from your second datacenter, which serves as a (real-time) copy. Virtualization is another important step towards high availability. Yet even today, many organizations still haven't introduced a disaster recovery strategy, not even for their most mission-critical applications. Finally, there are also solutions for data protection and for strengthening your defense against cybercrime.
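The datacenter redundancy described above boils down to a simple failover pattern: serve from the primary site, and switch to the secondary copy the moment the primary stops responding. A minimal sketch in Python, with simulated stand-ins for the datacenters (the site names and health behavior are hypothetical, not tied to any specific product):

```python
# Minimal failover sketch: serve from the primary datacenter,
# fall back to the (real-time) secondary copy on failure.
# The datacenters here are simulated stand-ins, not real endpoints.

def make_site(name: str, healthy: bool):
    """Return a callable that either answers or raises, like a real site."""
    def serve(request: str) -> str:
        if not healthy:
            raise ConnectionError(f"{name} is unreachable")
        return f"{name} handled {request!r}"
    return serve

def failover(request: str, sites) -> str:
    """Try each datacenter in order; the first healthy one answers."""
    last_error = None
    for site in sites:
        try:
            return site(request)
        except ConnectionError as exc:
            last_error = exc  # this site is down: try the next copy
    raise RuntimeError("all datacenters unavailable") from last_error

primary = make_site("dc-primary", healthy=False)    # simulate an outage
secondary = make_site("dc-secondary", healthy=True)

print(failover("GET /balance", [primary, secondary]))
# → dc-secondary handled 'GET /balance'
```

Real deployments layer health checks, replication lag monitoring and automatic failback on top of this, but the ordering logic is the core of the pattern.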

Cloud in large organizations

In practice, many technologies will be deployed to ensure end-to-end data protection. And if an organization decides to move (partially or fully) to the public cloud, it must make sure that any potential availability problem is clearly identified and agreed upon in the Service Level Agreement (SLA) with its cloud provider.
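When negotiating such an SLA, it helps to translate an availability percentage into the downtime it actually permits per year. A quick sketch (the figures follow directly from the arithmetic; no provider-specific terms are assumed):

```python
# Translate an availability SLA percentage into permitted downtime.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours, ignoring leap years

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year permitted by the given SLA."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR * 60

for sla in (99.0, 99.9, 99.99, 100.0):
    print(f"{sla}% uptime -> {allowed_downtime_minutes(sla):.1f} min/year")
# 99.9% still allows about 525.6 minutes (~8.8 hours) of downtime a year;
# only a genuine 100% SLA allows none at all.
```

The gap between "three nines" and the 100% that partners expect is exactly where redundancy and disaster recovery have to pick up the slack.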

A cloud-first strategy must be based on a thorough application-based analysis, combined with an infrastructure diagram drawn up with two criteria in mind: application availability and security. Any rosy promise made by a cloud provider can then easily be questioned: is a migration to the cloud feasible on the basis of these criteria? In practice, such an analysis often shows that a cloud migration will take more time and cost more than expected. I often use the example of a large Dutch bank in this context. It initially planned to move 50% of its applications to the cloud; after a few years, that plan had to be reconsidered, and the bank eventually decided to deploy only 5 to 10% of its infrastructure in the cloud.

Any large organization will inevitably adopt a multi-cloud strategy. Ideally, this will be an agnostic multi-cloud, allowing you to choose the best possible platform for each application, depending on which provider is most relevant. After all, each cloud provider does have its pros and cons.

Cloud for Small and Medium Businesses

On the other side of the spectrum, even the smallest organization can now run a hyperconverged architecture, combined with an orchestration solution that improves visibility across the cloud and the internal infrastructure. An SMB can, for instance, deploy Pivotal in SaaS mode with Kubernetes as container technology, VMware for orchestration, Dell EMC for its internal platforms (including VxRail for hyperconvergence), and either a local or cloud-based backup solution.

Not an either-or scenario

That is why we usually provide more standardized solutions to SMBs, while adopting a customized approach for larger organizations. In both scenarios, we opt for a hybrid multi-cloud strategy rather than enforcing a choice between private and public cloud. It is not a matter of 'either-or', but rather of 'both-and'.

Moreover, migration between public and private clouds is simplified by software-defined technologies, such as VeloCloud for SD-WAN. These allow organizations to build and run hybrid WAN services from a central cloud-based console with very little effort. Don't forget to include edge computing, by the way: part of the computing power will move from the cloud to the edge (closer to the endpoints), which inevitably leads to higher power consumption.

Security of paramount importance

In this story, in fact in every single action an organization takes, security is of paramount importance. At Dell Technologies, we opt for a three-pronged approach to security. First, Secureworks provides customers with a Security Operations Center (SOC) that remotely manages the security of their infrastructure. Next, RSA, our solution for token management, covers identity and access management (IAM) and governance, risk and compliance (GRC). The third security layer offered by Dell Technologies consists of recovery solutions: management and automation software that helps organizations automate processes, protect critical data end-to-end, detect suspicious activities and restart operations in case of (major) problems.

Critical for any size

The public cloud should never be viewed as the magical solution to every problem. Google recently estimated that 30% of the data migrated to the cloud will eventually be moved back to an on-premises environment. It is therefore important that organizations considering a cloud strategy are fully informed about the pros and cons of each approach, so that they can make well-considered decisions.

In summary, cloud acceptance will be critical for organizations of any size. That acceptance will undoubtedly be boosted by the fact that organizations are increasingly aiming for a flexible consumption model for their IT.

Arnaud Bacros, Managing Director Enterprise Benelux at Dell Technologies.
