2017: A Turbulent Year – But Also a Year of IT Pragmatism

Inevitably, as a calendar year wraps up, one reflects back on the year past and dreams of the year to come. One thing that is constant is hyperbole and irrational exuberance. We tend to like black and white and struggle with nuance and transition. Our IT industry is so full of change, so dynamic, so disruptive that we tend to bounce from idea to idea. On the upside, this creates a constant “zig zag” of progress.

In the end, I find that those who ignore the noise but take a pragmatic approach, “leaning into change,” tend to win.

While the debates about one technology replacing another are constant, the essential truth is that most of those debates wind up being meaningless. Every new technology that comes down the pike is not only additive; it usually improves everything that came before it. Yes, ultimately there are winners and losers. Yes, in the fullness of time, we’ve even seen some things completely consigned to the dustbin of history. But the story arc is always a little more nuanced than the headlines.

There is no more current example of this than the rise of the public cloud.

When public clouds first started to gain traction years ago, prognostications were made that every application workload would move into the cloud. They were wrong.

Don’t misunderstand me. Anyone who attended or watched AWS re:Invent 2017, or who looks at the growth rates of Azure, of SFDC, and of other SaaS options, knows that the public cloud is massive and is experiencing massive growth. The public cloud (IaaS, PaaS, CaaS, SaaS, and serverless models all) is here to stay and is a critical part of every customer’s ecosystem.

But in 2017, I know from thousands of customer conversations that we’ve evolved our thinking. We’ve moved to a more pragmatic point of view.

“All workloads will be in the cloud” is a load of hoo-ha.

There are whole classes of workloads that, for reasons of economics, governance, and data gravity (compute tends to co-locate with data, because moving data is generally hard – as if it had “mass”), are not ideally suited to run in a public cloud.
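A quick back-of-the-envelope sketch makes that “mass” concrete. All of the numbers below are illustrative assumptions (dataset size, link speed, efficiency), not measurements from any customer or benchmark:

```python
# Illustrative math for "data gravity": the sheer time it takes just to move
# a large dataset over a WAN link. Every input here is an assumed, made-up value.

def transfer_days(dataset_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Days needed to move dataset_tb terabytes over a link_gbps link at a given efficiency."""
    bits = dataset_tb * 8e12                      # terabytes -> bits (decimal TB)
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 86400

# Example: a hypothetical 500 TB analytics dataset over a 10 Gbps link at 70% efficiency.
print(f"{transfer_days(500, 10):.1f} days of sustained transfer")   # roughly 6.6 days
```

Nearly a week of saturating a 10 Gbps pipe, before you even count egress charges – which is why the compute so often moves to the data rather than the other way around.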

What public clouds did show us is that there is a better way to build IT infrastructure – and that standardization, simplification, and software-defined approaches are the foundation on which clouds are built. We at Dell EMC took those lessons to heart by developing a wide range of converged and hyper-converged infrastructure (HCI) solutions that effectively turn local data centers into private clouds with hybrid cloud capabilities. Instead of spending months configuring and integrating IT infrastructure, the pre-integrated systems from Dell EMC enable the internal IT organization to function as an internal cloud service provider.

In 2017, many IT organizations clearly discovered that for certain workloads, there is an economic imperative to have a real private cloud as part of a multi-cloud and hybrid cloud strategy. Fact: it is much less expensive to deploy long-running workloads – particularly those with large amounts of data that are not naturally cloud native – in a private cloud in a data center or a colo than it is to host them in a public cloud.

This is simple “rent, lease, own” economics in action.
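Here is a minimal sketch of that economics. Every figure (the hourly instance rate, the hardware capex, the amortization period, the ops overhead) is a hypothetical assumption chosen only to show the shape of the trade-off; real pricing varies widely:

```python
# Hypothetical "rent vs. own" breakeven for a single long-running workload.
# All dollar figures are illustrative assumptions, not quotes from any vendor.

HOURS_PER_MONTH = 730

def monthly_rent(hourly_rate: float, hours_used: float) -> float:
    """Public cloud: pay only for the hours the instance actually runs."""
    return hourly_rate * hours_used

def monthly_own(capex: float, amortization_months: int, ops_per_month: float) -> float:
    """Private cloud: amortized hardware plus operations, regardless of utilization."""
    return capex / amortization_months + ops_per_month

rent_24x7   = monthly_rent(hourly_rate=0.50, hours_used=HOURS_PER_MONTH)   # ~$365/month
rent_bursty = monthly_rent(hourly_rate=0.50, hours_used=80)                 # ~$40/month
own         = monthly_own(capex=6000, amortization_months=36, ops_per_month=75)  # ~$242/month

print(f"rent 24x7: ${rent_24x7:.0f}, rent bursty: ${rent_bursty:.0f}, own: ${own:.0f}")
# Long-running, always-on workloads favor owning; short-lived or bursty ones favor renting.
```

Duty cycle is the deciding variable: the workload that runs flat out for years is the one where renting by the hour hurts the most.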

The result is an era of newfound pragmatism in IT circles. Rather than assume everything is going to be automatically deployed on a public cloud, our customers now simply view the local data center as one choice among many clouds. Furthermore, just because an application workload began life in one cloud does not mean it will spend its entire life there.

There are many cases of application workloads created in public clouds being migrated to an on-premises environment to contain costs. At the same time, there are still plenty of workloads, such as legacy applications or infrequently used databases, that can be lifted and shifted into a public cloud as part of an effort to reduce cost or make room for additional applications deployed in the local data center. And of course, the public IaaS/PaaS/CaaS cloud platforms play a critical role when you need something for hours or days (not months or years), or for workloads that have unknown scaling needs.

The need to not only be a public cloud consumer, but also to function as a cloud service provider, is what’s driving so many organizations to modernize their data centers. And, if they’re making this IT transformation, they’re certainly looking at HCI. International Data Corp. (IDC) recently reported that hyper-converged system sales grew 48.5% year over year during the second quarter of 2017, generating $763.4 million in sales. This amounts to 24.2% of the total value for combined integrated infrastructure and certified reference systems, valued at $1.56 billion in the second quarter alone. Overall, Dell EMC is the largest supplier in this combined market segment with $763.4 million in sales, or a 48.5% share of the market segment.

The good news is that all these fierce debates about enabling technologies seem to be taking less time to play out.

Earlier in the year, container manager/cluster manager debates raged. Now it’s clear that Kubernetes is the standard around which everyone is rallying. Kubernetes can be, will be, and IS being deployed on top of virtual machines, public clouds, and bare-metal servers.
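One way to see that portability in practice: the same Deployment, expressed here as a minimal sketch using the official Kubernetes Python client (the names and container image below are placeholders I’ve chosen for illustration), is submitted through the same API whether the cluster’s nodes happen to be VMs, public cloud instances, or bare-metal servers.

```python
# Minimal sketch: the same Deployment API call works against any conformant
# Kubernetes cluster, regardless of what the nodes run on. Names are placeholders.
from kubernetes import client, config

config.load_kube_config()            # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web-demo"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web-demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web-demo"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(
                    name="web",
                    image="nginx:1.13",
                    ports=[client.V1ContainerPort(container_port=80)],
                )]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```

The infrastructure underneath is, from the application’s point of view, invisible – which is exactly the point.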

Another debate that burned furiously – but has now burned out and entered its “fierce pragmatism” phase – is “kernel mode VMs” vs. “containers.” This was always a silly debate. Who cares? Yes, in some cases container/cluster managers are deployed on Linux OSes on bare metal. However, in most cases Kubernetes (and, more generally, containers and cluster managers) is being deployed on top of kernel mode hypervisors to isolate applications in a way that not only provides better security but also prevents “noisy neighbor” applications from consuming all the available resources. Oh, and there’s that darn intersection of hardware and software (which is always there). Again, pragmatism has won the day as IT organizations come to realize that containers and virtual machines complement each other.

IT leaders are starting to realize that the best way to approach IT is to start at the highest level of abstraction possible. If a simple process can be accomplished using a software-as-a-service (SaaS) application, chances are high that’s going to be the simplest way of achieving the goal. At the other extreme, if the process involves a high amount of differentiated business value, then chances are high that the organization should build a custom application. What IT organizations should not be wasting their time and resources on in this day and age is stitching together disparate pieces of infrastructure to support those applications. It makes little sense for IT organizations to re-invent the wheel when vendors like us spend tens of thousands of work hours validating and optimizing complete IT systems for repeatable use.

That’s why infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) environments have become so widely employed.

IT is increasingly something consumed, not something constructed.  The effort of invention, innovation and unique differentiation at each customer is accelerating its move into the application domain.

Our aspiration is to make the infrastructure invisible—for the benefit of our customers. We want infrastructure, to some degree, to be boring.

Every minute and dollar spent on infrastructure is time and money diverted away from applications and new services. Organizations today differentiate themselves on the quality of the digital experiences they enable for customers. The amount of differentiated business value that can be generated by manually optimizing IT infrastructure is minimal at best. As the business becomes more dependent on IT, there’s a growing appreciation for accelerating outcomes. No business leader particularly cares whether an IT administrator can wring an extra 10 percent of utilization out of a server or storage system. They want confidence in an IT organization that can respond to changing business conditions adroitly, whenever and, increasingly, wherever necessary.

The biggest decision any IT leader needs to make today comes down to philosophy. Inflexible IT infrastructure has contributed greatly to a negative perception of IT departments everywhere. IT leaders now have an opportunity to greatly enhance the perception of IT within their organizations by focusing more on the art of the possible than on the constraints that have in many ways kept IT organizations from reaching their full potential. None of that means that every IT organization should deploy every application workload on premises. But it does mean that the single biggest benefit of the merger between Dell and EMC and the creation of Dell Technologies – inclusive of VMware and Pivotal – has been the creation of a full technology stack that makes internal IT organizations more relevant to the business – by becoming a service provider, not a bespoke infrastructure builder – than at any other time in history.

 

About the Author: Chad Sakac

Chad Sakac leads the Pivotal Container Service (PKS) efforts at Pivotal, where he brings together the Engineering, Marketing, and GTM aspects of the business – with the goal of building the best Enterprise Container Platform together with VMware – part of how Pivotal is transforming the way software, and the future, is built. PKS is a joint effort with VMware, and the effort involves bringing the immense resources of two great companies together. The alliance part of Chad’s role extends to all of the elements of how Pivotal works with Dell Technologies (Dell, Dell EMC, VMware, RSA, Secureworks, Virtustream, Boomi) – across the transformational methodologies (Pivotal Labs, Platform Acceleration Labs, Application Transformation, and more) and technologies (all of Pivotal Cloud Foundry, Pivotal Data) of Pivotal as a whole. Prior to this role, Chad spent 14 years at Dell EMC, where he held several technical customer-facing roles focused on customer and partner innovation – most recently as the President and GM of the Converged Platform and Solutions Division (CPSD), and prior to that leading the global Systems Engineering team. Before joining EMC, Chad led the Systems Engineering team at Allocity, Inc. Chad authors one of the top 20 cloud, virtualization, and infrastructure blogs, “Virtual Geek.” He holds an Electrical Engineering degree from the University of Western Ontario, Canada.