Five Reasons Mobile Devices Will Generate the Need for More Servers

In 1987, when I read a book titled The Media Lab that described some forward-looking work at MIT, I came across the question: how will we directly connect our nervous systems to the global computer?  I remember wondering at the time what global computer they were talking about.  A few short years later, a couple of things happened: the internet became a household concept, and phones resembling the Star Trek communicators I saw on television as a child became a reality.  The idea of radio communication, and its potential linkage to the phone system, had been conceived much earlier in the twentieth century, but by the 1990s it became commercially viable for everyday consumers.  The buildout of all this infrastructure exploded, and now the internet is accessible from user endpoints all over the globe.  The internet has become ‘the global computer’, and the wireless infrastructure has become part of the answer to how we connect to it from anywhere.  Extending this trend into the future, we can only expect the growth to continue for decades and beyond.  Although I made the transition from radio engineer to computer engineer long ago, I retain my optimism about and interest in the wireless industry.  With the inevitable progression of technology toward richer experiences and services delivered over the air, there are some things to anticipate for server usage.  Here are five reasons why I am bullish on the impact of the wireless industry on the computer industry.

Backend Datacenters

A growing number of mobile devices use their wireless access to connect to something.  Whether it’s streaming video, daily news, online music, or ride-sharing services, the volume of traffic is growing.  To support this traffic, the data and intelligence must be hosted somewhere on computers.  Servers are pervasive in datacenters of all sizes and locations, supporting workloads ranging from content delivery to data analytics.  Some of these services are hosted in the public cloud, but many are also hosted in private datacenters, chosen for security and greater control over computing performance.  A mix of deployments is likely to continue for the foreseeable future.

NFV

Network Functions Virtualization (NFV), proposed in an influential 2012 whitepaper, has led to a migration of functionality from custom equipment to standard servers and switches.  Equipment that might have been realized with ASIC-based hardware in the past can now be implemented in software on off-the-shelf servers, controlling costs and easing lifecycle maintenance.  The resulting software packages are called virtual network functions (VNFs).  According to the original vision, VNFs can be migrated and scaled to accommodate changes in network usage, just like cloud-native applications at SaaS hosters and “webtech” companies.  This does not preclude delivering the software in containers or running it as processes on bare-metal servers where performance requirements dictate, which again leads back to more usage for servers.  The core network subcomponents in the EPC (Evolved Packet Core) and IMS (IP Multimedia Subsystem) that support mobile networks are key targets for virtualization.  As global wireless infrastructure grows to meet demand, the supporting core networks increase in number and house more servers.
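As a toy illustration of the elasticity this vision describes, a scale-out decision for a pool of VNF instances might look like the sketch below.  The function name, thresholds, and load model are all hypothetical, not taken from any real NFV orchestrator:

```python
import math

# Hypothetical sketch of an orchestrator's scale-out decision for a
# VNF pool (e.g., a virtualized EPC component). Thresholds are
# illustrative only.

def scale_vnf_instances(current_instances: int,
                        load_per_instance: float,
                        target_load: float = 0.6,
                        min_instances: int = 1) -> int:
    """Return how many instances are needed so that the total offered
    load lands near the target per-instance utilization."""
    total_load = current_instances * load_per_instance
    return max(min_instances, math.ceil(total_load / target_load))

# Traffic spike: 4 instances each at 90% load -> scale out to 6.
print(scale_vnf_instances(4, 0.9))   # 6
# Quiet period: 4 instances each at 15% load -> scale in to 1.
print(scale_vnf_instances(4, 0.15))  # 1
```

The same arithmetic runs in reverse during quiet hours, which is the cost argument: capacity follows demand instead of being provisioned for the peak.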

Cloud Radio Access Networks

The concept of Cloud Radio Access Networks (CRAN) brings us deeper into the metaphorical forest: exactly how will carriers accommodate an increasing number of users, each demanding more and more bandwidth, while still delivering a decent quality of service and controlling costs?  The industry has to solve this problem within two constraints: the radio spectrum is finite, and deploying towers and associated equipment is expensive.  Historically, equipment at or near the cellular antenna sites performed the packet and signal processing necessary to receive end-user data and deliver it across the backhaul networks.

With CRAN, this functionality is split between a baseband unit (BBU) and a remote radio head (RRH).  A Centralized Unit (CU) hosts the BBU at a network edge site within reach of multiple antenna locations and their remote radio heads.  This is less expensive than putting a BBU at every location, and it comes with some additional interesting benefits.  Coordinated multipoint (CoMP) reception and transmission can be achieved, improving utilization of the network by giving mobile devices connections to several base stations at once.  Data can be routed through the least-loaded stations based on real-time decisions from the centralized unit.  Similarly, devices in fringe areas can receive from more than one tower, and centralized traffic decisions can lead to fewer handover failures.  There is also a potential cost savings in alleviating the need for inter-base-station networks.  This arrangement also allows network coverage to be reconfigured for times of peak need, such as sporting events.
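The “least-loaded station” decision the centralized unit makes can be sketched in a few lines.  This is a deliberately simplified model; a real CoMP scheduler also weighs signal quality, backhaul capacity, and per-user quality of service, not just load:

```python
# Simplified sketch of a centralized unit steering a device's traffic
# under CoMP. Models load only; real schedulers consider much more.

def pick_serving_stations(visible_stations, k=2):
    """visible_stations: list of (station_id, load) pairs the device
    can reach. Return the k least-loaded stations to carry traffic."""
    ranked = sorted(visible_stations, key=lambda s: s[1])
    return [station_id for station_id, _load in ranked[:k]]

# A device in a fringe area sees three towers; traffic is steered
# through the two with the most headroom.
print(pick_serving_stations(
    [("tower_a", 0.8), ("tower_b", 0.2), ("tower_c", 0.5)]))
# ['tower_b', 'tower_c']
```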

Why is this relevant?  Because that CU is a server running software that implements the BBU.  There is a lot of possible variability in this implementation, but the same concept that applies to NFV is pertinent for radio access networks as well.  Off-the-shelf equipment has an economy of scale that will be favored over the development of custom special-purpose appliances.  As 5G cellular is deployed and new coverage areas are created, CRAN will become widespread.

Mobile Edge Computing

Mobile Edge Computing (MEC) is another inevitable phenomenon that will be increasingly developed as wireless usage grows.  There is already a concept today within the world of datacenters in which “points of presence” are deployed to allow acceptable response times for internet users.  For example, OTT (“over the top”) video services such as streaming movies are cached at locations near the users.  MEC entails a buildout of many sites close to the consumers of data.  Autonomous vehicles would not tolerate the latency of downloading maps from a location six thousand miles away; instead, a local computing site would support this type of workload.  Many evolving workloads, including virtual reality and mobile gaming, will benefit from mobile edge computing.
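A back-of-the-envelope calculation shows why distance alone settles the argument: light in fiber covers roughly 200 km per millisecond, so the physics imposes a floor on latency before any queuing or processing is counted.  The figures below are rough and for illustration only:

```python
# Rough propagation-delay arithmetic. Light in fiber travels about
# 200 km per millisecond (~2/3 the speed of light in vacuum).
FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip propagation delay, ignoring routing hops,
    queuing, and server processing time."""
    return 2 * distance_km / FIBER_KM_PER_MS

# ~6000 miles (~9650 km) to a distant datacenter:
print(round(round_trip_ms(9650), 1))  # 96.5 ms before any processing
# ~50 km to a nearby edge site:
print(round(round_trip_ms(50), 2))    # 0.5 ms
```

No protocol optimization closes a two-orders-of-magnitude gap set by geography, which is the core case for placing compute near the user.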

The question (and difficulty) for service providers will be where to deploy these computing locations.  The buildout will occur over time and will result in a hierarchical network with more layers than today’s networks, which is part of the reason an analogy has been made to the distributed nervous system of an octopus.  “Far edge” locations will be introduced to house servers that host workloads and data in support of emerging wireless uses.  Some of these locations are existing buildings with environmental constraints, but that does not necessarily preclude installing ruggedized servers.  Perhaps more appealing to service providers: instead of investing in new brick-and-mortar sites, modular datacenters of all shapes and sizes can be dropped into place as “green-field” solutions.  These prebuilt datacenters, some as small as one rack, can provide fresh-air cooling to standard servers even in warm environments.

Reconfigurable Computing

With all of these emerging workloads running on servers to support mobile wireless users, performance and packet latency can become an issue.  Early adopters of this computing model include the high-frequency trading industry, where companies have deployed FPGAs (field-programmable gate arrays) as PCIe cards inside their servers.  These devices, when programmed for specific tasks, can perform operations faster than CPUs running generic instruction sets.  Incidentally, FPGAs have long been widely used in the telecommunications industry.  Returning to the CRAN example, FPGAs can implement the forward error correction (FEC) algorithms required by the 4G and 5G cellular standards, offloading the CPUs and accelerating packet processing for these workloads.  Such accelerators extend the capabilities of standard servers, further increasing their value in this industry.
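To make forward error correction concrete, here is a toy (3,1) repetition code with majority-vote decoding.  Production 4G and 5G FEC uses far heavier schemes (turbo, LDPC, and polar codes), and it is exactly that kind of bit-level arithmetic, repeated at line rate, that FPGAs offload well:

```python
# Toy forward error correction: each bit is transmitted three times,
# and the decoder takes a majority vote, so any single flipped bit per
# triple is corrected. Real cellular FEC is far stronger than this.

def fec_encode(bits):
    """Repeat every bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def fec_decode(coded):
    """Majority-vote each group of three received bits."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

message = [1, 0, 1, 1]
sent = fec_encode(message)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
sent[4] = 1                         # channel flips one bit in transit
print(fec_decode(sent) == message)  # True: the error is corrected
```

The point of the example is the shape of the work, not the code itself: simple, regular operations over every bit on the wire, a pattern that maps naturally onto FPGA fabric.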

So the “server industry” is not just one industry; it is a cross-section of tools and equipment applicable to many sectors including wireless communications.

About the Author: Andy Butcher

Andy Butcher is in the Server Advanced Engineering group at Dell in Round Rock, Texas, where he has been since 2008 after a wide variety of technical and managerial roles in industry dating back to the 1980s. He holds an MSEE degree from Georgia Tech.