
December 2017: Teetering at the Edge - Successful Positioning for Edge Computing

Author: Dr. Phil Marshall





Bottom Line: Demand for edge computing is increasing, fueled by industry initiatives such as IoT and network cloudification and virtualization. Edge compute is supported by a slew of industry initiatives (e.g. MEC, CORD, CUPS, OEC and Fog) and by proprietary deployments from companies like Amazon and Microsoft. Edge compute has its challenges, particularly around cost and complexity. Although network operators are well positioned to capitalize on edge compute, they must pay greater attention to end-user services, with suitably designed edge compute platforms.


Executive Summary

As cloud computing matures and enables virtualization, there is a growing need for edge computing capabilities to address specific service demands. These demands include latency sensitivity, security and privacy, bandwidth management, local control and service continuity, and advanced analytics and automation. While the value proposition for edge computing is relatively well understood, there are challenges that must be addressed. Edge computing creates decentralized architectures, which can be costly and challenging to manage and orchestrate, and which often require significant design and operational changes. Some edge compute use-cases are more challenging than others, and those that are easier to implement with compelling business cases will define early market adoption.

Edge compute infrastructure spend is summarized in Exhibit 1 for key market segments and is forecast to increase from USD 620 million in 2017 to USD 12.40 billion in 2022. A longer-range estimate is also shown in Exhibit 1, which forecasts that the edge computing infrastructure market will reach USD 35.40 billion by 2026.
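For context, these figures imply a compound annual growth rate (CAGR) of roughly 82 percent between 2017 and 2022, moderating to roughly 30 percent between 2022 and 2026. The short Python sketch below simply reproduces that arithmetic from the forecast values quoted above; it is a worked check, not part of the forecast model itself.

  # Implied compound annual growth rates from the forecast values quoted above
  # (USD billions; the 2017 figure of USD 620 million is expressed as 0.62).
  def cagr(start, end, years):
      """Compound annual growth rate between two values over a number of years."""
      return (end / start) ** (1.0 / years) - 1.0

  print(f"2017-2022 CAGR: {cagr(0.62, 12.40, 5):.1%}")   # ~82%
  print(f"2022-2026 CAGR: {cagr(12.40, 35.40, 4):.1%}")  # ~30%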

Edge computing technology development and standardization initiatives are being pursued by a variety of players. The telecom industry has several, including the Multi-Access Edge Computing (MEC), Central Office Re-Architected as a Data Center (CORD) and Control and User Plane Separation (CUPS) initiatives. Several companies, including Amazon (Greengrass) and Microsoft (Azure IoT Edge), have implemented edge compute solutions that are targeted specifically at IoT and integrate with their cloud platforms. Cisco has been driving thought leadership in edge compute architectures under the banner of fog networking, while Huawei and Nokia are strong supporters of MEC. The Open Edge Compute Initiative has been established to capitalize on “Cloudlet” technology developed at Carnegie Mellon University, with novel extensions to OpenStack.

In theory, communication network operators are in a strong position to capitalize on edge computing, since it brings intelligent functionality closer to the network. Amazon and Microsoft recognize the strategic importance of edge compute and are targeting ‘low hanging fruit’ opportunities for IoT solutions. This is in stark contrast to many operators, who are focused primarily on edge compute solutions that support technologies like Network Function Virtualization (NFV) and cloud RAN. The edge compute platforms needed for this purpose are vastly over-provisioned, too costly and generally ill suited to the typical services demanded by end users. We believe that this needs to change and that communication network operators must capitalize on edge compute opportunities with platforms that are better suited to end-user demands. If operators fail to do this, we fear that their edge compute efforts will suffer a similar fate to that of the IP Multimedia Subsystem (IMS).

Exhibit 1: Edge compute infrastructure spend will reach USD 12.40 billion in 2022 and 35.40 billion in 2026
Source: Tolaga Research, 2017

Edge Computing Disrupts the Disrupter

The age-old debate continues: when should compute resources be centralized and when should they be decentralized? Historically, computing architectures have vacillated between the two, depending on the underlying systems and services being supported. Currently, cloud computing favors centralization, particularly for services that are supported with public infrastructure. In response, web-scale providers have deployed massive data centers in key strategic locations across the globe. For example, Amazon, Microsoft and Google each operate roughly fourteen to fifteen major data centers to support their global services. Centralized cloud architectures enable efficiencies and economies of scale that have helped propel the profitability of web-scale providers and have fueled the disruption of the traditional telecom industry by driving value away from networks.

Although centralized cloud architectures will prevail for the foreseeable future, there is a growing need for decentralized solutions with edge computing. Edge computing is a relatively young technology, but it is already being targeted at a variety of services, including augmented and virtual reality (AR/VR), high bandwidth media applications, and services with specific security or connectivity demands. Many of the services being promoted for advanced 4G and 5G also depend on edge compute architectures, as do next generation network functions such as cloud-RAN.

Building a Case for Edge Computing

Edge computing is capturing tremendous market attention as pundits recognize its role in supporting cloud and virtualization initiatives as they mature. The demand drivers for edge compute are compelling and include:

  • Low Latency Services. In some cases, the round-trip delay to application servers is too long for the services to function reliably. Notable examples include: autonomous systems, AR/VR, gaming, and manufacturing control.
  • Security and Privacy. As more devices become connected, new attack surfaces are emerging, including those that target opportunities to capture sensitive device data. This drives interest in processing, and possibly storing, data closer to the network edge. For example, in some smart city implementations, video surveillance cameras process images at, or near, the camera and send back simple metadata (e.g. car, bus, bike, pedestrian) rather than the entire image to the cloud; a minimal sketch of this pattern follows this list.
  • Bandwidth Management. Some services have high capacity requirements that are more economically managed locally. This is particularly the case for high bandwidth media and entertainment services, but it also applies to other high bandwidth applications, such as computer vision, which is being used in industrial applications.
  • Local Control. In some cases, the data collected from devices is much more valuable near the creation point, and becomes less valuable and potentially more of a liability further away. This drives the need for local control and is particularly pertinent to cases where there are heightened data or system security demands (e.g. corporate campuses, military bases, or private networks).
  • Service Continuity. While mobile networks are becoming increasingly reliable, connectivity is not truly ubiquitous. Edge computing can be used to "buffer" services so that end users have ubiquitous service even when there are connectivity gaps in the supporting networks.
  • Advanced Analytics and Automation. As technologies for automation, autonomous systems, and computer vision improve, they will drive increased demand for advanced analytics that is enabled through edge computing. Notable examples include surveillance cameras (which are discussed above), smart manufacturing, smart logistics and autonomous and remotely controlled vehicles.
  • Energy Efficient Devices. In some cases, battery powered IoT devices are placed in the field for a decade or more. To improve the energy efficiency of these devices, gateway and edge compute systems might be deployed in their vicinity to enable low-power operation and device/service management and orchestration, while supporting northbound functionality to cloud servers.
  • Constrained Environments. In some environments, such as satellite-connected oil rigs or mines, connectivity is sparse and expensive. These environments are likely to benefit from edge compute functionality that enables services to run with local persistence and with efficient bandwidth utilization on northbound satellite links.
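
The surveillance example above can be made concrete with a short sketch. The Python snippet below illustrates the general pattern of analyzing frames at the edge and forwarding only compact metadata northbound; the function and field names are illustrative placeholders rather than any specific vendor API.

  # Illustrative edge-analytics sketch (hypothetical names, not a vendor API):
  # detect objects near the camera and send only a small metadata payload north.
  import json
  import time

  def summarize(detections):
      """Collapse per-frame detections (e.g. from a local vision model) into label counts."""
      counts = {}
      for label in detections:
          counts[label] = counts.get(label, 0) + 1
      return counts

  def metadata_message(camera_id, detections):
      """Build the compact JSON payload sent to the cloud instead of the raw frame."""
      return json.dumps({
          "camera_id": camera_id,
          "timestamp": int(time.time()),
          "counts": summarize(detections),
      })

  # Example: a frame in which the local model saw two cars and a pedestrian.
  print(metadata_message("cam-017", ["car", "car", "pedestrian"]))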

However, edge compute has its challenges. In particular:

  • Incumbent cloud based solutions are already widely deployed. Cloud service providers have an incentive to retain these solutions, because they enable centralized control and are well positioned to deliver freemium business models. These business models capitalize on adjacent market opportunities, such as advertising, and deliver tremendous value. In general, these same opportunities are more challenging to implement in edge compute environments.
  • In many cases, edge compute is more expensive than centralized data-center solutions. Edge compute is also more complicated to implement, particularly when venues are not readily available to host physical infrastructure. This is an area where telecom companies have a distinct advantage over web-scale providers in delivering wide area edge compute solutions.
  • Management and orchestration is challenged by the distributed nature of edge compute architectures. This will remain a challenge until suitable standards and best practices are developed.
  • Many edge compute use-cases are prone to market friction, particularly where the status quo is being disrupted. Market friction is often underestimated and plays an important role in determining which solutions gain traction and which don't.
  • The ecosystems needed to support edge compute solutions are often complicated. Notable complications include specific regulatory demands and constraints created by legacy operations (e.g. energy, smart-city and automotive). Accordingly, edge compute market opportunities lie with players who have the ability and incentive to overcome ecosystem complexities (e.g. enterprises with specific IoT requirements, automotive players and network operators).

The challenges with edge compute will play an important role in defining its future. Clearly some use-cases are easier to implement than others, and adequate business cases are needed to gain market momentum. It is crucial for edge compute use-cases to align with tangible opportunities within targeted industry verticals. Exhibit 2 illustrates a sample of use-cases for edge computing and highlights the diversity of the edge compute platforms needed, ranging from low-end devices in residential environments (e.g. Raspberry Pi controllers), to mid-tier integrated platforms for enterprise IoT solutions, hybrid platforms that combine integrated IoT with high performance compute platforms, and high-end server platforms for operators to support advanced network functions. For the purposes of the forecast summarized in Exhibit 1, four reference architectures were considered and sized for the targeted use cases. These reference architectures include:

  • Low-end platforms, such as residential gateways and specialized computing devices (e.g. Raspberry Pi);
  • Mid-tier platforms, which deliver edge compute capabilities for specific services such as IoT (e.g. HPE Edgeline EL1000/EL4000);
  • Hybrid platforms, which combine mid-tier platforms with high performance compute servers for local processing, such as real-time video analytics; and
  • High performance platforms, which essentially encompass specialized platforms for compute, control and storage/caching. Similar platforms are deployed in data center environments (e.g. HPE ProLiant DL360/DL380).
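
For clarity, the tiering used in the forecast can be captured in a simple structure. The Python sketch below merely encodes the four reference architectures listed above, using the example platforms named in this note; the field names are our own illustrative choices and are not part of the forecast model.

  # Illustrative encoding of the four reference architectures used for sizing.
  from dataclasses import dataclass

  @dataclass
  class ReferenceArchitecture:
      tier: str
      role: str
      example_platforms: tuple

  REFERENCE_ARCHITECTURES = (
      ReferenceArchitecture("Low-end", "Residential gateways and specialized devices",
                            ("Raspberry Pi",)),
      ReferenceArchitecture("Mid-tier", "Integrated edge compute for specific services such as IoT",
                            ("HPE Edgeline EL1000", "HPE Edgeline EL4000")),
      ReferenceArchitecture("Hybrid", "Mid-tier platforms combined with high performance compute servers",
                            ()),
      ReferenceArchitecture("High performance", "Specialized compute, control and storage/caching platforms",
                            ("HPE ProLiant DL360", "HPE ProLiant DL380")),
  )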

Exhibit 2: Edge compute use-cases must be carefully defined within the targeted verticals
Source: Tolaga Research, 2017

Network Operators Might Be Blind-Sided by Edge Compute Opportunities

If network operators play their cards right, they are in the 'box-seat' to use edge computing to bring value back closer to the network. However, they face competition from companies like Amazon and Microsoft, and from other initiatives that focus on consumer and enterprise end-to-end opportunities. Both Amazon and Microsoft are promoting proprietary enterprise edge compute solutions that are tightly coupled with their respective cloud platforms and targeted specifically at IoT services. Cisco is spearheading thought leadership in edge computing with its fog computing concept, which lays out a framework for distributing compute functionality throughout the ecosystems between end-point devices and cloud computing data-centers. The Open Edge Compute Initiative was recently established to capitalize initially on technology developed at Carnegie Mellon University to extend cloud capabilities to edge infrastructure. The technology uses so-called "Cloudlets" that are based on OpenStack, with novel extensions (OpenStack++) to enable agile VM image management on edge computing platforms.

The edge compute initiatives being pursued by network operators are tied to several industry standards, including:

  • Multi-Access Edge Computing (MEC), which is spearheaded by ETSI. MEC was originally architected specifically for mobile networks and named Mobile Edge Computing. The scope of MEC has subsequently been broadened to incorporate other access technologies. However, current MEC functionality and its associated use cases are still anchored primarily in 3GPP access network technology. Infrastructure vendors like Huawei and Nokia are strong proponents of MEC and are embedding its capabilities throughout their network equipment.
  • CORD (Central Office Re-Architected as a Data Center), which was initially pioneered by AT&T to apply data center technology in central office (CO) locations. Network operators have hundreds, and in some cases thousands, of COs deployed at the edge of their networks. These COs support numerous vertically integrated services and applications. For example, we estimate that AT&T has in excess of 4,700 COs, which contain over 300 unique hardware applications. The value proposition for CORD is to drive cloud based data center design principles into COs, to virtualize and cloudify existing services, and to create design blueprints for residential (RCORD), enterprise (ECORD) and mobile-centric (MCORD) solutions; and
  • CUPS (Control and User Plane Separation), which is an important 3GPP initiative to simplify mobile edge compute architectures and operations. CUPS enables user and control plane separation in the EPC's Serving and Packet Data Network Gateways (S-GW, P-GW) and Traffic Detection Functions (TDF). In the context of MEC, CUPS allows the user plane to be distributed to the network edge with a centralized control plane, such as in the mobile core; a conceptual sketch follows this list. We believe that CUPS is essential functionality for MEC architectures to function as intended.
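
The CUPS idea above can be illustrated with a brief conceptual sketch. The Python snippet below models a centralized control plane that anchors each session on a user-plane node close to the subscriber; the class and method names are illustrative only and do not correspond to a 3GPP interface or vendor implementation.

  # Conceptual CUPS sketch (illustrative names only): session state and signalling
  # stay in a central control plane, while traffic is steered to a distributed
  # user-plane node near the subscriber so it can break out at the network edge.
  from dataclasses import dataclass

  @dataclass
  class UserPlaneNode:
      site: str      # e.g. a central office or edge data center
      region: str    # coarse location used for selection

  class CentralControlPlane:
      def __init__(self, user_planes):
          self.user_planes = user_planes
          self.sessions = {}

      def establish_session(self, subscriber_id, subscriber_region):
          # Prefer a user-plane node in the subscriber's region; fall back to the core.
          selected = next((up for up in self.user_planes if up.region == subscriber_region),
                          self.user_planes[0])
          self.sessions[subscriber_id] = selected
          return selected

  control = CentralControlPlane([UserPlaneNode("core-dc", "national"),
                                 UserPlaneNode("edge-co-42", "metro-east")])
  print(control.establish_session("subscriber-001", "metro-east").site)  # -> edge-co-42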

Conclusion

The business cases for edge compute can be challenging and will determine which use-cases thrive and which don't. Campus or regionally oriented edge compute architectures are appealing since they place infrastructure where it is needed. These architectures are complemented by the development of integrated platforms that are optimized for specific use-cases, such as IoT service management. Companies like Amazon and Microsoft are capitalizing on integrated platforms for their respective edge compute offerings.

When wide-area-network implementations are deployed, network operators must pay careful attention to the number of edge compute sites they implement and the compute functionality they provision. Because high-end edge compute platforms are needed to support NFV and cloud-RAN capabilities, it is currently prohibitively expensive to deploy those capabilities across the central office footprint of a typical network, as is intended with CORD. However, CORD-based implementations are currently viable for end-user services that can be supported with less expensive integrated edge compute infrastructure.

Although edge compute is nascent, it is already witnessing demand from enterprise-led implementations, particularly for IoT applications. We believe that it is important for network operators to broaden their edge compute initiatives to capitalize on this demand. In addition, we believe that if network operators don't align their edge compute strategies with tangible end-user demands, they might fall into a trap similar to the one the industry experienced in the past with the IP Multimedia Subsystem (IMS).

About the Author
Dr. Phil Marshall

Phil Marshall is the Chief Research Officer of Tolaga, where he leads its software architecture and development and directs Tolaga's thought leadership for Internet of Things (IoT) and mobile industry research. Before founding Tolaga, Dr. Marshall was an executive at Yankee Group for nine years, where he most recently led its service provider technology research globally, spanning wireless, wireline, and broadband technologies and telecommunication regulation. He serves on the Advisory Board of Strategic Venue Partners, is an Industry Advisor for Silverwood Partners, an investment bank, and was a non-executive board member of Antone Wireless, which was acquired by Westell in 2012.

Marshall has 20 years of experience in the wireless communications industry. He spent many years working in various engineering operations, software design, research and strategic planning roles in New Zealand, Mexico, Indonesia and Thailand for Verizon International (previously Bell Atlantic International Wireless) and Telecom New Zealand.

In addition, Marshall was an Electrical Engineer at BHP New Zealand Steel before he attended graduate school. He has a PhD in Electrical and Electronic Engineering, and is a Senior Member of the IEEE and the System Dynamics Society. His technical specialty is radio engineering and advanced system modeling, and his operational experience is primarily in communications network design, security and optimization.
