Vested Interests Drive Divergence in Edge Computing

June 2018


Executive Summary

Edge computing promises to disrupt traditional compute architectures and is touted as being as strategically important as cloud computing. There is no clear technical definition for edge computing, and industry players are developing solutions that favor their respective market positions. Network operators are developing solutions that collocate edge compute infrastructure with their networks. Web-scale providers have on-premise edge compute solutions with the aim of controlling value creation, and embedded solution providers emphasize the role of end-point devices in delivering edge compute capabilities. Edge computing is also shaped by the technical demands that drive its adoption. These technical demands include the need for ultra-low latency connectivity, security and privacy, bandwidth management, local service control, service continuity, advanced analytics and automation, device energy efficiency, and connected operations in constrained environments.

Edge computing is supported by a diverse range of industry standards, initiatives and proprietary solutions. Notable standards and initiatives, examined later in this report, include Multi-Access Edge Computing (MEC), CUPS, Open CORD, the Open Edge Computing Initiative, OpenFog, the Edge-X Foundry and the Edge Computing Consortium.

We expect that the diversity of edge computing initiatives will continue to proliferate over the next 12-24 months as the market continues to develop and define itself. However, after this period, we expect that initiatives will consolidate around key functionality, such as that associated with the communications industry, industry verticals and consumer markets. Unlike the cloud computing market, which has consolidated to become dominated by a handful of players, we expect that edge computing will retain greater supplier diversity because of its distributed architecture and specialist functionality.

Although edge computing will be complementary to centralized cloud architectures for the foreseeable future, it will disrupt traditional cloud-centric business models. Industry players cannot afford to ignore edge computing and must navigate the uncertainties that it creates and the changes that it heralds.

Vested interests define edge

There are familiar moments in history when seemingly modest innovations result in tectonic shifts in the business models of entire industries. Notable examples include the iPhone in January 2007, or more importantly its touch screen technology, and Amazon’s AWS cloud Infrastructure-as-a-Service (IaaS) solution, which was launched almost a year earlier. Both these innovations combined with Internet and mobility technology have played a tremendous role in defining cloud services and will impact the future of edge computing.

Cloud computing enables applications to operate in massive data centers with centralized architectures provided by web-scale companies like Amazon, Facebook, Google, and Microsoft. Commonly, cloud applications are peered directly with user-friendly end-point devices. This peering has generally pushed value creation away from communication networks and towards end-point devices and cloud data-centers, benefiting web-scale providers and device manufacturers largely at the expense of network operators.

Cloud computing has enabled tremendous growth in consumer and enterprise-led digital services and the proliferation of connected devices. This growth is stressing current systems and creating opportunities for edge computing solutions, with distributed (as opposed to centralized) compute architectures.

Edge computing is not a new concept and has been widely adopted for:

  • Smart consumer electronics devices provided by companies like Samsung and Whirlpool.
  • Connected enterprise and industrial infrastructure equipment (such as programmable logic controllers (PLC)) provided by companies like ABB, Honeywell, Siemens, General Electric and Rockwell, and;
  • Content distribution network (CDN) solutions that are operated by companies like Akamai, Amazon, Fastly and Verizon.

Although centralized (cloud) and distributed (edge) architectures are complementary, edge computing has the potential to disrupt the business models of incumbent players, depending on how it is implemented. Edge computing has a broad definition, which allows for compute functionality anywhere from just outside a data-center to the end-point devices themselves. Network operators favor opportunities to collocate edge computing and network infrastructure with the aim of bringing value creation back to network environments. Operators are also using edge computing for their own network advancements, such as cloud RAN and other virtualized network functions. In contrast, web-scale companies are providing edge compute solutions that are targeted towards enterprise IoT. Commonly these solutions seek to maintain value at the ecosystem peripheries, where web-scale companies have greater strength. Other players, like embedded system providers, have edge compute strategies that tend to concentrate value within the end-point devices themselves.

The divergent approaches towards edge computing are reflected in a variety of initiatives that are being pursued by industry players. Key initiatives are investigated in this report with a focus towards their salient characteristics relative to key technology and market drivers.

Why edge computing and when is it needed?

Edge and centralized cloud computing will coexist for the foreseeable future. Today centralized cloud services are prevalent, and for these services there is generally an economic cost for migrating to edge compute platforms. While data-center inspired design principles are being used for edge computing, they come with added expense and complexity and lack the economies of scale of centralized cloud platforms. Notable factors that constrain the adoption of edge computing include the following:

  • Existing services are designed and deployed in centralized cloud compute environments. Since these services have been designed in the absence of edge computing, a classic “chicken and egg” dilemma commonly occurs. This dilemma constrains market development, even for services that are better suited to edge compute architectures, because it encourages status quo design principles.
  • Edge compute implementations can be costly and complex to scale. All things being equal, the cost and complexity for workloads operating in centralized environments are less than those operating in distributed edge computing environments. As a result, designs typically start with centralized architectures and only become distributed with edge computing if the centralized architectures prove inadequate. However, design philosophies that emphasize centralized architectures have their flaws, since all things are not necessarily equal. This is particularly the case as digital services proliferate and become increasingly demanding, and edge computing solutions mature, and;
  • Ecosystem Friction. The ecosystems required for edge computing are generally more complicated than those for centralized cloud-based architectures. Collaboration is often needed amongst ecosystem players. While this collaboration may be based on seemingly benevolent partnerships, it is likely to change as players jockey for competitive advantage with incentive-led ideologies.

Even though edge computing is confronted with market constraints, there are specific service requirements that drive its adoption. These requirements include reduced service latency, security and privacy, bandwidth management, local service control, service continuity, advanced analytics and automation, device energy efficiency, and operations in constrained environments.

Managing latency with edge computing

Today most digital services can cope with the connection latencies associated with centralized cloud services. These latencies tend to be on the order of hundreds of milliseconds, a consequence of delays in the transmission medium (e.g. fiber-optic cable) and in gateway, routing and server functions. Connection latencies can be improved with optimal routing and server platform architectures. However, latencies are fundamentally limited by the speed of light over the physical transmission medium between the client and server devices. For example, light travels at approximately 203 meters/microsecond in typical fiber links (i.e. fiber with a refractive index of approximately 1.47). This defines the theoretical latency limit for a given geographical separation between a client device and its application server.
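The speed-of-light bound described above is simple to compute. The sketch below estimates the best-case round-trip latency for a given fiber run; the refractive index matches the figure cited above, while the Auckland-to-US-West cable distance is an illustrative assumption, not a measured value.

```python
# Minimal sketch: theoretical minimum round-trip latency over fiber.
# Refractive index of 1.47 as cited above; cable distance is assumed.

C_VACUUM_M_PER_S = 299_792_458  # speed of light in vacuum
REFRACTIVE_INDEX = 1.47         # typical single-mode fiber

def min_round_trip_ms(fiber_km: float) -> float:
    """Lower bound on round-trip latency for a given one-way fiber run."""
    v = C_VACUUM_M_PER_S / REFRACTIVE_INDEX   # ~203.9 m/us in fiber
    one_way_s = (fiber_km * 1000) / v
    return 2 * one_way_s * 1000               # round trip, in milliseconds

# An assumed ~11,000 km cable run (roughly Auckland to the US West Coast)
# gives a best case of about 108 ms before any routing or server delay.
print(round(min_round_trip_ms(11_000), 1))  # 107.9
```

Any measured round trip can only exceed this bound, which is why geographic proximity, and therefore edge placement, matters for latency-critical services.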

The tolerable latencies for several services are summarized in Exhibit 1. In particular, Exhibit 1 identifies those services that don’t require edge computing, benefit from edge computing, or for which edge computing is mandatory. Most digital services deployed today can cope with the connection latencies associated with centralized cloud services. Many services can be improved with the benefit of edge computing, and edge computing enables new services, such as those associated with autonomous transportation, haptics, robotics, augmented and virtual reality (AR/VR) and real-time manufacturing.

Exhibit 1: Tolerable latencies for digital services and the role of edge computing
Source: Tolaga Research 2018

Tolaga conducted a series of latency measurements for Amazon’s AWS-EC2. Amazon has 55 data centers (so-called Availability Zones) clustered in 18 regions, see Exhibit 2. In addition, Amazon has been expanding its edge compute infrastructure across all the regions where it operates. To investigate latencies over large geographical distances, measurements were conducted with a client device located in Auckland, New Zealand. A modified “TraceRoute” algorithm was used, and connection latencies were measured relative to their theoretical minima.

For the measurements conducted, the theoretical minimum round-trip latencies ranged between 22 and 178ms, and the measured round-trip latencies ranged between 92 and 360ms. Based on these results, the incremental latencies relative to the theoretical minima ranged between 14 and 208ms. These results demonstrate that not only are protracted latencies problematic for latency-sensitive services, but so are the variations in latency that can occur between sessions.

Exhibit 2: Amazon EC2 round-trip connection latency measurement results based on a modified “TraceRoute” algorithm
Source: Amazon and Tolaga Research, 2018
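The incremental-latency calculation behind these results can be sketched as follows. The per-region figures below are hypothetical samples chosen to fall within the reported ranges, not Tolaga's actual measurements.

```python
# Sketch: incremental latency relative to the theoretical minimum.
# Sample figures are hypothetical, in the spirit of the Exhibit 2 results.

def incremental_latency_ms(measured_ms, theoretical_min_ms):
    """Overhead added by routing, gateway and server functions beyond
    the speed-of-light bound, for each measured connection."""
    return [m - t for m, t in zip(measured_ms, theoretical_min_ms)]

theoretical = [22, 60, 130, 178]   # ms, speed-of-light lower bounds
measured    = [92, 101, 210, 360]  # ms, observed round trips

print(incremental_latency_ms(measured, theoretical))  # [70, 41, 80, 182]
```

The spread of these increments, not just their magnitude, is what makes centralized architectures hard to engineer for latency-sensitive sessions.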

Edge computing brings bandwidth relief

As connected devices proliferate, data traffic volumes are growing at an unprecedented rate. The number of connected devices globally is forecast to increase from 18 to 30 billion between 2018 and 2023, see Exhibit 3. Tolaga estimates that on average each device will generate between 40 and 105 kB/s of useful data, which corresponds to 2.5-3.6 PB/s (10^15 bytes/second) of useful data across the 30 billion connected devices forecast in 2023. Less than ten percent of the useful data from connected devices is transported to the cloud today, largely because of network transport costs and complexities. Edge computing can be used to process device data locally and alleviate transport costs. Furthermore, since access networks constitute between 60 and 80 percent of network transport costs, we believe that when edge computing is used for bandwidth management it will predominantly have on-device and on-premise implementations.
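A minimal sketch of how edge processing keeps most device data off the transport network: raw readings are summarized locally, and only compact statistics are backhauled. The field names and summary format here are illustrative assumptions, not any particular platform's schema.

```python
# Sketch: an edge node summarizes raw sensor readings locally and
# forwards only a compact summary to the cloud, saving transport cost.

def summarize_window(readings: list[float]) -> dict:
    """Reduce a window of raw samples to a few statistics for backhaul."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

raw = [20.1, 20.3, 19.8, 20.0, 35.2, 20.2]  # e.g. one minute of samples
summary = summarize_window(raw)

# Six raw samples collapse to four numbers; at scale, this is how edge
# processing keeps most "useful data" off the transport network.
print(summary["count"], round(summary["mean"], 2))  # 6 22.6
```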

Edge computing for security, privacy and local service control

As society becomes increasingly dependent on digital services and connected devices, the scope and magnitude of security vulnerabilities are on the increase. High profile attacks are now commonplace, and executed by bad-actors who are aided by the prevalence of malicious software and motivated by the greater impact of their attacks (see our recent publication, Taking Communication Network Security to New Heights).

Even centralized servers with the most sophisticated protection are vulnerable to malicious attacks and require security operations to rapidly identify and respond to attacks when they occur. For example, when the Mirai malware was launched in 2016, it wreaked havoc on the Internet with distributed denial of service (DDoS) attacks. These attacks and many others capitalize on centralized web server architectures to concentrate their impact, and are much less effective in targeting distributed edge compute environments.

As more 'things' become connected, privacy and anonymity become harder to protect. Server-less, edge-based artificial intelligence has been implemented by companies including Amazon (e.g. Alexa) and Google (e.g. Nest) and startups like IC Realtime and Lighthouse AI. These solutions enable private and sensitive device data to be processed and analyzed locally in the end-point devices. Similar solutions are incorporated in smart-CCTV-camera technology provided by start-up companies like Horizon Robotics and Intellivision. With these solutions, key CCTV images (such as those associated with anomalous or relevant activity) are processed locally, and only meta-data describing the images is sent to cloud servers.
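The metadata-only pattern described above can be sketched as follows. The anomaly rule and metadata fields are illustrative assumptions, not any vendor's actual implementation.

```python
# Sketch: frames are analyzed on the device and only descriptive
# metadata for anomalous frames leaves the premises; pixel data stays
# local. Threshold and field names are illustrative assumptions.

def frame_metadata(frames):
    """Yield metadata for frames whose motion score crosses a threshold."""
    THRESHOLD = 0.8
    for ts, motion_score in frames:
        if motion_score >= THRESHOLD:
            yield {"timestamp": ts, "event": "motion", "score": motion_score}

# (timestamp, motion_score) pairs; only two cross the threshold.
frames = [(0, 0.1), (1, 0.95), (2, 0.2), (3, 0.85)]
uploads = list(frame_metadata(frames))
print(len(uploads))  # 2 - the raw frames never leave the edge device
```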

Exhibit 3: Connected device forecast and useful data volumes
Source: Tolaga Research, 2018

Service continuity in constrained environments with energy efficient devices

Although network coverage is always improving, there are and will continue to be network coverage holes, areas where network bandwidth is inadequate or expensive, and areas where fringe coverage results in severely compromised device battery life. Extreme cases include operations in remote mining and oil and gas plants. Other less extreme cases include asset tracking and remote infrastructure monitoring in areas where network availability is intermittent.

Edge compute technology with local area network capabilities can be used to enable service continuity in areas where wide area coverage is lacking. Depending on technical and commercial priorities, the edge compute solutions might use:

  • Proprietary platforms that are developed in-house and managed by internal IT or OT organizations.
  • Proprietary platforms provided by industrial companies like ABB, GE (GE Predix), Honeywell (Mobility Edge) and Siemens (Industrial Edge), with vertically integrated solutions that are targeted towards particular markets.
  • Proprietary platforms that are hosted by companies like Amazon (Greengrass) and Microsoft (Azure IoT), and;
  • Standards based platforms provided by enterprise IT and telecom service providers

Analytics and automation

Analytics and automation are being implemented throughout entire connectivity ecosystems. However, as connected devices evolve with diverse scope and functionality, the associated ecosystems grow in complexity. Edge computing is used to localize the functionality and reduce the implementation complexities for analytics and automation. Rather than requiring the coordination of many companies across complex ecosystems, the entire functionality can be supported by the edge compute platform of a single company. For example, analytics and automation are at the heart of the edge computing solutions provided by industrial infrastructure providers like ABB, General Electric, Honeywell, Rockwell and Siemens. Although the Amazon Greengrass and Microsoft Azure IoT platforms have started out with a focus towards enabling service continuity for IoT applications, they are rapidly evolving to incorporate functionality for analytics and operational automation.

Vertical markets have unique edge computing demands

The demand drivers for edge computing, and the associated functionality that is needed, vary amongst industry verticals and the specific solutions being addressed. Vertical market studies will be investigated in upcoming Tolaga reports, and notable verticals and services are summarized in Exhibit 4. The services that are identified for each of the verticals are those that would benefit from (and in some cases require) edge computing capabilities. In addition to existing services, new services and features are likely to emerge once edge computing solutions become available. These include solutions for augmented and virtual reality and autonomous systems, local data collection and analytics, machine learning and artificial intelligence, and persistent connectivity in remote areas such as oil and gas fields.

Exhibit 4: Vertical market services and solutions that depend on edge computing
Source: Tolaga Research, 2018

Edge computing initiatives are proliferating

Many initiatives and framework architectures are being developed for edge computing. These commonly have biases that reflect the interests of their sponsors and the diverse implementation scenarios for edge computing solutions. The edge compute initiatives and framework architectures are supported by seemingly mutually beneficial partnerships to address ecosystem complexities. We expect that these partnerships will change as the market matures and players seek competitive advantage. Notable edge computing initiatives can be divided into several categories, including those being pursued by the communications industry, proprietary solutions, and other initiatives and frameworks with broader market applicability.

Communications industry leads network-centricity

The communications industry is theoretically well positioned to capitalize on edge computing since its networks are inherently distributed. For communication network operators, edge computing provides:

  • Opportunities to drive network-centric value creation by aligning solutions with network ecosystems.
  • Opportunities to relieve the network demands by offloading traffic at the edge, and;
  • Support for next generation network demands, such as virtual network functions (VNF) and cloud RAN solutions

There are several well-known edge computing initiatives being spearheaded by the communications industry, which include Multi-Access Edge Computing (MEC), CORD (Central Offices Re-Architected for Data-Centers), and Control User Plane Separation (CUPS).

Multi-Access Edge Computing (MEC)

Multi-Access Edge Computing (MEC) is an ETSI standard that was originally developed with a mobile industry focus. It was spearheaded by Nokia and has been adopted by many other players across the communication industry value chain.

MEC currently has 19 active work items, with the support of 21 companies. Exhibit 5 summarizes the number of work items that each of these companies are participating in.

In September 2017, MEC announced a collaborative partnership with the OpenFog Consortium. While this collaboration makes sense, we believe that MEC must attract greater participation from vertical industry players, such as ABB, Bosch, General Electric, Honeywell, Philips, Rockwell, Siemens and Whirlpool.

Exhibit 5: Contributors to active MEC Work Items in June 2018
Source: Tolaga Research, 2018

Today MEC functionality is primarily anchored with 3GPP based mobile access technologies. However, MEC has several active Work Items that incorporate other access technologies beyond mobile. Early MEC standardization efforts have focused on exposing radio network performance management capabilities and device location information, for applications to use. MEC supports RESTful centric APIs to orchestrate and manage applications. The stated use cases for MEC include the following:

  • Consumer orientated: gaming, augmented/assisted reality, remote desktop, cognitive assistance etc.
  • Network operator and third party services: big data preprocessing, active device location tracking, cognitive assistance, connected vehicles and;
  • Network performance related services: quality of experience, content/DNS caching, and network performance and video optimization.

Exhibit 6 illustrates a generic network architecture that incorporates MEC. Devices connect via multi-access environments to MEC platforms that contain the edge compute functions. The MEC platforms can reside anywhere within edge networks and can support the needs of multiple applications. Input parameters from networks (such as bandwidth utilization) and devices (such as their location) are used to support the capabilities provided by the MEC.

Currently MEC solutions are focused on application enablement (i.e. discovery, security and communication) with standardized APIs (based on OpenAPI, with standardized network and context exposure) and mobile access-centric management and orchestration frameworks. As the scope of MEC has broadened towards multi-access functionality, so have the associated work items, which now include Wi-Fi and fixed access technologies, support for a broader range of virtualized environments (including NFV), dedicated Vehicle-to-X capabilities, direct interaction with charging platforms, and regulated systems such as law enforcement.
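As a rough illustration of the RESTful style of these APIs, the sketch below shows an application querying a MEC-style endpoint for radio network context. The endpoint path, response fields and client structure are illustrative assumptions, not the ETSI-defined interfaces.

```python
# Hedged sketch of an application consuming a MEC-style RESTful API for
# radio network context. Paths and fields are illustrative assumptions.

import json

class MecClient:
    def __init__(self, transport):
        self.transport = transport  # injected so it can be stubbed/tested

    def radio_quality(self, cell_id: str) -> dict:
        """Fetch network context (e.g. cell load) exposed to applications."""
        body = self.transport("GET", f"/rni/v1/cells/{cell_id}/quality")
        return json.loads(body)

# Stub transport standing in for an HTTP library against a MEC host.
def fake_transport(method, path):
    return json.dumps({"cell": path.rsplit("/", 2)[-2], "load_pct": 37})

client = MecClient(fake_transport)
print(client.radio_quality("cell-42")["load_pct"])  # 37
```

The key point is architectural: the network exposes context (load, location) through open APIs so that edge applications can adapt to radio conditions.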

Exhibit 6: Generic Multi-Access Edge Computing (MEC) Architecture
Source: Tolaga Research, 2018

Control and User Plane Separation of EPC Nodes (CUPS)

Control and user-plane separation is commonplace in traditional communication network architectures (e.g. SS7) and is essential for many proposed edge computing services and applications. The evolved packet core (EPC) in 4G networks introduced some control/user plane separation. This came with the introduction of the Mobility Management Entity (MME) to manage control plane functions relating to mobility, and S-GW and P-GW platforms to manage the user plane and the remaining control plane functions. CUPS was introduced in 3GPP-Release 14 and further separates the control and user planes within the S-GWs and P-GWs. It also introduces comparable control/user plane separation in the traffic detection functions (TDF), which were introduced in 3GPP-Release 11.

When CUPS is used with MEC it allows independent user-plane and control plane designs. For example, a user-plane might have highly distributed termination points, with a centralized control plane. Notable use cases for CUPS include the following:

  • High bandwidth video optimization, with centralized service and highly distributed content management.
  • IoT services that have significantly more signaling relative to payload traffic.
  • Low latency services, with centralized control of edge-terminated data traffic, and;
  • Business and mission critical services, with centralized control to manage critical edge network traffic.
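The division of labor behind these use cases can be sketched as follows: a single centralized control plane selects among distributed user-plane nodes so that traffic terminates near the user. Node names and coordinates are illustrative assumptions.

```python
# Sketch of the CUPS idea: one centralized control plane steers sessions
# to distributed user-plane nodes. Names/coordinates are illustrative.

import math

USER_PLANE_NODES = {          # distributed user-plane termination points
    "edge-akl": (-36.8, 174.8),
    "edge-syd": (-33.9, 151.2),
    "edge-sin": (1.35, 103.8),
}

def select_user_plane(user_lat: float, user_lon: float) -> str:
    """Control-plane decision: pick the nearest user-plane node."""
    def dist(node):
        lat, lon = USER_PLANE_NODES[node]
        return math.hypot(lat - user_lat, lon - user_lon)
    return min(USER_PLANE_NODES, key=dist)

# A user in Auckland is steered to the local edge node, while session
# control remains with the single centralized control plane.
print(select_user_plane(-36.85, 174.76))  # edge-akl
```

In a real deployment the selection would weigh load, policy and topology rather than raw distance, but the control/user split is the same.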

Exhibit 7: CUPS implementation in a mobile network environment
Source: Tolaga Research, 2018

CUPS also enables Software Defined Networking (SDN) functionality. With this functionality, MEC capabilities can be delivered more effectively and in a complementary manner relative to other 3GPP features, such as Flexible Mobile Steering Services (FMSS), which was also introduced in 3GPP-Release 14.

CUPS deployments involve much more than infrastructure technology upgrades. In particular, when network operators deploy CUPS in their next generation network environments, they must contend with legacy environments, the need for re-architected network boundaries, and changes to related operational activities.

Central Office Re-architected as a Datacenter (Open CORD)

Tolaga estimates that telecom operators have over 430 thousand central offices deployed globally. Since these central offices are geographically distributed across network footprints, they have been targeted for edge compute deployments using Open CORD. CORD was initially spearheaded by AT&T in the United States and is seeing widespread support from the telecom operator community.

The value proposition for CORD is relatively straightforward. For example, AT&T has 4700 COs which contain over 300 unique hardware applications. With CORD, AT&T aims to:

  • Drive cloud-based data center design principles to COs.
  • Virtualize and cloudify existing services, and;
  • Create design blueprints for residential (RCORD), enterprise (ECORD) and mobile-centric (MCORD) solutions.

Exhibit 8 shows a functional diagram for mobile-CORD (MCORD), with support for MEC and CUPS, and a variety of service functions including resource scheduling, network slicing, analytics and service chaining.

Exhibit 8: Open-CORD functional diagram with MCORD to support MEC, CUPS and enable resource scheduling, network slicing, analytics and service chaining
Source: Tolaga Research, 2018

Legacy and vested interests define edge compute designs - at least for now

The "edge" can reside anywhere outside of a data center, extending to end-point devices. While the communication industry drives initiatives that generally prioritize the placement of edge computing within communication network environments, other players are taking different approaches. Webscale providers like Amazon and Microsoft have “server-less” initiatives that prioritize edge compute placement in enterprise on-premise locations. In contrast, device and embedded system manufacturers place edge compute functionality directly in end-point devices. For many of these players, including Honeywell and Rockwell, edge computing provides connectivity and a means for cloud-enabling operational technology (OT) devices, such as industrial PCs and programmable logic controllers (PLC).

The diversity of edge compute initiatives is tremendous. Within the broader edge computing landscape, there are several notable initiatives, which include Open Edge Computing, OpenFog, the Edge-X Foundry, and the Edge Computing Consortium, which is spearheaded in China. In addition, a variety of proprietary (“server-less”) architectures are being pursued by webscale companies like Amazon and Microsoft, and industrial companies like General Electric, Honeywell and Siemens.

Open Edge Computing Initiative

The Open Edge Computing Initiative (OEC) is a vendor-neutral standardization effort spearheaded by Carnegie Mellon University to extend cloud capabilities to edge infrastructure. The initiative was established to tackle management difficulties for virtual machines in distributed cloud (edge compute) environments. This culminated in novel extensions to OpenStack (i.e. OpenStack++) with so-called “cloudlets” that enable agile VM image management in edge computing platforms. OEC uses existing cloud-based architectural design principles for compute, storage and networking, and OpenStack with extensions for rapid VM provisioning, handoff across cloudlets, and efficient cloudlet discovery.

The underlying principle of the cloudlet architecture is to capitalize on functional commonality across VMs. Because the base VM functionality is already pre-provisioned, when a new VM is provisioned only a compressed “overlay VM” (the difference between the launch VM and the base VM) needs to be transferred and applied, see Exhibit 9. The solution also leverages standard SDN and NFV based environments.
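The overlay idea can be illustrated with a toy block-level delta. Real cloudlets operate on compressed VM images, but the principle, shipping only the blocks that differ from a pre-provisioned base, is the same; block size and image contents here are purely illustrative.

```python
# Toy sketch of the cloudlet "overlay VM" idea: the overlay records only
# the blocks where the launch image differs from the base image, so only
# the delta travels to the edge node. Sizes/contents are illustrative.

BLOCK = 4  # bytes per block, tiny for illustration

def make_overlay(base: bytes, launch: bytes) -> dict:
    """Record only the blocks where the launch image differs from base."""
    overlay = {}
    for i in range(0, len(launch), BLOCK):
        if launch[i:i+BLOCK] != base[i:i+BLOCK]:
            overlay[i] = launch[i:i+BLOCK]
    return overlay

def apply_overlay(base: bytes, overlay: dict) -> bytes:
    """Reconstruct the launch image from the base plus the overlay."""
    img = bytearray(base)
    for i, blk in overlay.items():
        img[i:i+BLOCK] = blk
    return bytes(img)

base   = b"AAAABBBBCCCCDDDD"
launch = b"AAAAXXXXCCCCDDDD"   # one block changed
ov = make_overlay(base, launch)
assert apply_overlay(base, ov) == launch
print(len(ov))  # 1 - a single 4-byte block needs to be shipped
```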

In addition to providing an open source development environment for its OpenStack++ cloudlet solution, the OEC launched its ‘Living Edge Lab’ in 2017, for OEC members and other ecosystem players to cooperate with each other and the academic community to develop and trial innovative edge computing solutions.

Exhibit 9: Open Edge Computing Initiative introduces OpenStack++ to adapt cloud functionality to edge computing environments
Source: Tolaga Research, 2018

OpenFog Delivers a Hierarchical Edge Architecture for IoT

The OpenFog initiative was originally spearheaded by Cisco and is now under the stewardship of the OpenFog Consortium. Fog emphasizes the migration of cloud functionality to the edge. It has a primary focus towards IoT solutions and a collaborative partnership with ETSI-MEC, which was established in 2017. The Fog framework architecture is generally relevant to most edge computing initiatives because it emphasizes the importance of hierarchical edge designs, see Exhibit 10. As OpenFog develops, its Working Groups are focused on parallel capabilities to enable its hierarchical architecture, including security, scalability, open platforms, automation, remote access systems, and agility and programmability.

Fog hierarchies vary depending on the services being supported. For example, the edge computing hierarchy required for smart-manufacturing is different from that required for cloud RAN implementations. More generally, edge compute hierarchies depend on a variety of technical and commercial factors, regulatory and security considerations, and the impact of legacy operating environments. These include the following:

  • Technical design factors such as service latency, bandwidth demands, the need for local service control, service continuity, device energy efficiencies, and operations in constrained environments.
  • Commercial factors, which are generally dictated by dominant ecosystem players and paying customers and include factors such as the architecture of incumbent systems, availability and suitability of real estate to host the edge computing infrastructure, and strategies to minimize the cost and complexity of edge compute installations.
  • Regulatory and security factors, such as those relating to information privacy and system security.
  • Legacy implementation and operating environments. Commonly edge computing is introduced to existing systems and operating environments and must be implemented accordingly. This drives varied edge computing definitions. For example, a telecom operator will typically define edge computing in the context of where it lies within its network, such as at an access node, or at transmission pre-aggregation and aggregation layers. For the embedded system community, edge computing brings cloud connectivity and intelligence to end-point devices. For IT organizations, edge computing brings functionality back from the cloud and extends the scope of enterprise networks.

The hierarchical framework architecture adopted by Fog addresses the diversity of solutions being targeted for edge computing. It also accommodates ecosystem fragmentation, which is characteristic of distributed systems. This is in contrast to the consolidation that has occurred in centralized cloud environments.

Exhibit 10: OpenFog Consortium defines a hierarchical edge computing architecture
Source: Tolaga Research, 2018

The Edge-X Foundry

The Edge-X Foundry is a Linux Foundation initiative launched in April 2017 that aims to create a standardized framework for edge computing in Industrial-IoT environments. It adopts cloud-native principles and supports both IP and non-IP based protocols. It also uses service management standards that anticipate the diversity of edge devices in industrial IoT environments. The reference architecture for Edge-X is illustrated in Exhibit 11, and has the following key attributes:

  • Device services, with support for a range of protocols typically used in industrial IoT environments, and SDKs for proprietary protocol support.
  • Core services which are required for the Edge-X architecture and include core data, command, metadata and registry and configuration functionality.
  • Replaceable reference services including security services, device and system management, and support and export services, and;
  • Protocol compatibility with northbound infrastructure and applications.

Exhibit 11: Edge-X Foundry reference architecture
Source: Tolaga Research, 2018

Edge Computing Consortium (China)

The Edge Computing Consortium (ECC) was established in 2016 and is an initiative spearheaded by Huawei in China, with the support of Intel and ARM, and other industry stakeholders and academics. ECC is focused towards coordinated Information Technology (IT) and Operational Technology (OT) based solutions and provides a framework for achieving this coordination with vertical industry solutions, see Exhibit 12. Like other edge compute initiatives, the ECC aims to drive cross-industry cooperation for ecosystem development and to reduce operational friction. It provides a structured framework that is underpinned by vertical industry use-cases and aligned with standardized operations, governance and service lifecycle management. Currently the ECC is particularly focused towards key industry verticals including energy, transportation, manufacturing and smart cities.

Although the ECC currently lacks the recognition of other edge computing initiatives, it benefits from the monumental scale of China and is of tremendous strategic importance to Chinese companies like Huawei, other domestic equipment manufacturers, and edge computing solution providers. In the same way that China has incubated its own web-scale providers like Alibaba with cloud computing services, we expect that edge computing in China will follow suit.

Exhibit 12: Edge Computing Consortium Reference Architecture
Source: Tolaga Research, 2018

Proprietary Solutions Accelerate Edge Computing

The edge computing market is fragmented and supported by numerous proprietary solutions that address specific vertical market implementations. This is particularly the case for the smart-devices and appliances associated with industrial and consumer services.

Since edge computing ecosystems are fragmented, they normally require vertical integration, systems integration, or both. Vertical integration is commonplace for consumer and industrial solutions. Smart residential devices, such as smart appliances and entertainment solutions, are vertically integrated with proprietary solutions by companies like Amazon, Google, Haier, Samsung and Whirlpool.

In industrial environments, vertically integrated solutions are commonplace and span a wide range of edge computing platform designs. For example, Siemens has its Industrial Edge product line, which is vertically integrated with its cloud services; Honeywell has its Mobility Edge solution, an extension of enterprise IT environments; and General Electric has its Predix platform, which provides an end-to-end, edge-to-cloud architecture. While proprietary solutions like these will prevail for the foreseeable future, particularly for business and mission critical solutions, there is growing interest in standards-based solutions and “horizontal” platforms provided by companies like Amazon and Microsoft. For example, in April 2018, ABB, HPE and Rittal launched a Secure Edge Data Center solution that targets industrial and telecommunication environments and is pre-integrated with Microsoft’s Azure platform.

New pastures for Amazon with Greengrass and Microsoft with Azure IoT

Amazon Greengrass is a serverless edge computing solution designed as an extension of Amazon’s AWS IoT platform (see Exhibit 13). The solution can be implemented on standard edge compute equipment with programmable AWS Lambda containers and local analytics and machine learning capabilities. Greengrass benefits from an intuitive interface that is characteristic of Amazon Web Services (AWS). Greengrass was initially architected to support the MQTT protocol and enables local MQTT messaging amongst devices as required. It is notable that the Greengrass core currently only supports MQTT QoS level 0 (i.e., messages are sent at most once, with no delivery confirmation), which we believe illustrates its initial targeting of non-mission-critical sensor networks.
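The practical difference between QoS level 0 and higher QoS levels can be sketched with a toy publisher over a lossy link. This is purely illustrative of MQTT’s delivery semantics, not the Greengrass or MQTT API:

```python
# Illustrative sketch of MQTT QoS delivery semantics (not the Greengrass API).
# QoS 0 ("at most once"): the sender fires the message once and never retries,
# so a drop on an unreliable link means the message is silently lost.
# QoS 1 ("at least once"): the sender retries until acknowledged. (Real QoS 1
# can also deliver duplicates when acks are lost; this sketch omits that.)
import random

class LossyLink:
    """Simulates an unreliable network link that randomly drops messages."""
    def __init__(self, drop_rate, seed=0):
        self.drop_rate = drop_rate
        self.rng = random.Random(seed)
        self.delivered = []

    def send(self, message):
        if self.rng.random() >= self.drop_rate:
            self.delivered.append(message)
            return True   # receiver acknowledged
        return False      # message lost, no ack

def publish_qos0(link, message):
    # Fire and forget: one attempt, no acknowledgement, no retry.
    link.send(message)

def publish_qos1(link, message, max_retries=100):
    # Retry until the receiver acknowledges (or retries are exhausted).
    for _ in range(max_retries):
        if link.send(message):
            return True
    return False
```

On a reliable local network the distinction rarely matters, which is consistent with Greengrass’s initial focus on sensor traffic where occasional loss is tolerable.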

By providing local IoT gateway functionality, Greengrass supports local connectivity with service continuity, even when IoT cloud connectivity is intermittent. The Greengrass platform has been validated on a variety of systems, including Raspberry Pi, x86, and platform-on-a-chip solutions from Annapurna Labs (an Amazon subsidiary). Greengrass use cases range from home gateways to small, medium and large enterprises, and are currently targeted at enterprise applications.

Exhibit 13: Amazon Greengrass Functional Architecture
Source: Tolaga Research, 2018

The Microsoft Azure IoT solution competes directly with Greengrass and enables persistent connectivity amongst heterogeneous devices and the Azure IoT back-end, see Exhibit 14. Azure IoT Hubs (edge compute platforms), in conjunction with field gateways, support devices with a variety of connectivity types, including HTTPS, MQTT and AMQP. Microsoft has added its Azure Stream Analytics to the suite of services that can be offered on its edge devices. The IoT Hubs connect to the Azure IoT back-end to support:

  • event-based management,
  • device-to-cloud ingestion,
  • reliable cloud-to-device messaging,
  • per-device authentication, and
  • secure connectivity.
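Per-device authentication against IoT Hub is commonly based on shared access signature (SAS) tokens, in which the device proves possession of its key by signing the resource URI and an expiry time with HMAC-SHA256. The sketch below follows the general shape of that scheme; treat the exact field layout as an approximation rather than a definitive implementation:

```python
# Illustrative sketch of the shared-access-signature (SAS) scheme used for
# per-device authentication against cloud IoT back-ends such as Azure IoT Hub.
# The token layout approximates Azure's published format; verify details
# against the official documentation before relying on it.
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri, device_key_b64, ttl_seconds=3600):
    """Sign the resource URI and expiry with the device's base64 key."""
    expiry = int(time.time()) + ttl_seconds
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    # String to sign: URL-encoded resource URI, newline, expiry timestamp.
    to_sign = f"{encoded_uri}\n{expiry}".encode()
    key = base64.b64decode(device_key_b64)
    signature = base64.b64encode(
        hmac.new(key, to_sign, hashlib.sha256).digest()
    )
    return (
        f"SharedAccessSignature sr={encoded_uri}"
        f"&sig={urllib.parse.quote_plus(signature)}"
        f"&se={expiry}"
    )

# Hypothetical hub name and device key, for illustration only.
device_key = base64.b64encode(b"device-secret").decode()
token = generate_sas_token("myhub.azure-devices.net/devices/sensor-01",
                           device_key)
```

Because the token is short-lived and scoped to a single device’s resource URI, a compromised device cannot impersonate its peers, which is the substance of the per-device authentication claim above.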

Exhibit 14: Microsoft Azure IoT
Source: Tolaga Research, 2018


Edge computing is capturing broad industry interest, buoyed by a variety of factors, including thriving markets for digitization and the widespread adoption of cloud computing. Other than enabling distributed computing, there is no specific architecture that can be associated with edge compute; edge computing infrastructure can reside anywhere outside of data centers, all the way to end-point devices. While the demand drivers for edge computing are clear, they do not necessarily define the edge computing architecture either. Instead, edge compute architectures are heavily influenced by the players involved.

The communications industry generally favors the collocation of edge compute equipment with network infrastructure, whether in access network nodes or in pre-aggregation and aggregation locations in transport networks. Having seen the value of their networks erode with the proliferation of cloud computing, communications operators are seeking opportunities for edge computing to bring value back to their networks. Edge computing initiatives being spearheaded by the communications industry include MEC and CUPS, which are both network-centric initiatives, and CORD, which applies data center technologies in telecom central office environments. While operators are well positioned to develop network-centric edge computing capabilities, we believe that a focus on network-centricity will potentially undermine opportunities to address broader consumer and enterprise edge computing demands, particularly for use-cases that are best suited to on-premise edge compute platforms.

Web-scale providers like Amazon, Facebook, Google and Microsoft have benefited from the proliferation of cloud computing by essentially driving value away from communication networks and toward cloud applications and end-point devices. Amazon and Microsoft have large cloud computing revenues which they are eager to protect as the edge computing market takes hold. This has culminated in Amazon developing its Greengrass platform and Microsoft its Azure IoT edge compute platform. Both platforms target IoT services and are architecturally designed to anchor on-premise or end-point device edge compute functionality with their respective cloud platforms. Both companies are taking pragmatic approaches towards edge computing, but potentially run the risk of being disintermediated by competitors that are not wedded to cloud-centric approaches towards edge computing.

Industrial companies such as General Electric, Honeywell, and Siemens, and consumer electronics providers like Amazon, Google, Haier, Samsung and Whirlpool have proprietary and vertically integrated edge computing solutions. These players benefit from their customer relationships, service offerings, and market reputations, but cannot necessarily rely on vertically integrated approaches prevailing indefinitely. Already, players like ABB are partnering with web-scale providers to deliver end-to-end solutions, an approach we anticipate will become more common, particularly for enterprise services.

Although many edge computing solutions are proprietary today, there is broad industry recognition that standards are particularly important as the market scales. This has culminated in a variety of industry initiatives beyond those being spearheaded by the communications industry. Notable examples include the Open Edge Computing Initiative, the Edge-X Foundry, OpenFog and the Edge Computing Consortium (China). We expect that the number of initiatives will increase over the next 12-24 months, but ultimately consolidate as the edge compute market matures and its salient demands become better understood.

While there are strong market drivers for edge computing, it is not without its challenges. Incumbent services that use centralized cloud platforms create strong incentives for the status quo to be maintained, and the added costs and complexities associated with distributed edge computing architectures must be justified. Rather than disrupting conventional cloud services, edge computing will be complementary. However, industry players cannot afford to ignore edge computing and the role it will play in transforming future business opportunities.