Edge computing moves toward full autonomy

With edge computing already transforming how data is handled, processed, and delivered, edge site operations are becoming as hands-off as possible.

Edge computing is rapidly shedding its reputation as a fringe concept, and both adopters and vendors are setting their sights on the technology’s next goal: fully autonomous deployment and operation.

The edge deployment experience is drawing closer to the simplicity of unboxing a new mobile phone, says Teresa Tung, cloud first chief technologist at IT advisory and consulting firm Accenture. “We’re seeing automated technology that simplifies handling the edge’s unique complexity for application, network, and security deployments.”

The ability to create and manage containerized applications enables seamless development and deployment in the cloud, with the edge simply becoming a specialized location with more stringent resource constraints, Tung says. “Self-organizing and self-healing wireless mesh communications protocols, such as Zigbee, Z-Wave, ISA100.11a, or WirelessHART, can create networks where devices can be deployed ad hoc and self-configure.”
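Protocols like Zigbee handle self-healing at the radio layer, but the core idea can be sketched in a few lines: when a relay node drops out, the remaining nodes simply recompute routes over the links that survive. A minimal illustration, with hypothetical node names and topology:

```python
from collections import deque

def route(links, src, dst, down=frozenset()):
    """Breadth-first search for a path from src to dst,
    skipping any nodes currently marked as down."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for neighbor in links.get(node, ()):
            if neighbor not in seen and neighbor not in down:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no route available

# Hypothetical mesh: a sensor reaches the gateway via either of two relays.
links = {
    "sensor": ["relay-a", "relay-b"],
    "relay-a": ["sensor", "gateway"],
    "relay-b": ["sensor", "gateway"],
    "gateway": ["relay-a", "relay-b"],
}

print(route(links, "sensor", "gateway"))
# If relay-a fails, the mesh self-heals by routing around it:
print(route(links, "sensor", "gateway", down={"relay-a"}))
```

Real mesh protocols add link-quality metrics and duty cycling on top of this, but the ad hoc, no-operator-required routing is the part that makes edge deployments hands-off.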

The decentralization of IT environments to encompass edge systems comes with specific challenges, says Matteo Gallina, principal consultant with global technology research and advisory firm ISG. “Management of devices and services has to be done outside the traditional management sphere, including managing physically inaccessible devices, a high variance of solutions and operating systems, different security requirements, and more,” he says. “The larger and more dispersed the systems get, the more significant the role automation plays to ensure effectiveness and reliability.”

Automation technology innovation led by open source communities

The trend toward automating edge deployments is not unlike the journey into AI, where innovations are led by open source groups, infrastructure manufacturers, and cloud service providers, Tung says. She notes that open source communities—such as LF Edge—are leading innovations and building critical standards definitions in areas such as communication, security, and resource management.

“Infrastructure providers are creating solutions that allow compute to be run anywhere and embedded in anything,” Tung says. “It includes new hardware capabilities that are ultra-low power, ultra-fast, connected anywhere, and ultra-secure and private.” She adds, “5G opens new opportunities for network equipment providers and telecom operators to innovate with both private and public networks with embedded edge compute capabilities.”

At the same time, cloud provider innovations are making it easier to extend centralized cloud DevOps and management practices to the edge. “Just like [the] central cloud makes it easy for any developer to access services, we are now seeing the same thing happening for technologies like 5G, robotics, digital twin, and IoT,” Tung says.

Software-defined integration of multiple network services has emerged as the most important technology approach to automating edge deployments, says Ron Howell, managing network architect at Capgemini Americas. Network security, equipped with Zero Trust deployment methods incorporating SASE edge features, can significantly enhance automation and simplify what it takes to deploy and monitor an edge compute solution. Additionally, once deployed, full-stack observability tools and methods that incorporate AIOps will help proactively keep data and edge compute resources available and reliable.

AI applied to the network edge is now widely viewed as the leading way forward in network edge availability. “AIOps, when used in the form of full-stack observability, is one key enhancement,” Howell says.

A variety of options are already available to help organizations looking to move toward edge autonomy. “These begin with physical and functional asset onboarding and management, and include automated software and security updates, and automated device testing,” Gallina explains. If a device works with some form of ML or AI functionality, AIOps will be needed both at the device level, to keep the local ML model up to date and ensure that correct decisions are made in any situation, and within any backbone ML/AI that might be located on premises or in centralized edge systems.
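One piece of that device-level AIOps loop can be sketched as a version check against a central model registry, with the device keeping its known-good model if the candidate fails validation. Everything here (the registry layout, version numbers, and validation check) is a hypothetical stand-in for a real model-management service:

```python
def sync_model(device, registry, validate):
    """Pull a newer model from the central registry if one exists,
    keeping the old model around so the device can roll back on failure."""
    current = device.get("model_version", 0)
    latest = registry["latest_version"]
    if latest <= current:
        return "up-to-date"
    candidate = registry["models"][latest]
    if validate(candidate):
        device["previous"] = device.get("model")
        device["model"] = candidate
        device["model_version"] = latest
        return "updated"
    return "rejected"  # keep serving the known-good local model

# Hypothetical device and registry state.
device = {"model": "detector-v1", "model_version": 1}
registry = {"latest_version": 2, "models": {2: "detector-v2"}}

print(sync_model(device, registry, validate=lambda m: True))
print(sync_model(device, registry, validate=lambda m: True))  # now current
```

Production systems layer signed artifacts, staged rollouts, and shadow evaluation on top of this basic check-validate-swap cycle.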

Physical and digital experiences come together at the edge

Tung uses the term “phygital” to describe the result when digital practices are applied to physical experiences, such as in the case of autonomous management of edge data centers. “We see creating highly personalized and adaptive phygital experiences as the ultimate goal,” she notes. “In a phygital world, anyone can imagine an experience, build it and scale it.”

In an edge computing environment that integrates digital processes and physical devices, hands-on network management is significantly reduced or eliminated: network failures and downtime are automatically detected and resolved, and configurations are applied consistently across the infrastructure, making scaling simpler and faster.
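The “configurations applied consistently” part is essentially the reconcile pattern from declarative infrastructure tools: compare each device’s actual state against one desired state and correct any drift, with no operator in the loop. A toy version (device names and settings are invented for illustration):

```python
DESIRED = {"firewall": "strict", "log_level": "warn"}

def reconcile(fleet, desired=DESIRED):
    """Detect configuration drift on every device and converge it to
    the desired state, returning which devices needed fixing."""
    drifted = []
    for name, config in fleet.items():
        diff = {k: v for k, v in desired.items() if config.get(k) != v}
        if diff:
            config.update(diff)  # in practice: push config via the device API
            drifted.append(name)
    return drifted

fleet = {
    "edge-01": {"firewall": "strict", "log_level": "warn"},
    "edge-02": {"firewall": "open", "log_level": "warn"},    # drifted
    "edge-03": {"firewall": "strict", "log_level": "debug"}, # drifted
}
print(reconcile(fleet))  # devices that were out of compliance
print(reconcile(fleet))  # second pass: fleet has converged
```

Running the loop continuously is what makes scaling simple: adding a hundred sites adds entries to the fleet, not work for operators.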

Automatic data quality control is another potential benefit. “This involves a combination of sensor data, edge analytics, or natural language processing (NLP) to control the system and to deliver data on-site,” Gallina says. Yet another way an autonomous edge environment can benefit enterprises is with “zero touch” remote hardware provisioning at scale, with the OS and system software downloaded automatically from the cloud.

Gallina notes that a growing number of edge devices are now packaged with dedicated operating systems and various other types of support tools. “Off-the-shelf edge applications and marketplaces are starting to become available, as well as an increasing number of open-source projects,” he says.

Providers are working on solutions to seamlessly manage edge assets of almost any type and with any underlying technology. Edge-oriented, open-source software projects, for example, such as those hosted by the Linux Foundation, can further drive scaled adoption, Gallina says.

AI-optimized hardware is an up-and-coming edge computing technology, Gallina says, with many products offering interoperability and resilience. “Solutions and services for edge data collection—quality control, management, and analytics—are likely to expand enormously in the next few years, just as cloud-native applications have done,” he adds.

AI on Edge automation leaders include IBM, ClearBlade, Verizon, hyperscalers

Numerous technologies are already available for enterprises considering edge automation, including offerings from hyperscaler developers and other specialized providers. One example is KubeEdge, which extends Kubernetes, the open-source system for automating the deployment, scaling, and management of containerized applications, out to edge nodes.
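With KubeEdge, an edge rollout is expressed as an ordinary Kubernetes Deployment pinned to edge nodes with a node selector. The manifest below is shown as a Python dict for illustration; the label key follows KubeEdge’s convention for marking edge nodes, while the app name, image, and resource figures are placeholders:

```python
import json

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "sensor-ingest"},  # hypothetical app name
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "sensor-ingest"}},
        "template": {
            "metadata": {"labels": {"app": "sensor-ingest"}},
            "spec": {
                # Schedule only onto nodes KubeEdge has labeled as edge nodes.
                "nodeSelector": {"node-role.kubernetes.io/edge": ""},
                "containers": [{
                    "name": "ingest",
                    "image": "example.com/sensor-ingest:1.0",  # placeholder
                    # Edge boxes are resource-constrained, so cap usage tightly.
                    "resources": {"limits": {"cpu": "250m", "memory": "128Mi"}},
                }],
            },
        },
    },
}

print(json.dumps(deployment, indent=2))
```

The point is that the edge becomes just another scheduling target: the same declarative object, pipelines, and rollback mechanics used in the central cloud apply unchanged.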

Gallina notes that in 2021 ISG ranked system integrators Atos, Capgemini, Cognizant, Harman, IBM, and Siemens as global leaders in AI on edge technology. Among the leading edge computing vendors are the hyperscalers (AWS, Azure, Google), as well as edge platform providers ClearBlade and IBM. In the telco market, Verizon stands out.

Edge-specific features deliver autonomy and reliability

Vendors are building both digital and physical availability features into their offerings in an effort to make edge technology more autonomous and reliable. Providers generally use two methods to provide autonomy and reliability: internal sensors and redundant hardware components, Gallina says.

Built-in sensors, for example, can use on-location monitoring to control the environment, detect and report anomalies, and may be combined with fail-over components for the required level of redundancy.
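That sensor-plus-failover combination reduces to a small control loop: read the on-board sensor, compare it against a threshold, and switch to the standby unit when the primary looks unhealthy. A simplified sketch, with made-up thresholds and component names:

```python
TEMP_LIMIT_C = 70.0  # hypothetical safe operating threshold

def select_active(primary_temp, standby_temp, limit=TEMP_LIMIT_C):
    """Pick which redundant unit should carry the load, based on
    on-board temperature sensors; escalate if both read anomalous."""
    if primary_temp <= limit:
        return "primary"
    if standby_temp <= limit:
        return "standby"  # fail over to the redundant component
    return "alert"        # both anomalous: report to operators

print(select_active(55.0, 50.0))  # normal operation
print(select_active(82.0, 50.0))  # primary overheats, standby takes over
print(select_active(82.0, 90.0))  # no healthy unit left to fail over to
```

The “alert” branch is what keeps the design honest: autonomy handles the common faults locally, and only the cases redundancy cannot absorb reach a human.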

Tung lists several other approaches:

  • Physical tamper-resistant features designed to protect devices from unauthorized access.
  • Secure identifiers built into chipsets allowing the devices to be easily and reliably authenticated.
  • Self-configuring network protocols, based on ad hoc and mesh networks, to ensure connectivity whenever possible.
  • Partitioned boot configurations so that updates can be applied without the risk of bricking devices if the installation goes wrong.
  • Hardware watchdog capabilities to ensure that devices will automatically restart if they become unresponsive.
  • Boot time integrity checking from a secure root of trust, protecting devices against malicious hardware installation.
  • Trusted compute and secure execution environments to ensure approved compute runs on protected and private data.
  • Firewalls with anomaly detection that pick up unusual behaviors, indicative of emerging faults or unauthorized access.
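The hardware watchdog item in the list above follows a pattern that is easy to show in software: the device must “kick” the watchdog within a deadline, or the watchdog forces a restart. A simulated version (the timeout value is arbitrary):

```python
import time

class Watchdog:
    """Simulated watchdog timer: if kick() is not called within
    `timeout` seconds, check() reports that a reset is required."""
    def __init__(self, timeout):
        self.timeout = timeout
        self.last_kick = time.monotonic()

    def kick(self):
        """Called periodically by healthy firmware to prove liveness."""
        self.last_kick = time.monotonic()

    def check(self):
        if time.monotonic() - self.last_kick > self.timeout:
            return "reset"  # real hardware would power-cycle the device here
        return "ok"

wd = Watchdog(timeout=0.05)
wd.kick()
print(wd.check())  # device is responsive
time.sleep(0.1)    # simulate the device hanging past the deadline
print(wd.check())  # watchdog would now force a restart
```

On real hardware the timer counts down independently of the CPU, so even a fully wedged operating system gets recovered without a site visit.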

Self-optimization and AI

Networks require an almost endless number of configuration settings and constant fine-tuning in order to function efficiently. “Wi-Fi networks need to be adjusted for signal strength, firewalls need to be constantly updated with support for new threat vectors, and edge routers need constantly changing configurations to enforce service level agreements (SLAs),” says Patrick MeLampy, a Juniper Fellow at Juniper Networks. “Nearly all of this can be automated, saving human labor and preventing human mistakes.”

Self-optimization and AI are needed to operate at the edge and determine how to handle change, Tung says. What, for instance, should happen if the network goes down, power goes out, or a camera is misaligned? And what should happen when the problem is fixed? “The edge will not scale if these situations require manual interventions every time,” she warns. Many such issues can be resolved by implementing rules that detect the condition and prioritize application deployment accordingly.
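A rule of that kind can be as plain as a condition-to-action table consulted on every status report, with workloads reprioritized to match. The conditions and actions below are illustrative only:

```python
# Hypothetical remediation rules, evaluated in priority order.
RULES = [
    (lambda s: not s["network_up"],    "buffer data locally, retry uplink"),
    (lambda s: not s["power_ok"],      "shed non-critical workloads"),
    (lambda s: s["camera_misaligned"], "pause vision app, flag for service"),
]

def remediate(status):
    """Return the actions triggered by the current device status,
    in priority order, without any manual intervention."""
    return [action for condition, action in RULES if condition(status)]

status = {"network_up": False, "power_ok": True, "camera_misaligned": True}
print(remediate(status))

healthy = {"network_up": True, "power_ok": True, "camera_misaligned": False}
print(remediate(healthy))  # nothing to do
```

Symmetric rules for recovery (network restored, camera realigned) let the site return to normal operation just as automatically, which is what makes the approach scale past a handful of locations.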

Key Takeaways

The edge is not a single technology, but a collection of technologies working together to support an entirely new topology that can effortlessly connect data, AI, and actions, Tung says. “The biggest innovations are yet to come,” she adds.

Meanwhile, the pendulum is swinging toward more numerous but smaller network edge centers located closer to customer needs, complemented by larger cloud services that can handle additional workloads that are less time-sensitive, less mission-critical, and less latency-sensitive, Howell says. He notes that the one factor that remains immutable is that information must be highly available at all times. “This first rule of data centers has not changed—high quality services that are always available.”