Technological Advances that are Driving Edge Computing Adoption

The evolution of a technology into a pervasive force is often a time-consuming process. But edge computing is different: its impact radius is expanding at an exponential rate. AI is one area where the edge plays a crucial role, as is evident from the investments of companies like Kneron, IBM, Synaptic, Run:ai, and others.

In other industries, such as space tech and healthcare, companies including Fortifyedge and Sidus Space are making big plans for edge computing.

Technological Advances and Questions Regarding App Performance and Security

However, such a near-ubiquitous presence is bound to trigger questions about app performance and security. Edge computing is no exception, and in recent years it has become more inclusive, accommodating new tools that address these concerns.

In my experience as the Head of Emerging Technologies for startups, I have found that understanding where edge computing is headed before you adopt it is imperative. In my previous article for ReadWrite, I discussed the major enablers in edge computing. In this article, my focus is on recent technical developments that aim to solve pressing industrial concerns and shape the future.

WebAssembly to Emerge as a Better Alternative for JavaScript Libraries

JavaScript-based AI/ML libraries are popular and mature for web-based applications. The driving force is their increased efficacy in delivering personalized content by running edge analytics. But they have constraints: they do not provide sandbox-level security, and the JavaScript VM module does not guarantee secure sandboxed execution. Besides, for container-based applications, startup latency is the prime constraint.

WebAssembly is fast emerging as an alternative for edge application development. It is portable and provides security through a sandboxed runtime environment. As a plus, WebAssembly modules start up far faster than cold-starting containers.

Businesses can leverage WebAssembly-based code to run AI/ML inferencing in browsers as well as program logic over CDN PoPs. Its adoption across industries has grown significantly, and research studies support this by analyzing binaries collected from source code repositories, package managers, and live websites. Use cases that recognize facial expressions or process images and videos to improve operational efficiency will benefit the most from WebAssembly.
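To make the sandboxing point concrete, here is a minimal sketch of executing a WebAssembly function from a host process using the wasmtime Python bindings. The inline WAT module and its exported score function are illustrative stand-ins for a compiled inference kernel, not a real model.

```python
# A minimal sketch of running a sandboxed WebAssembly function from Python
# using the wasmtime bindings (pip install wasmtime). The WAT module and the
# exported "score" function are illustrative stand-ins for a compiled
# inference kernel, not a real model.
from wasmtime import Engine, Store, Module, Instance

WAT = """
(module
  (func (export "score") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

engine = Engine()
store = Store(engine)
module = Module(engine, WAT)            # compile the module for the sandbox
instance = Instance(store, module, [])  # no imports: no ambient authority
score = instance.exports(store)["score"]

print(score(store, 2, 3))  # -> 5; the guest never touches host resources
```

Because the instance is created with no imports, the guest code has no access to the host's files, network, or memory outside its own linear memory, which is exactly the isolation property edge deployments need.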

TinyML to Ensure Better Optimization for Edge AI

Edge AI refers to the deployment of AI/ML applications at the edge. However, most edge devices are not as resource-rich as cloud or server machines in terms of computing, storage, and network bandwidth.

TinyML is the use of AI/ML on resource-constrained devices, and it drives the edge AI implementation at the device edge. Under TinyML, the two main optimization approaches are optimizing the AI/ML models and optimizing the AI/ML frameworks, and for the latter, the ARM architecture is a perfect choice.

It is a widely accepted architecture for edge devices, and research studies show that for workloads like AI/ML inferencing, ARM offers better price-to-performance than x86.

For model optimization, developers use model pruning, model shrinking, or parameter quantization.
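As an illustration of parameter quantization, the sketch below uses TensorFlow Lite's post-training quantization path; the tiny Keras model is a placeholder for whatever model you actually intend to ship to the device.

```python
# A minimal sketch of post-training quantization with TensorFlow Lite.
# The tiny Keras model below is a placeholder; in practice you would load
# your trained model before converting it for a resource-constrained device.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables weight quantization
tflite_model = converter.convert()

# The resulting flat buffer is what gets deployed to the edge device.
with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```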

But TinyML comes with a few limitations in terms of model deployment, maintaining different model versions, application observability, monitoring, and so on. Collectively, these operational challenges are called TinyMLOps. With the rising adoption of TinyML, product engineers will increasingly gravitate toward platforms that provide TinyMLOps solutions.

Orchestration to Negate Architectural Blocks for Multiple CSPs

Cloud service providers (CSPs) now provide resources closer to the network edge, offering a range of benefits. However, this poses architectural challenges for businesses that prefer working with multiple CSPs. The ideal solution requires optimal placement of the edge workload based on real-time network traffic, latency demands, and other parameters.

Services that optimally manage the orchestration and execution of distributed edge workloads will be in high demand, but they have to ensure optimal resource management and compliance with service-level agreements (SLAs).
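To make the placement idea concrete, here is a hypothetical sketch of a greedy scheduler that picks the lowest-latency edge site that still has capacity and meets the latency SLA. The site list, field names, and thresholds are illustrative assumptions, not any CSP's API.

```python
# A hypothetical greedy placement sketch: pick the lowest-latency edge site
# that still meets the workload's CPU demand and latency SLA. The site list,
# fields, and thresholds are illustrative assumptions, not a real CSP API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EdgeSite:
    name: str
    latency_ms: float   # measured round-trip latency from the user region
    free_cpu: float     # available vCPUs reported by the site

def place_workload(sites: list[EdgeSite], cpu_needed: float,
                   latency_sla_ms: float) -> Optional[EdgeSite]:
    candidates = [s for s in sites
                  if s.free_cpu >= cpu_needed and s.latency_ms <= latency_sla_ms]
    return min(candidates, key=lambda s: s.latency_ms) if candidates else None

sites = [EdgeSite("csp-a-mumbai", 18.0, 2.0),
         EdgeSite("csp-b-pune", 9.0, 0.5),
         EdgeSite("csp-b-chennai", 24.0, 8.0)]

chosen = place_workload(sites, cpu_needed=1.0, latency_sla_ms=20.0)
print(chosen.name if chosen else "no site satisfies the SLA")  # -> csp-a-mumbai
```

A production scheduler would of course re-evaluate placement continuously as traffic and latency change, rather than making a one-shot decision.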

Orchestration tools like Kubernetes, Docker Swarm, and others are now in high demand for managing container-based workloads and services. These tools work well when the application runs at web scale. But in edge computing, where resources are constrained, the control planes of these orchestration tools are a poor fit because they consume considerable resources themselves.

Projects like K3s and KubeEdge are efforts to adapt and optimize Kubernetes for edge-specific implementations. KubeEdge claims to scale up to 100K concurrent edge nodes, per this test report. These tools will undergo further improvement and optimization to meet edge computing requirements.
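Even on an edge-adapted distribution, workloads are still driven through the standard Kubernetes API. The sketch below uses the official Kubernetes Python client to pin a small inference deployment to edge nodes; the node label, image name, and resource limits are assumptions that depend on how your K3s or KubeEdge cluster is actually set up.

```python
# A sketch using the official Kubernetes Python client (pip install kubernetes)
# to pin a small deployment to edge nodes. The node label, image name, and
# resource limits are assumptions; adjust them to how your K3s/KubeEdge
# cluster actually labels and sizes its edge nodes.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
apps = client.AppsV1Api()

container = client.V1Container(
    name="inference",
    image="registry.example.com/edge-inference:latest",  # hypothetical image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "64Mi"},
        limits={"cpu": "500m", "memory": "128Mi"},  # keep the footprint small
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="edge-inference"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "edge-inference"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "edge-inference"}),
            spec=client.V1PodSpec(
                node_selector={"node-role.kubernetes.io/edge": ""},  # assumed label
                containers=[container],
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```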

Federated Learning to Activate Learning at Nodes and Reduce Data Breaches

Federated learning is a distributed machine learning (ML) approach in which models are trained individually at data sources such as end devices, organizations, or individuals.

When it comes to edge computing, there is a high chance that federated machine learning will become popular, as it can efficiently address issues related to distributed data sources, high data volumes, and data privacy constraints.

With this approach, developers do not have to transfer the training data to a central server. Instead, multiple distributed edge nodes learn a shared machine-learning model together.
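The sketch below is a toy illustration of the federated averaging idea: each node fits a model on its own data, and only the weights travel back to be combined, weighted by local sample count. The linear model and simulated node data are purely illustrative.

```python
# A toy sketch of federated averaging (FedAvg) for a linear model: each edge
# node fits weights on its local data, and only the weights (never the raw
# data) are sent back and combined, weighted by local sample count.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def local_train(n_samples: int) -> tuple[np.ndarray, int]:
    """Simulate one edge node: generate private data and fit w by least squares."""
    X = rng.normal(size=(n_samples, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w, n_samples

# Three edge nodes with different amounts of local data.
updates = [local_train(n) for n in (50, 200, 120)]

# Server-side aggregation: weighted average of the local weights.
total = sum(n for _, n in updates)
global_w = sum(w * (n / total) for w, n in updates)
print(global_w)  # close to [2.0, -1.0] without raw data ever leaving the nodes
```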

Research proposals related to the use of differential privacy techniques along with federated learning are also getting a substantial tailwind. They hold the promise of enhancing data privacy in the future.
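As a toy continuation of the sketch above, one common proposal is to clip each node's update and add Gaussian noise before aggregation, so that no single contribution can be reconstructed. The clipping norm and noise scale below are arbitrary illustrative values, not calibrated privacy parameters.

```python
# A toy continuation: clip each local update and add Gaussian noise before
# averaging, in the spirit of differentially private federated learning.
# The clipping norm and noise scale are arbitrary illustrative values.
import numpy as np

def privatize(update: np.ndarray, clip_norm: float = 1.0,
              noise_scale: float = 0.1) -> np.ndarray:
    rng = np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))  # bound each node's influence
    return clipped + rng.normal(scale=noise_scale, size=update.shape)
```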

Zero Trust Architecture Holds Better Security Promises

The conventional perimeter-based security approach is not suitable for edge computing: because of its distributed nature, there is no distinct boundary to defend.

Zero trust architecture, by contrast, is a cybersecurity strategy that assumes no implicit trust when accessing resources. Its principle is “Never trust, always verify”: every request should be authenticated, authorized, and continuously validated.

Given the distributed nature of edge computing, it is likely to have a wider attack surface. The zero-trust security model could be the right fit to protect edge resources, workloads, and the centralized cloud that interacts with the edge.
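As a concrete flavor of “never trust, always verify,” the sketch below validates a signed token on every request to an edge service before authorizing the specific action. It uses the PyJWT library; the claim names, audience, and scope policy are illustrative assumptions rather than a prescribed zero-trust implementation.

```python
# A sketch of per-request verification in the "never trust, always verify"
# spirit, using PyJWT (pip install pyjwt). Claim names, the shared secret, and
# the scope policy are illustrative assumptions, not a standard.
import jwt  # PyJWT

SECRET = "replace-with-a-managed-key"  # in practice, fetched from a secrets manager

def authorize_request(token: str, required_scope: str) -> bool:
    try:
        claims = jwt.decode(
            token,
            SECRET,
            algorithms=["HS256"],
            audience="edge-inference-service",           # assumed audience claim
            options={"require": ["exp", "aud", "scope"]},
        )
    except jwt.InvalidTokenError:
        return False  # unauthenticated, malformed, or expired: deny by default
    # Authorization: the token must explicitly carry the scope for this action.
    return required_scope in claims.get("scope", "").split()

# Every call re-validates the token; nothing is trusted because of network location.
# allowed = authorize_request(request_token, required_scope="model:invoke")
```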

In Conclusion

The evolving needs of IoT, Metaverse, and blockchain apps will drive high adoption of edge computing, as the technology can deliver better performance, compliance, and an enhanced user experience in these domains. Awareness of these key technological advancements surrounding edge computing can help inform your decisions and improve the success of your implementations.

Featured Image Credit: Provided by the Author; AdobeStock.

Pankaj Mendki

Pankaj Mendki is the Head of Emerging Technology at Talentica Software. Pankaj is an IIT Bombay alumnus and a researcher who explores and fast-tracks the adoption of evolving technologies for early- and growth-stage startups. He has published and presented research papers on blockchain, edge computing, and IoT at several IEEE and ACM conferences.


