Andy Brown, Practice Lead, IoT
IoT deployments are maturing, and organizations are increasingly looking to derive more insight from the vast amounts of data being collected.
More organizations are turning to artificial intelligence (AI)/machine learning (ML) and edge processing to gain real-time insights into operating environments (see Figure 1) and increase levels of automation.
This convergence of AI and IoT enables use cases ranging from predictive maintenance in manufacturing equipment to heart rate anomaly detection in remote patient monitoring (RPM) medical devices.
Traditionally, most IoT endpoints have had limited computational power. Data collected at these endpoints must then traverse a network to a centralized data center (i.e., the cloud) for storage and processing before any action can be taken.
While this may be suitable for many IoT applications, an increasing number of use cases require real-time decision-making with minimal network latency, making the delay in transmitting information between the endpoint and the cloud unacceptable.
As a result, there has been a shift toward edge computing and, increasingly, intelligent endpoints themselves, leading to a higher level of decentralization.
The benefits of driving more processing power and decision-making to the edge are obvious in operational environments:
Real-time processing — By handling data locally, devices can make instant decisions without waiting for responses from remote servers. This is essential in environments where real-time responsiveness is critical, such as traffic systems, manufacturing environments, or healthcare.
Data privacy — The ability to compartmentalize or isolate data between OT (operational technology) and IT (information technology) systems is vitally important, especially when considering critical infrastructure such as power grids. The ability to operate AI and ML in an offline state or a decoupled way opens significant opportunities for enhancing operational environments and reducing the risks associated with connected systems.
Network efficiency — With local data processing, transmission costs are significantly reduced, which lowers barriers to adoption and implementation.
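The benefits above can be illustrated with a minimal, illustrative sketch of edge-side processing: a device analyzes its own sensor readings, acts immediately on anomalies, and transmits only a compact summary upstream rather than the raw stream. The function names and threshold here are hypothetical, not taken from any specific platform.

```python
from statistics import mean, stdev

def detect_anomalies(readings, threshold=2.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(readings), stdev(readings)
    return [r for r in readings if abs(r - mu) > threshold * sigma]

def summarize(readings):
    """Compact summary to transmit upstream instead of the raw data."""
    return {"count": len(readings), "mean": mean(readings),
            "min": min(readings), "max": max(readings)}

readings = [72, 71, 73, 70, 72, 140, 71, 72]  # e.g., heart-rate samples
alerts = detect_anomalies(readings)   # acted on locally, in real time
payload = summarize(readings)         # only this small summary leaves the device
```

In this pattern the latency-sensitive decision (raising an alert on the outlier) never leaves the device, and the network carries a few summary fields instead of every sample, which is the source of the transmission-cost savings described above.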
Figure 1: What services are part of your IoT deployment strategy?
Source: Omdia
The emergence of smaller, cost-effective open models is expanding AI innovation, particularly benefiting edge devices such as smartphones, PCs, robotics, and IoT devices.
These compact models can be hosted locally, making them highly attractive to organizations that prioritize data privacy and to vendors developing edge AI applications that require ultra-low latency.
There are still barriers to overcome. Even distilled models require significant compute power to operate. Moreover, pared-down models may compromise on complex reasoning tasks: tailoring a model to a point solution may render an implementation obsolete over the long lifecycles typical of IoT environments.
IoT is highly distributed, meaning that the orchestration of local models may only apply to specific operating environments.
Nevertheless, advances in smaller models have opened the door to more cloud-edge hybrid deployments, in which intelligence can move closer to the data, right at the "micro-edge."
Moore’s law suggests that dedicated and more powerful hardware will continue to proliferate at the edge, advancing AIoT over the coming years.