TFG: Development of an Artificial Intelligence system based on low-resource Edge Computing for autonomous vehicle applications

Nowadays, there are plenty of IoT devices that make our everyday lives easier thanks to the intelligent tasks they perform: data capture, process automation… However, the growing number of these devices is becoming problematic in terms of latency and bandwidth. For that reason, alternatives that may solve these problems are being explored, and one of them is Edge Computing technology.

Edge Computing devices are those able to process the information they capture without connecting to the network. As a result, latency and bandwidth issues can be significantly reduced, easing the saturation of the radio spectrum as well as improving latency and power consumption.

In this project, the main goal is to create a system able to develop and execute Artificial Intelligence algorithms designed for autonomous driving and driver assistance, always following the Edge Computing philosophy. To do so, we have used Google Coral, a hardware platform that fits our needs well, allowing us to develop all the Edge Computing algorithms while offering suitable power consumption and processing characteristics.
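As an illustration of this workflow, the following is a minimal sketch of running a pre-compiled classification model on the Coral Edge TPU with the PyCoral library; the model, label and image file names are placeholders, not files from the project.

```python
# Minimal PyCoral sketch: run a classification model compiled for the Edge TPU.
# File names below are hypothetical placeholders.
from PIL import Image
from pycoral.adapters import classify, common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter("model_edgetpu.tflite")  # model compiled for the Edge TPU
interpreter.allocate_tensors()

labels = read_label_file("labels.txt")
image = Image.open("frame.jpg").resize(common.input_size(interpreter), Image.LANCZOS)

common.set_input(interpreter, image)
interpreter.invoke()  # inference runs locally, with no network round trip

for c in classify.get_classes(interpreter, top_k=3):
    print(f"{labels.get(c.id, c.id)}: {c.score:.3f}")
```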

Finally, we have tested our system in a real scenario, evaluating the quality of the results, the resources used (latency, bandwidth…) and the advantages and disadvantages with respect to existing technologies in this area. From these experiments, we have concluded that the quality of our Edge Computing system is sufficient to carry out the tasks it was designed for. In addition, the resources used have been optimized compared to Cloud Computing alternatives, making this project a faster, more efficient and more economical alternative.
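For reference, one simple way to estimate the on-device inference latency mentioned above is to time repeated invocations; this is a generic sketch, not the measurement procedure used in the project, and it expects an interpreter like the one created in the previous snippet.

```python
# Generic latency measurement sketch (not the project's actual benchmarking code).
import time

def average_inference_ms(interpreter, runs=100):
    """Average duration of one interpreter.invoke() call, in milliseconds."""
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.invoke()
    return (time.perf_counter() - start) * 1000.0 / runs
```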

Gated Recurrent Unit Neural Networks for Automatic Modulation Classification With Resource-Constrained End-Devices

The article “Gated Recurrent Unit Neural Networks for Automatic Modulation Classification With Resource-Constrained End-Devices” by our lab member Ramiro Utrilla has just been published in IEEE Access, a high-impact open-access journal.

This work was carried out in collaboration with researchers from CONNECT – Centre for Future Networks and Communications in Dublin (Ireland), where Ramiro completed a three-month research stay.

In this article, they focus on Automatic Modulation Classification (AMC). AMC is essential to carry out multiple Cognitive Radio (CR) techniques, such as dynamic spectrum access, link adaptation and interference detection, aimed at improving communications throughput and reliability and, in turn, spectral efficiency. In recent years, multiple Deep Learning (DL) techniques have been proposed to address the AMC problem. These DL techniques have demonstrated better generalization, scalability and robustness than previous solutions. However, most of them require high processing and storage capabilities that limit their applicability to energy- and computation-constrained end-devices.

In this work, they propose a new gated recurrent unit (GRU) neural network solution for AMC that has been specifically designed for resource-constrained IoT devices.


The proposed GRU network model for AMC.
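For readers unfamiliar with this kind of architecture, here is a minimal Keras sketch of what a GRU-based AMC classifier might look like; the layer sizes, sequence length (128 IQ samples) and number of modulation classes (11) are illustrative assumptions, not values taken from the published model.

```python
# Hypothetical GRU-based modulation classifier in Keras.
# Sequence length and number of classes are assumptions, not the article's values.
import tensorflow as tf

def build_gru_amc_model(seq_len=128, num_classes=11):
    inputs = tf.keras.Input(shape=(seq_len, 2))          # I and Q channels per time step
    x = tf.keras.layers.GRU(64, return_sequences=True)(inputs)
    x = tf.keras.layers.GRU(64)(x)                        # final hidden state summarizes the signal
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_gru_amc_model()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```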

They trained and tested their solution with over-the-air measurements of real radio signals, which were acquired with the MIGOU platform.


Dataset generation scenario setup.


Comparison of signals recorded at (a) 1 and (b) 6 meters. The signals in the bottom row are the normalized version of those in the top row.
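Conceptually, the normalization shown in the figure can be approximated with a simple per-signal scaling; the exact preprocessing used in the article may differ, so treat this purely as a sketch.

```python
# Hypothetical per-signal normalization; not the article's exact preprocessing.
import numpy as np

def normalize_iq(iq: np.ndarray) -> np.ndarray:
    """Scale a complex (or 2-channel) IQ signal to unit peak magnitude."""
    peak = np.max(np.abs(iq))
    return iq / peak if peak > 0 else iq
```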

Their results show that the proposed solution has a memory footprint of 73.5 kBytes, 51.74% less than the reference model, and achieves a classification accuracy of 92.4%.

Increasing the training set can lead to improvements in the performance of a model without increasing its complexity. These improvements allow developers to reduce the complexity of the model and, therefore, the device resources it requires. However, longer training processes can lead to overfitting and gradient problems. These tradeoffs should be explored when developing neural network-based solutions for resource-constrained end-devices.
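One common way to manage these tradeoffs in practice (not necessarily the approach taken in the article) is to combine early stopping with gradient clipping, for example in Keras:

```python
# Generic sketch: early stopping to limit overfitting and gradient clipping to
# stabilize recurrent training. Hyperparameter values are arbitrary examples.
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3, clipnorm=1.0)  # clip gradient norm
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True
)

# Usage with a model such as the GRU sketch above (training data not shown here):
# model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, validation_split=0.1, epochs=200, callbacks=[early_stop])
```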