Monitoring large structures such as buildings or bridges is a very important task that must be carried out constantly, due to the danger posed by a sudden failure of these structures. Such failures can cause extensive damage, not only material but also in human lives.
This project aims to design and implement a system capable of monitoring the vibrations of a given location while also being energy self-sufficient. The main purpose is to implement a node of this type based on a MEMS accelerometer and powered by solar energy and batteries. The monitoring node must be a low-power system, because it must be able to work autonomously for long periods of time. This is achieved through a power system based on an external battery recharged by solar energy. For the measurement part, accelerometer data is collected periodically and stored on an SD card for later reference.
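As a sketch of the logging step this kind of node could use (the field names and CSV layout are illustrative assumptions, not taken from the actual firmware), each accelerometer sample can be serialized into one text line before being appended to the SD-card file:

```c
#include <stdio.h>

/* Hypothetical 3-axis sample as read from the MEMS accelerometer. */
typedef struct {
    unsigned long timestamp_ms; /* time since power-up, assumed field */
    float x_g, y_g, z_g;        /* acceleration on each axis, in g */
} accel_sample_t;

/* Format one sample as a CSV line ready to be appended to the SD-card
 * log file. Returns the number of characters written (excluding '\0'). */
int format_sample(const accel_sample_t *s, char *buf, int len)
{
    return snprintf(buf, len, "%lu,%.3f,%.3f,%.3f\n",
                    s->timestamp_ms, s->x_g, s->y_g, s->z_g);
}
```

In the real node the resulting line would be written through the SD-card filesystem driver; here only the formatting is shown.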
The B105 Laboratory has several types of PCBs incorporating the different modules needed to carry out this project (accelerometers, battery management, SD card …). For the hardware development, it was decided to take advantage of these already-designed PCBs. The modules and components to be used were chosen and subsequently soldered using two different techniques: by hand and in a reflow oven.
The software was programmed in C, and three different implementations were carried out: first, bare-metal software was designed to check the correct operation of the measurement module; then, software with an operating system was developed to optimize the system's performance; finally, tests were performed in which vibrations measured with the accelerometer were stored on the SD card to obtain final results and conclusions.
The B105 Electronic Systems Lab has an electronic access system on its door based on a Radio Frequency Identification (RFID) card reader. This system was developed more than 12 years ago, so the technology it uses is obsolete and several of its features have fallen out of use. This degree project implements an alternative to this access control system based on Near Field Communication (NFC) technology.
The RFID system requires the use of physical cards, which are easily misplaced and force users to carry them around to enter the laboratory. To solve this problem, the new system allows users to open the door using their smartphones. This makes entering the laboratory even easier, as users always carry their mobile phones with them. In addition, users are assigned specific entry times, providing greater security and better access control.
There is an equipment reservation management service in the laboratory that already has a database of members, an application and an administration website. Therefore, these resources have been used to facilitate the implementation of the new system and avoid data replication on the server.
Once the system is in place, any user who is registered and has the appropriate permissions can open the door by bringing their mobile phone close to the reader. To achieve this, the existing access system was used as a starting point and the relevant technologies were studied.
The development and implementation work has been divided into three blocks: the NFC reader, the application and the server. The reader, integrated into the door-opening system, acts as an intermediary between the application and the server. The application, in turn, only has to emulate the access card and send the entry request. The server then evaluates this request by checking the user information against its database and sends a response to the reader. Depending on the message received, the reader opens the door or not, and finally informs the user of the decision.
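The reader's decision on the server's reply can be sketched as fail-closed logic: only an explicit grant opens the door, so a communication failure never unlocks the laboratory. The message strings and names below are hypothetical; the actual protocol is not detailed in the post:

```c
#include <string.h>

/* Hypothetical response codes the server could return to the reader. */
typedef enum { ACCESS_GRANTED, ACCESS_DENIED, ACCESS_ERROR } access_result_t;

/* Decide what the reader should do with the server's reply: only an
 * explicit "GRANTED" message opens the door; anything else (including
 * no reply at all) keeps it closed. */
access_result_t evaluate_reply(const char *reply)
{
    if (reply == NULL)
        return ACCESS_ERROR;          /* server unreachable: fail closed */
    if (strcmp(reply, "GRANTED") == 0)
        return ACCESS_GRANTED;
    if (strcmp(reply, "DENIED") == 0)
        return ACCESS_DENIED;
    return ACCESS_ERROR;              /* unknown message: fail closed */
}
```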
In recent years, the consumption of multimedia content on the Internet has increased substantially. However, it would be interesting if devices without Internet access, such as loudspeakers, could also play this content. It would add further value if these were low-resource devices, which has a direct impact on their cost. This TFG aimed to design and implement a network of low-resource wireless nodes for the reception, decoding and playback of MP3 audio within a multipoint communications network.
This work continued the development of the system carried out in a previous TFG, which is described in this post. The system consisted of a transmitter running on a computer and several receivers, each based on an ESP8266 chip. The transmitter sent encoded audio to a multicast address, where it could be received by the receiver chips connected to the same Wi-Fi network, to be decoded and played back.
The first objective was to improve the audio playback quality of the system. To achieve this, an MP3 decoder chip module was integrated as a slave system controlled by the ESP8266. Audio tests were then carried out to check the similarity between the sent and received audio.
The second objective was to make the system configurable. A software tool was developed that set the ESP8266 up as an access point. If the user connected to it, a configuration website was served. This site had a form where the user could enter the SSID and the password of a Wi-Fi network. After that, the ESP8266 connected to that Wi-Fi network and started receiving the encoded audio.
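At its core, handling the configuration form means extracting two key-value pairs from the submitted data. A minimal sketch, assuming a query string such as `ssid=MyNet&pass=abc` (the parameter names are illustrative, and URL decoding is omitted for brevity):

```c
#include <stdio.h>
#include <string.h>

/* Extract the value of a key from a hypothetical form submission such
 * as "ssid=MyNet&pass=abc". Returns 1 on success, 0 if the key is
 * not present. URL decoding of special characters is omitted. */
int form_get(const char *query, const char *key, char *out, int outlen)
{
    char pattern[32];
    snprintf(pattern, sizeof pattern, "%s=", key);
    const char *p = strstr(query, pattern);
    if (p == NULL)
        return 0;
    p += strlen(pattern);
    int i = 0;
    /* copy until the next '&', end of string, or buffer limit */
    while (p[i] != '\0' && p[i] != '&' && i < outlen - 1) {
        out[i] = p[i];
        i++;
    }
    out[i] = '\0';
    return 1;
}
```

On the real node, the extracted SSID and password would then be passed to the Wi-Fi connection routine of the ESP8266 SDK.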
The last objective of this TFG was the design and implementation of a hardware prototype of the node including both modules. For this purpose, a printed circuit board was designed and manufactured, containing the elements necessary to connect all the modules of the system. The resulting PCB and the final version of the node, connected to the ESP8266, can be seen in the pictures below.
The increase in the number of risk situations and accidents has caused a rise in the number of spinal cord injuries. These injuries cause paralysis (plegia) of the affected person's limbs. This problem has made it necessary to look for therapies that improve patients' lives. One of these solutions is Functional Electrical Stimulation (FES), a technique based on electrical stimulation of the motor nerves in order to generate a functional movement such as walking or picking up an object. This technique involves a series of stimulation parameters that must be controlled: the stimulation amplitude, the stimulation frequency, the pulse width of the stimulation pattern, and the waveform of the signal. The objective of this End-of-Degree Project was the development of a platform that allows electrical stimulation of the motor nerves and control of these stimulation parameters.
The device designed in this project consists of a hardware part and a software part. The stimulator is composed of several modules: an amplifier module, a signal generation module and the human-device interface. The signal generation module allows the stimulation parameters to be controlled through the designed software. An amplification module is also needed so that the generated signals reach the voltage and current levels required for stimulation. The power supply module powers both the amplifier module and the signal generation module. The interface between the device and the user is based on surface electrodes connected to the output of the amplifier module. The different modules and their components are implemented on a printed circuit board (PCB) that supports and joins them.
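The controllable parameters named above can be grouped in a structure, and an idealized charge-balanced biphasic pulse train can be sampled from them. This is only an illustrative sketch under assumed names and values, not the device's actual implementation:

```c
/* Stimulation parameters named in the text; the concrete values used
 * in the tests are illustrative, not taken from the actual device. */
typedef struct {
    float amplitude_ma;   /* pulse amplitude in mA */
    float frequency_hz;   /* pulse repetition frequency */
    float pulse_width_us; /* width of each phase of the pulse */
} stim_params_t;

/* Sample of an idealized symmetric biphasic pulse train at time t (s):
 * +A during the first phase, -A during the second (charge-balancing)
 * phase, and 0 during the rest of the period. */
float stim_sample(const stim_params_t *p, float t)
{
    float period = 1.0f / p->frequency_hz;
    /* time elapsed within the current stimulation cycle */
    float phase = t - (float)((int)(t / period)) * period;
    float w = p->pulse_width_us * 1e-6f;
    if (phase < w)
        return p->amplitude_ma;    /* first phase */
    if (phase < 2.0f * w)
        return -p->amplitude_ma;   /* second, balancing phase */
    return 0.0f;                   /* inter-pulse interval */
}
```

Changing the waveform (e.g. trapezoidal or sinusoidal phases) would only require replacing the two constant-amplitude branches.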
The future of functional electrical stimulation lies in the creation of closed-loop systems that control the stimulation parameters according to the position of the muscles. Two possible routes can be taken: the use of sensors such as accelerometers, and the creation of brain-computer interfaces.
The massive and rapidly increasing use of wireless devices is raising concerns about the eventual saturation of the available spectrum in wireless communications, known as the spectrum scarcity problem. This issue is especially relevant for power- and resource-constrained devices, even more so considering the highly variable and adverse environmental conditions radio channels are usually subject to. In the case of a network of sensor nodes, a smart approach to this problem is the use of Cognitive Wireless Sensor Networks (CWSNs): networks capable of modifying their communication parameters depending on the environmental conditions.
One of the ongoing research lines of the B105 Electronic Systems Lab focuses on the development of low-power CWSNs by designing sensor nodes using a Software-Defined Radio system (SDR). Specifically, an architecture based on the Atmel AT86RF215 transceiver and the SmartFusion2 System-on-Chip (SoC) is used to carry out certain cognitive tasks.
The specific objective of this project was to implement communication between the aforementioned elements and a personal computer (PC). To achieve that, a Printed Circuit Board (PCB) was developed to serve as an interface platform between the different hardware elements in the system. Then, the controllers required to manage communication between the transceiver, which acts as the data source, and the PC, which is the receiver, were implemented on the FPGA embedded in the SmartFusion2 SoC.
The successful realization of this project required both hardware and software development tasks. The C and VHDL programming languages were used, as well as the standard communication protocols Serial Peripheral Interface (SPI) and Low Voltage Differential Signaling (LVDS).
The aim of this project was to design a functional prototype for energy transformation based on the principle of piezoelectricity, in order to harvest the energy produced. After some research, piezoelectricity was determined to be the best principle for generating electrical power at a small scale for electrical systems that require a low-voltage supply, working as a stand-alone power source to charge both medical and electronic devices.
When a piezoelectric material is subjected to mechanical deformation, a voltage is produced. The theoretical behaviour is shown in the following image:
Therefore, the energy that can be harvested depends on two factors: the properties of the piezoelectric material and the amount of deformation applied to the material.
Some of the materials that show piezoelectricity are: quartz, lead zirconate titanate (PZT), aluminum nitride (AlN), zinc oxide (ZnO) and polyvinylidene fluoride (PVDF).
The special property of these piezoelectric materials is that it allows them to convert mechanical energy into electricity, in the form of alternating current (AC). However, devices need DC, not AC. This problem can be solved by building a diode rectifier bridge to convert the power from AC to DC so that it can be used.
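As a worked example of the rectifier stage (with illustrative values, e.g. a 0.7 V silicon diode drop), the DC level available after a full bridge loses two forward drops, since two diodes conduct in each half-cycle:

```c
/* Estimate the DC voltage available after a full-bridge rectifier:
 * two diodes conduct at any moment, so two forward drops are lost.
 * Values are illustrative; a real design would add a smoothing
 * capacitor and account for ripple. */
float rectified_dc(float v_peak, float v_diode)
{
    float v = v_peak - 2.0f * v_diode;
    return v > 0.0f ? v : 0.0f;  /* below 2*Vf the bridge does not conduct */
}
```

For instance, a 5 V peak from the piezoelectric stack with 0.7 V diodes leaves roughly 3.6 V of usable DC; Schottky diodes would reduce this loss.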
Although piezoelectric elements generate high voltages, they supply very little current. This problem can be addressed by wiring all the piezoelectric elements in parallel.
Taking all of the above into account, the prototype that was built consists of 7 PZT piezoelectric discs of 35 mm diameter, as shown in the picture at the top of the page.
Finally, when charging some capacitors, it was found that better results are obtained with the shoe sole outside the shoe, placed on a smooth surface (such as a carpet) and pressed by hand: the capacitors charged more quickly than while walking with the sole inside the shoe. The power generated by this assembly was on the order of mW, and the energy on the order of mJ.
In recent years, Vehicular Ad hoc Networks (VANETs) have been gaining relevance as a way to improve traffic management and road safety. In addition, autonomous car technology has been a boost for VANET research. One of the main services provided by a VANET is localization support with a Global Positioning System (GPS). However, GPS has an error of 3 to 7 meters, and better accuracy may be necessary in some applications. Moreover, in areas without GPS coverage, such as tunnels, there would be no localization support at all. Therefore, another localization method should be implemented to improve accuracy and coverage, which is the main purpose of this project.
In this degree project, a VANET has been used to provide vehicle localization. However, conventional VANET devices are very expensive and consume a lot of power, so a Wireless Sensor Network (WSN) is used as a low-cost, low-power alternative. WSNs are similar to wireless ad hoc networks, but at a lower cost. However, these resource-constrained networks do not allow complex algorithms to be implemented.
The localization algorithm selected in this project is the Fuzzy Ring-Overlapping Range-Free (FRORF) algorithm. It has been modified so that it can be implemented in resource-constrained nodes with low computational capabilities. The algorithm has been implemented on wireless nodes developed by the B105 Electronic Systems Lab, and several tests have been performed in different scenarios. The position of the vehicle obtained in these scenarios has been compared with the position obtained from a commercial GPS module.
From the results it can be concluded that the implemented algorithm has an error of 1 to 9 meters. This error is similar to the GPS error, so the FRORF algorithm can provide a reasonable vehicle position. Although the accuracy needed for VANETs is not yet achieved, the algorithm provides localization in indoor areas. This advance is very important, as localization support services may be provided in zones without GPS coverage.
On 9 and 10 May, the VIII Meeting of the Health Interoperability Forum, organized by the Sociedad Española de Informática de la Salud around the theme Interoperability, Internet of Things and Integrated Health, took place in A Coruña. The B105 Electronic Systems Lab participated in the Innovation and Research session with the talk "Challenges and opportunities of IoT in the field of health from a technological point of view".
The talk reviews the technologies being developed by the B105 Electronic Systems Lab research group of the Universidad Politécnica de Madrid in the fields of embedded systems, cognitive radio and sensor clouds applied to health.
The approach taken is based on two pillars: research, understood as the investment of money to obtain knowledge, and innovation, understood as the application of that knowledge to obtain funds with which to finance research. This structure defines a virtuous circle in which research is financed through innovation and innovation draws on the knowledge produced by research.
The technologies covered are the following:
Sensors, both direct (electrodes and biometric sensors) and indirect (temperature and humidity, inclinometers and magnetometers, accelerometers, position, etc.) on the innovation side. In both innovation and research, the creation of virtual sensors by fusing real sensors with parameters such as space, time, trust, reputation, etc. is reviewed.
Communication media: some of the technologies under investigation are presented, such as wireless neural networks and human-body networks versus over-the-air networks.
Actuators, mainly mentioning the research and innovation work in the field of transcutaneous electrical nerve stimulation as an electrotactile interface and its possible application to remote touch.
Finally, a set of example applications developed under the innovation model is reviewed: a camera-based driver fatigue detection system, another based on EEG monitoring with machine learning techniques, an augmented-reality detector of architectural barriers, and a smart wristband with its associated mobile application to monitor the living conditions of people with special needs.
This project, “A modular-reconfigurable presentation system design and implementation based on LEDs”, consists of the design and development of an LED screen at both the hardware and software level, features cited in the project title.
To approach its design, we started with a state-of-the-art review, through which some projects similar to this one were examined.
Next, we carried out a hardware design and implementation. During this stage two hardware versions were developed.
Then, the software design and development took place. One software stage is executed by the PC and the other by the microcontroller. During this phase of the project we developed several versions of the blocks that make up the software architecture.
Later, different hardware and software level tests were performed.
Finally, some full system tests were also carried out.
The project has been developed in seven phases, as shown in the chart below.
As said before, the presentation system is made up of a hardware and a software structure. The hardware structure comprises several elements, among which the LED screen and the development board that includes the microcontroller stand out. The software has been coded both for the PC and for the microcontroller. The main aim of this project has been to design a modular and reconfigurable presentation system based on LEDs.
The main objective is broken down into four purposes:
Design a modular and reconfigurable LED screen.
Develop microcontroller software that drives the LED screen.
Develop PC software to display a photo on the LED screen.
Develop PC software to display a video on the LED screen.
It is relevant to clarify that every pixel of this screen is encoded with 24 bits. These bits are, from most to least significant, G7, G6, G5, G4, G3, G2, G1, G0, R7, R6, R5, R4, R3, R2, R1, R0, B7, B6, B5, B4, B3, B2, B1, B0, as shown in the chart below.
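Packing one pixel into this 24-bit G-R-B word can be sketched as follows (the helper name is ours, not from the project):

```c
#include <stdint.h>

/* Pack an 8-bit-per-channel pixel into the 24-bit word expected by the
 * LEDs: the green byte occupies the most significant bits, then red,
 * then blue, matching the G7..B0 ordering listed above. */
uint32_t pack_grb(uint8_t r, uint8_t g, uint8_t b)
{
    return ((uint32_t)g << 16) | ((uint32_t)r << 8) | (uint32_t)b;
}
```

For example, pure red maps to a word whose middle byte is 0xFF while the green and blue bytes are zero.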
The hardware architecture is the set of physical blocks on which this system is based. On the basis of the desired system outcome and the hardware requirements, the following hardware architecture blocks have been defined. The hardware architecture through which the bits travel is described below.
PC: It executes the processor and transmitter software blocks.
USB-serial converter: It forwards the GRB bits to the microcontroller.
Microcontroller: It executes the receiver and presentation software blocks.
Logic level converter: It raises the amplitude of the PWM signal at the microcontroller's GPIO output pins from 3.3 V to 5 V.
LED screen: It is made up of LED modules, each containing 25 LEDs. SMD 5050 LEDs were chosen to manufacture the modules; these LEDs embed an integrated circuit that incorporates a signal amplifier and, depending on the manufacturer, a sequential logic block. In this way the signal is regenerated at each LED, and 24 bits of data are addressed to each one: 8 bits for the G sub-LED, 8 for the R sub-LED and 8 for the B sub-LED. Thus, color and brightness are controlled separately for every LED.
Power supply: It is set up in a star topology and supplies 23 A to the system.
The hardware architecture is shown in the chart below.
The LED screen is composed of four rows of LED modules; each row is driven by its own data line, as shown below.
Several parallel data lines were used for the LED screen in order to overcome a limitation of the LEDs: when streaming video at 30 frames/s, no more than 1024 LEDs can be connected in a serial architecture, essentially for timing reasons. The reasoning is as follows: at a rate of 30 frames/s, a new frame must be loaded onto the LEDs every 0.033 s, so Tframe = 0.033 [s].
On the other hand, Tbit = 1.25 [µs], a value fixed by the LEDs.
As discussed earlier in this report, 24 bits correspond to each pixel, and immediately after sending all the bits to the LEDs, a 50 [µs] reset time must be reserved. The sending period is therefore:
Tsend = (1.25 [µs/bit] × 24 [bits/pixel] × 1024 [pixels]) + Treset = 0.03077 [s]
This satisfies Tsend < Tframe.
That is, the limitation resides in the central premise that the sending time cannot exceed the frame time: if more than 1024 LEDs are connected in a cascade architecture, the required sending time is greater than the frame time, which breaks that premise.
If the broadcast rate is 60 [frames/s], up to 512 LEDs could be connected in cascade.
Therefore, the video frame rate and the number of LEDs that can be connected in cascade are inversely related.
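The timing budget above can be checked numerically. The sketch below reproduces the calculation: sending one frame to n LEDs takes n × 24 × Tbit plus the 50 µs reset, giving Tsend(1024) = 30 770 µs, which fits in the 33 333 µs frame period at 30 frames/s, while at 60 frames/s the budget still accommodates a 512-LED cascade:

```c
/* Timing constants fixed by the LED protocol, as stated in the text. */
#define T_BIT_US   1.25   /* time to send one bit, in microseconds */
#define T_RESET_US 50.0   /* reset time after each frame */

/* Time needed to send one full frame to a cascade of n_leds LEDs. */
double t_send_us(int n_leds)
{
    return (double)n_leds * 24.0 * T_BIT_US + T_RESET_US;
}

/* Largest cascade whose sending time still fits in one frame period
 * (both in microseconds): invert Tsend < Tframe for n. */
int max_cascade(double t_frame_us)
{
    return (int)((t_frame_us - T_RESET_US) / (24.0 * T_BIT_US));
}
```

Note that the exact maxima come out slightly above the powers of two quoted in the text (1024 and 512), which appear to be convenient round figures under the same constraint.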
The software architecture is composed of two parts, one developed for the PC and the other for the microcontroller.
In the PC part, the processor and transmitter blocks have been defined. This stage consists of playing a video signal and executing the processor and transmitter blocks for each frame of the video.
In the microcontroller part, the receiver and presentation blocks have been defined. In this stage, bytes are continuously received through the UART and shown on the screen.
The next diagram shows the software architecture.
The functions of the software blocks are described below:
Processor block: This block extracts the pixels from each frame and organises them according to the data lines of the LED screen. It then maps these GRB bits to the corresponding data-line bit and stores them in an array.
Transmitter block: This block is in charge of serially transmitting the array produced by the processor block.
Receiver block: This block serially receives the bytes of the array sent by the transmitter block. That array contains the pixels sorted according to the data-line mapping of the LED screen.
Presentation block: This block drives the “1” and “0” symbols onto the MCU GPIOs. Those symbols are generated from the pixels extracted from each frame. It should be clarified that the GPIOD pins of the microcontroller are connected to the LED screen.
The symbols are illustrated below.
Consequently, the way the different blocks interact with one another is represented graphically.
For a video composed of n frames, the processor block first runs on all the pixels of frame 1; the transmitter block then sends the GRB bits of frame 1 to the MCU. Next, the receiver block runs for frame 1 and, while the presentation block shows the GRB bits of frame 1 on the screen, the GRB bits of frame 2 are being received.
In this way, the software blocks run successively for all n frames of the video.
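The overlap between receiving frame n+1 and presenting frame n is naturally implemented with two buffers that are swapped between frames. A minimal sketch of this double-buffer scheme (buffer size and names are illustrative, not taken from the project's firmware):

```c
#include <stdint.h>

/* Hypothetical frame size: e.g. 1024 LEDs * 3 bytes per pixel. */
#define FRAME_BYTES 3072

static uint8_t buf_a[FRAME_BYTES], buf_b[FRAME_BYTES];
static uint8_t *show_buf = buf_a;  /* frame being displayed */
static uint8_t *recv_buf = buf_b;  /* frame being filled over the UART */

/* At each frame boundary the roles are exchanged: the freshly received
 * frame becomes the one to display, and the old display buffer is
 * reused to receive the next frame. Only pointers are swapped, so no
 * pixel data is copied. */
void swap_buffers(void)
{
    uint8_t *tmp = show_buf;
    show_buf = recv_buf;
    recv_buf = tmp;
}
```

In the real system the presentation block would read from `show_buf` while the UART interrupt fills `recv_buf`.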
On the occasion of the II edition of the Symposium “Tell us your thesis”, organized by the Universidad Politécnica de Madrid, I created a poster summarizing my thesis.
Both the thesis and the poster are entitled “Methodology for implementation of Synchronization Strategies for Wireless Sensor Networks”.
In the poster I explain the process that every researcher and/or developer must follow to add synchronization tasks to their Wireless Sensor Network.
First of all, it is necessary to know the objective of the user application to which we want to add time synchronization.
Based on the application, there will be requirements to fulfill. That is, each application will have different requirements regarding timing, the maximum permissible error in temporal precision or accuracy, network topology, message distribution method, battery consumption and lifetime objectives, hardware resources of different kinds and prices, etc.
Since there are many options and possible approaches, a methodology is needed that helps the researcher and/or developer obtain a time synchronization solution for their wireless sensor networks that is adapted to the needs of the application.
The development of this methodology is the objective of this doctoral thesis.