International Runner-up in the
James Dyson Award
Asthma is a respiratory disease affecting 235 million people globally. It causes the airways to swell and narrow in response to an external trigger, making breathing difficult. Afflo is an AI-enabled wearable device that monitors asthmatic symptoms and environmental factors, making predictions about a patient's key triggers.
1 in 12 people in the UK are currently receiving treatment for asthma
The NHS spends around £1 billion a year treating and caring for people with asthma
0.4% of asthma sufferers in the UK have access to specialist care
Afflo is an AI-enabled wearable device that monitors asthmatic symptoms and environmental factors, taking the guesswork out of trigger diagnosis.
THE AFFLO WEARABLE
Attaches to the chest and uses a microphone to listen for breathing, coughing and wheezing.
THE AFFLO POD
A bundle of sensors that gathers data about the user's environment to inform trigger predictions.
THE AFFLO APP
A mobile interface that presents the collected data back to the user.
The System
The data collected by the Afflo Wearable and Afflo Pod is uploaded to the mobile application once a day via Bluetooth. Overnight, this data is sent to the Afflo servers to train that user's customised algorithm.
The boxes shown in purple were selected as the main project scope. This allowed time to be focussed on the most critical areas.
Use Case
PHYSIOLOGICAL SENSING
A microphone records the respiratory noises made by the user. The optimal type of microphone, and the influence of external factors on it, were determined through comparative testing.
Test Methods
A test procedure was developed to make quick, direct comparisons between different microphones and housings.
The Test Cycle
The Breathing Cycle
Test Microphones
Chest positioning was tested alongside the effect of clothes brushing past the microphone.
Signal Analysis
The collected audio signals were analysed using Python scripts in Jupyter notebooks.
Some recordings captured the heartbeat as well as the respiratory signal, disrupting results. Heartbeats were removed through outlier identification using a dynamic range gate: 0.045 seconds of signal was discarded on either side of each identified heartbeat.
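A minimal sketch of this kind of heartbeat gate, assuming the signal is a mono NumPy array; the 0.045 s window comes from the analysis above, while the detection threshold, function names and sample rate are illustrative assumptions:

```python
import numpy as np

def remove_heartbeats(signal, sample_rate, threshold, window_s=0.045):
    """Drop samples around heartbeat outliers found by an amplitude gate.

    Any sample whose magnitude exceeds `threshold` is treated as part of
    a heartbeat; `window_s` seconds (0.045 s, as in the analysis above)
    are removed on either side of it.
    """
    half = int(window_s * sample_rate)
    keep = np.ones(len(signal), dtype=bool)
    for peak in np.flatnonzero(np.abs(signal) > threshold):
        keep[max(0, peak - half):peak + half + 1] = False
    return signal[keep]

# Example: gate a 10 s synthetic recording sampled at 8 kHz
audio = np.random.randn(80_000) * 0.1
cleaned = remove_heartbeats(audio, sample_rate=8_000, threshold=0.5)
```

In a truly dynamic gate the threshold would adapt to the local signal level (for example, a multiple of a rolling RMS) rather than being fixed, but the windowed removal step is the same.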
Results
The dynamic microphone proved optimal, offering sufficient sensitivity with minimal power consumption. The effects of clothes movement could be mitigated with a layer of acoustic foam and rubber.
PRODUCT DEVELOPMENT
The testing led to the selection of the most promising microphone option. The Afflo Wearable was then designed to house this microphone alongside the other components.
Component Selection
Battery Selection
Basic current-draw tests using an Arduino were carried out alongside duty-cycle calculations to select the optimum battery.
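A sketch of the duty-cycle side of that calculation; every figure here (per-state currents, duty fractions, cell capacity) is a placeholder assumption rather than a value measured in the project:

```python
# Average current = sum of (state current x fraction of time in that state).
# All numbers below are illustrative, not Afflo's measured draws.
states = {
    "recording": (15.0, 0.20),   # mA, fraction of the day
    "ble_sync":  (10.0, 0.01),
    "sleep":     (0.05, 0.79),
}

avg_current_ma = sum(current * duty for current, duty in states.values())
capacity_mah = 150  # assumed battery capacity

print(f"Average draw: {avg_current_ma:.2f} mA")
print(f"Estimated life: {capacity_mah / avg_current_ma:.0f} h per charge")
```

Pairing estimates like this with the measured Arduino current draws narrows the choice to the smallest cell that comfortably covers a full day between charges.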
Data Storage
The amount of memory required was calculated to be 397 MB.
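The original equations are not reproduced here, but audio storage sizing follows the standard uncompressed-PCM relation; the parameters below are assumptions for illustration, chosen to land in the same ballpark as the 397 MB figure rather than to reproduce it:

```python
# Uncompressed PCM: bytes = sample_rate * (bit_depth / 8) * channels * seconds.
# All parameter values are assumed for illustration.
sample_rate_hz = 16_000
bit_depth_bits = 16
channels = 1            # mono chest microphone
hours_per_day = 4       # assumed audio captured per day

bytes_per_second = sample_rate_hz * (bit_depth_bits // 8) * channels
total_mb = bytes_per_second * hours_per_day * 3600 / 1e6
print(f"~{total_mb:.0f} MB per day")  # ~461 MB with these assumptions
```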
INDUSTRIAL DESIGN
It was important that the design was not only functional but also desirable and comfortable to wear. Developing the technological and design aspects in parallel allowed each to influence the other fluently, so that form did not follow function, nor function form.
Sketching
Blue Foam
Final Design
UI/UX
The Afflo application allows users to review their collected data to inform their asthma management, completing the feedback loop of the end-to-end Afflo system.
Flowchart
Final Interface
ARTIFICIAL INTELLIGENCE
Machine learning was used to distinguish between different respiratory events, correlating symptoms with triggers.
As a proof of concept, a machine learning algorithm was developed to distinguish clips of speech from coughs in a binary classification problem. The final device would need this capability to ignore unimportant audio.
Labelled mel-spectrograms from the Google AudioSet were used to train a convolutional neural network (CNN). This was built using Keras in Python, running on a TensorFlow backend. The final model was 83% accurate.
The 10-second audio clips from the Google AudioSet were converted into mel-spectrograms before being fed into the CNN.
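A minimal sketch of that pipeline, assuming librosa for the mel-spectrogram step; the layer sizes, mel parameters and function names are illustrative assumptions, not the project's actual architecture:

```python
import numpy as np
import librosa
from tensorflow import keras
from tensorflow.keras import layers

def clip_to_melspectrogram(path, sr=16_000, n_mels=64, duration=10.0):
    """Load a 10 s clip and convert it to a log-scaled mel-spectrogram."""
    y, _ = librosa.load(path, sr=sr, duration=duration)
    y = librosa.util.fix_length(y, size=int(sr * duration))  # pad/trim to 10 s
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

# Small CNN for the speech-vs-cough binary classification task.
# Input: 64 mel bands x ~313 frames (10 s at 16 kHz, hop length 512), 1 channel.
model = keras.Sequential([
    layers.Input(shape=(64, 313, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # 1 = cough, 0 = speech
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

With labelled spectrograms stacked into arrays of shape `(n_clips, 64, 313, 1)`, training reduces to a single `model.fit(x_train, y_train, validation_data=(x_val, y_val))` call.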
Results
Results on my own recordings
Results on test dataset