Challenges with Real-World Smartwatch-Based Audio Monitoring

Wednesday, June 6th, 12-1 PM @ BA5205

Speaker: Daniyal Liaqat
Title:
Challenges with Real-World Smartwatch-Based Audio Monitoring

Abstract:
Audio data from a microphone can be a rich source of information. The speech and audio processing community has explored using audio data to detect emotion, depression, Alzheimer's disease, and even children's age, weight, and height. The mobile community has looked at using smartphone-based audio to detect coughing and other respiratory sounds and to help predict students' GPA.
However, audio data in these studies tends to be collected in more controlled environments, using well-placed, high-quality microphones or phone calls. Applying these kinds of analyses to continuous, in-the-wild audio could have tremendous applications, particularly in the context of health monitoring. As part of a health monitoring study, we use smartwatches to collect in-the-wild audio from real patients. In this paper, we characterize the quality of the audio data we collected. Our findings include that the smartwatch-based audio is good enough to discern speech and respiratory sounds. However, extracting these sounds is difficult because of the wide variety of noise in the signal, and current tools perform poorly at dealing with this noise. We also find that the quality of the microphone makes it difficult for annotators to differentiate the source of speech and coughing, which adds another level of complexity to analyzing this audio.