Computing Reviews

Inferring user intents from motion in hearing healthcare
Johansen B., Korzepa M., Petersen M., Pontoppidan N., Larsen J. UbiComp 2018 (Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers, Singapore, Oct 8-12, 2018), 670-675, 2018. Type: Proceedings
Date Reviewed: 01/30/19

This short, interesting paper combines the activity data that a smartphone can detect with the soundscape that surrounds a person; the same smartphone can also be used to assess the overall sound environment an individual is in.

The authors note that hearing loss can increase an individual's risk of dementia. They also note that, by studying the soundscape surrounding an individual, it is possible to proactively predict possible changes in health, even before the individual realizes the changes are occurring.

There is no mention of using voice-to-text recognition, and the technology for this is not quite there yet. Anyone who has tried using something like Google Assistant while driving has had the impression of talking to somebody with hearing loss: background noise interferes with the voice recognition. There are also privacy implications to consider.

Sound, motion, time, and location can provide the data needed to predict an individual's mental and physical health. While this can be used to proactively track changes in health, the same data can also be used to study aging. Combining these technologies offers many benefits; unfortunately, the same technologies could also be used to strengthen social media surveillance.

Reviewer: W. E. Mihalo Review #: CR146405 (1904-0133)
