TY - JOUR
T1 - A Kinect-Based Wearable Face Recognition System to Aid Visually Impaired Users
AU - Neto, Laurindo Britto
AU - Grijalva, Felipe
AU - Maike, Vanessa Regina Margareth Lima
AU - Martini, Luiz César
AU - Florencio, Dinei
AU - Baranauskas, Maria Cecília Calani
AU - Rocha, Anderson
AU - Goldenstein, Siome
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2017/2
Y1 - 2017/2
AB - In this paper, we introduce a real-time face recognition (and announcement) system targeted at aiding blind and low-vision people. The system uses a Microsoft Kinect sensor as a wearable device, performs face detection, and uses temporal coherence along with a simple biometric procedure to generate a sound associated with the identified person, virtualized at his/her estimated 3-D location. Our approach uses a variation of the K-nearest neighbors algorithm over histogram of oriented gradients (HOG) descriptors dimensionally reduced by principal component analysis. The results show that our approach, on average, outperforms traditional face recognition methods while requiring far fewer computational resources (memory, processing power, and battery life) than existing techniques in the literature, making it suitable for the constraints of wearable hardware. We also show the performance of the system in the dark, using depth-only information acquired with Kinect's infrared camera. The validation uses a new dataset, available for download, with 600 videos of 30 people, containing variations in illumination, background, and movement patterns. Experiments with existing datasets in the literature are also considered. Finally, we conducted user experience evaluations with both blindfolded and visually impaired users, showing encouraging results.
KW - Accessibility
KW - Microsoft Kinect
KW - assistive technology
KW - face recognition
KW - wearable device
KW - wearable system
UR - http://www.scopus.com/inward/record.url?scp=84988648761&partnerID=8YFLogxK
U2 - 10.1109/THMS.2016.2604367
DO - 10.1109/THMS.2016.2604367
M3 - Article
AN - SCOPUS:84988648761
SN - 2168-2291
VL - 47
SP - 52
EP - 64
JO - IEEE Transactions on Human-Machine Systems
JF - IEEE Transactions on Human-Machine Systems
IS - 1
M1 - 7571103
ER -