Multimodal speaker diarization for meetings using volume-evaluated SRP-PHAT and video analysis
Date
2018-04-11
Publisher
Springer
Abstract
Speaker diarization is traditionally defined as the problem of determining “who speaks when” given an audio or video stream. This is an important task in many meeting-room applications, including automatic transcription of conversations, camera steering and content summarization. When the room is equipped with microphone arrays and cameras, speakers can be distinguished by their location and the problem can be addressed through localization techniques. This article proposes a multimodal speaker diarization system for meeting environments based on a modified SRP-PHAT function evaluated over space volumes rather than discrete points. In our system, this function is used in combination with a circular array, enabling audio-based localization through the selection of local maxima. Voicing detection is used to identify speech frames, whereas video analysis is introduced to aid the decision when users move or speak simultaneously. The approach is evaluated on the well-known AMI dataset, comprising approximately 100 hours of realistic meeting recordings, and shows an average diarization error rate of 21%–25%.
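To illustrate the volume-evaluated idea described in the abstract, the sketch below scores each candidate space volume by accumulating conventional point-wise SRP-PHAT values over a coarse grid inside it, instead of evaluating single points. This is a minimal illustration, not the authors' exact formulation: the function names (gcc_phat, srp_phat_volume), the axis-aligned box volumes, the grid step, and the grid-averaged scoring rule are all assumptions introduced here for clarity.

```python
import numpy as np

def gcc_phat(sig_i, sig_j, fft_size):
    """GCC-PHAT cross-correlation between two microphone signals (one frame)."""
    X_i = np.fft.rfft(sig_i, fft_size)
    X_j = np.fft.rfft(sig_j, fft_size)
    cross = X_i * np.conj(X_j)
    cross /= np.abs(cross) + 1e-12           # PHAT weighting (whitened spectrum)
    return np.fft.irfft(cross, fft_size)     # circular correlation indexed by lag (samples)

def srp_phat_volume(frame, mic_pos, volumes, fs=16000, c=343.0, grid_step=0.1):
    """Score each candidate volume (axis-aligned box) by averaging SRP-PHAT
    over a grid of points inside it; return the best-scoring volume index."""
    n_mics, fft_size = frame.shape
    pairs = [(i, j) for i in range(n_mics) for j in range(i + 1, n_mics)]
    # Precompute GCC-PHAT once per microphone pair for this frame.
    gcc = {p: gcc_phat(frame[p[0]], frame[p[1]], fft_size) for p in pairs}

    scores = []
    for lo, hi in volumes:                    # each volume: (xyz_min, xyz_max)
        grid = np.mgrid[lo[0]:hi[0]:grid_step,
                        lo[1]:hi[1]:grid_step,
                        lo[2]:hi[2]:grid_step].reshape(3, -1).T
        score = 0.0
        for q in grid:
            for i, j in pairs:
                # Expected time difference of arrival for a source at q.
                tdoa = (np.linalg.norm(q - mic_pos[i]) -
                        np.linalg.norm(q - mic_pos[j])) / c
                lag = int(round(tdoa * fs)) % fft_size   # wrap negative lags
                score += gcc[(i, j)][lag]
        scores.append(score / max(len(grid), 1))
    return int(np.argmax(scores)), scores
```

In a diarization pipeline of this kind, such a score would typically be computed only on frames flagged as voiced, with the winning volume (or its absence of a clear local maximum) feeding the multimodal decision stage.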
Keywords
Speaker diarization, Meeting rooms, SRP-PHAT, Multimodal processing
Citation
Cabañas-Molero, P., Lucena, M., Fuertes, J.M. et al. Multimodal speaker diarization for meetings using volume-evaluated SRP-PHAT and video analysis. Multimed Tools Appl 77, 27685–27707 (2018). https://doi.org/10.1007/s11042-018-5944-2