Direction-Aware Adaptive Online Neural Speech Enhancement with an Augmented Reality Headset in Real Noisy Conversational Environments
Kouhei Sekiguchi, Aditya Arie Nugraha, Yicheng Du, Yoshiaki Bando, Mathieu Fontaine, Kazuyoshi Yoshii
This paper describes the practical response- and performance-aware development of online speech enhancement for an augmented reality (AR) headset that helps a user understand conversations held in real noisy echoic environments (e.g., a cocktail party). One may use a state-of-the-art blind source separation method called fast multichannel nonnegative matrix factorization (FastMNMF) that works well i...