End-to-End Dereverberation, Beamforming, and Speech Recognition in A Cocktail Party

W. Zhang, X. Chang, C. Boeddeker, T. Nakatani, S. Watanabe, Y. Qian, IEEE/ACM Transactions on Audio, Speech, and Language Processing (2022).

Download
End-to-End_Dereverberation_Beamforming_and_Speech_Recognition_in_A_Cocktail_Party.pdf (6.17 MB)
Journal Article | Published | English
Author
Zhang, Wangyou; Chang, Xuankai; Boeddeker, Christoph; Nakatani, Tomohiro; Watanabe, Shinji; Qian, Yanmin
Abstract
Far-field multi-speaker automatic speech recognition (ASR) has drawn increasing attention in recent years. Most existing methods pair a signal-processing frontend with an ASR backend. In realistic scenarios, these modules are usually trained separately or progressively, which leads to either inter-module mismatch or a complicated training process. In this paper, we propose an end-to-end multi-channel model that jointly optimizes the speech enhancement frontend (covering speech dereverberation, denoising, and separation) and the ASR backend as a single system. To the best of our knowledge, this is the first work to optimize dereverberation, beamforming, and multi-speaker ASR in a fully end-to-end manner. The frontend consists of a weighted prediction error (WPE) based submodule for dereverberation and a neural beamformer for denoising and speech separation. For the backend, we adopt a widely used end-to-end (E2E) ASR architecture. Notably, the entire model is differentiable and can be optimized in a fully end-to-end manner using only the ASR criterion, without the need for parallel signal-level labels. We evaluate the proposed model on several multi-speaker benchmark datasets; experimental results show that the fully E2E ASR model achieves competitive performance in both noisy and reverberant conditions, with over 30% relative word error rate (WER) reduction over the single-channel baseline systems.
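To give a feel for the multi-channel enhancement step the abstract describes, the toy sketch below implements a plain delay-and-sum beamformer in numpy. This is *not* the paper's WPE + neural beamformer frontend — just a minimal, assumption-laden illustration of why combining aligned microphone channels attenuates uncorrelated noise; the signal, microphone count, and integer sample delays are all invented for the example.

```python
import numpy as np

def delay_and_sum(signals, delays):
    """Align each channel by its (known, integer) sample delay and average.

    signals: (n_channels, n_samples) array of microphone signals
    delays:  per-channel delays in samples, relative to channel 0
    """
    n_ch, _ = signals.shape
    out = np.zeros(signals.shape[1])
    for ch in range(n_ch):
        # Undo the propagation delay so all channels line up in time
        out += np.roll(signals[ch], -delays[ch])
    return out / n_ch

# Toy scene: one sinusoidal "source" reaching 3 mics with known delays,
# each mic adding its own independent noise
fs, f0 = 16000, 440.0
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * f0 * t)
delays = [0, 3, 7]
rng = np.random.default_rng(0)
mics = np.stack(
    [np.roll(clean, d) + 0.3 * rng.standard_normal(fs) for d in delays]
)

enhanced = delay_and_sum(mics, delays)
# The source adds coherently, the noise incoherently, so the enhanced
# signal is closer to the clean one than any single channel
noisy_err = np.mean((mics[0] - clean) ** 2)
enh_err = np.mean((enhanced - clean) ** 2)
```

In the paper's system, the "delays" are not given: a neural network estimates masks from which beamforming filters are derived, and the whole chain (including WPE dereverberation before it) is differentiable, so the ASR loss alone drives the frontend's training.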
Publishing Year
2022
Journal Title
IEEE/ACM Transactions on Audio, Speech, and Language Processing
Cite this

Zhang W, Chang X, Boeddeker C, Nakatani T, Watanabe S, Qian Y. End-to-End Dereverberation, Beamforming, and Speech Recognition in A Cocktail Party. IEEE/ACM Transactions on Audio, Speech, and Language Processing. Published online 2022. doi:10.1109/TASLP.2022.3209942
Zhang, W., Chang, X., Boeddeker, C., Nakatani, T., Watanabe, S., & Qian, Y. (2022). End-to-End Dereverberation, Beamforming, and Speech Recognition in A Cocktail Party. IEEE/ACM Transactions on Audio, Speech, and Language Processing. https://doi.org/10.1109/TASLP.2022.3209942
@article{Zhang_Chang_Boeddeker_Nakatani_Watanabe_Qian_2022, title={End-to-End Dereverberation, Beamforming, and Speech Recognition in A Cocktail Party}, DOI={10.1109/TASLP.2022.3209942}, journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing}, author={Zhang, Wangyou and Chang, Xuankai and Boeddeker, Christoph and Nakatani, Tomohiro and Watanabe, Shinji and Qian, Yanmin}, year={2022} }
Zhang, Wangyou, Xuankai Chang, Christoph Boeddeker, Tomohiro Nakatani, Shinji Watanabe, and Yanmin Qian. “End-to-End Dereverberation, Beamforming, and Speech Recognition in A Cocktail Party.” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022. https://doi.org/10.1109/TASLP.2022.3209942.
W. Zhang, X. Chang, C. Boeddeker, T. Nakatani, S. Watanabe, and Y. Qian, “End-to-End Dereverberation, Beamforming, and Speech Recognition in A Cocktail Party,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022, doi: 10.1109/TASLP.2022.3209942.
Zhang, Wangyou, et al. “End-to-End Dereverberation, Beamforming, and Speech Recognition in A Cocktail Party.” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022, doi:10.1109/TASLP.2022.3209942.
Main File(s)
Access Level
Open Access
Last Uploaded
2022-10-11T07:23:13Z


External material:
Confirmation Letter
