Statistical and Neural Network Based Speech Activity Detection in Non-Stationary Acoustic Environments

J. Heitkaemper, J. Schmalenströer, R. Haeb-Umbach, in: INTERSPEECH 2020 Virtual Shanghai China, 2020.

Conference Paper | English
Abstract
Speech activity detection (SAD), which often rests on the fact that the noise is "more" stationary than speech, is particularly challenging in non-stationary environments, because the time variance of the acoustic scene makes it difficult to discriminate speech from noise. We propose two approaches to SAD, where one is based on statistical signal processing, while the other utilizes neural networks. The former employs sophisticated signal processing to track the noise and speech energies and is meant to support the case for a resource efficient, unsupervised signal processing approach. The latter introduces a recurrent network layer that operates on short segments of the input speech to do temporal smoothing in the presence of non-stationary noise. The systems are tested on the Fearless Steps challenge database, which consists of the transmission data from the Apollo-11 space mission. The statistical SAD achieves comparable detection performance to earlier proposed neural network based SADs, while the neural network based approach leads to a decision cost function of 1.07% on the evaluation set of the 2020 Fearless Steps Challenge, which sets a new state of the art.
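The general idea behind energy-tracking SAD, which the abstract's statistical approach builds on, can be illustrated with a minimal sketch: recursively track a noise-floor energy estimate and flag frames whose energy exceeds it by a margin. This is a toy illustration only, with invented parameter values (`frame_len`, `hop`, `alpha`, `margin_db`); the paper's statistical SAD is considerably more sophisticated and is designed for non-stationary noise, which a simple scheme like this handles poorly.

```python
import numpy as np

def energy_sad(x, frame_len=400, hop=160, alpha=0.95, margin_db=6.0):
    """Toy energy-tracking SAD (illustrative, not the paper's method).

    Flags frames whose log-energy exceeds a recursively tracked noise
    floor by `margin_db`; the noise floor is only updated on frames
    classified as noise, so speech does not inflate the estimate.
    """
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    decisions = np.zeros(n_frames, dtype=bool)
    noise_db = None
    for i in range(n_frames):
        frame = x[i * hop: i * hop + frame_len]
        e_db = 10.0 * np.log10(np.mean(frame ** 2) + 1e-12)
        if noise_db is None:
            noise_db = e_db  # initialise noise floor from the first frame
        if e_db > noise_db + margin_db:
            decisions[i] = True  # speech: freeze the noise estimate
        else:
            # noise frame: first-order recursive update of the noise floor
            noise_db = alpha * noise_db + (1 - alpha) * e_db
    return decisions
```

A frame-level decision sequence like this would typically be post-smoothed (e.g. with a hangover scheme) before being scored against reference segment boundaries.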
Publishing Year
2020
Proceedings Title
INTERSPEECH 2020 Virtual Shanghai China
Cite this

Heitkaemper J, Schmalenströer J, Haeb-Umbach R. Statistical and Neural Network Based Speech Activity Detection in Non-Stationary Acoustic Environments. In: INTERSPEECH 2020 Virtual Shanghai China. ; 2020.
Heitkaemper, J., Schmalenströer, J., & Haeb-Umbach, R. (2020). Statistical and Neural Network Based Speech Activity Detection in Non-Stationary Acoustic Environments. In INTERSPEECH 2020 Virtual Shanghai China.
@inproceedings{Heitkaemper_Schmalenströer_Haeb-Umbach_2020, title={Statistical and Neural Network Based Speech Activity Detection in Non-Stationary Acoustic Environments}, booktitle={INTERSPEECH 2020 Virtual Shanghai China}, author={Heitkaemper, Jens and Schmalenströer, Jörg and Haeb-Umbach, Reinhold}, year={2020} }
Heitkaemper, Jens, Jörg Schmalenströer, and Reinhold Haeb-Umbach. “Statistical and Neural Network Based Speech Activity Detection in Non-Stationary Acoustic Environments.” In INTERSPEECH 2020 Virtual Shanghai China, 2020.
J. Heitkaemper, J. Schmalenströer, and R. Haeb-Umbach, “Statistical and Neural Network Based Speech Activity Detection in Non-Stationary Acoustic Environments,” in INTERSPEECH 2020 Virtual Shanghai China, 2020.
Heitkaemper, Jens, et al. “Statistical and Neural Network Based Speech Activity Detection in Non-Stationary Acoustic Environments.” INTERSPEECH 2020 Virtual Shanghai China, 2020.
All files available under the following license(s):
Creative Commons License:
Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
Main File(s)
File Name
ms.pdf 998.71 KB
Access Level
Restricted (Closed Access)
Last Uploaded
2020-12-11T12:33:04Z