{"ddc":["000"],"file":[{"date_updated":"2020-12-11T12:33:04Z","date_created":"2020-12-11T12:33:04Z","creator":"jensheit","file_size":998706,"content_type":"application/pdf","access_level":"closed","relation":"main_file","file_id":"20697","file_name":"ms.pdf","success":1}],"abstract":[{"text":"Speech activity detection (SAD), which often rests on the fact that the noise is \"more\" stationary than speech, is particularly challenging in non-stationary environments, because the time variance of the acoustic scene makes it difficult to discriminate speech from noise. We propose two approaches to SAD, where one is based on statistical signal processing, while the other utilizes neural networks. The former employs sophisticated signal processing to track the noise and speech energies and is meant to support the case for a resource-efficient, unsupervised signal processing approach.\r\nThe latter introduces a recurrent network layer that operates on short segments of the input speech to do temporal smoothing in the presence of non-stationary noise. 
The systems are tested on the Fearless Steps challenge database, which consists of the transmission data from the Apollo-11 space mission.\r\nThe statistical SAD achieves comparable detection performance to earlier proposed neural network based SADs, while the neural network based approach leads to a decision cost function of 1.07% on the evaluation set of the 2020 Fearless Steps Challenge, which sets a new state of the art.","lang":"eng"}],"project":[{"name":"Computing Resources Provided by the Paderborn Center for Parallel Computing","_id":"52"}],"has_accepted_license":"1","date_created":"2020-11-25T15:03:19Z","keyword":["voice activity detection","speech activity detection","neural network","statistical speech processing"],"file_date_updated":"2020-12-11T12:33:04Z","language":[{"iso":"eng"}],"type":"conference","date_updated":"2023-10-26T08:28:49Z","publication":"INTERSPEECH 2020 Virtual Shanghai China","author":[{"first_name":"Jens","last_name":"Heitkaemper","id":"27643","full_name":"Heitkaemper, Jens"},{"first_name":"Joerg","full_name":"Schmalenstroeer, Joerg","id":"460","last_name":"Schmalenstroeer"},{"first_name":"Reinhold","last_name":"Haeb-Umbach","id":"242","full_name":"Haeb-Umbach, Reinhold"}],"status":"public","year":"2020","user_id":"460","citation":{"apa":"Heitkaemper, J., Schmalenstroeer, J., & Haeb-Umbach, R. (2020). Statistical and Neural Network Based Speech Activity Detection in Non-Stationary Acoustic Environments. INTERSPEECH 2020 Virtual Shanghai China.","ieee":"J. Heitkaemper, J. Schmalenstroeer, and R. Haeb-Umbach, “Statistical and Neural Network Based Speech Activity Detection in Non-Stationary Acoustic Environments,” 2020.","mla":"Heitkaemper, Jens, et al. “Statistical and Neural Network Based Speech Activity Detection in Non-Stationary Acoustic Environments.” INTERSPEECH 2020 Virtual Shanghai China, 2020.","ama":"Heitkaemper J, Schmalenstroeer J, Haeb-Umbach R. 
Statistical and Neural Network Based Speech Activity Detection in Non-Stationary Acoustic Environments. In: INTERSPEECH 2020 Virtual Shanghai China. ; 2020.","chicago":"Heitkaemper, Jens, Joerg Schmalenstroeer, and Reinhold Haeb-Umbach. “Statistical and Neural Network Based Speech Activity Detection in Non-Stationary Acoustic Environments.” In INTERSPEECH 2020 Virtual Shanghai China, 2020.","short":"J. Heitkaemper, J. Schmalenstroeer, R. Haeb-Umbach, in: INTERSPEECH 2020 Virtual Shanghai China, 2020.","bibtex":"@inproceedings{Heitkaemper_Schmalenstroeer_Haeb-Umbach_2020, title={Statistical and Neural Network Based Speech Activity Detection in Non-Stationary Acoustic Environments}, booktitle={INTERSPEECH 2020 Virtual Shanghai China}, author={Heitkaemper, Jens and Schmalenstroeer, Joerg and Haeb-Umbach, Reinhold}, year={2020} }"},"title":"Statistical and Neural Network Based Speech Activity Detection in Non-Stationary Acoustic Environments","_id":"20505","department":[{"_id":"54"}]}