{"year":"2020","author":[{"full_name":"Ebbers, Janek","last_name":"Ebbers","first_name":"Janek","id":"34851"},{"first_name":"Reinhold","last_name":"Haeb-Umbach","full_name":"Haeb-Umbach, Reinhold","id":"242"}],"file":[{"date_created":"2020-12-16T08:57:22Z","date_updated":"2020-12-16T08:57:22Z","relation":"main_file","creator":"huesera","access_level":"open_access","file_size":108326,"content_type":"application/pdf","file_name":"DCASE2020Workshop_Ebbers_Paper.pdf","file_id":"20754"}],"_id":"20753","language":[{"iso":"eng"}],"file_date_updated":"2020-12-16T08:57:22Z","type":"conference","status":"public","date_created":"2020-12-16T08:55:27Z","user_id":"34851","has_accepted_license":"1","department":[{"_id":"54"}],"date_updated":"2023-11-22T08:27:32Z","oa":"1","citation":{"mla":"Ebbers, Janek, and Reinhold Haeb-Umbach. “Forward-Backward Convolutional Recurrent Neural Networks and Tag-Conditioned Convolutional Neural Networks for Weakly Labeled Semi-Supervised Sound Event Detection.” Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020), 2020.","apa":"Ebbers, J., & Haeb-Umbach, R. (2020). Forward-Backward Convolutional Recurrent Neural Networks and Tag-Conditioned Convolutional Neural Networks for Weakly Labeled Semi-Supervised Sound Event Detection. Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020).","bibtex":"@inproceedings{Ebbers_Haeb-Umbach_2020, title={Forward-Backward Convolutional Recurrent Neural Networks and Tag-Conditioned Convolutional Neural Networks for Weakly Labeled Semi-Supervised Sound Event Detection}, booktitle={Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020)}, author={Ebbers, Janek and Haeb-Umbach, Reinhold}, year={2020} }","ieee":"J. Ebbers and R. 
Haeb-Umbach, “Forward-Backward Convolutional Recurrent Neural Networks and Tag-Conditioned Convolutional Neural Networks for Weakly Labeled Semi-Supervised Sound Event Detection,” 2020.","ama":"Ebbers J, Haeb-Umbach R. Forward-Backward Convolutional Recurrent Neural Networks and Tag-Conditioned Convolutional Neural Networks for Weakly Labeled Semi-Supervised Sound Event Detection. In: Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020). ; 2020.","chicago":"Ebbers, Janek, and Reinhold Haeb-Umbach. “Forward-Backward Convolutional Recurrent Neural Networks and Tag-Conditioned Convolutional Neural Networks for Weakly Labeled Semi-Supervised Sound Event Detection.” In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020), 2020.","short":"J. Ebbers, R. Haeb-Umbach, in: Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020), 2020."},"quality_controlled":"1","license":"https://creativecommons.org/publicdomain/zero/1.0/","abstract":[{"lang":"eng","text":"In this paper we present our system for the detection and classification of acoustic scenes and events (DCASE) 2020 Challenge Task 4: Sound event detection and separation in domestic environments. We introduce two new models: the forward-backward convolutional recurrent neural network (FBCRNN) and the tag-conditioned convolutional neural network (CNN). The FBCRNN employs two recurrent neural network (RNN) classifiers sharing the same CNN for preprocessing. With one RNN processing a recording in forward direction and the other in backward direction, the two networks are trained to jointly predict audio tags, i.e., weak labels, at each time step within a recording, given that at each time step they have jointly processed the whole recording. The proposed training encourages the classifiers to tag events as soon as possible. 
Therefore, after training, the networks can be applied to shorter audio segments of, e.g., 200 ms, allowing sound event detection (SED). Further, we propose a tag-conditioned CNN to complement SED. It is trained to predict strong labels while using (predicted) tags, i.e., weak labels, as additional input. For training, pseudo strong labels from an FBCRNN ensemble are used. The presented system achieved fourth and third place in the systems and teams rankings, respectively. Subsequent improvements allow our system to outperform the challenge baseline and winner systems on average by 18.0% and 2.2% event-based F1-score, respectively, on the validation set. Source code is publicly available at https://github.com/fgnt/pb_sed."}],"ddc":["000"],"title":"Forward-Backward Convolutional Recurrent Neural Networks and Tag-Conditioned Convolutional Neural Networks for Weakly Labeled Semi-Supervised Sound Event Detection","project":[{"name":"PC2: Computing Resources Provided by the Paderborn Center for Parallel Computing","_id":"52"}],"publication":"Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020)"}