{"project":[{"name":"PC2: Computing Resources Provided by the Paderborn Center for Parallel Computing","_id":"52"}],"has_accepted_license":"1","quality_controlled":"1","abstract":[{"text":"Continuous Speech Separation (CSS) has been proposed to address speech overlaps during the analysis of realistic meeting-like conversations by eliminating any overlaps before further processing.\r\nCSS separates a recording of arbitrarily many speakers into a small number of overlap-free output channels, where each output channel may contain speech of multiple speakers.\r\nThis is often done by applying a conventional separation model trained with Utterance-level Permutation Invariant Training (uPIT), which exclusively maps a speaker to an output channel, in a sliding window approach called stitching.\r\nRecently, we introduced an alternative training scheme called Graph-PIT that teaches the separation network to directly produce output streams in the required format without stitching.\r\nIt can handle an arbitrary number of speakers as long as no more of them overlap at any time than the separator has output channels.\r\nIn this contribution, we further investigate the Graph-PIT training scheme.\r\nWe show in extended experiments that models trained with Graph-PIT also work in challenging reverberant conditions.\r\nModels trained in this way are able to perform segment-less CSS, i.e., without stitching, and achieve comparable and often better separation quality than the conventional CSS with uPIT and stitching.\r\nWe simplify the training schedule for Graph-PIT with the recently proposed Source-Aggregated Signal-to-Distortion Ratio (SA-SDR) loss.\r\nIt eliminates unfavorable properties of the previously used A-SDR loss and thus enables training with Graph-PIT from scratch.\r\nGraph-PIT training relaxes the constraints w.r.t. 
the allowed numbers of speakers and speaking patterns, which allows using a larger variety of training data.\r\nFurthermore, we introduce novel signal-level evaluation metrics for meeting scenarios, namely the source-aggregated scale- and convolution-invariant Signal-to-Distortion Ratio (SA-SI-SDR and SA-CI-SDR), which are generalizations of the commonly used SDR-based metrics for the CSS case.","lang":"eng"}],"oa":"1","file_date_updated":"2023-01-11T08:50:19Z","keyword":["Continuous Speech Separation","Source Separation","Graph-PIT","Dynamic Programming","Permutation Invariant Training"],"date_created":"2023-01-09T17:24:17Z","ddc":["000"],"publication_status":"published","doi":"10.1109/taslp.2022.3228629","page":"576-589","file":[{"relation":"main_file","access_level":"open_access","content_type":"application/pdf","creator":"haebumb","file_size":7185077,"date_updated":"2023-01-11T08:50:19Z","date_created":"2023-01-09T17:46:05Z","file_name":"main.pdf","file_id":"35607"}],"intvolume":" 31","status":"public","year":"2023","publisher":"Institute of Electrical and Electronics Engineers (IEEE)","author":[{"last_name":"von Neumann","id":"49870","full_name":"von Neumann, Thilo","orcid":"https://orcid.org/0000-0002-7717-8670","first_name":"Thilo"},{"full_name":"Kinoshita, Keisuke","last_name":"Kinoshita","first_name":"Keisuke"},{"full_name":"Boeddeker, Christoph","id":"40767","last_name":"Boeddeker","first_name":"Christoph"},{"full_name":"Delcroix, Marc","last_name":"Delcroix","first_name":"Marc"},{"first_name":"Reinhold","full_name":"Haeb-Umbach, Reinhold","id":"242","last_name":"Haeb-Umbach"}],"department":[{"_id":"54"}],"publication_identifier":{"issn":["2329-9290","2329-9304"]},"title":"Segment-Less Continuous Speech Separation of Meetings: Training and Evaluation Criteria","_id":"35602","volume":31,"citation":{"bibtex":"@article{von Neumann_Kinoshita_Boeddeker_Delcroix_Haeb-Umbach_2023, title={Segment-Less Continuous Speech Separation of Meetings: Training and 
Evaluation Criteria}, volume={31}, DOI={10.1109/taslp.2022.3228629}, journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing}, publisher={Institute of Electrical and Electronics Engineers (IEEE)}, author={von Neumann, Thilo and Kinoshita, Keisuke and Boeddeker, Christoph and Delcroix, Marc and Haeb-Umbach, Reinhold}, year={2023}, pages={576–589} }","ama":"von Neumann T, Kinoshita K, Boeddeker C, Delcroix M, Haeb-Umbach R. Segment-Less Continuous Speech Separation of Meetings: Training and Evaluation Criteria. IEEE/ACM Transactions on Audio, Speech, and Language Processing. 2023;31:576-589. doi:10.1109/taslp.2022.3228629","mla":"von Neumann, Thilo, et al. “Segment-Less Continuous Speech Separation of Meetings: Training and Evaluation Criteria.” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 31, Institute of Electrical and Electronics Engineers (IEEE), 2023, pp. 576–89, doi:10.1109/taslp.2022.3228629.","ieee":"T. von Neumann, K. Kinoshita, C. Boeddeker, M. Delcroix, and R. Haeb-Umbach, “Segment-Less Continuous Speech Separation of Meetings: Training and Evaluation Criteria,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 31, pp. 576–589, 2023, doi: 10.1109/taslp.2022.3228629.","short":"T. von Neumann, K. Kinoshita, C. Boeddeker, M. Delcroix, R. Haeb-Umbach, IEEE/ACM Transactions on Audio, Speech, and Language Processing 31 (2023) 576–589.","apa":"von Neumann, T., Kinoshita, K., Boeddeker, C., Delcroix, M., & Haeb-Umbach, R. (2023). Segment-Less Continuous Speech Separation of Meetings: Training and Evaluation Criteria. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 31, 576–589. https://doi.org/10.1109/taslp.2022.3228629","chicago":"Neumann, Thilo von, Keisuke Kinoshita, Christoph Boeddeker, Marc Delcroix, and Reinhold Haeb-Umbach. 
“Segment-Less Continuous Speech Separation of Meetings: Training and Evaluation Criteria.” IEEE/ACM Transactions on Audio, Speech, and Language Processing 31 (2023): 576–89. https://doi.org/10.1109/taslp.2022.3228629."},"user_id":"49870","language":[{"iso":"eng"}],"article_type":"original","publication":"IEEE/ACM Transactions on Audio, Speech, and Language Processing","type":"journal_article","date_updated":"2023-11-15T12:16:11Z"}