{"status":"public","author":[{"id":"97208","first_name":"David","full_name":"Johnson, David","last_name":"Johnson"},{"last_name":"Hakobyan","full_name":"Hakobyan, Olya","first_name":"Olya"},{"id":"98941","first_name":"Jonas","full_name":"Paletschek, Jonas","last_name":"Paletschek"},{"last_name":"Drimalla","full_name":"Drimalla, Hanna","first_name":"Hanna"}],"department":[{"_id":"660"}],"doi":"10.1109/taffc.2024.3505269","file":[{"success":1,"file_name":"Explainable_AI_for_Audio_and_Visual_Affective_Computing_A_Scoping_Review.pdf","file_size":3252812,"creator":"johnson","access_level":"closed","content_type":"application/pdf","relation":"main_file","date_updated":"2025-09-16T07:34:27Z","file_id":"61291","date_created":"2025-09-16T07:34:27Z"}],"language":[{"iso":"eng"}],"date_created":"2025-09-16T07:24:07Z","has_accepted_license":"1","volume":16,"type":"journal_article","citation":{"ieee":"D. Johnson, O. Hakobyan, J. Paletschek, and H. Drimalla, “Explainable AI for Audio and Visual Affective Computing: A Scoping Review,” IEEE Transactions on Affective Computing, vol. 16, no. 2, pp. 518–536, 2024, doi: 10.1109/taffc.2024.3505269.","mla":"Johnson, David, et al. “Explainable AI for Audio and Visual Affective Computing: A Scoping Review.” IEEE Transactions on Affective Computing, vol. 16, no. 2, Institute of Electrical and Electronics Engineers (IEEE), 2024, pp. 518–36, doi:10.1109/taffc.2024.3505269.","bibtex":"@article{Johnson_Hakobyan_Paletschek_Drimalla_2024, title={Explainable AI for Audio and Visual Affective Computing: A Scoping Review}, volume={16}, DOI={10.1109/taffc.2024.3505269}, number={2}, journal={IEEE Transactions on Affective Computing}, publisher={Institute of Electrical and Electronics Engineers (IEEE)}, author={Johnson, David and Hakobyan, Olya and Paletschek, Jonas and Drimalla, Hanna}, year={2024}, pages={518–536} }","ama":"Johnson D, Hakobyan O, Paletschek J, Drimalla H. Explainable AI for Audio and Visual Affective Computing: A Scoping Review. IEEE Transactions on Affective Computing. 2024;16(2):518-536. doi:10.1109/taffc.2024.3505269","short":"D. Johnson, O. Hakobyan, J. Paletschek, H. Drimalla, IEEE Transactions on Affective Computing 16 (2024) 518–536.","chicago":"Johnson, David, Olya Hakobyan, Jonas Paletschek, and Hanna Drimalla. “Explainable AI for Audio and Visual Affective Computing: A Scoping Review.” IEEE Transactions on Affective Computing 16, no. 2 (2024): 518–36. https://doi.org/10.1109/taffc.2024.3505269.","apa":"Johnson, D., Hakobyan, O., Paletschek, J., & Drimalla, H. (2024). Explainable AI for Audio and Visual Affective Computing: A Scoping Review. IEEE Transactions on Affective Computing, 16(2), 518–536. https://doi.org/10.1109/taffc.2024.3505269"},"_id":"61290","year":"2024","file_date_updated":"2025-09-16T07:34:27Z","issue":"2","ddc":["000"],"publication":"IEEE Transactions on Affective Computing","article_type":"review","publication_status":"published","abstract":[{"lang":"eng","text":"ffective computing often relies on audiovisual data to identify affective states from non-verbal signals, such as facial expressions and vocal cues. Since automatic affect recognition can be used in sensitive applications, such as healthcare and education, it is crucial to understand how models arrive at their decisions. Interpretability of machine learning models is the goal of the emerging research area of Explainable AI (explainable AI (XAI)). 
This scoping review aims to survey the field of audiovisual affective machine learning to identify how XAI is applied in this domain. We first provide an overview of XAI concepts relevant to affective computing. Next, following the recommended PRISMA guidelines, we perform a literature search in the ACM, IEEE, Web of Science and PubMed databases. After systematically reviewing 1190 articles, a final set of 65 papers is included in our analysis. We quantitatively summarize the scope, methods and evaluation of the XAI techniques used in the identified papers. Our findings show encouraging developments for using XAI to explain models in audiovisual affective computing, yet only a limited set of methods are used in the reviewed works. Following a critical discussion, we provide recommendations for incorporating interpretability in future work for affective machine learning."}],"date_updated":"2025-09-16T08:02:23Z","intvolume":" 16","publication_identifier":{"issn":["1949-3045","2371-9850"]},"project":[{"_id":"110","name":"TRR 318 - Project Area A"},{"_id":"1204","name":"TRR 318 - Teilprojekt IRG BI"},{"_id":"1200","name":"TRR 318 - Teilprojekt A6 - Inklusive Ko-Konstruktion sozialer Signale des Verstehens"}],"title":"Explainable AI for Audio and Visual Affective Computing: A Scoping Review","page":"518-536","user_id":"97208","publisher":"Institute of Electrical and Electronics Engineers (IEEE)"}