---
_id: '61290'
abstract:
- lang: eng
  text: Affective computing often relies on audiovisual data to identify affective
    states from non-verbal signals, such as facial expressions and vocal cues. Since
    automatic affect recognition can be used in sensitive applications, such as healthcare
    and education, it is crucial to understand how models arrive at their decisions.
    Interpretability of machine learning models is the goal of the emerging research
    area of Explainable AI (XAI). This scoping review aims to survey
    the field of audiovisual affective machine learning to identify how XAI is applied
    in this domain. We first provide an overview of XAI concepts relevant to affective
    computing. Next, following the recommended PRISMA guidelines, we perform a literature
    search in the ACM, IEEE, Web of Science and PubMed databases. After systematically
    reviewing 1190 articles, a final set of 65 papers is included in our analysis.
    We quantitatively summarize the scope, methods and evaluation of the XAI techniques
    used in the identified papers. Our findings show encouraging developments for
    using XAI to explain models in audiovisual affective computing, yet only a limited
    set of methods are used in the reviewed works. Following a critical discussion,
    we provide recommendations for incorporating interpretability in future work for
    affective machine learning.
article_type: review
author:
- first_name: David
  full_name: Johnson, David
  id: '97208'
  last_name: Johnson
- first_name: Olya
  full_name: Hakobyan, Olya
  last_name: Hakobyan
- first_name: Jonas
  full_name: Paletschek, Jonas
  id: '98941'
  last_name: Paletschek
- first_name: Hanna
  full_name: Drimalla, Hanna
  last_name: Drimalla
citation:
  ama: 'Johnson D, Hakobyan O, Paletschek J, Drimalla H. Explainable AI for Audio
    and Visual Affective Computing: A Scoping Review. <i>IEEE Transactions on Affective
    Computing</i>. 2024;16(2):518-536. doi:<a href="https://doi.org/10.1109/taffc.2024.3505269">10.1109/taffc.2024.3505269</a>'
  apa: 'Johnson, D., Hakobyan, O., Paletschek, J., &#38; Drimalla, H. (2024). Explainable
    AI for Audio and Visual Affective Computing: A Scoping Review. <i>IEEE Transactions
    on Affective Computing</i>, <i>16</i>(2), 518–536. <a href="https://doi.org/10.1109/taffc.2024.3505269">https://doi.org/10.1109/taffc.2024.3505269</a>'
  bibtex: '@article{Johnson_Hakobyan_Paletschek_Drimalla_2024, title={Explainable
    AI for Audio and Visual Affective Computing: A Scoping Review}, volume={16}, DOI={<a
    href="https://doi.org/10.1109/taffc.2024.3505269">10.1109/taffc.2024.3505269</a>},
    number={2}, journal={IEEE Transactions on Affective Computing}, publisher={Institute
    of Electrical and Electronics Engineers (IEEE)}, author={Johnson, David and Hakobyan,
    Olya and Paletschek, Jonas and Drimalla, Hanna}, year={2024}, pages={518–536}
    }'
  chicago: 'Johnson, David, Olya Hakobyan, Jonas Paletschek, and Hanna Drimalla. “Explainable
    AI for Audio and Visual Affective Computing: A Scoping Review.” <i>IEEE Transactions
    on Affective Computing</i> 16, no. 2 (2024): 518–36. <a href="https://doi.org/10.1109/taffc.2024.3505269">https://doi.org/10.1109/taffc.2024.3505269</a>.'
  ieee: 'D. Johnson, O. Hakobyan, J. Paletschek, and H. Drimalla, “Explainable AI
    for Audio and Visual Affective Computing: A Scoping Review,” <i>IEEE Transactions
    on Affective Computing</i>, vol. 16, no. 2, pp. 518–536, 2024, doi: <a href="https://doi.org/10.1109/taffc.2024.3505269">10.1109/taffc.2024.3505269</a>.'
  mla: 'Johnson, David, et al. “Explainable AI for Audio and Visual Affective Computing:
    A Scoping Review.” <i>IEEE Transactions on Affective Computing</i>, vol. 16, no.
    2, Institute of Electrical and Electronics Engineers (IEEE), 2024, pp. 518–36,
    doi:<a href="https://doi.org/10.1109/taffc.2024.3505269">10.1109/taffc.2024.3505269</a>.'
  short: D. Johnson, O. Hakobyan, J. Paletschek, H. Drimalla, IEEE Transactions on
    Affective Computing 16 (2024) 518–536.
date_created: 2025-09-16T07:24:07Z
date_updated: 2025-09-16T08:02:23Z
ddc:
- '000'
department:
- _id: '660'
doi: 10.1109/taffc.2024.3505269
file:
- access_level: closed
  content_type: application/pdf
  creator: johnson
  date_created: 2025-09-16T07:34:27Z
  date_updated: 2025-09-16T07:34:27Z
  file_id: '61291'
  file_name: Explainable_AI_for_Audio_and_Visual_Affective_Computing_A_Scoping_Review.pdf
  file_size: 3252812
  relation: main_file
  success: 1
file_date_updated: 2025-09-16T07:34:27Z
has_accepted_license: '1'
intvolume: '16'
issue: '2'
language:
- iso: eng
page: 518-536
project:
- _id: '110'
  name: TRR 318 - Project Area A
- _id: '1204'
  name: TRR 318 - Teilprojekt IRG BI
- _id: '1200'
  name: TRR 318 - Teilprojekt A6 - Inklusive Ko-Konstruktion sozialer Signale des
    Verstehens
publication: IEEE Transactions on Affective Computing
publication_identifier:
  issn:
  - 1949-3045
  - 2371-9850
publication_status: published
publisher: Institute of Electrical and Electronics Engineers (IEEE)
status: public
title: 'Explainable AI for Audio and Visual Affective Computing: A Scoping Review'
type: journal_article
user_id: '97208'
volume: 16
year: '2024'
...
