---
_id: '61156'
abstract:
- lang: eng
  text: Explainability has become an important topic in computer science and artificial
    intelligence, leading to a subfield called Explainable Artificial Intelligence
    (XAI). The goal of providing or seeking explanations is to achieve (better) ‘understanding’
    on the part of the explainee. However, what it means to ‘understand’ is still
    not clearly defined, and the concept itself is rarely the subject of scientific
    investigation. This conceptual article aims to present a model of forms of understanding
    for XAI-explanations and beyond. From an interdisciplinary perspective bringing
    together computer science, linguistics, sociology, philosophy and psychology,
    a definition of understanding and its forms, assessment, and dynamics during the
    process of giving everyday explanations are explored. Two types of understanding
    are considered as possible outcomes of explanations, namely enabledness, ‘knowing
    how’ to do or decide something, and comprehension, ‘knowing that’ – both in different
    degrees (from shallow to deep). Explanations regularly start with shallow understanding
    in a specific domain and can lead to deep comprehension and enabledness of the
    explanandum, which we see as a prerequisite for human users to gain agency. In
    this process, the increases in comprehension and enabledness are highly interdependent.
    Against the background of this systematization, special challenges of understanding
    in XAI are discussed.
article_number: '101419'
article_type: original
author:
- first_name: Hendrik
  full_name: Buschmeier, Hendrik
  id: '76456'
  last_name: Buschmeier
  orcid: 0000-0002-9613-5713
- first_name: Heike M.
  full_name: Buhl, Heike M.
  id: '27152'
  last_name: Buhl
- first_name: Friederike
  full_name: Kern, Friederike
  last_name: Kern
- first_name: Angela
  full_name: Grimminger, Angela
  id: '57578'
  last_name: Grimminger
- first_name: Helen
  full_name: Beierling, Helen
  id: '50995'
  last_name: Beierling
- first_name: Josephine Beryl
  full_name: Fisher, Josephine Beryl
  id: '56345'
  last_name: Fisher
  orcid: 0000-0002-9997-9241
- first_name: André
  full_name: Groß, André
  id: '93405'
  last_name: Groß
  orcid: 0000-0002-9593-7220
- first_name: Ilona
  full_name: Horwath, Ilona
  id: '68836'
  last_name: Horwath
- first_name: Nils
  full_name: Klowait, Nils
  id: '98454'
  last_name: Klowait
  orcid: 0000-0002-7347-099X
- first_name: Stefan Teodorov
  full_name: Lazarov, Stefan Teodorov
  id: '90345'
  last_name: Lazarov
  orcid: 0009-0009-0892-9483
- first_name: Michael
  full_name: Lenke, Michael
  last_name: Lenke
- first_name: Vivien
  full_name: Lohmer, Vivien
  last_name: Lohmer
- first_name: Katharina
  full_name: Rohlfing, Katharina
  id: '50352'
  last_name: Rohlfing
  orcid: 0000-0002-5676-8233
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
- first_name: Amit
  full_name: Singh, Amit
  id: '91018'
  last_name: Singh
  orcid: 0000-0002-7789-1521
- first_name: Lutz
  full_name: Terfloth, Lutz
  id: '37320'
  last_name: Terfloth
- first_name: Anna-Lisa
  full_name: Vollmer, Anna-Lisa
  id: '86589'
  last_name: Vollmer
- first_name: Yu
  full_name: Wang, Yu
  last_name: Wang
- first_name: Annedore
  full_name: Wilmes, Annedore
  last_name: Wilmes
- first_name: Britta
  full_name: Wrede, Britta
  last_name: Wrede
citation:
  ama: Buschmeier H, Buhl HM, Kern F, et al. Forms of Understanding for XAI-Explanations.
    <i>Cognitive Systems Research</i>. 2025;94. doi:<a href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>
  apa: Buschmeier, H., Buhl, H. M., Kern, F., Grimminger, A., Beierling, H., Fisher,
    J. B., Groß, A., Horwath, I., Klowait, N., Lazarov, S. T., Lenke, M., Lohmer,
    V., Rohlfing, K., Scharlau, I., Singh, A., Terfloth, L., Vollmer, A.-L., Wang,
    Y., Wilmes, A., &#38; Wrede, B. (2025). Forms of Understanding for XAI-Explanations.
    <i>Cognitive Systems Research</i>, <i>94</i>, Article 101419. <a href="https://doi.org/10.1016/j.cogsys.2025.101419">https://doi.org/10.1016/j.cogsys.2025.101419</a>
  bibtex: '@article{Buschmeier_Buhl_Kern_Grimminger_Beierling_Fisher_Groß_Horwath_Klowait_Lazarov_et
    al._2025, title={Forms of Understanding for XAI-Explanations}, volume={94}, DOI={<a
    href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>},
    number={101419}, journal={Cognitive Systems Research}, author={Buschmeier, Hendrik
    and Buhl, Heike M. and Kern, Friederike and Grimminger, Angela and Beierling,
    Helen and Fisher, Josephine Beryl and Groß, André and Horwath, Ilona and Klowait,
    Nils and Lazarov, Stefan Teodorov and et al.}, year={2025} }'
  chicago: Buschmeier, Hendrik, Heike M. Buhl, Friederike Kern, Angela Grimminger,
    Helen Beierling, Josephine Beryl Fisher, André Groß, et al. “Forms of Understanding
    for XAI-Explanations.” <i>Cognitive Systems Research</i> 94 (2025). <a href="https://doi.org/10.1016/j.cogsys.2025.101419">https://doi.org/10.1016/j.cogsys.2025.101419</a>.
  ieee: 'H. Buschmeier <i>et al.</i>, “Forms of Understanding for XAI-Explanations,”
    <i>Cognitive Systems Research</i>, vol. 94, Art. no. 101419, 2025, doi: <a href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>.'
  mla: Buschmeier, Hendrik, et al. “Forms of Understanding for XAI-Explanations.”
    <i>Cognitive Systems Research</i>, vol. 94, 101419, 2025, doi:<a href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>.
  short: H. Buschmeier, H.M. Buhl, F. Kern, A. Grimminger, H. Beierling, J.B. Fisher,
    A. Groß, I. Horwath, N. Klowait, S.T. Lazarov, M. Lenke, V. Lohmer, K. Rohlfing,
    I. Scharlau, A. Singh, L. Terfloth, A.-L. Vollmer, Y. Wang, A. Wilmes, B. Wrede,
    Cognitive Systems Research 94 (2025).
date_created: 2025-09-08T14:24:32Z
date_updated: 2025-12-05T15:32:25Z
ddc:
- '006'
department:
- _id: '660'
doi: 10.1016/j.cogsys.2025.101419
file:
- access_level: closed
  content_type: application/pdf
  creator: hbuschme
  date_created: 2025-12-01T21:02:20Z
  date_updated: 2025-12-01T21:02:20Z
  file_id: '62730'
  file_name: Buschmeier-etal-2025-COGSYS.pdf
  file_size: 10114981
  relation: main_file
  success: 1
file_date_updated: 2025-12-01T21:02:20Z
has_accepted_license: '1'
intvolume: '        94'
keyword:
- understanding
- explaining
- explanations
- explainable
- AI
- interdisciplinarity
- comprehension
- enabledness
- agency
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://www.sciencedirect.com/science/article/pii/S1389041725000993?via%3Dihub
oa: '1'
project:
- _id: '111'
  name: 'TRR 318; TP A01: Adaptive Explaining'
- _id: '112'
  name: 'TRR 318; TP A02: Observing and evaluating the process of understanding
    an explanation'
- _id: '113'
  name: TRR 318 - Subproject A3
- _id: '114'
  name: 'TRR 318; TP A04: Integrating the technical model into the partner model
    when explaining digital artifacts'
- _id: '115'
  name: 'TRR 318; TP A05: Real-time measurement of attention in human-robot explanatory
    dialogue'
- _id: '122'
  name: TRR 318 - Subproject B3
- _id: '123'
  name: TRR 318 - Subproject B5
- _id: '119'
  name: TRR 318 - Project Area Ö
publication: Cognitive Systems Research
publication_status: published
quality_controlled: '1'
status: public
title: Forms of Understanding for XAI-Explanations
type: journal_article
user_id: '57578'
volume: 94
year: '2025'
...
---
_id: '58109'
abstract:
- lang: eng
  text: The present study aims to understand how metaphors are used in explanations.
    According to many current theories, metaphors have a conceptual function for the
    understanding of abstract objects. From this theoretical assumption, we derived
    the hypothesis that the lower the expertise of the addressee of an explanation,
    the more metaphors should be used. We tested this hypothesis on a relatively natural
    data set of 24 published videos with close to 100,000 words overall in which experts
    explain abstract, mostly scientific concepts to persons of different expertise,
    varying from minimal (children) to profound (expert). Contrary to our expectations,
    the frequency of metaphors did not decrease with expertise, but actually increased.
    This increase was statistically substantiated for larger differences in expertise.
    The study contributes to a better understanding of the use of metaphors
    in actual explanatory processes and how metaphor use depends on contextual factors.
    It thus supports the expansion of the conceptual and linguistic perspective on
    metaphors to include the aspect of how metaphors are used by speakers.
article_type: original
author:
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
- first_name: Miriam
  full_name: Körber, Miriam
  last_name: Körber
- first_name: Meghdut
  full_name: Sengupta, Meghdut
  last_name: Sengupta
- first_name: Henning
  full_name: Wachsmuth, Henning
  last_name: Wachsmuth
citation:
  ama: 'Scharlau I, Körber M, Sengupta M, Wachsmuth H. When to use a metaphor: Metaphors
    in dialogical explanations with addressees of different expertise. <i>Frontiers
    in Language Sciences</i>. 2024;3:1474924.'
  apa: 'Scharlau, I., Körber, M., Sengupta, M., &#38; Wachsmuth, H. (2024). When to
    use a metaphor: Metaphors in dialogical explanations with addressees of different
    expertise. <i>Frontiers in Language Sciences</i>, <i>3</i>, 1474924.'
  bibtex: '@article{Scharlau_Körber_Sengupta_Wachsmuth_2024, title={When to use a
    metaphor: Metaphors in dialogical explanations with addressees of different expertise},
    volume={3}, journal={Frontiers in Language Sciences}, author={Scharlau, Ingrid
    and Körber, Miriam and Sengupta, Meghdut and Wachsmuth, Henning}, year={2024},
    pages={1474924} }'
  chicago: 'Scharlau, Ingrid, Miriam Körber, Meghdut Sengupta, and Henning Wachsmuth.
    “When to Use a Metaphor: Metaphors in Dialogical Explanations with Addressees
    of Different Expertise.” <i>Frontiers in Language Sciences</i> 3 (2024): 1474924.'
  ieee: 'I. Scharlau, M. Körber, M. Sengupta, and H. Wachsmuth, “When to use a metaphor:
    Metaphors in dialogical explanations with addressees of different expertise,”
    <i>Frontiers in Language Sciences</i>, vol. 3, p. 1474924, 2024.'
  mla: 'Scharlau, Ingrid, et al. “When to Use a Metaphor: Metaphors in Dialogical
    Explanations with Addressees of Different Expertise.” <i>Frontiers in Language
    Sciences</i>, vol. 3, 2024, p. 1474924.'
  short: I. Scharlau, M. Körber, M. Sengupta, H. Wachsmuth, Frontiers in Language
    Sciences 3 (2024) 1474924.
date_created: 2025-01-08T11:59:24Z
date_updated: 2025-01-08T11:59:34Z
department:
- _id: '660'
funded_apc: '1'
intvolume: '         3'
keyword:
- metaphor
- conceptual metaphor
- conceptual metaphor theory
- metaphor usage
- explaining
- explanation
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://www.frontiersin.org/journals/language-sciences/articles/10.3389/flang.2024.1474924/full
oa: '1'
page: '1474924'
project:
- _id: '127'
  name: 'TRR 318 - Subproject C4: Metaphors as a tool of explaining'
publication: Frontiers in Language Sciences
quality_controlled: '1'
status: public
title: 'When to use a metaphor: Metaphors in dialogical explanations with addressees
  of different expertise'
type: journal_article
user_id: '451'
volume: 3
year: '2024'
...
---
_id: '48543'
abstract:
- lang: eng
  text: Explanation has been identified as an important capability for AI-based systems,
    but research on systematic strategies for achieving understanding in interaction
    with such systems is still sparse. Negation is a linguistic strategy that is often
    used in explanations. It creates a contrast space between the affirmed and the
    negated item that enriches explaining processes with additional contextual information.
    While negation in human speech has been shown to lead to higher processing costs
    and worse task performance in terms of recall or action execution when used in
    isolation, it can decrease processing costs when used in context. So far, it has
    not been considered as a guiding strategy for explanations in human-robot interaction.
    We conducted an empirical study to investigate the use of negation as a guiding
    strategy in explanatory human-robot dialogue, in which a virtual robot explains
    tasks and the actions required to solve them to a human explainee, who carries
    them out as gestures on a touchscreen. Our results show that negation, compared
    to affirmation, 1) increases processing costs, measured as reaction time, and
    2) improves several aspects of task performance. While there was no significant
    effect of negation on the number
    of initially correctly executed gestures, we found a significantly lower number
    of attempts—measured as breaks in the finger movement data before the correct
    gesture was carried out—when participants were instructed through a negation.
    We further found that the gestures resembled the presented prototype gesture
    significantly more closely following an instruction with a negation as opposed
    to an affirmation. Also, the
    participants rated the benefit of contrastive vs. affirmative explanations significantly
    higher. Repeating the instructions decreased the effects of negation, yielding
    similar processing costs and task performance measures for negation and affirmation
    after several iterations. We discuss our results with respect to possible effects
    of negation on linguistic processing of explanations and limitations of our study.
article_type: original
author:
- first_name: A.
  full_name: Groß, A.
  last_name: Groß
- first_name: Amit
  full_name: Singh, Amit
  id: '91018'
  last_name: Singh
  orcid: 0000-0002-7789-1521
- first_name: Ngoc Chi
  full_name: Banh, Ngoc Chi
  id: '38219'
  last_name: Banh
  orcid: 0000-0002-5946-4542
- first_name: B.
  full_name: Richter, B.
  last_name: Richter
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
- first_name: Katharina J.
  full_name: Rohlfing, Katharina J.
  id: '50352'
  last_name: Rohlfing
- first_name: B.
  full_name: Wrede, B.
  last_name: Wrede
citation:
  ama: Groß A, Singh A, Banh NC, et al. Scaffolding the human partner by contrastive
    guidance in an explanatory human-robot dialogue. <i>Frontiers in Robotics and
    AI</i>. 2023;10. doi:<a href="https://doi.org/10.3389/frobt.2023.1236184">10.3389/frobt.2023.1236184</a>
  apa: Groß, A., Singh, A., Banh, N. C., Richter, B., Scharlau, I., Rohlfing, K. J.,
    &#38; Wrede, B. (2023). Scaffolding the human partner by contrastive guidance
    in an explanatory human-robot dialogue. <i>Frontiers in Robotics and AI</i>, <i>10</i>.
    <a href="https://doi.org/10.3389/frobt.2023.1236184">https://doi.org/10.3389/frobt.2023.1236184</a>
  bibtex: '@article{Groß_Singh_Banh_Richter_Scharlau_Rohlfing_Wrede_2023, title={Scaffolding
    the human partner by contrastive guidance in an explanatory human-robot dialogue},
    volume={10}, DOI={<a href="https://doi.org/10.3389/frobt.2023.1236184">10.3389/frobt.2023.1236184</a>},
    journal={Frontiers in Robotics and AI}, author={Groß, A. and Singh, Amit and Banh,
    Ngoc Chi and Richter, B. and Scharlau, Ingrid and Rohlfing, Katharina J. and Wrede,
    B.}, year={2023} }'
  chicago: Groß, A., Amit Singh, Ngoc Chi Banh, B. Richter, Ingrid Scharlau, Katharina
    J. Rohlfing, and B. Wrede. “Scaffolding the Human Partner by Contrastive Guidance
    in an Explanatory Human-Robot Dialogue.” <i>Frontiers in Robotics and AI</i> 10
    (2023). <a href="https://doi.org/10.3389/frobt.2023.1236184">https://doi.org/10.3389/frobt.2023.1236184</a>.
  ieee: 'A. Groß <i>et al.</i>, “Scaffolding the human partner by contrastive guidance
    in an explanatory human-robot dialogue,” <i>Frontiers in Robotics and AI</i>,
    vol. 10, 2023, doi: <a href="https://doi.org/10.3389/frobt.2023.1236184">10.3389/frobt.2023.1236184</a>.'
  mla: Groß, A., et al. “Scaffolding the Human Partner by Contrastive Guidance in
    an Explanatory Human-Robot Dialogue.” <i>Frontiers in Robotics and AI</i>, vol.
    10, 2023, doi:<a href="https://doi.org/10.3389/frobt.2023.1236184">10.3389/frobt.2023.1236184</a>.
  short: A. Groß, A. Singh, N.C. Banh, B. Richter, I. Scharlau, K.J. Rohlfing, B.
    Wrede, Frontiers in Robotics and AI 10 (2023).
date_created: 2023-10-30T09:29:16Z
date_updated: 2024-06-26T08:01:50Z
department:
- _id: '749'
doi: 10.3389/frobt.2023.1236184
funded_apc: '1'
intvolume: '        10'
keyword:
- HRI
- XAI
- negation
- understanding
- explaining
- touch interaction
- gesture
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://www.frontiersin.org/articles/10.3389/frobt.2023.1236184/full
oa: '1'
project:
- _id: '115'
  grant_number: '438445824'
  name: 'TRR 318 - A05: Real-time measurement of attention in human-robot explanatory
    dialogue (Subproject A05)'
publication: Frontiers in Robotics and AI
publication_status: published
quality_controlled: '1'
status: public
title: Scaffolding the human partner by contrastive guidance in an explanatory human-robot
  dialogue
type: journal_article
user_id: '38219'
volume: 10
year: '2023'
...
