---
_id: '61220'
abstract:
- lang: eng
  text: This chapter presents recurring structures of interactions—and their associated
    goals—as they occur in explaining processes. It explores how explanations are
    not delivered in isolation but unfold through dynamic, structured sequences of
    interaction between participants. Beginning with the smallest units, we examine
    how individual dialog acts and multimodal signals form micro-patterns within turns.
    These, in turn, compose meso-level structures such as pragmatic frames, which organize
    sequences of interaction into meaningful, goal-oriented episodes. At the macro-level,
    we identify common types of explanatory dialogues, such as inquiry, information-seeking,
    or deliberation, which are shaped by participants’ goals and situational demands.
    The chapter highlights how these abstract patterns of structure are instantiated
    differently across social and situational contexts and proposes that understanding
    them is crucial for designing socially intelligent and adaptive XAI systems. By
    analyzing how these structures emerge and function, we offer a framework for operationalizing
    explanation structures in a way that supports co-constructive and context-sensitive
    human-AI interaction.
author:
- first_name: Patricia
  full_name: Jimenez, Patricia
  id: '103339'
  last_name: Jimenez
- first_name: Anna Lisa
  full_name: Vollmer, Anna Lisa
  last_name: Vollmer
- first_name: Henning
  full_name: Wachsmuth, Henning
  last_name: Wachsmuth
citation:
  ama: 'Jimenez P, Vollmer AL, Wachsmuth H. Structures Underlying Explanations. In:
    Rohlfing K, Främling K, Lim B, Alpsancar S, Thommes K, eds. <i>Social Explainable
    AI: Communications of NII Shonan Meetings</i>. Springer Singapore.'
  apa: 'Jimenez, P., Vollmer, A. L., &#38; Wachsmuth, H. (n.d.). Structures Underlying
    Explanations. In K. Rohlfing, K. Främling, B. Lim, S. Alpsancar, &#38; K. Thommes
    (Eds.), <i>Social Explainable AI: Communications of NII Shonan Meetings</i>. Springer
    Singapore.'
  bibtex: '@inbook{Jimenez_Vollmer_Wachsmuth, title={Structures Underlying Explanations},
    booktitle={Social Explainable AI: Communications of NII Shonan Meetings}, publisher={Springer
    Singapore}, author={Jimenez, Patricia and Vollmer, Anna Lisa and Wachsmuth,
    Henning}, editor={Rohlfing, Katharina and Främling, Kary and Lim, Brian and Alpsancar,
    Suzana and Thommes, Kirsten} }'
  chicago: 'Jimenez, Patricia, Anna Lisa Vollmer, and Henning Wachsmuth. “Structures
    Underlying Explanations.” In <i>Social Explainable AI: Communications of NII Shonan
    Meetings</i>, edited by Katharina Rohlfing, Kary Främling, Brian Lim, Suzana Alpsancar,
    and Kirsten Thommes. Springer Singapore, n.d.'
  ieee: 'P. Jimenez, A. L. Vollmer, and H. Wachsmuth, “Structures Underlying Explanations,”
    in <i>Social Explainable AI: Communications of NII Shonan Meetings</i>, K. Rohlfing,
    K. Främling, B. Lim, S. Alpsancar, and K. Thommes, Eds. Springer Singapore.'
  mla: 'Jimenez, Patricia, et al. “Structures Underlying Explanations.” <i>Social
    Explainable AI: Communications of NII Shonan Meetings</i>, edited by Katharina
    Rohlfing et al., Springer Singapore.'
  short: 'P. Jimenez, A.L. Vollmer, H. Wachsmuth, in: K. Rohlfing, K. Främling, B.
    Lim, S. Alpsancar, K. Thommes (Eds.), Social Explainable AI: Communications of
    NII Shonan Meetings, Springer Singapore, n.d.'
date_created: 2025-09-11T13:54:27Z
date_updated: 2025-09-12T11:43:07Z
editor:
- first_name: Katharina
  full_name: Rohlfing, Katharina
  last_name: Rohlfing
- first_name: Kary
  full_name: Främling, Kary
  last_name: Främling
- first_name: Brian
  full_name: Lim, Brian
  last_name: Lim
- first_name: Suzana
  full_name: Alpsancar, Suzana
  last_name: Alpsancar
- first_name: Kirsten
  full_name: Thommes, Kirsten
  last_name: Thommes
language:
- iso: eng
project:
- _id: '122'
  name: TRR 318 - Subproject B3
publication: 'Social Explainable AI: Communications of NII Shonan Meetings'
publication_identifier:
  eisbn:
  - 978-981-96-5290-7
publication_status: inpress
publisher: Springer Singapore
status: public
title: Structures Underlying Explanations
type: book_chapter
user_id: '103339'
year: '2026'
...
---
_id: '60718'
abstract:
- lang: eng
  text: The ability to generate explanations that are understood by explainees is
    the quintessence of explainable artificial intelligence. Since understanding
    depends on the explainee's background and needs, recent research focused on
    co-constructive explanation dialogues, where an explainer continuously monitors
    the explainee's understanding and adapts their explanations dynamically. We
    investigate the ability of large language models (LLMs) to engage as explainers
    in co-constructive explanation dialogues. In particular, we present a user
    study in which explainees interact with an LLM in two settings, one of which
    involves the LLM being instructed to explain a topic co-constructively. We
    evaluate the explainees' understanding before and after the dialogue, as well
    as their perception of the LLMs' co-constructive behavior. Our results suggest
    that LLMs show some co-constructive behaviors, such as asking verification
    questions, that foster the explainees' engagement and can improve understanding
    of a topic. However, their ability to effectively monitor the current
    understanding and scaffold the explanations accordingly remains limited.
author:
- first_name: Leandra
  full_name: Fichtel, Leandra
  last_name: Fichtel
- first_name: Maximilian
  full_name: Spliethöver, Maximilian
  last_name: Spliethöver
- first_name: Eyke
  full_name: Hüllermeier, Eyke
  last_name: Hüllermeier
- first_name: Patricia
  full_name: Jimenez, Patricia
  id: '103339'
  last_name: Jimenez
- first_name: Nils
  full_name: Klowait, Nils
  id: '98454'
  last_name: Klowait
  orcid: 0000-0002-7347-099X
- first_name: Stefan
  full_name: Kopp, Stefan
  last_name: Kopp
- first_name: Axel-Cyrille
  full_name: Ngonga Ngomo, Axel-Cyrille
  id: '65716'
  last_name: Ngonga Ngomo
- first_name: Amelie
  full_name: Robrecht, Amelie
  last_name: Robrecht
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
- first_name: Lutz
  full_name: Terfloth, Lutz
  id: '37320'
  last_name: Terfloth
- first_name: Anna-Lisa
  full_name: Vollmer, Anna-Lisa
  last_name: Vollmer
- first_name: Henning
  full_name: Wachsmuth, Henning
  last_name: Wachsmuth
citation:
  ama: Fichtel L, Spliethöver M, Hüllermeier E, et al. Investigating Co-Constructive
    Behavior of Large Language Models in Explanation Dialogues. <i>arXiv:2504.18483</i>.
    Published online 2025.
  apa: Fichtel, L., Spliethöver, M., Hüllermeier, E., Jimenez, P., Klowait, N., Kopp,
    S., Ngonga Ngomo, A.-C., Robrecht, A., Scharlau, I., Terfloth, L., Vollmer, A.-L.,
    &#38; Wachsmuth, H. (2025). Investigating Co-Constructive Behavior of Large Language
    Models in Explanation Dialogues. In <i>arXiv:2504.18483</i>.
  bibtex: '@article{Fichtel_Spliethöver_Hüllermeier_Jimenez_Klowait_Kopp_Ngonga Ngomo_Robrecht_Scharlau_Terfloth_et
    al._2025, title={Investigating Co-Constructive Behavior of Large Language Models
    in Explanation Dialogues}, journal={arXiv:2504.18483}, author={Fichtel, Leandra
    and Spliethöver, Maximilian and Hüllermeier, Eyke and Jimenez, Patricia and Klowait,
    Nils and Kopp, Stefan and Ngonga Ngomo, Axel-Cyrille and Robrecht, Amelie and
    Scharlau, Ingrid and Terfloth, Lutz and et al.}, year={2025} }'
  chicago: Fichtel, Leandra, Maximilian Spliethöver, Eyke Hüllermeier, Patricia Jimenez,
    Nils Klowait, Stefan Kopp, Axel-Cyrille Ngonga Ngomo, et al. “Investigating Co-Constructive
    Behavior of Large Language Models in Explanation Dialogues.” <i>ArXiv:2504.18483</i>,
    2025.
  ieee: L. Fichtel <i>et al.</i>, “Investigating Co-Constructive Behavior of Large
    Language Models in Explanation Dialogues,” <i>arXiv:2504.18483</i>. 2025.
  mla: Fichtel, Leandra, et al. “Investigating Co-Constructive Behavior of Large Language
    Models in Explanation Dialogues.” <i>ArXiv:2504.18483</i>, 2025.
  short: L. Fichtel, M. Spliethöver, E. Hüllermeier, P. Jimenez, N. Klowait, S. Kopp,
    A.-C. Ngonga Ngomo, A. Robrecht, I. Scharlau, L. Terfloth, A.-L. Vollmer, H. Wachsmuth,
    ArXiv:2504.18483 (2025).
date_created: 2025-07-22T13:10:42Z
date_updated: 2025-07-23T11:23:32Z
external_id:
  arxiv:
  - '2504.18483'
has_accepted_license: '1'
language:
- iso: eng
main_file_link:
- url: https://arxiv.org/pdf/2504.18483
page: '20'
project:
- _id: '121'
  grant_number: '438445824'
  name: 'TRR 318 - B01: TRR 318 - Ein dialogbasierter Ansatz zur Erklärung von Modellen
    des maschinellen Lernens (Teilprojekt B01)'
- _id: '127'
  name: 'TRR 318 - C4: TRR 318 - Subproject C4 - Metaphern als Werkzeug des Erklärens'
- _id: '122'
  name: 'TRR 318 - B3: TRR 318 - Subproject B3'
- _id: '119'
  name: 'TRR 318 - Ö: TRR 318 - Project Area Ö'
- _id: '114'
  grant_number: '438445824'
  name: 'TRR 318 - A04: TRR 318 - Integration des technischen Modells in das Partnermodell
    bei der Erklärung von digitalen Artefakten (Teilprojekt A04)'
publication: arXiv:2504.18483
status: public
title: Investigating Co-Constructive Behavior of Large Language Models in Explanation
  Dialogues
type: preprint
user_id: '98454'
year: '2025'
...
---
_id: '61234'
abstract:
- lang: eng
  text: The ability to generate explanations that are understood by explainees is
    the quintessence of explainable artificial intelligence. Since understanding
    depends on the explainee's background and needs, recent research focused on
    co-constructive explanation dialogues, where an explainer continuously monitors
    the explainee's understanding and adapts their explanations dynamically. We
    investigate the ability of large language models (LLMs) to engage as explainers
    in co-constructive explanation dialogues. In particular, we present a user
    study in which explainees interact with an LLM in two settings, one of which
    involves the LLM being instructed to explain a topic co-constructively. We
    evaluate the explainees' understanding before and after the dialogue, as well
    as their perception of the LLMs' co-constructive behavior. Our results suggest
    that LLMs show some co-constructive behaviors, such as asking verification
    questions, that foster the explainees' engagement and can improve understanding
    of a topic. However, their ability to effectively monitor the current
    understanding and scaffold the explanations accordingly remains limited.
author:
- first_name: Leandra
  full_name: Fichtel, Leandra
  last_name: Fichtel
- first_name: Maximilian
  full_name: Spliethöver, Maximilian
  id: '84035'
  last_name: Spliethöver
  orcid: 0000-0003-4364-1409
- first_name: Eyke
  full_name: Hüllermeier, Eyke
  id: '48129'
  last_name: Hüllermeier
- first_name: Patricia
  full_name: Jimenez, Patricia
  id: '103339'
  last_name: Jimenez
- first_name: Nils
  full_name: Klowait, Nils
  id: '98454'
  last_name: Klowait
  orcid: 0000-0002-7347-099X
- first_name: Stefan
  full_name: Kopp, Stefan
  last_name: Kopp
- first_name: Axel-Cyrille
  full_name: Ngonga Ngomo, Axel-Cyrille
  id: '65716'
  last_name: Ngonga Ngomo
- first_name: Amelie
  full_name: Robrecht, Amelie
  id: '91982'
  last_name: Robrecht
  orcid: 0000-0001-5622-8248
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
- first_name: Lutz
  full_name: Terfloth, Lutz
  id: '37320'
  last_name: Terfloth
- first_name: Anna-Lisa
  full_name: Vollmer, Anna-Lisa
  id: '86589'
  last_name: Vollmer
- first_name: Henning
  full_name: Wachsmuth, Henning
  id: '3900'
  last_name: Wachsmuth
citation:
  ama: 'Fichtel L, Spliethöver M, Hüllermeier E, et al. Investigating Co-Constructive
    Behavior of Large Language Models in Explanation Dialogues. In: <i>Proceedings
    of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue</i>.
    Association for Computational Linguistics.'
  apa: Fichtel, L., Spliethöver, M., Hüllermeier, E., Jimenez, P., Klowait, N., Kopp,
    S., Ngonga Ngomo, A.-C., Robrecht, A., Scharlau, I., Terfloth, L., Vollmer, A.-L.,
    &#38; Wachsmuth, H. (n.d.). Investigating Co-Constructive Behavior of Large Language
    Models in Explanation Dialogues. <i>Proceedings of the 26th Annual Meeting of
    the Special Interest Group on Discourse and Dialogue</i>. Annual Meeting of the
    Special Interest Group on Discourse and Dialogue.
  bibtex: '@inproceedings{Fichtel_Spliethöver_Hüllermeier_Jimenez_Klowait_Kopp_Ngonga
    Ngomo_Robrecht_Scharlau_Terfloth_et al., place={Avignon, France}, title={Investigating
    Co-Constructive Behavior of Large Language Models in Explanation Dialogues},
    booktitle={Proceedings of the 26th Annual Meeting of the Special Interest Group
    on Discourse and Dialogue}, publisher={Association for Computational Linguistics},
    author={Fichtel, Leandra and Spliethöver, Maximilian and Hüllermeier, Eyke and
    Jimenez, Patricia and Klowait, Nils and Kopp, Stefan and Ngonga Ngomo, Axel-Cyrille
    and Robrecht, Amelie and Scharlau, Ingrid and Terfloth, Lutz and et al.} }'
  chicago: 'Fichtel, Leandra, Maximilian Spliethöver, Eyke Hüllermeier, Patricia Jimenez,
    Nils Klowait, Stefan Kopp, Axel-Cyrille Ngonga Ngomo, et al. “Investigating Co-Constructive
    Behavior of Large Language Models in Explanation Dialogues.” In <i>Proceedings
    of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue</i>.
    Avignon, France: Association for Computational Linguistics, n.d.'
  ieee: L. Fichtel <i>et al.</i>, “Investigating Co-Constructive Behavior of Large
    Language Models in Explanation Dialogues,” presented at the Annual Meeting of
    the Special Interest Group on Discourse and Dialogue.
  mla: Fichtel, Leandra, et al. “Investigating Co-Constructive Behavior of Large Language
    Models in Explanation Dialogues.” <i>Proceedings of the 26th Annual Meeting of
    the Special Interest Group on Discourse and Dialogue</i>, Association for Computational
    Linguistics.
  short: 'L. Fichtel, M. Spliethöver, E. Hüllermeier, P. Jimenez, N. Klowait, S. Kopp,
    A.-C. Ngonga Ngomo, A. Robrecht, I. Scharlau, L. Terfloth, A.-L. Vollmer, H. Wachsmuth,
    in: Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse
    and Dialogue, Association for Computational Linguistics, Avignon, France, n.d.'
conference:
  name: Annual Meeting of the Special Interest Group on Discourse and Dialogue
date_created: 2025-09-11T16:11:17Z
date_updated: 2025-09-12T09:50:48Z
department:
- _id: '660'
external_id:
  arxiv:
  - '2504.18483'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2504.18483
oa: '1'
place: Avignon, France
project:
- _id: '118'
  name: 'TRR 318: Project Area INF'
- _id: '121'
  name: 'TRR 318; TP B01: Ein dialogbasierter Ansatz zur Erklärung von Modellen des
    maschinellen Lernens'
- _id: '127'
  name: 'TRR 318; TP C04: Metaphern als Werkzeug des Erklärens'
- _id: '122'
  name: TRR 318 - Subproject B3
- _id: '119'
  name: TRR 318 - Project Area Ö
- _id: '114'
  name: 'TRR 318; TP A04: Integration des technischen Modells in das Partnermodell
    bei der Erklärung von digitalen Artefakten'
publication: Proceedings of the 26th Annual Meeting of the Special Interest Group
  on Discourse and Dialogue
publication_status: accepted
publisher: Association for Computational Linguistics
related_material:
  link:
  - relation: software
    url: https://github.com/webis-de/sigdial25-co-constructive-llms
  - relation: research_data
    url: https://github.com/webis-de/sigdial25-co-constructive-llms-data
status: public
title: Investigating Co-Constructive Behavior of Large Language Models in Explanation
  Dialogues
type: conference
user_id: '84035'
year: '2025'
...
---
_id: '59917'
abstract:
- lang: eng
  text: Under the slogan of trustworthy AI, much of contemporary AI research is focused
    on designing AI systems and usage practices that inspire human trust and, thus,
    enhance adoption of AI systems. However, a person affected by an AI system may
    not be convinced by AI system design alone; neither should they be, if the AI
    system is embedded in a social context that gives good reason to believe that
    it is used in tension with a person’s interest. In such cases, distrust in the
    system may be justified and necessary to build meaningful trust in the first
    place. We propose the term ‘healthy distrust’ to describe such a justified,
    careful stance towards certain AI usage practices. We investigate prior notions
    of trust and distrust in computer science, sociology, history, psychology, and
    philosophy, outline a remaining gap that healthy distrust might fill, and conceptualize
    healthy distrust as a crucial part of AI usage that respects human autonomy.
author:
- first_name: Benjamin
  full_name: Paaßen, Benjamin
  last_name: Paaßen
- first_name: Suzana
  full_name: Alpsancar, Suzana
  id: '93637'
  last_name: Alpsancar
- first_name: Tobias
  full_name: Matzner, Tobias
  id: '65695'
  last_name: Matzner
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
citation:
  ama: Paaßen B, Alpsancar S, Matzner T, Scharlau I. Healthy Distrust in AI systems.
    <i>arXiv</i>. Published online 2025.
  apa: Paaßen, B., Alpsancar, S., Matzner, T., &#38; Scharlau, I. (2025). Healthy
    Distrust in AI systems. In <i>arXiv</i>.
  bibtex: '@article{Paaßen_Alpsancar_Matzner_Scharlau_2025, title={Healthy Distrust
    in AI systems}, journal={arXiv}, author={Paaßen, Benjamin and Alpsancar, Suzana
    and Matzner, Tobias and Scharlau, Ingrid}, year={2025} }'
  chicago: Paaßen, Benjamin, Suzana Alpsancar, Tobias Matzner, and Ingrid Scharlau.
    “Healthy Distrust in AI Systems.” <i>ArXiv</i>, 2025.
  ieee: B. Paaßen, S. Alpsancar, T. Matzner, and I. Scharlau, “Healthy Distrust in
    AI systems,” <i>arXiv</i>. 2025.
  mla: Paaßen, Benjamin, et al. “Healthy Distrust in AI Systems.” <i>ArXiv</i>, 2025.
  short: B. Paaßen, S. Alpsancar, T. Matzner, I. Scharlau, ArXiv (2025).
date_created: 2025-05-16T09:39:13Z
date_updated: 2025-11-18T09:38:01Z
department:
- _id: '424'
- _id: '26'
- _id: '756'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2505.09747
oa: '1'
project:
- _id: '122'
  name: 'TRR 318 - B3: TRR 318 - Subproject B3'
- _id: '124'
  name: 'TRR 318 - C1: TRR 318 - Subproject C1 - Gesundes Misstrauen in Erklärungen'
- _id: '370'
  name: 'TRR 318 - B06: TRR 318 - Teilprojekt B6 - Ethik und Normativität der erklärbaren
    KI'
publication: arXiv
status: public
title: Healthy Distrust in AI systems
type: preprint
user_id: '93637'
year: '2025'
...
---
_id: '61156'
abstract:
- lang: eng
  text: Explainability has become an important topic in computer science and artificial
    intelligence, leading to a subfield called Explainable Artificial Intelligence
    (XAI). The goal of providing or seeking explanations is to achieve (better) ‘understanding’
    on the part of the explainee. However, what it means to ‘understand’ is still
    not clearly defined, and the concept itself is rarely the subject of scientific
    investigation. This conceptual article aims to present a model of forms of understanding
    for XAI-explanations and beyond. From an interdisciplinary perspective bringing
    together computer science, linguistics, sociology, philosophy and psychology,
    a definition of understanding and its forms, assessment, and dynamics during the
    process of giving everyday explanations are explored. Two types of understanding
    are considered as possible outcomes of explanations, namely enabledness, ‘knowing
    how’ to do or decide something, and comprehension, ‘knowing that’ – both in different
    degrees (from shallow to deep). Explanations regularly start with shallow understanding
    in a specific domain and can lead to deep comprehension and enabledness of the
    explanandum, which we see as a prerequisite for human users to gain agency. In
    this process, the increase of comprehension and enabledness are highly interdependent.
    Against the background of this systematization, special challenges of understanding
    in XAI are discussed.
article_number: '101419'
article_type: original
author:
- first_name: Hendrik
  full_name: Buschmeier, Hendrik
  id: '76456'
  last_name: Buschmeier
  orcid: 0000-0002-9613-5713
- first_name: Heike M.
  full_name: Buhl, Heike M.
  id: '27152'
  last_name: Buhl
- first_name: Friederike
  full_name: Kern, Friederike
  last_name: Kern
- first_name: Angela
  full_name: Grimminger, Angela
  id: '57578'
  last_name: Grimminger
- first_name: Helen
  full_name: Beierling, Helen
  id: '50995'
  last_name: Beierling
- first_name: Josephine Beryl
  full_name: Fisher, Josephine Beryl
  id: '56345'
  last_name: Fisher
  orcid: 0000-0002-9997-9241
- first_name: André
  full_name: Groß, André
  id: '93405'
  last_name: Groß
  orcid: 0000-0002-9593-7220
- first_name: Ilona
  full_name: Horwath, Ilona
  id: '68836'
  last_name: Horwath
- first_name: Nils
  full_name: Klowait, Nils
  id: '98454'
  last_name: Klowait
  orcid: 0000-0002-7347-099X
- first_name: Stefan Teodorov
  full_name: Lazarov, Stefan Teodorov
  id: '90345'
  last_name: Lazarov
  orcid: 0009-0009-0892-9483
- first_name: Michael
  full_name: Lenke, Michael
  last_name: Lenke
- first_name: Vivien
  full_name: Lohmer, Vivien
  last_name: Lohmer
- first_name: Katharina
  full_name: Rohlfing, Katharina
  id: '50352'
  last_name: Rohlfing
  orcid: 0000-0002-5676-8233
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
- first_name: Amit
  full_name: Singh, Amit
  id: '91018'
  last_name: Singh
  orcid: 0000-0002-7789-1521
- first_name: Lutz
  full_name: Terfloth, Lutz
  id: '37320'
  last_name: Terfloth
- first_name: Anna-Lisa
  full_name: Vollmer, Anna-Lisa
  id: '86589'
  last_name: Vollmer
- first_name: Yu
  full_name: Wang, Yu
  last_name: Wang
- first_name: Annedore
  full_name: Wilmes, Annedore
  last_name: Wilmes
- first_name: Britta
  full_name: Wrede, Britta
  last_name: Wrede
citation:
  ama: Buschmeier H, Buhl HM, Kern F, et al. Forms of Understanding for XAI-Explanations.
    <i>Cognitive Systems Research</i>. 2025;94. doi:<a href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>
  apa: Buschmeier, H., Buhl, H. M., Kern, F., Grimminger, A., Beierling, H., Fisher,
    J. B., Groß, A., Horwath, I., Klowait, N., Lazarov, S. T., Lenke, M., Lohmer,
    V., Rohlfing, K., Scharlau, I., Singh, A., Terfloth, L., Vollmer, A.-L., Wang,
    Y., Wilmes, A., &#38; Wrede, B. (2025). Forms of Understanding for XAI-Explanations.
    <i>Cognitive Systems Research</i>, <i>94</i>, Article 101419. <a href="https://doi.org/10.1016/j.cogsys.2025.101419">https://doi.org/10.1016/j.cogsys.2025.101419</a>
  bibtex: '@article{Buschmeier_Buhl_Kern_Grimminger_Beierling_Fisher_Groß_Horwath_Klowait_Lazarov_et
    al._2025, title={Forms of Understanding for XAI-Explanations}, volume={94}, DOI={<a
    href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>},
    number={101419}, journal={Cognitive Systems Research}, author={Buschmeier, Hendrik
    and Buhl, Heike M. and Kern, Friederike and Grimminger, Angela and Beierling,
    Helen and Fisher, Josephine Beryl and Groß, André and Horwath, Ilona and Klowait,
    Nils and Lazarov, Stefan Teodorov and et al.}, year={2025} }'
  chicago: Buschmeier, Hendrik, Heike M. Buhl, Friederike Kern, Angela Grimminger,
    Helen Beierling, Josephine Beryl Fisher, André Groß, et al. “Forms of Understanding
    for XAI-Explanations.” <i>Cognitive Systems Research</i> 94 (2025). <a href="https://doi.org/10.1016/j.cogsys.2025.101419">https://doi.org/10.1016/j.cogsys.2025.101419</a>.
  ieee: 'H. Buschmeier <i>et al.</i>, “Forms of Understanding for XAI-Explanations,”
    <i>Cognitive Systems Research</i>, vol. 94, Art. no. 101419, 2025, doi: <a href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>.'
  mla: Buschmeier, Hendrik, et al. “Forms of Understanding for XAI-Explanations.”
    <i>Cognitive Systems Research</i>, vol. 94, 101419, 2025, doi:<a href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>.
  short: H. Buschmeier, H.M. Buhl, F. Kern, A. Grimminger, H. Beierling, J.B. Fisher,
    A. Groß, I. Horwath, N. Klowait, S.T. Lazarov, M. Lenke, V. Lohmer, K. Rohlfing,
    I. Scharlau, A. Singh, L. Terfloth, A.-L. Vollmer, Y. Wang, A. Wilmes, B. Wrede,
    Cognitive Systems Research 94 (2025).
date_created: 2025-09-08T14:24:32Z
date_updated: 2025-12-05T15:32:25Z
ddc:
- '006'
department:
- _id: '660'
doi: 10.1016/j.cogsys.2025.101419
file:
- access_level: closed
  content_type: application/pdf
  creator: hbuschme
  date_created: 2025-12-01T21:02:20Z
  date_updated: 2025-12-01T21:02:20Z
  file_id: '62730'
  file_name: Buschmeier-etal-2025-COGSYS.pdf
  file_size: 10114981
  relation: main_file
  success: 1
file_date_updated: 2025-12-01T21:02:20Z
has_accepted_license: '1'
intvolume: '        94'
keyword:
- understanding
- explaining
- explanations
- explainable
- AI
- interdisciplinarity
- comprehension
- enabledness
- agency
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://www.sciencedirect.com/science/article/pii/S1389041725000993?via%3Dihub
oa: '1'
project:
- _id: '111'
  name: 'TRR 318; TP A01: Adaptives Erklären'
- _id: '112'
  name: 'TRR 318; TP A02: Verstehensprozess einer Erklärung beobachten und auswerten'
- _id: '113'
  name: TRR 318 - Subproject A3
- _id: '114'
  name: 'TRR 318; TP A04: Integration des technischen Modells in das Partnermodell
    bei der Erklärung von digitalen Artefakten'
- _id: '115'
  name: 'TRR 318; TP A05: Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog'
- _id: '122'
  name: TRR 318 - Subproject B3
- _id: '123'
  name: TRR 318 - Subproject B5
- _id: '119'
  name: TRR 318 - Project Area Ö
publication: Cognitive Systems Research
publication_status: published
quality_controlled: '1'
status: public
title: Forms of Understanding for XAI-Explanations
type: journal_article
user_id: '57578'
volume: 94
year: '2025'
...
---
_id: '51345'
abstract:
- lang: eng
  text: The algorithmic imaginary as a theoretical concept has received increasing
    attention in recent years as it aims at users’ appropriation of algorithmic processes
    operating in opacity. But the concept originally only starts from the users’ point
    of view, while the processes on the platforms’ side are largely left out. In contrast,
    this paper argues that what is true for users is also valid for algorithmic processes
    and the designers behind them. On the one hand, the algorithm imagines users’ future
    behavior via machine learning, which is supposed to predict all their future actions.
    On the other hand, the designers anticipate different actions that could potentially
    be performed by users with every new implementation of features such as social media
    feeds. In order to bring into view this permanently reciprocal interplay coupled
    to the imaginary, in which not only the users are involved, I will argue for a
    more comprehensive and theoretically precise algorithmic imaginary referring to
    the theory of Cornelius Castoriadis. In such a perspective, an important contribution
    can be formulated for a theory of social media platforms that goes beyond praxeocentrism
    or structural determinism.
author:
- first_name: Christian
  full_name: Schulz, Christian
  id: '72684'
  last_name: Schulz
citation:
  ama: Schulz C. A new algorithmic imaginary. <i>Media, Culture &#38; Society</i>.
    2023;45(3):646-655. doi:<a href="https://doi.org/10.1177/01634437221136014">10.1177/01634437221136014</a>
  apa: Schulz, C. (2023). A new algorithmic imaginary. <i>Media, Culture &#38; Society</i>,
    <i>45</i>(3), 646–655. <a href="https://doi.org/10.1177/01634437221136014">https://doi.org/10.1177/01634437221136014</a>
  bibtex: '@article{Schulz_2023, title={A new algorithmic imaginary}, volume={45},
    DOI={<a href="https://doi.org/10.1177/01634437221136014">10.1177/01634437221136014</a>},
    number={3}, journal={Media, Culture &#38; Society}, publisher={SAGE Publications},
    author={Schulz, Christian}, year={2023}, pages={646–655} }'
  chicago: 'Schulz, Christian. “A New Algorithmic Imaginary.” <i>Media, Culture &#38;
    Society</i> 45, no. 3 (2023): 646–55. <a href="https://doi.org/10.1177/01634437221136014">https://doi.org/10.1177/01634437221136014</a>.'
  ieee: 'C. Schulz, “A new algorithmic imaginary,” <i>Media, Culture &#38; Society</i>,
    vol. 45, no. 3, pp. 646–655, 2023, doi: <a href="https://doi.org/10.1177/01634437221136014">10.1177/01634437221136014</a>.'
  mla: Schulz, Christian. “A New Algorithmic Imaginary.” <i>Media, Culture &#38; Society</i>,
    vol. 45, no. 3, SAGE Publications, 2023, pp. 646–55, doi:<a href="https://doi.org/10.1177/01634437221136014">10.1177/01634437221136014</a>.
  short: C. Schulz, Media, Culture &#38; Society 45 (2023) 646–655.
date_created: 2024-02-14T09:21:17Z
date_updated: 2024-02-26T08:39:45Z
department:
- _id: '660'
doi: 10.1177/01634437221136014
intvolume: '        45'
issue: '3'
keyword:
- Sociology and Political Science
- Communication
language:
- iso: eng
page: 646-655
project:
- _id: '122'
  name: 'TRR 318 - B3: TRR 318 - Subproject B3'
publication: Media, Culture & Society
publication_identifier:
  issn:
  - 0163-4437
  - 1460-3675
publication_status: published
publisher: SAGE Publications
status: public
title: A new algorithmic imaginary
type: journal_article
user_id: '54779'
volume: 45
year: '2023'
...
---
_id: '51766'
author:
- first_name: Christian
  full_name: Schulz, Christian
  id: '72684'
  last_name: Schulz
- first_name: Annedore
  full_name: Wilmes, Annedore
  last_name: Wilmes
citation:
  ama: Schulz C, Wilmes A. Vernacular Metaphors of AI.
  apa: Schulz, C., &#38; Wilmes, A. (n.d.). <i>Vernacular Metaphors of AI</i>.
  bibtex: '@inproceedings{Schulz_Wilmes, place={ICA Preconference Workshop “History
    of Digital Metaphors”, University of Toronto, May 25}, title={Vernacular Metaphors
    of AI}, author={Schulz, Christian and Wilmes, Annedore} }'
  chicago: Schulz, Christian, and Annedore Wilmes. “Vernacular Metaphors of AI.”
    ICA Preconference Workshop “History of Digital Metaphors”, University of Toronto,
    May 25, n.d.
  ieee: C. Schulz and A. Wilmes, “Vernacular Metaphors of AI.”
  mla: Schulz, Christian, and Annedore Wilmes. <i>Vernacular Metaphors of AI</i>.
  short: 'C. Schulz, A. Wilmes, in: ICA Preconference Workshop “History of Digital
    Metaphors”, University of Toronto, May 25, n.d.'
date_created: 2024-02-22T15:11:29Z
date_updated: 2024-08-14T06:04:55Z
department:
- _id: '660'
language:
- iso: eng
place: 'ICA Preconference Workshop "History of Digital Metaphors", University of Toronto,
  May 25'
project:
- _id: '122'
  name: 'TRR 318 - B3: TRR 318 - Subproject B3'
publication_status: unpublished
status: public
title: Vernacular Metaphors of AI
type: conference
user_id: '72684'
year: '2023'
...
---
_id: '51752'
author:
- first_name: Josefine
  full_name: Finke, Josefine
  last_name: Finke
- first_name: Ilona
  full_name: Horwath, Ilona
  id: '68836'
  last_name: Horwath
- first_name: Tobias
  full_name: Matzner, Tobias
  id: '65695'
  last_name: Matzner
- first_name: Christian
  full_name: Schulz, Christian
  id: '72684'
  last_name: Schulz
citation:
  ama: 'Finke J, Horwath I, Matzner T, Schulz C. (De)Coding social practice in the
    field of XAI: Towards a co-constructive framework of explanations and understanding
    between lay users and algorithmic systems. In: <i>Artificial Intelligence in HCI</i>.
    Lecture Notes in Computer Science. Springer International Publishing; 2022:149-160.
    doi:<a href="https://doi.org/10.1007/978-3-031-05643-7_10">10.1007/978-3-031-05643-7_10</a>'
  apa: 'Finke, J., Horwath, I., Matzner, T., &#38; Schulz, C. (2022). (De)Coding social
    practice in the field of XAI: Towards a co-constructive framework of explanations
    and understanding between lay users and algorithmic systems. <i>Artificial Intelligence
    in HCI</i>, 149–160. <a href="https://doi.org/10.1007/978-3-031-05643-7_10">https://doi.org/10.1007/978-3-031-05643-7_10</a>'
  bibtex: '@inproceedings{Finke_Horwath_Matzner_Schulz_2022, place={Cham}, series={Lecture
    Notes in Computer Science}, title={(De)Coding social practice in the field of
    XAI: Towards a co-constructive framework of explanations and understanding between
    lay users and algorithmic systems}, DOI={<a href="https://doi.org/10.1007/978-3-031-05643-7_10">10.1007/978-3-031-05643-7_10</a>},
    booktitle={Artificial Intelligence in HCI}, publisher={Springer International
    Publishing}, author={Finke, Josefine and Horwath, Ilona and Matzner, Tobias and
    Schulz, Christian}, year={2022}, pages={149–160}, collection={Lecture Notes in
    Computer Science} }'
  chicago: 'Finke, Josefine, Ilona Horwath, Tobias Matzner, and Christian Schulz.
    “(De)Coding Social Practice in the Field of XAI: Towards a Co-Constructive Framework
    of Explanations and Understanding between Lay Users and Algorithmic Systems.”
    In <i>Artificial Intelligence in HCI</i>, 149–60. Lecture Notes in Computer Science.
    Cham: Springer International Publishing, 2022. <a href="https://doi.org/10.1007/978-3-031-05643-7_10">https://doi.org/10.1007/978-3-031-05643-7_10</a>.'
  ieee: 'J. Finke, I. Horwath, T. Matzner, and C. Schulz, “(De)Coding social practice
    in the field of XAI: Towards a co-constructive framework of explanations and understanding
    between lay users and algorithmic systems,” in <i>Artificial Intelligence in HCI</i>,
    2022, pp. 149–160, doi: <a href="https://doi.org/10.1007/978-3-031-05643-7_10">10.1007/978-3-031-05643-7_10</a>.'
  mla: 'Finke, Josefine, et al. “(De)Coding Social Practice in the Field of XAI: Towards
    a Co-Constructive Framework of Explanations and Understanding between Lay Users
    and Algorithmic Systems.” <i>Artificial Intelligence in HCI</i>, Springer International
    Publishing, 2022, pp. 149–60, doi:<a href="https://doi.org/10.1007/978-3-031-05643-7_10">10.1007/978-3-031-05643-7_10</a>.'
  short: 'J. Finke, I. Horwath, T. Matzner, C. Schulz, in: Artificial Intelligence
    in HCI, Springer International Publishing, Cham, 2022, pp. 149–160.'
date_created: 2024-02-22T14:41:24Z
date_updated: 2024-07-02T06:19:43Z
department:
- _id: '757'
doi: 10.1007/978-3-031-05643-7_10
language:
- iso: eng
page: 149-160
place: Cham
project:
- _id: '122'
  name: 'TRR 318 - B3: TRR 318 - Subproject B3'
publication: Artificial Intelligence in HCI
publication_status: published
publisher: Springer International Publishing
series_title: Lecture Notes in Computer Science
status: public
title: '(De)Coding social practice in the field of XAI: Towards a co-constructive
  framework of explanations and understanding between lay users and algorithmic systems'
type: conference
user_id: '72684'
year: '2022'
...
---
_id: '39639'
author:
- first_name: Josefine
  full_name: Finke, Josefine
  last_name: Finke
- first_name: Ilona
  full_name: Horwath, Ilona
  id: '68836'
  last_name: Horwath
- first_name: Tobias
  full_name: Matzner, Tobias
  id: '65695'
  last_name: Matzner
- first_name: Christian
  full_name: Schulz, Christian
  last_name: Schulz
citation:
  ama: 'Finke J, Horwath I, Matzner T, Schulz C. (De)Coding Social Practice in the
    Field of XAI: Towards a Co-constructive Framework of Explanations and Understanding
    Between Lay Users and Algorithmic Systems. In: <i>Artificial Intelligence in HCI</i>.
    Springer International Publishing; 2022:149-160. doi:<a href="https://doi.org/10.1007/978-3-031-05643-7_10">10.1007/978-3-031-05643-7_10</a>'
  apa: 'Finke, J., Horwath, I., Matzner, T., &#38; Schulz, C. (2022). (De)Coding Social
    Practice in the Field of XAI: Towards a Co-constructive Framework of Explanations
    and Understanding Between Lay Users and Algorithmic Systems. <i>Artificial Intelligence
    in HCI</i>, 149–160. <a href="https://doi.org/10.1007/978-3-031-05643-7_10">https://doi.org/10.1007/978-3-031-05643-7_10</a>'
  bibtex: '@inproceedings{Finke_Horwath_Matzner_Schulz_2022, place={Cham}, title={(De)Coding
    Social Practice in the Field of XAI: Towards a Co-constructive Framework of Explanations
    and Understanding Between Lay Users and Algorithmic Systems}, DOI={<a href="https://doi.org/10.1007/978-3-031-05643-7_10">10.1007/978-3-031-05643-7_10</a>},
    booktitle={Artificial Intelligence in HCI}, publisher={Springer International
    Publishing}, author={Finke, Josefine and Horwath, Ilona and Matzner, Tobias and
    Schulz, Christian}, year={2022}, pages={149–160} }'
  chicago: 'Finke, Josefine, Ilona Horwath, Tobias Matzner, and Christian Schulz.
    “(De)Coding Social Practice in the Field of XAI: Towards a Co-Constructive Framework
    of Explanations and Understanding Between Lay Users and Algorithmic Systems.”
    In <i>Artificial Intelligence in HCI</i>, 149–60. Cham: Springer International
    Publishing, 2022. <a href="https://doi.org/10.1007/978-3-031-05643-7_10">https://doi.org/10.1007/978-3-031-05643-7_10</a>.'
  ieee: 'J. Finke, I. Horwath, T. Matzner, and C. Schulz, “(De)Coding Social Practice
    in the Field of XAI: Towards a Co-constructive Framework of Explanations and Understanding
    Between Lay Users and Algorithmic Systems,” in <i>Artificial Intelligence in HCI</i>,
    2022, pp. 149–160, doi: <a href="https://doi.org/10.1007/978-3-031-05643-7_10">10.1007/978-3-031-05643-7_10</a>.'
  mla: 'Finke, Josefine, et al. “(De)Coding Social Practice in the Field of XAI: Towards
    a Co-Constructive Framework of Explanations and Understanding Between Lay Users
    and Algorithmic Systems.” <i>Artificial Intelligence in HCI</i>, Springer International
    Publishing, 2022, pp. 149–60, doi:<a href="https://doi.org/10.1007/978-3-031-05643-7_10">10.1007/978-3-031-05643-7_10</a>.'
  short: 'J. Finke, I. Horwath, T. Matzner, C. Schulz, in: Artificial Intelligence
    in HCI, Springer International Publishing, Cham, 2022, pp. 149–160.'
conference:
  name: Artificial Intelligence in HCI, International Conference on Human-Computer
    Interaction
date_created: 2023-01-24T16:09:42Z
date_updated: 2023-05-03T08:24:22Z
department:
- _id: '603'
- _id: '757'
doi: 10.1007/978-3-031-05643-7_10
language:
- iso: eng
page: 149-160
place: Cham
project:
- _id: '122'
  name: 'TRR 318 - B3: TRR 318 - Subproject B3'
publication: Artificial Intelligence in HCI
publication_status: published
publisher: Springer International Publishing
quality_controlled: '1'
status: public
title: '(De)Coding Social Practice in the Field of XAI: Towards a Co-constructive
  Framework of Explanations and Understanding Between Lay Users and Algorithmic Systems'
type: conference
user_id: '68836'
year: '2022'
...
