---
_id: '61156'
abstract:
- lang: eng
  text: Explainability has become an important topic in computer science and artificial
    intelligence, leading to a subfield called Explainable Artificial Intelligence
    (XAI). The goal of providing or seeking explanations is to achieve (better) ‘understanding’
    on the part of the explainee. However, what it means to ‘understand’ is still
    not clearly defined, and the concept itself is rarely the subject of scientific
    investigation. This conceptual article aims to present a model of forms of understanding
    for XAI-explanations and beyond. From an interdisciplinary perspective bringing
    together computer science, linguistics, sociology, philosophy, and psychology,
    a definition of understanding and its forms, assessment, and dynamics during the
    process of giving everyday explanations are explored. Two types of understanding
    are considered as possible outcomes of explanations, namely enabledness, ‘knowing
    how’ to do or decide something, and comprehension, ‘knowing that’ – both in different
    degrees (from shallow to deep). Explanations regularly start with shallow understanding
    in a specific domain and can lead to deep comprehension and enabledness of the
    explanandum, which we see as a prerequisite for human users to gain agency. In
    this process, increases in comprehension and enabledness are highly interdependent.
    Against the background of this systematization, special challenges of understanding
    in XAI are discussed.
article_number: '101419'
article_type: original
author:
- first_name: Hendrik
  full_name: Buschmeier, Hendrik
  id: '76456'
  last_name: Buschmeier
  orcid: 0000-0002-9613-5713
- first_name: Heike M.
  full_name: Buhl, Heike M.
  id: '27152'
  last_name: Buhl
- first_name: Friederike
  full_name: Kern, Friederike
  last_name: Kern
- first_name: Angela
  full_name: Grimminger, Angela
  id: '57578'
  last_name: Grimminger
- first_name: Helen
  full_name: Beierling, Helen
  id: '50995'
  last_name: Beierling
- first_name: Josephine Beryl
  full_name: Fisher, Josephine Beryl
  id: '56345'
  last_name: Fisher
  orcid: 0000-0002-9997-9241
- first_name: André
  full_name: Groß, André
  id: '93405'
  last_name: Groß
  orcid: 0000-0002-9593-7220
- first_name: Ilona
  full_name: Horwath, Ilona
  id: '68836'
  last_name: Horwath
- first_name: Nils
  full_name: Klowait, Nils
  id: '98454'
  last_name: Klowait
  orcid: 0000-0002-7347-099X
- first_name: Stefan Teodorov
  full_name: Lazarov, Stefan Teodorov
  id: '90345'
  last_name: Lazarov
  orcid: 0009-0009-0892-9483
- first_name: Michael
  full_name: Lenke, Michael
  last_name: Lenke
- first_name: Vivien
  full_name: Lohmer, Vivien
  last_name: Lohmer
- first_name: Katharina
  full_name: Rohlfing, Katharina
  id: '50352'
  last_name: Rohlfing
  orcid: 0000-0002-5676-8233
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
- first_name: Amit
  full_name: Singh, Amit
  id: '91018'
  last_name: Singh
  orcid: 0000-0002-7789-1521
- first_name: Lutz
  full_name: Terfloth, Lutz
  id: '37320'
  last_name: Terfloth
- first_name: Anna-Lisa
  full_name: Vollmer, Anna-Lisa
  id: '86589'
  last_name: Vollmer
- first_name: Yu
  full_name: Wang, Yu
  last_name: Wang
- first_name: Annedore
  full_name: Wilmes, Annedore
  last_name: Wilmes
- first_name: Britta
  full_name: Wrede, Britta
  last_name: Wrede
citation:
  ama: Buschmeier H, Buhl HM, Kern F, et al. Forms of Understanding for XAI-Explanations.
    <i>Cognitive Systems Research</i>. 2025;94. doi:<a href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>
  apa: Buschmeier, H., Buhl, H. M., Kern, F., Grimminger, A., Beierling, H., Fisher,
    J. B., Groß, A., Horwath, I., Klowait, N., Lazarov, S. T., Lenke, M., Lohmer,
    V., Rohlfing, K., Scharlau, I., Singh, A., Terfloth, L., Vollmer, A.-L., Wang,
    Y., Wilmes, A., &#38; Wrede, B. (2025). Forms of Understanding for XAI-Explanations.
    <i>Cognitive Systems Research</i>, <i>94</i>, Article 101419. <a href="https://doi.org/10.1016/j.cogsys.2025.101419">https://doi.org/10.1016/j.cogsys.2025.101419</a>
  bibtex: '@article{Buschmeier_Buhl_Kern_Grimminger_Beierling_Fisher_Groß_Horwath_Klowait_Lazarov_et
    al._2025, title={Forms of Understanding for XAI-Explanations}, volume={94}, DOI={<a
    href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>},
    number={101419}, journal={Cognitive Systems Research}, author={Buschmeier, Hendrik
    and Buhl, Heike M. and Kern, Friederike and Grimminger, Angela and Beierling,
    Helen and Fisher, Josephine Beryl and Groß, André and Horwath, Ilona and Klowait,
    Nils and Lazarov, Stefan Teodorov and et al.}, year={2025} }'
  chicago: Buschmeier, Hendrik, Heike M. Buhl, Friederike Kern, Angela Grimminger,
    Helen Beierling, Josephine Beryl Fisher, André Groß, et al. “Forms of Understanding
    for XAI-Explanations.” <i>Cognitive Systems Research</i> 94 (2025). <a href="https://doi.org/10.1016/j.cogsys.2025.101419">https://doi.org/10.1016/j.cogsys.2025.101419</a>.
  ieee: 'H. Buschmeier <i>et al.</i>, “Forms of Understanding for XAI-Explanations,”
    <i>Cognitive Systems Research</i>, vol. 94, Art. no. 101419, 2025, doi: <a href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>.'
  mla: Buschmeier, Hendrik, et al. “Forms of Understanding for XAI-Explanations.”
    <i>Cognitive Systems Research</i>, vol. 94, 101419, 2025, doi:<a href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>.
  short: H. Buschmeier, H.M. Buhl, F. Kern, A. Grimminger, H. Beierling, J.B. Fisher,
    A. Groß, I. Horwath, N. Klowait, S.T. Lazarov, M. Lenke, V. Lohmer, K. Rohlfing,
    I. Scharlau, A. Singh, L. Terfloth, A.-L. Vollmer, Y. Wang, A. Wilmes, B. Wrede,
    Cognitive Systems Research 94 (2025).
date_created: 2025-09-08T14:24:32Z
date_updated: 2025-12-05T15:32:25Z
ddc:
- '006'
department:
- _id: '660'
doi: 10.1016/j.cogsys.2025.101419
file:
- access_level: closed
  content_type: application/pdf
  creator: hbuschme
  date_created: 2025-12-01T21:02:20Z
  date_updated: 2025-12-01T21:02:20Z
  file_id: '62730'
  file_name: Buschmeier-etal-2025-COGSYS.pdf
  file_size: 10114981
  relation: main_file
  success: 1
file_date_updated: 2025-12-01T21:02:20Z
has_accepted_license: '1'
intvolume: '        94'
keyword:
- understanding
- explaining
- explanations
- explainable AI
- interdisciplinarity
- comprehension
- enabledness
- agency
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://www.sciencedirect.com/science/article/pii/S1389041725000993?via%3Dihub
oa: '1'
project:
- _id: '111'
  name: 'TRR 318; TP A01: Adaptive Explaining'
- _id: '112'
  name: 'TRR 318; TP A02: Observing and Evaluating the Process of Understanding an Explanation'
- _id: '113'
  name: TRR 318 - Subproject A3
- _id: '114'
  name: 'TRR 318; TP A04: Integrating the Technical Model into the Partner Model when
    Explaining Digital Artifacts'
- _id: '115'
  name: 'TRR 318; TP A05: Real-Time Measurement of Attention in Human-Robot Explanation Dialogue'
- _id: '122'
  name: TRR 318 - Subproject B3
- _id: '123'
  name: TRR 318 - Subproject B5
- _id: '119'
  name: TRR 318 - Project Area Ö
publication: Cognitive Systems Research
publication_status: published
quality_controlled: '1'
status: public
title: Forms of Understanding for XAI-Explanations
type: journal_article
user_id: '57578'
volume: 94
year: '2025'
...
---
_id: '53073'
abstract:
- lang: eng
  text: While shallow decision trees may be interpretable, larger ensemble models
    like gradient-boosted trees, which often set the state of the art in machine learning
    problems involving tabular data, still remain black box models. As a remedy, the
    Shapley value (SV) is a well-known concept in explainable artificial intelligence
    (XAI) research for quantifying additive feature attributions of predictions. The
    model-specific TreeSHAP methodology overcomes the exponential complexity of retrieving
    exact SVs from tree-based models. Expanding beyond individual feature attribution,
    Shapley interactions reveal the impact of intricate feature interactions of any
    order. In this work, we present TreeSHAP-IQ, an efficient method to compute any-order
    additive Shapley interactions for predictions of tree-based models. TreeSHAP-IQ
    is supported by a mathematical framework that exploits polynomial arithmetic to
    compute the interaction scores in a single recursive traversal of the tree, akin
    to Linear TreeSHAP. We apply TreeSHAP-IQ on state-of-the-art tree ensembles and
    explore interactions on well-established benchmark datasets.
author:
- first_name: Maximilian
  full_name: Muschalik, Maximilian
  last_name: Muschalik
- first_name: Fabian
  full_name: Fumagalli, Fabian
  id: '93420'
  last_name: Fumagalli
- first_name: Barbara
  full_name: Hammer, Barbara
  last_name: Hammer
- first_name: Eyke
  full_name: Huellermeier, Eyke
  id: '48129'
  last_name: Huellermeier
citation:
  ama: 'Muschalik M, Fumagalli F, Hammer B, Huellermeier E. Beyond TreeSHAP: Efficient
    Computation of Any-Order Shapley Interactions for Tree Ensembles. In: <i>Proceedings
    of the AAAI Conference on Artificial Intelligence (AAAI)</i>. Vol 38. ; 2024:14388-14396.
    doi:<a href="https://doi.org/10.1609/aaai.v38i13.29352">10.1609/aaai.v38i13.29352</a>'
  apa: 'Muschalik, M., Fumagalli, F., Hammer, B., &#38; Huellermeier, E. (2024). Beyond
    TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for Tree Ensembles.
    <i>Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)</i>, <i>38</i>(13),
    14388–14396. <a href="https://doi.org/10.1609/aaai.v38i13.29352">https://doi.org/10.1609/aaai.v38i13.29352</a>'
  bibtex: '@inproceedings{Muschalik_Fumagalli_Hammer_Huellermeier_2024, title={Beyond
    TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for Tree Ensembles},
    volume={38}, DOI={<a href="https://doi.org/10.1609/aaai.v38i13.29352">10.1609/aaai.v38i13.29352</a>},
    number={13}, booktitle={Proceedings of the AAAI Conference on Artificial Intelligence
    (AAAI)}, author={Muschalik, Maximilian and Fumagalli, Fabian and Hammer, Barbara
    and Huellermeier, Eyke}, year={2024}, pages={14388–14396} }'
  chicago: 'Muschalik, Maximilian, Fabian Fumagalli, Barbara Hammer, and Eyke Huellermeier.
    “Beyond TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for
    Tree Ensembles.” In <i>Proceedings of the AAAI Conference on Artificial Intelligence
    (AAAI)</i>, 38:14388–96, 2024. <a href="https://doi.org/10.1609/aaai.v38i13.29352">https://doi.org/10.1609/aaai.v38i13.29352</a>.'
  ieee: 'M. Muschalik, F. Fumagalli, B. Hammer, and E. Huellermeier, “Beyond TreeSHAP:
    Efficient Computation of Any-Order Shapley Interactions for Tree Ensembles,” in
    <i>Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)</i>, 2024,
    vol. 38, no. 13, pp. 14388–14396, doi: <a href="https://doi.org/10.1609/aaai.v38i13.29352">10.1609/aaai.v38i13.29352</a>.'
  mla: 'Muschalik, Maximilian, et al. “Beyond TreeSHAP: Efficient Computation of Any-Order
    Shapley Interactions for Tree Ensembles.” <i>Proceedings of the AAAI Conference
    on Artificial Intelligence (AAAI)</i>, vol. 38, no. 13, 2024, pp. 14388–96, doi:<a
    href="https://doi.org/10.1609/aaai.v38i13.29352">10.1609/aaai.v38i13.29352</a>.'
  short: 'M. Muschalik, F. Fumagalli, B. Hammer, E. Huellermeier, in: Proceedings
    of the AAAI Conference on Artificial Intelligence (AAAI), 2024, pp. 14388–14396.'
date_created: 2024-03-27T14:50:04Z
date_updated: 2025-09-11T16:20:11Z
department:
- _id: '660'
doi: 10.1609/aaai.v38i13.29352
intvolume: '        38'
issue: '13'
keyword:
- Explainable Artificial Intelligence
language:
- iso: eng
page: 14388-14396
project:
- _id: '126'
  name: 'TRR 318 - C3: TRR 318 - Subproject C3'
- _id: '109'
  name: 'TRR 318: TRR 318 - Constructing Explainability'
- _id: '117'
  name: 'TRR 318 - C: TRR 318 - Project Area C'
publication: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)
publication_identifier:
  issn:
  - 2374-3468
  - 2159-5399
publication_status: published
status: public
title: 'Beyond TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for
  Tree Ensembles'
type: conference
user_id: '93420'
volume: 38
year: '2024'
...
---
_id: '51368'
abstract:
- lang: eng
  text: Dealing with opaque algorithms, the frequent overlap between transparency
    and explainability produces seemingly unsolvable dilemmas, such as the much-discussed
    trade-off between model performance and model transparency. Referring to Niklas
    Luhmann's notion of communication, the paper argues that explainability does not
    necessarily require transparency and proposes an alternative approach. Explanations
    as communicative processes do not imply any disclosure of thoughts or neural processes,
    but only reformulations that provide the partners with additional elements and
    enable them to understand (from their perspective) what has been done and why.
    Recent computational approaches aiming at post-hoc explainability reproduce what
    happens in communication, producing explanations of the working of algorithms
    that can be different from the processes of the algorithms.
author:
- first_name: Elena
  full_name: Esposito, Elena
  last_name: Esposito
citation:
  ama: Esposito E. Does Explainability Require Transparency? <i>Sociologica</i>. 2023;16(3):17-27.
    doi:<a href="https://doi.org/10.6092/ISSN.1971-8853/15804">10.6092/ISSN.1971-8853/15804</a>
  apa: Esposito, E. (2023). Does Explainability Require Transparency? <i>Sociologica</i>,
    <i>16</i>(3), 17–27. <a href="https://doi.org/10.6092/ISSN.1971-8853/15804">https://doi.org/10.6092/ISSN.1971-8853/15804</a>
  bibtex: '@article{Esposito_2023, title={Does Explainability Require Transparency?},
    volume={16}, DOI={<a href="https://doi.org/10.6092/ISSN.1971-8853/15804">10.6092/ISSN.1971-8853/15804</a>},
    number={3}, journal={Sociologica}, author={Esposito, Elena}, year={2023}, pages={17–27}
    }'
  chicago: 'Esposito, Elena. “Does Explainability Require Transparency?” <i>Sociologica</i>
    16, no. 3 (2023): 17–27. <a href="https://doi.org/10.6092/ISSN.1971-8853/15804">https://doi.org/10.6092/ISSN.1971-8853/15804</a>.'
  ieee: 'E. Esposito, “Does Explainability Require Transparency?,” <i>Sociologica</i>,
    vol. 16, no. 3, pp. 17–27, 2023, doi: <a href="https://doi.org/10.6092/ISSN.1971-8853/15804">10.6092/ISSN.1971-8853/15804</a>.'
  mla: Esposito, Elena. “Does Explainability Require Transparency?” <i>Sociologica</i>,
    vol. 16, no. 3, 2023, pp. 17–27, doi:<a href="https://doi.org/10.6092/ISSN.1971-8853/15804">10.6092/ISSN.1971-8853/15804</a>.
  short: E. Esposito, Sociologica 16 (2023) 17–27.
date_created: 2024-02-18T10:16:43Z
date_updated: 2024-02-26T08:46:26Z
department:
- _id: '660'
doi: 10.6092/ISSN.1971-8853/15804
intvolume: '        16'
issue: '3'
keyword:
- Explainable AI
- Transparency
- Explanation
- Communication
- Sociological systems theory
language:
- iso: eng
page: 17-27
project:
- _id: '121'
  grant_number: '438445824'
  name: 'TRR 318 - B01: TRR 318 - A Dialogue-Based Approach to Explaining Machine
    Learning Models (Subproject B01)'
publication: Sociologica
status: public
title: Does Explainability Require Transparency?
type: journal_article
user_id: '54779'
volume: 16
year: '2023'
...
---
_id: '51369'
abstract:
- lang: eng
  text: This short introduction presents the symposium ‘Explaining Machines’. It locates
    the debate about Explainable AI in the history of reflection on AI and
    outlines the issues discussed in the contributions.
author:
- first_name: Elena
  full_name: Esposito, Elena
  last_name: Esposito
citation:
  ama: 'Esposito E. Explaining Machines: Social Management of Incomprehensible Algorithms.
    Introduction. <i>Sociologica</i>. 2023;16(3):1-4. doi:<a href="https://doi.org/10.6092/ISSN.1971-8853/16265">10.6092/ISSN.1971-8853/16265</a>'
  apa: 'Esposito, E. (2023). Explaining Machines: Social Management of Incomprehensible
    Algorithms. Introduction. <i>Sociologica</i>, <i>16</i>(3), 1–4. <a href="https://doi.org/10.6092/ISSN.1971-8853/16265">https://doi.org/10.6092/ISSN.1971-8853/16265</a>'
  bibtex: '@article{Esposito_2023, title={Explaining Machines: Social Management of
    Incomprehensible Algorithms. Introduction}, volume={16}, DOI={<a href="https://doi.org/10.6092/ISSN.1971-8853/16265">10.6092/ISSN.1971-8853/16265</a>},
    number={3}, journal={Sociologica}, author={Esposito, Elena}, year={2023}, pages={1–4}
    }'
  chicago: 'Esposito, Elena. “Explaining Machines: Social Management of Incomprehensible
    Algorithms. Introduction.” <i>Sociologica</i> 16, no. 3 (2023): 1–4. <a href="https://doi.org/10.6092/ISSN.1971-8853/16265">https://doi.org/10.6092/ISSN.1971-8853/16265</a>.'
  ieee: 'E. Esposito, “Explaining Machines: Social Management of Incomprehensible
    Algorithms. Introduction,” <i>Sociologica</i>, vol. 16, no. 3, pp. 1–4, 2023,
    doi: <a href="https://doi.org/10.6092/ISSN.1971-8853/16265">10.6092/ISSN.1971-8853/16265</a>.'
  mla: 'Esposito, Elena. “Explaining Machines: Social Management of Incomprehensible
    Algorithms. Introduction.” <i>Sociologica</i>, vol. 16, no. 3, 2023, pp. 1–4,
    doi:<a href="https://doi.org/10.6092/ISSN.1971-8853/16265">10.6092/ISSN.1971-8853/16265</a>.'
  short: E. Esposito, Sociologica 16 (2023) 1–4.
date_created: 2024-02-18T10:23:23Z
date_updated: 2024-02-26T08:45:56Z
department:
- _id: '660'
doi: 10.6092/ISSN.1971-8853/16265
intvolume: '        16'
issue: '3'
keyword:
- Explainable AI
- Inexplicability
- Transparency
- Explanation
- Opacity
- Contestability
language:
- iso: eng
page: 1-4
project:
- _id: '121'
  grant_number: '438445824'
  name: 'TRR 318 - B01: TRR 318 - A Dialogue-Based Approach to Explaining Machine
    Learning Models (Subproject B01)'
publication: Sociologica
status: public
title: 'Explaining Machines: Social Management of Incomprehensible Algorithms. Introduction'
type: journal_article
user_id: '54779'
volume: 16
year: '2023'
...
---
_id: '45299'
abstract:
- lang: eng
  text: Many applications are driven by Machine Learning (ML) today. While complex
    ML models lead to accurate predictions, their inner decision-making is obfuscated.
    However, especially for high-stakes decisions, interpretability and explainability
    of the model are necessary. Therefore, we develop a holistic interpretability
    and explainability framework (HIEF) to objectively describe and evaluate an intelligent
    system’s explainable AI (XAI) capacities. This guides data scientists to create
    more transparent models. To evaluate our framework, we analyse 50 real estate
    appraisal papers to ensure the robustness of HIEF. Additionally, we identify six
    typical types of intelligent systems, so-called archetypes, which range from explanatory
    to predictive, and demonstrate how researchers can use the framework to identify
    blind-spot topics in their domain. Finally, regarding comprehensiveness, we used
    a random sample of six intelligent systems and conducted an applicability check
    to provide external validity.
author:
- first_name: Jan-Peter
  full_name: Kucklick, Jan-Peter
  id: '77066'
  last_name: Kucklick
citation:
  ama: 'Kucklick J-P. HIEF: a holistic interpretability and explainability framework.
    <i>Journal of Decision Systems</i>. Published online 2023:1-41. doi:<a href="https://doi.org/10.1080/12460125.2023.2207268">10.1080/12460125.2023.2207268</a>'
  apa: 'Kucklick, J.-P. (2023). HIEF: a holistic interpretability and explainability
    framework. <i>Journal of Decision Systems</i>, 1–41. <a href="https://doi.org/10.1080/12460125.2023.2207268">https://doi.org/10.1080/12460125.2023.2207268</a>'
  bibtex: '@article{Kucklick_2023, title={HIEF: a holistic interpretability and explainability
    framework}, DOI={<a href="https://doi.org/10.1080/12460125.2023.2207268">10.1080/12460125.2023.2207268</a>},
    journal={Journal of Decision Systems}, publisher={Taylor &#38; Francis}, author={Kucklick,
    Jan-Peter}, year={2023}, pages={1–41} }'
  chicago: 'Kucklick, Jan-Peter. “HIEF: A Holistic Interpretability and Explainability
    Framework.” <i>Journal of Decision Systems</i>, 2023, 1–41. <a href="https://doi.org/10.1080/12460125.2023.2207268">https://doi.org/10.1080/12460125.2023.2207268</a>.'
  ieee: 'J.-P. Kucklick, “HIEF: a holistic interpretability and explainability framework,”
    <i>Journal of Decision Systems</i>, pp. 1–41, 2023, doi: <a href="https://doi.org/10.1080/12460125.2023.2207268">10.1080/12460125.2023.2207268</a>.'
  mla: 'Kucklick, Jan-Peter. “HIEF: A Holistic Interpretability and Explainability
    Framework.” <i>Journal of Decision Systems</i>, Taylor &#38; Francis, 2023, pp.
    1–41, doi:<a href="https://doi.org/10.1080/12460125.2023.2207268">10.1080/12460125.2023.2207268</a>.'
  short: J.-P. Kucklick, Journal of Decision Systems (2023) 1–41.
date_created: 2023-05-26T05:04:45Z
date_updated: 2023-05-26T05:08:36Z
department:
- _id: '195'
- _id: '196'
doi: 10.1080/12460125.2023.2207268
keyword:
- Explainable AI (XAI)
- machine learning
- interpretability
- real estate appraisal
- framework
- taxonomy
language:
- iso: eng
main_file_link:
- url: https://www.tandfonline.com/doi/full/10.1080/12460125.2023.2207268
page: 1-41
publication: Journal of Decision Systems
publication_identifier:
  issn:
  - 1246-0125
  - 2116-7052
publication_status: published
publisher: Taylor & Francis
status: public
title: 'HIEF: a holistic interpretability and explainability framework'
type: journal_article
user_id: '77066'
year: '2023'
...
---
_id: '56477'
abstract:
- lang: eng
  text: We describe a prototype of a Clinical Decision Support System (CDSS) that
    provides (counterfactual) explanations to support accurate medical diagnosis.
    The prototype is based on an inherently interpretable Bayesian network (BN). Our
    research aims to investigate which explanations are most useful for medical experts
    and whether co-constructing explanations can foster trust and acceptance of CDSS.
author:
- first_name: Felix
  full_name: Liedeker, Felix
  id: '93275'
  last_name: Liedeker
- first_name: Philipp
  full_name: Cimiano, Philipp
  last_name: Cimiano
citation:
  ama: 'Liedeker F, Cimiano P. A Prototype of an Interactive Clinical Decision Support
    System with Counterfactual Explanations. In: ; 2023.'
  apa: Liedeker, F., &#38; Cimiano, P. (2023). <i>A Prototype of an Interactive Clinical
    Decision Support System with Counterfactual Explanations</i>. xAI-2023 Late-breaking
    Work, Demos and Doctoral Consortium co-located with the 1st World Conference on
    eXplainable Artificial Intelligence (xAI-2023), Lisbon.
  bibtex: '@inproceedings{Liedeker_Cimiano_2023, title={A Prototype of an Interactive
    Clinical Decision Support System with Counterfactual Explanations}, author={Liedeker,
    Felix and Cimiano, Philipp}, year={2023} }'
  chicago: Liedeker, Felix, and Philipp Cimiano. “A Prototype of an Interactive Clinical
    Decision Support System with Counterfactual Explanations,” 2023.
  ieee: F. Liedeker and P. Cimiano, “A Prototype of an Interactive Clinical Decision
    Support System with Counterfactual Explanations,” presented at the xAI-2023 Late-breaking
    Work, Demos and Doctoral Consortium co-located with the 1st World Conference on
    eXplainable Artificial Intelligence (xAI-2023), Lisbon, 2023.
  mla: Liedeker, Felix, and Philipp Cimiano. <i>A Prototype of an Interactive Clinical
    Decision Support System with Counterfactual Explanations</i>. 2023.
  short: 'F. Liedeker, P. Cimiano, in: 2023.'
conference:
  end_date: 2023-07-28
  location: Lisbon
  name: xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with
    the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023)
  start_date: 2023-07-26
date_created: 2024-10-09T14:50:09Z
date_updated: 2024-10-09T15:04:53Z
department:
- _id: '660'
keyword:
- Explainable AI
- Clinical decision support
- Bayesian network
- Counterfactual explanations
language:
- iso: eng
project:
- _id: '128'
  name: 'TRR 318 - C5: TRR 318 - Subproject C5'
status: public
title: A Prototype of an Interactive Clinical Decision Support System with Counterfactual
  Explanations
type: conference
user_id: '93275'
year: '2023'
...
---
_id: '27506'
abstract:
- lang: eng
  text: Explainability for machine learning is becoming increasingly important in high-stakes
    decisions like real estate appraisal. While traditional hedonic house pricing
    models are fed with hard information based on housing attributes, soft information
    has recently also been incorporated to increase predictive performance.
    This soft information can be extracted from image data by complex models like
    Convolutional Neural Networks (CNNs). However, these are opaque, which excludes
    their use for high-stakes financial decisions. To overcome this limitation, we
    examine if a two-stage modeling approach can provide explainability. We combine
    visual interpretability by Regression Activation Maps (RAM) for the CNN and a
    linear regression for the overall prediction. Our experiments are based on 62,000
    family homes in Philadelphia, and the results indicate that the CNN learns aspects
    related to vegetation and quality aspects of the house from exterior images, improving
    the predictive accuracy of real estate appraisal by up to 5.4%.
author:
- first_name: Jan-Peter
  full_name: Kucklick, Jan-Peter
  id: '77066'
  last_name: Kucklick
citation:
  ama: 'Kucklick J-P. Visual Interpretability of Image-based Real Estate Appraisal.
    In: <i>55th Annual Hawaii International Conference on System Sciences (HICSS-55)</i>.
    ; 2022.'
  apa: Kucklick, J.-P. (2022). Visual Interpretability of Image-based Real Estate
    Appraisal. <i>55th Annual Hawaii International Conference on System Sciences (HICSS-55)</i>.
    Hawaii International Conference on System Science (HICSS), Virtual.
  bibtex: '@inproceedings{Kucklick_2022, title={Visual Interpretability of Image-based
    Real Estate Appraisal}, booktitle={55th Annual Hawaii International Conference
    on System Sciences (HICSS-55)}, author={Kucklick, Jan-Peter}, year={2022} }'
  chicago: Kucklick, Jan-Peter. “Visual Interpretability of Image-Based Real Estate
    Appraisal.” In <i>55th Annual Hawaii International Conference on System Sciences
    (HICSS-55)</i>, 2022.
  ieee: J.-P. Kucklick, “Visual Interpretability of Image-based Real Estate Appraisal,”
    presented at the Hawaii International Conference on System Science (HICSS), Virtual,
    2022.
  mla: Kucklick, Jan-Peter. “Visual Interpretability of Image-Based Real Estate Appraisal.”
    <i>55th Annual Hawaii International Conference on System Sciences (HICSS-55)</i>,
    2022.
  short: 'J.-P. Kucklick, in: 55th Annual Hawaii International Conference on System
    Sciences (HICSS-55), 2022.'
conference:
  end_date: 2022-01-07
  location: Virtual
  name: Hawaii International Conference on System Science (HICSS)
  start_date: 2022-01-03
date_created: 2021-11-17T07:08:15Z
date_updated: 2022-01-06T06:57:40Z
department:
- _id: '195'
- _id: '196'
keyword:
- Explainable Artificial Intelligence (XAI)
- Regression Activation Maps
- Real Estate Appraisal
- Convolutional Block Attention Module
- Computer Vision
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://scholarspace.manoa.hawaii.edu/bitstream/10125/79519/0149.pdf
oa: '1'
publication: 55th Annual Hawaii International Conference on System Sciences (HICSS-55)
status: public
title: Visual Interpretability of Image-based Real Estate Appraisal
type: conference
user_id: '77066'
year: '2022'
...
---
_id: '29539'
abstract:
- lang: eng
  text: Explainable Artificial Intelligence (XAI) is currently an important topic
    for the application of Machine Learning (ML) in high-stakes decision scenarios.
    Related research focuses on evaluating ML algorithms in terms of interpretability.
    However, providing a human-understandable explanation of an intelligent system
    does not relate only to the ML algorithm used. The data and features used also
    have a considerable impact on interpretability. In this paper, we develop a taxonomy
    for describing XAI systems based on aspects of the algorithm and data. The
    proposed taxonomy gives researchers and practitioners opportunities to describe
    and evaluate current XAI systems with respect to interpretability and guides the
    future development of this class of systems.
author:
- first_name: Jan-Peter
  full_name: Kucklick, Jan-Peter
  id: '77066'
  last_name: Kucklick
citation:
  ama: 'Kucklick J-P. Towards a model- and data-focused taxonomy of XAI systems. In:
    <i>Wirtschaftsinformatik 2022 Proceedings</i>. ; 2022.'
  apa: Kucklick, J.-P. (2022). Towards a model- and data-focused taxonomy of XAI systems.
    <i>Wirtschaftsinformatik 2022 Proceedings</i>. Wirtschaftsinformatik 2022 (WI22),
    Nürnberg (online).
  bibtex: '@inproceedings{Kucklick_2022, title={Towards a model- and data-focused
    taxonomy of XAI systems}, booktitle={Wirtschaftsinformatik 2022 Proceedings},
    author={Kucklick, Jan-Peter}, year={2022} }'
  chicago: Kucklick, Jan-Peter. “Towards a Model- and Data-Focused Taxonomy of XAI
    Systems.” In <i>Wirtschaftsinformatik 2022 Proceedings</i>, 2022.
  ieee: J.-P. Kucklick, “Towards a model- and data-focused taxonomy of XAI systems,”
    presented at the Wirtschaftsinformatik 2022 (WI22), Nürnberg (online), 2022.
  mla: Kucklick, Jan-Peter. “Towards a Model- and Data-Focused Taxonomy of XAI Systems.”
    <i>Wirtschaftsinformatik 2022 Proceedings</i>, 2022.
  short: 'J.-P. Kucklick, in: Wirtschaftsinformatik 2022 Proceedings, 2022.'
conference:
  end_date: 2022-02-23
  location: Nürnberg (online)
  name: Wirtschaftsinformatik 2022 (WI22)
  start_date: 2022-02-21
date_created: 2022-01-26T08:22:03Z
date_updated: 2022-01-26T08:24:30Z
department:
- _id: '195'
- _id: '196'
keyword:
- Explainable Artificial Intelligence
- XAI
- Interpretability
- Decision Support Systems
- Taxonomy
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://aisel.aisnet.org/cgi/viewcontent.cgi?article=1056&context=wi2022
oa: '1'
publication: Wirtschaftsinformatik 2022 Proceedings
status: public
title: Towards a model- and data-focused taxonomy of XAI systems
type: conference
user_id: '77066'
year: '2022'
...
---
_id: '24456'
abstract:
- lang: eng
  text: One objective of current research in explainable intelligent systems is to
    implement social aspects in order to increase the relevance of explanations. In
    this paper, we argue that a novel conceptual framework is needed to overcome shortcomings
    of existing AI systems, which pay little attention to processes of interaction and learning.
    Drawing from research in interaction and development, we first outline the novel
    conceptual framework that pushes the design of AI systems toward true interactivity
    with an emphasis on the role of the partner and social relevance. We propose that
    AI systems will be able to provide a meaningful and relevant explanation only
    if the process of explaining is extended to the active contribution of both partners,
    which brings about dynamics that are modulated by different levels of analysis.
    Accordingly, our conceptual framework comprises monitoring and scaffolding as
    key concepts and claims that the process of explaining is not only modulated by
    the interaction between explainee and explainer but is embedded into a larger
    social context in which conventionalized and routinized behaviors are established.
    We discuss our conceptual framework in relation to the established objectives
    of transparency and autonomy that are currently raised for the design of explainable
    AI systems.
article_type: original
author:
- first_name: Katharina J.
  full_name: Rohlfing, Katharina J.
  id: '50352'
  last_name: Rohlfing
- first_name: Philipp
  full_name: Cimiano, Philipp
  last_name: Cimiano
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
- first_name: Tobias
  full_name: Matzner, Tobias
  id: '65695'
  last_name: Matzner
- first_name: Heike M.
  full_name: Buhl, Heike M.
  id: '27152'
  last_name: Buhl
- first_name: Hendrik
  full_name: Buschmeier, Hendrik
  last_name: Buschmeier
- first_name: Elena
  full_name: Esposito, Elena
  last_name: Esposito
- first_name: Angela
  full_name: Grimminger, Angela
  id: '57578'
  last_name: Grimminger
- first_name: Barbara
  full_name: Hammer, Barbara
  last_name: Hammer
- first_name: Reinhold
  full_name: Haeb-Umbach, Reinhold
  id: '242'
  last_name: Haeb-Umbach
- first_name: Ilona
  full_name: Horwath, Ilona
  id: '68836'
  last_name: Horwath
- first_name: Eyke
  full_name: Hüllermeier, Eyke
  id: '48129'
  last_name: Hüllermeier
- first_name: Friederike
  full_name: Kern, Friederike
  last_name: Kern
- first_name: Stefan
  full_name: Kopp, Stefan
  last_name: Kopp
- first_name: Kirsten
  full_name: Thommes, Kirsten
  id: '72497'
  last_name: Thommes
- first_name: Axel-Cyrille
  full_name: Ngonga Ngomo, Axel-Cyrille
  id: '65716'
  last_name: Ngonga Ngomo
- first_name: Carsten
  full_name: Schulte, Carsten
  id: '60311'
  last_name: Schulte
- first_name: Henning
  full_name: Wachsmuth, Henning
  id: '3900'
  last_name: Wachsmuth
- first_name: Petra
  full_name: Wagner, Petra
  last_name: Wagner
- first_name: Britta
  full_name: Wrede, Britta
  last_name: Wrede
citation:
  ama: 'Rohlfing KJ, Cimiano P, Scharlau I, et al. Explanation as a Social Practice:
    Toward a Conceptual Framework for the Social Design of AI Systems. <i>IEEE Transactions
    on Cognitive and Developmental Systems</i>. 2021;13(3):717-728. doi:<a href="https://doi.org/10.1109/tcds.2020.3044366">10.1109/tcds.2020.3044366</a>'
  apa: 'Rohlfing, K. J., Cimiano, P., Scharlau, I., Matzner, T., Buhl, H. M., Buschmeier,
    H., Esposito, E., Grimminger, A., Hammer, B., Haeb-Umbach, R., Horwath, I., Hüllermeier,
    E., Kern, F., Kopp, S., Thommes, K., Ngonga Ngomo, A.-C., Schulte, C., Wachsmuth,
    H., Wagner, P., &#38; Wrede, B. (2021). Explanation as a Social Practice: Toward
    a Conceptual Framework for the Social Design of AI Systems. <i>IEEE Transactions
    on Cognitive and Developmental Systems</i>, <i>13</i>(3), 717–728. <a href="https://doi.org/10.1109/tcds.2020.3044366">https://doi.org/10.1109/tcds.2020.3044366</a>'
  bibtex: '@article{Rohlfing_Cimiano_Scharlau_Matzner_Buhl_Buschmeier_Esposito_Grimminger_Hammer_Haeb-Umbach_et
    al._2021, title={Explanation as a Social Practice: Toward a Conceptual Framework
    for the Social Design of AI Systems}, volume={13}, DOI={<a href="https://doi.org/10.1109/tcds.2020.3044366">10.1109/tcds.2020.3044366</a>},
    number={3}, journal={IEEE Transactions on Cognitive and Developmental Systems},
    author={Rohlfing, Katharina J. and Cimiano, Philipp and Scharlau, Ingrid and Matzner,
    Tobias and Buhl, Heike M. and Buschmeier, Hendrik and Esposito, Elena and Grimminger,
    Angela and Hammer, Barbara and Haeb-Umbach, Reinhold and et al.}, year={2021},
    pages={717–728} }'
  chicago: 'Rohlfing, Katharina J., Philipp Cimiano, Ingrid Scharlau, Tobias Matzner,
    Heike M. Buhl, Hendrik Buschmeier, Elena Esposito, et al. “Explanation as a Social
    Practice: Toward a Conceptual Framework for the Social Design of AI Systems.”
    <i>IEEE Transactions on Cognitive and Developmental Systems</i> 13, no. 3 (2021):
    717–28. <a href="https://doi.org/10.1109/tcds.2020.3044366">https://doi.org/10.1109/tcds.2020.3044366</a>.'
  ieee: 'K. J. Rohlfing <i>et al.</i>, “Explanation as a Social Practice: Toward a
    Conceptual Framework for the Social Design of AI Systems,” <i>IEEE Transactions
    on Cognitive and Developmental Systems</i>, vol. 13, no. 3, pp. 717–728, 2021,
    doi: <a href="https://doi.org/10.1109/tcds.2020.3044366">10.1109/tcds.2020.3044366</a>.'
  mla: 'Rohlfing, Katharina J., et al. “Explanation as a Social Practice: Toward a
    Conceptual Framework for the Social Design of AI Systems.” <i>IEEE Transactions
    on Cognitive and Developmental Systems</i>, vol. 13, no. 3, 2021, pp. 717–28,
    doi:<a href="https://doi.org/10.1109/tcds.2020.3044366">10.1109/tcds.2020.3044366</a>.'
  short: K.J. Rohlfing, P. Cimiano, I. Scharlau, T. Matzner, H.M. Buhl, H. Buschmeier,
    E. Esposito, A. Grimminger, B. Hammer, R. Haeb-Umbach, I. Horwath, E. Hüllermeier,
    F. Kern, S. Kopp, K. Thommes, A.-C. Ngonga Ngomo, C. Schulte, H. Wachsmuth, P.
    Wagner, B. Wrede, IEEE Transactions on Cognitive and Developmental Systems 13
    (2021) 717–728.
date_created: 2021-09-14T20:52:57Z
date_updated: 2023-12-05T10:15:02Z
ddc:
- '300'
department:
- _id: '603'
- _id: '749'
- _id: '424'
- _id: '67'
- _id: '574'
- _id: '184'
- _id: '757'
- _id: '54'
- _id: '178'
doi: 10.1109/tcds.2020.3044366
file:
- access_level: open_access
  content_type: application/pdf
  creator: haebumb
  date_created: 2023-11-20T16:33:51Z
  date_updated: 2023-11-20T16:33:51Z
  file_id: '49081'
  file_name: 2020-12-01_explainability_final_version.pdf
  file_size: 626217
  relation: main_file
file_date_updated: 2023-11-20T16:33:51Z
has_accepted_license: '1'
intvolume: '        13'
issue: '3'
keyword:
- Explainability
- process of explaining and understanding
- explainable artificial systems
language:
- iso: eng
oa: '1'
page: 717-728
project:
- _id: '109'
  grant_number: '438445824'
  name: 'TRR 318: TRR 318 - Constructing Explainability'
publication: IEEE Transactions on Cognitive and Developmental Systems
publication_identifier:
  issn:
  - 2379-8920
  - 2379-8939
publication_status: published
quality_controlled: '1'
status: public
title: 'Explanation as a Social Practice: Toward a Conceptual Framework for the Social
  Design of AI Systems'
type: journal_article
user_id: '42933'
volume: 13
year: '2021'
...
