---
_id: '61156'
abstract:
- lang: eng
  text: Explainability has become an important topic in computer science and artificial
    intelligence, leading to a subfield called Explainable Artificial Intelligence
    (XAI). The goal of providing or seeking explanations is to achieve (better) ‘understanding’
    on the part of the explainee. However, what it means to ‘understand’ is still
    not clearly defined, and the concept itself is rarely the subject of scientific
    investigation. This conceptual article aims to present a model of forms of understanding
    for XAI-explanations and beyond. From an interdisciplinary perspective bringing
    together computer science, linguistics, sociology, philosophy and psychology,
    a definition of understanding and its forms, assessment, and dynamics during the
    process of giving everyday explanations are explored. Two types of understanding
    are considered as possible outcomes of explanations, namely enabledness, ‘knowing
    how’ to do or decide something, and comprehension, ‘knowing that’ – both in different
    degrees (from shallow to deep). Explanations regularly start with shallow understanding
    in a specific domain and can lead to deep comprehension and enabledness of the
    explanandum, which we see as a prerequisite for human users to gain agency. In
    this process, increases in comprehension and enabledness are highly interdependent.
    Against the background of this systematization, special challenges of understanding
    in XAI are discussed.
article_number: '101419'
article_type: original
author:
- first_name: Hendrik
  full_name: Buschmeier, Hendrik
  id: '76456'
  last_name: Buschmeier
  orcid: 0000-0002-9613-5713
- first_name: Heike M.
  full_name: Buhl, Heike M.
  id: '27152'
  last_name: Buhl
- first_name: Friederike
  full_name: Kern, Friederike
  last_name: Kern
- first_name: Angela
  full_name: Grimminger, Angela
  id: '57578'
  last_name: Grimminger
- first_name: Helen
  full_name: Beierling, Helen
  id: '50995'
  last_name: Beierling
- first_name: Josephine Beryl
  full_name: Fisher, Josephine Beryl
  id: '56345'
  last_name: Fisher
  orcid: 0000-0002-9997-9241
- first_name: André
  full_name: Groß, André
  id: '93405'
  last_name: Groß
  orcid: 0000-0002-9593-7220
- first_name: Ilona
  full_name: Horwath, Ilona
  id: '68836'
  last_name: Horwath
- first_name: Nils
  full_name: Klowait, Nils
  id: '98454'
  last_name: Klowait
  orcid: 0000-0002-7347-099X
- first_name: Stefan Teodorov
  full_name: Lazarov, Stefan Teodorov
  id: '90345'
  last_name: Lazarov
  orcid: 0009-0009-0892-9483
- first_name: Michael
  full_name: Lenke, Michael
  last_name: Lenke
- first_name: Vivien
  full_name: Lohmer, Vivien
  last_name: Lohmer
- first_name: Katharina
  full_name: Rohlfing, Katharina
  id: '50352'
  last_name: Rohlfing
  orcid: 0000-0002-5676-8233
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
- first_name: Amit
  full_name: Singh, Amit
  id: '91018'
  last_name: Singh
  orcid: 0000-0002-7789-1521
- first_name: Lutz
  full_name: Terfloth, Lutz
  id: '37320'
  last_name: Terfloth
- first_name: Anna-Lisa
  full_name: Vollmer, Anna-Lisa
  id: '86589'
  last_name: Vollmer
- first_name: Yu
  full_name: Wang, Yu
  last_name: Wang
- first_name: Annedore
  full_name: Wilmes, Annedore
  last_name: Wilmes
- first_name: Britta
  full_name: Wrede, Britta
  last_name: Wrede
citation:
  ama: Buschmeier H, Buhl HM, Kern F, et al. Forms of Understanding for XAI-Explanations.
    <i>Cognitive Systems Research</i>. 2025;94. doi:<a href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>
  apa: Buschmeier, H., Buhl, H. M., Kern, F., Grimminger, A., Beierling, H., Fisher,
    J. B., Groß, A., Horwath, I., Klowait, N., Lazarov, S. T., Lenke, M., Lohmer,
    V., Rohlfing, K., Scharlau, I., Singh, A., Terfloth, L., Vollmer, A.-L., Wang,
    Y., Wilmes, A., &#38; Wrede, B. (2025). Forms of Understanding for XAI-Explanations.
    <i>Cognitive Systems Research</i>, <i>94</i>, Article 101419. <a href="https://doi.org/10.1016/j.cogsys.2025.101419">https://doi.org/10.1016/j.cogsys.2025.101419</a>
  bibtex: '@article{Buschmeier_Buhl_Kern_Grimminger_Beierling_Fisher_Groß_Horwath_Klowait_Lazarov_et
    al._2025, title={Forms of Understanding for XAI-Explanations}, volume={94}, DOI={<a
    href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>},
    number={101419}, journal={Cognitive Systems Research}, author={Buschmeier, Hendrik
    and Buhl, Heike M. and Kern, Friederike and Grimminger, Angela and Beierling,
    Helen and Fisher, Josephine Beryl and Groß, André and Horwath, Ilona and Klowait,
    Nils and Lazarov, Stefan Teodorov and et al.}, year={2025} }'
  chicago: Buschmeier, Hendrik, Heike M. Buhl, Friederike Kern, Angela Grimminger,
    Helen Beierling, Josephine Beryl Fisher, André Groß, et al. “Forms of Understanding
    for XAI-Explanations.” <i>Cognitive Systems Research</i> 94 (2025). <a href="https://doi.org/10.1016/j.cogsys.2025.101419">https://doi.org/10.1016/j.cogsys.2025.101419</a>.
  ieee: 'H. Buschmeier <i>et al.</i>, “Forms of Understanding for XAI-Explanations,”
    <i>Cognitive Systems Research</i>, vol. 94, Art. no. 101419, 2025, doi: <a href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>.'
  mla: Buschmeier, Hendrik, et al. “Forms of Understanding for XAI-Explanations.”
    <i>Cognitive Systems Research</i>, vol. 94, 101419, 2025, doi:<a href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>.
  short: H. Buschmeier, H.M. Buhl, F. Kern, A. Grimminger, H. Beierling, J.B. Fisher,
    A. Groß, I. Horwath, N. Klowait, S.T. Lazarov, M. Lenke, V. Lohmer, K. Rohlfing,
    I. Scharlau, A. Singh, L. Terfloth, A.-L. Vollmer, Y. Wang, A. Wilmes, B. Wrede,
    Cognitive Systems Research 94 (2025).
date_created: 2025-09-08T14:24:32Z
date_updated: 2025-12-05T15:32:25Z
ddc:
- '006'
department:
- _id: '660'
doi: 10.1016/j.cogsys.2025.101419
file:
- access_level: closed
  content_type: application/pdf
  creator: hbuschme
  date_created: 2025-12-01T21:02:20Z
  date_updated: 2025-12-01T21:02:20Z
  file_id: '62730'
  file_name: Buschmeier-etal-2025-COGSYS.pdf
  file_size: 10114981
  relation: main_file
  success: 1
file_date_updated: 2025-12-01T21:02:20Z
has_accepted_license: '1'
intvolume: '94'
keyword:
- understanding
- explaining
- explanations
- explainable
- AI
- interdisciplinarity
- comprehension
- enabledness
- agency
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://www.sciencedirect.com/science/article/pii/S1389041725000993?via%3Dihub
oa: '1'
project:
- _id: '111'
  name: 'TRR 318; TP A01: Adaptives Erklären'
- _id: '112'
  name: 'TRR 318; TP A02: Verstehensprozess einer Erklärung beobachten und auswerten'
- _id: '113'
  name: TRR 318 - Subproject A3
- _id: '114'
  name: 'TRR 318; TP A04: Integration des technischen Modells in das Partnermodell
    bei der Erklärung von digitalen Artefakten'
- _id: '115'
  name: 'TRR 318; TP A05: Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog'
- _id: '122'
  name: TRR 318 - Subproject B3
- _id: '123'
  name: TRR 318 - Subproject B5
- _id: '119'
  name: TRR 318 - Project Area Ö
publication: Cognitive Systems Research
publication_status: published
quality_controlled: '1'
status: public
title: Forms of Understanding for XAI-Explanations
type: journal_article
user_id: '57578'
volume: 94
year: '2025'
...
---
_id: '36522'
abstract:
- lang: eng
  text: "Jupyter notebooks enable developers to interleave code snippets with rich
    text and in-line visualizations. Data scientists use Jupyter notebooks as the
    de facto standard for creating and sharing machine-learning-based solutions, primarily
    written in Python. Recent studies have demonstrated, however, that a large portion
    of Jupyter notebooks available on public platforms are undocumented and lack
    a narrative structure. This reduces the readability of these notebooks. To address
    this shortcoming, this paper presents HeaderGen, a novel tool-based approach that
    automatically annotates code cells with categorical markdown headers based on
    a taxonomy of machine-learning operations, and classifies and displays function
    calls according to this taxonomy. For this functionality to be realized, HeaderGen
    enhances an existing call graph analysis in PyCG. To improve precision, HeaderGen
    extends PyCG's analysis with support for handling external library code and flow-sensitivity.
    The former is realized by facilitating the resolution of function return-types.
    Furthermore, HeaderGen uses type information to perform pattern matching on code
    syntax to annotate code cells.\r\nThe evaluation on 15 real-world Jupyter notebooks
    from Kaggle shows that HeaderGen's underlying call graph analysis yields high
    accuracy (96.4% precision and 95.9% recall). This is because HeaderGen can resolve
    return-types of external libraries where existing type inference tools such as
    pytype (by Google), pyright (by Microsoft), and Jedi fall short. The header generation
    has a precision of 82.2% and a recall rate of 96.8% with regard to headers created
    manually by experts. In a user study, HeaderGen helps participants finish comprehension
    and navigation tasks faster. All participants clearly perceive HeaderGen as useful
    to their task."
author:
- first_name: Ashwin Prasad
  full_name: Shivarpatna Venkatesh, Ashwin Prasad
  id: '66637'
  last_name: Shivarpatna Venkatesh
- first_name: Jiawei
  full_name: Wang, Jiawei
  last_name: Wang
- first_name: Li
  full_name: Li, Li
  last_name: Li
- first_name: Eric
  full_name: Bodden, Eric
  id: '59256'
  last_name: Bodden
  orcid: 0000-0003-3470-3647
citation:
  ama: 'Shivarpatna Venkatesh AP, Wang J, Li L, Bodden E. Enhancing Comprehension
    and Navigation in Jupyter Notebooks with Static Analysis. In: IEEE SANER 2023
    (International Conference on Software Analysis, Evolution and Reengineering);
    2023. doi:<a href="https://doi.org/10.48550/ARXIV.2301.04419">10.48550/ARXIV.2301.04419</a>'
  apa: Shivarpatna Venkatesh, A. P., Wang, J., Li, L., &#38; Bodden, E. (2023). <i>Enhancing
    Comprehension and Navigation in Jupyter Notebooks with Static Analysis</i>. IEEE
    SANER 2023 (International Conference on Software Analysis, Evolution and Reengineering).
    <a href="https://doi.org/10.48550/ARXIV.2301.04419">https://doi.org/10.48550/ARXIV.2301.04419</a>
  bibtex: '@inproceedings{Shivarpatna Venkatesh_Wang_Li_Bodden_2023, title={Enhancing
    Comprehension and Navigation in Jupyter Notebooks with Static Analysis}, DOI={<a
    href="https://doi.org/10.48550/ARXIV.2301.04419">10.48550/ARXIV.2301.04419</a>},
    publisher={IEEE SANER 2023 (International Conference on Software Analysis, Evolution
    and Reengineering)}, author={Shivarpatna Venkatesh, Ashwin Prasad and Wang, Jiawei
    and Li, Li and Bodden, Eric}, year={2023} }'
  chicago: Shivarpatna Venkatesh, Ashwin Prasad, Jiawei Wang, Li Li, and Eric Bodden.
    “Enhancing Comprehension and Navigation in Jupyter Notebooks with Static Analysis.”
    IEEE SANER 2023 (International Conference on Software Analysis, Evolution and
    Reengineering), 2023. <a href="https://doi.org/10.48550/ARXIV.2301.04419">https://doi.org/10.48550/ARXIV.2301.04419</a>.
  ieee: 'A. P. Shivarpatna Venkatesh, J. Wang, L. Li, and E. Bodden, “Enhancing Comprehension
    and Navigation in Jupyter Notebooks with Static Analysis,” presented at the IEEE
    SANER 2023 (International Conference on Software Analysis, Evolution and Reengineering),
    2023, doi: <a href="https://doi.org/10.48550/ARXIV.2301.04419">10.48550/ARXIV.2301.04419</a>.'
  mla: Shivarpatna Venkatesh, Ashwin Prasad, et al. <i>Enhancing Comprehension and
    Navigation in Jupyter Notebooks with Static Analysis</i>. IEEE SANER 2023 (International
    Conference on Software Analysis, Evolution and Reengineering), 2023, doi:<a href="https://doi.org/10.48550/ARXIV.2301.04419">10.48550/ARXIV.2301.04419</a>.
  short: 'A.P. Shivarpatna Venkatesh, J. Wang, L. Li, E. Bodden, in: IEEE SANER 2023
    (International Conference on Software Analysis, Evolution and Reengineering),
    2023.'
conference:
  name: IEEE SANER 2023 (International Conference on Software Analysis, Evolution
    and Reengineering)
date_created: 2023-01-13T08:03:26Z
date_updated: 2025-04-07T10:18:03Z
ddc:
- '000'
doi: 10.48550/ARXIV.2301.04419
file:
- access_level: open_access
  content_type: application/pdf
  creator: ashwin
  date_created: 2023-01-26T10:48:40Z
  date_updated: 2023-01-26T10:48:40Z
  file_id: '40304'
  file_name: 2301.04419.pdf
  file_size: 1862440
  relation: main_file
file_date_updated: 2023-01-26T10:48:40Z
has_accepted_license: '1'
keyword:
- static analysis
- python
- code comprehension
- annotation
- literate programming
- jupyter notebook
language:
- iso: eng
oa: '1'
publisher: IEEE SANER 2023 (International Conference on Software Analysis, Evolution
  and Reengineering)
status: public
title: Enhancing Comprehension and Navigation in Jupyter Notebooks with Static Analysis
type: conference
user_id: '15249'
year: '2023'
...
