[{"keyword":["understanding","explaining","explanations","explainable","AI","interdisciplinarity","comprehension","enabledness","agency"],"ddc":["006"],"language":[{"iso":"eng"}],"abstract":[{"text":"Explainability has become an important topic in computer science and artificial intelligence, leading to a subfield called Explainable Artificial Intelligence (XAI). The goal of providing or seeking explanations is to achieve (better) ‘understanding’ on the part of the explainee. However, what it means to ‘understand’ is still not clearly defined, and the concept itself is rarely the subject of scientific investigation. This conceptual article aims to present a model of forms of understanding for XAI-explanations and beyond. From an interdisciplinary perspective bringing together computer science, linguistics, sociology, philosophy and psychology, a definition of understanding and its forms, assessment, and dynamics during the process of giving everyday explanations are explored. Two types of understanding are considered as possible outcomes of explanations, namely enabledness, ‘knowing how’ to do or decide something, and comprehension, ‘knowing that’ – both in different degrees (from shallow to deep). Explanations regularly start with shallow understanding in a specific domain and can lead to deep comprehension and enabledness of the explanandum, which we see as a prerequisite for human users to gain agency. In this process, the increase of comprehension and enabledness are highly interdependent. 
Against the background of this systematization, special challenges of understanding in XAI are discussed.","lang":"eng"}],"file":[{"file_size":10114981,"file_name":"Buschmeier-etal-2025-COGSYS.pdf","access_level":"closed","file_id":"62730","date_updated":"2025-12-01T21:02:20Z","creator":"hbuschme","date_created":"2025-12-01T21:02:20Z","success":1,"relation":"main_file","content_type":"application/pdf"}],"publication":"Cognitive Systems Research","title":"Forms of Understanding for XAI-Explanations","date_created":"2025-09-08T14:24:32Z","year":"2025","quality_controlled":"1","article_number":"101419","article_type":"original","file_date_updated":"2025-12-01T21:02:20Z","_id":"61156","project":[{"_id":"111","name":"TRR 318; TP A01: Adaptives Erklären"},{"_id":"112","name":"TRR 318; TP A02: Verstehensprozess einer Erklärung beobachten und auswerten"},{"name":"TRR 318 - Subproject A3","_id":"113"},{"_id":"114","name":"TRR 318; TP A04: Integration des technischen Modells in das Partnermodell bei der Erklärung von digitalen Artefakten"},{"_id":"115","name":"TRR 318; TP A05: Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog"},{"_id":"122","name":"TRR 318 - Subproject B3"},{"name":"TRR 318 - Subproject B5","_id":"123"},{"_id":"119","name":"TRR 318 - Project Area Ö"}],"department":[{"_id":"660"}],"user_id":"57578","status":"public","type":"journal_article","doi":"10.1016/j.cogsys.2025.101419","main_file_link":[{"url":"https://www.sciencedirect.com/science/article/pii/S1389041725000993?via%3Dihub","open_access":"1"}],"oa":"1","date_updated":"2025-12-05T15:32:25Z","volume":94,"author":[{"last_name":"Buschmeier","orcid":"0000-0002-9613-5713","full_name":"Buschmeier, Hendrik","id":"76456","first_name":"Hendrik"},{"last_name":"Buhl","id":"27152","full_name":"Buhl, Heike M.","first_name":"Heike M."},{"last_name":"Kern","full_name":"Kern, Friederike","first_name":"Friederike"},{"last_name":"Grimminger","id":"57578","full_name":"Grimminger, 
Angela","first_name":"Angela"},{"id":"50995","full_name":"Beierling, Helen","last_name":"Beierling","first_name":"Helen"},{"first_name":"Josephine Beryl","orcid":"0000-0002-9997-9241","last_name":"Fisher","full_name":"Fisher, Josephine Beryl","id":"56345"},{"last_name":"Groß","orcid":"0000-0002-9593-7220","id":"93405","full_name":"Groß, André","first_name":"André"},{"last_name":"Horwath","id":"68836","full_name":"Horwath, Ilona","first_name":"Ilona"},{"orcid":"0000-0002-7347-099X","last_name":"Klowait","id":"98454","full_name":"Klowait, Nils","first_name":"Nils"},{"first_name":"Stefan Teodorov","id":"90345","full_name":"Lazarov, Stefan Teodorov","last_name":"Lazarov","orcid":"0009-0009-0892-9483"},{"full_name":"Lenke, Michael","last_name":"Lenke","first_name":"Michael"},{"full_name":"Lohmer, Vivien","last_name":"Lohmer","first_name":"Vivien"},{"first_name":"Katharina","orcid":"0000-0002-5676-8233","last_name":"Rohlfing","full_name":"Rohlfing, Katharina","id":"50352"},{"first_name":"Ingrid","last_name":"Scharlau","orcid":"0000-0003-2364-9489","id":"451","full_name":"Scharlau, Ingrid"},{"first_name":"Amit","full_name":"Singh, Amit","id":"91018","orcid":"0000-0002-7789-1521","last_name":"Singh"},{"first_name":"Lutz","full_name":"Terfloth, Lutz","id":"37320","last_name":"Terfloth"},{"full_name":"Vollmer, Anna-Lisa","id":"86589","last_name":"Vollmer","first_name":"Anna-Lisa"},{"full_name":"Wang, Yu","last_name":"Wang","first_name":"Yu"},{"first_name":"Annedore","last_name":"Wilmes","full_name":"Wilmes, Annedore"},{"first_name":"Britta","full_name":"Wrede, Britta","last_name":"Wrede"}],"intvolume":"        94","citation":{"bibtex":"@article{Buschmeier_Buhl_Kern_Grimminger_Beierling_Fisher_Groß_Horwath_Klowait_Lazarov_et al._2025, title={Forms of Understanding for XAI-Explanations}, volume={94}, DOI={<a href=\"https://doi.org/10.1016/j.cogsys.2025.101419\">10.1016/j.cogsys.2025.101419</a>}, number={101419}, journal={Cognitive Systems Research}, author={Buschmeier, Hendrik 
and Buhl, Heike M. and Kern, Friederike and Grimminger, Angela and Beierling, Helen and Fisher, Josephine Beryl and Groß, André and Horwath, Ilona and Klowait, Nils and Lazarov, Stefan Teodorov and et al.}, year={2025} }","mla":"Buschmeier, Hendrik, et al. “Forms of Understanding for XAI-Explanations.” <i>Cognitive Systems Research</i>, vol. 94, 101419, 2025, doi:<a href=\"https://doi.org/10.1016/j.cogsys.2025.101419\">10.1016/j.cogsys.2025.101419</a>.","short":"H. Buschmeier, H.M. Buhl, F. Kern, A. Grimminger, H. Beierling, J.B. Fisher, A. Groß, I. Horwath, N. Klowait, S.T. Lazarov, M. Lenke, V. Lohmer, K. Rohlfing, I. Scharlau, A. Singh, L. Terfloth, A.-L. Vollmer, Y. Wang, A. Wilmes, B. Wrede, Cognitive Systems Research 94 (2025).","apa":"Buschmeier, H., Buhl, H. M., Kern, F., Grimminger, A., Beierling, H., Fisher, J. B., Groß, A., Horwath, I., Klowait, N., Lazarov, S. T., Lenke, M., Lohmer, V., Rohlfing, K., Scharlau, I., Singh, A., Terfloth, L., Vollmer, A.-L., Wang, Y., Wilmes, A., &#38; Wrede, B. (2025). Forms of Understanding for XAI-Explanations. <i>Cognitive Systems Research</i>, <i>94</i>, Article 101419. <a href=\"https://doi.org/10.1016/j.cogsys.2025.101419\">https://doi.org/10.1016/j.cogsys.2025.101419</a>","ama":"Buschmeier H, Buhl HM, Kern F, et al. Forms of Understanding for XAI-Explanations. <i>Cognitive Systems Research</i>. 2025;94. doi:<a href=\"https://doi.org/10.1016/j.cogsys.2025.101419\">10.1016/j.cogsys.2025.101419</a>","chicago":"Buschmeier, Hendrik, Heike M. Buhl, Friederike Kern, Angela Grimminger, Helen Beierling, Josephine Beryl Fisher, André Groß, et al. “Forms of Understanding for XAI-Explanations.” <i>Cognitive Systems Research</i> 94 (2025). <a href=\"https://doi.org/10.1016/j.cogsys.2025.101419\">https://doi.org/10.1016/j.cogsys.2025.101419</a>.","ieee":"H. Buschmeier <i>et al.</i>, “Forms of Understanding for XAI-Explanations,” <i>Cognitive Systems Research</i>, vol. 94, Art. no. 
101419, 2025, doi: <a href=\"https://doi.org/10.1016/j.cogsys.2025.101419\">10.1016/j.cogsys.2025.101419</a>."},"has_accepted_license":"1","publication_status":"published"},{"doi":"10.1609/aaai.v38i13.29352","title":"Beyond TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for Tree Ensembles","author":[{"first_name":"Maximilian","full_name":"Muschalik, Maximilian","last_name":"Muschalik"},{"first_name":"Fabian","id":"93420","full_name":"Fumagalli, Fabian","last_name":"Fumagalli"},{"first_name":"Barbara","full_name":"Hammer, Barbara","last_name":"Hammer"},{"id":"48129","full_name":"Huellermeier, Eyke","last_name":"Huellermeier","first_name":"Eyke"}],"date_created":"2024-03-27T14:50:04Z","volume":38,"date_updated":"2025-09-11T16:20:11Z","citation":{"apa":"Muschalik, M., Fumagalli, F., Hammer, B., &#38; Huellermeier, E. (2024). Beyond TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for Tree Ensembles. <i>Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)</i>, <i>38</i>(13), 14388–14396. <a href=\"https://doi.org/10.1609/aaai.v38i13.29352\">https://doi.org/10.1609/aaai.v38i13.29352</a>","bibtex":"@inproceedings{Muschalik_Fumagalli_Hammer_Huellermeier_2024, title={Beyond TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for Tree Ensembles}, volume={38}, DOI={<a href=\"https://doi.org/10.1609/aaai.v38i13.29352\">10.1609/aaai.v38i13.29352</a>}, number={13}, booktitle={Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)}, author={Muschalik, Maximilian and Fumagalli, Fabian and Hammer, Barbara and Huellermeier, Eyke}, year={2024}, pages={14388–14396} }","mla":"Muschalik, Maximilian, et al. “Beyond TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for Tree Ensembles.” <i>Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)</i>, vol. 38, no. 13, 2024, pp. 
14388–96, doi:<a href=\"https://doi.org/10.1609/aaai.v38i13.29352\">10.1609/aaai.v38i13.29352</a>.","short":"M. Muschalik, F. Fumagalli, B. Hammer, E. Huellermeier, in: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2024, pp. 14388–14396.","ama":"Muschalik M, Fumagalli F, Hammer B, Huellermeier E. Beyond TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for Tree Ensembles. In: <i>Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)</i>. Vol 38. ; 2024:14388-14396. doi:<a href=\"https://doi.org/10.1609/aaai.v38i13.29352\">10.1609/aaai.v38i13.29352</a>","chicago":"Muschalik, Maximilian, Fabian Fumagalli, Barbara Hammer, and Eyke Huellermeier. “Beyond TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for Tree Ensembles.” In <i>Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)</i>, 38:14388–96, 2024. <a href=\"https://doi.org/10.1609/aaai.v38i13.29352\">https://doi.org/10.1609/aaai.v38i13.29352</a>.","ieee":"M. Muschalik, F. Fumagalli, B. Hammer, and E. Huellermeier, “Beyond TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for Tree Ensembles,” in <i>Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)</i>, 2024, vol. 38, no. 13, pp. 
14388–14396, doi: <a href=\"https://doi.org/10.1609/aaai.v38i13.29352\">10.1609/aaai.v38i13.29352</a>."},"page":"14388-14396","intvolume":"        38","year":"2024","issue":"13","publication_status":"published","publication_identifier":{"issn":["2374-3468","2159-5399"]},"language":[{"iso":"eng"}],"keyword":["Explainable Artificial Intelligence"],"user_id":"93420","department":[{"_id":"660"}],"project":[{"name":"TRR 318 - C3: TRR 318 - Subproject C3","_id":"126"},{"_id":"109","name":"TRR 318: TRR 318 - Erklärbarkeit konstruieren"},{"_id":"117","name":"TRR 318 - C: TRR 318 - Project Area C"}],"_id":"53073","status":"public","abstract":[{"text":"While shallow decision trees may be interpretable, larger ensemble models like gradient-boosted trees, which often set the state of the art in machine learning problems involving tabular data, still remain black box models. As a remedy, the Shapley value (SV) is a well-known concept in explainable artificial intelligence (XAI) research for quantifying additive feature attributions of predictions. The model-specific TreeSHAP methodology solves the exponential complexity for retrieving exact SVs from tree-based models. Expanding beyond individual feature attribution, Shapley interactions reveal the impact of intricate feature interactions of any order. In this work, we present TreeSHAP-IQ, an efficient method to compute any-order additive Shapley interactions for predictions of tree-based models. TreeSHAP-IQ is supported by a mathematical framework that exploits polynomial arithmetic to compute the interaction scores in a single recursive traversal of the tree, akin to Linear TreeSHAP. 
We apply TreeSHAP-IQ on state-of-the-art tree ensembles and explore interactions on well-established benchmark datasets.","lang":"eng"}],"type":"conference","publication":"Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)"},{"author":[{"first_name":"Elena ","full_name":"Esposito, Elena ","last_name":"Esposito"}],"date_created":"2024-02-18T10:16:43Z","volume":16,"date_updated":"2024-02-26T08:46:26Z","doi":"10.6092/ISSN.1971-8853/15804","title":"Does Explainability Require Transparency?","issue":"3","citation":{"ama":"Esposito E. Does Explainability Require Transparency? <i>Sociologica</i>. 2023;16(3):17-27. doi:<a href=\"https://doi.org/10.6092/ISSN.1971-8853/15804\">10.6092/ISSN.1971-8853/15804</a>","chicago":"Esposito, Elena . “Does Explainability Require Transparency?” <i>Sociologica</i> 16, no. 3 (2023): 17–27. <a href=\"https://doi.org/10.6092/ISSN.1971-8853/15804\">https://doi.org/10.6092/ISSN.1971-8853/15804</a>.","ieee":"E. Esposito, “Does Explainability Require Transparency?,” <i>Sociologica</i>, vol. 16, no. 3, pp. 17–27, 2023, doi: <a href=\"https://doi.org/10.6092/ISSN.1971-8853/15804\">10.6092/ISSN.1971-8853/15804</a>.","apa":"Esposito, E. (2023). Does Explainability Require Transparency? <i>Sociologica</i>, <i>16</i>(3), 17–27. <a href=\"https://doi.org/10.6092/ISSN.1971-8853/15804\">https://doi.org/10.6092/ISSN.1971-8853/15804</a>","bibtex":"@article{Esposito_2023, title={Does Explainability Require Transparency?}, volume={16}, DOI={<a href=\"https://doi.org/10.6092/ISSN.1971-8853/15804\">10.6092/ISSN.1971-8853/15804</a>}, number={3}, journal={Sociologica}, author={Esposito, Elena }, year={2023}, pages={17–27} }","mla":"Esposito, Elena. “Does Explainability Require Transparency?” <i>Sociologica</i>, vol. 16, no. 3, 2023, pp. 17–27, doi:<a href=\"https://doi.org/10.6092/ISSN.1971-8853/15804\">10.6092/ISSN.1971-8853/15804</a>.","short":"E. 
Esposito, Sociologica 16 (2023) 17–27."},"page":"17-27","intvolume":"        16","year":"2023","user_id":"54779","department":[{"_id":"660"}],"project":[{"name":"TRR 318 - B01: TRR 318 - Ein dialogbasierter Ansatz zur Erklärung von Modellen des maschinellen Lernens (Teilprojekt B01)","_id":"121","grant_number":"438445824"}],"_id":"51368","language":[{"iso":"eng"}],"keyword":["Explainable AI","Transparency","Explanation","Communication","Sociological systems theory"],"type":"journal_article","publication":"Sociologica","status":"public","abstract":[{"text":"Dealing with opaque algorithms, the frequent overlap between transparency and explainability produces seemingly unsolvable dilemmas, as the much-discussed trade-off between model performance and model transparency. Referring to Niklas Luhmann's notion of communication, the paper argues that explainability does not necessarily require transparency and proposes an alternative approach. Explanations as communicative processes do not imply any disclosure of thoughts or neural processes, but only reformulations that provide the partners with additional elements and enable them to understand (from their perspective) what has been done and why. Recent computational approaches aiming at post-hoc explainability reproduce what happens in communication, producing explanations of the working of algorithms that can be different from the processes of the algorithms.","lang":"eng"}]},{"author":[{"last_name":"Esposito","full_name":"Esposito, Elena","first_name":"Elena"}],"date_created":"2024-02-18T10:23:23Z","volume":16,"date_updated":"2024-02-26T08:45:56Z","doi":"10.6092/ISSN.1971-8853/16265","title":"Explaining Machines: Social Management of Incomprehensible Algorithms. Introduction","issue":"3","citation":{"apa":"Esposito, E. (2023). Explaining Machines: Social Management of Incomprehensible Algorithms. Introduction. <i>Sociologica</i>, <i>16</i>(3), 1–4. 
<a href=\"https://doi.org/10.6092/ISSN.1971-8853/16265\">https://doi.org/10.6092/ISSN.1971-8853/16265</a>","short":"E. Esposito, Sociologica 16 (2023) 1–4.","mla":"Esposito, Elena. “Explaining Machines: Social Management of Incomprehensible Algorithms. Introduction.” <i>Sociologica</i>, vol. 16, no. 3, 2023, pp. 1–4, doi:<a href=\"https://doi.org/10.6092/ISSN.1971-8853/16265\">10.6092/ISSN.1971-8853/16265</a>.","bibtex":"@article{Esposito_2023, title={Explaining Machines: Social Management of Incomprehensible Algorithms. Introduction}, volume={16}, DOI={<a href=\"https://doi.org/10.6092/ISSN.1971-8853/16265\">10.6092/ISSN.1971-8853/16265</a>}, number={3}, journal={Sociologica}, author={Esposito, Elena}, year={2023}, pages={1–4} }","ama":"Esposito E. Explaining Machines: Social Management of Incomprehensible Algorithms. Introduction. <i>Sociologica</i>. 2023;16(3):1-4. doi:<a href=\"https://doi.org/10.6092/ISSN.1971-8853/16265\">10.6092/ISSN.1971-8853/16265</a>","chicago":"Esposito, Elena. “Explaining Machines: Social Management of Incomprehensible Algorithms. Introduction.” <i>Sociologica</i> 16, no. 3 (2023): 1–4. <a href=\"https://doi.org/10.6092/ISSN.1971-8853/16265\">https://doi.org/10.6092/ISSN.1971-8853/16265</a>.","ieee":"E. Esposito, “Explaining Machines: Social Management of Incomprehensible Algorithms. Introduction,” <i>Sociologica</i>, vol. 16, no. 3, pp. 
1–4, 2023, doi: <a href=\"https://doi.org/10.6092/ISSN.1971-8853/16265\">10.6092/ISSN.1971-8853/16265</a>."},"intvolume":"        16","page":"1-4","year":"2023","user_id":"54779","department":[{"_id":"660"}],"project":[{"grant_number":"438445824","_id":"121","name":"TRR 318 - B01: TRR 318 - Ein dialogbasierter Ansatz zur Erklärung von Modellen des maschinellen Lernens (Teilprojekt B01)"}],"_id":"51369","language":[{"iso":"eng"}],"keyword":["Explainable AI","Inexplicability","Transparency","Explanation","Opacity","Contestability"],"type":"journal_article","publication":"Sociologica","status":"public","abstract":[{"lang":"eng","text":"This short introduction presents the symposium ‘Explaining Machines’. It locates the debate about Explainable AI in the history of the reflection about AI and outlines the issues discussed in the contributions."}]},{"user_id":"77066","department":[{"_id":"195"},{"_id":"196"}],"_id":"45299","language":[{"iso":"eng"}],"keyword":["Explainable AI (XAI)","machine learning","interpretability","real estate appraisal","framework","taxonomy"],"type":"journal_article","publication":"Journal of Decision Systems","status":"public","abstract":[{"lang":"eng","text":"Many applications are driven by Machine Learning (ML) today. While complex ML models lead to an accurate prediction, their inner decision-making is obfuscated. However, especially for high-stakes decisions, interpretability and explainability of the model are necessary. Therefore, we develop a holistic interpretability and explainability framework (HIEF) to objectively describe and evaluate an intelligent system’s explainable AI (XAI) capacities. This guides data scientists to create more transparent models. To evaluate our framework, we analyse 50 real estate appraisal papers to ensure the robustness of HIEF. 
Additionally, we identify six typical types of intelligent systems, so-called archetypes, which range from explanatory to predictive, and demonstrate how researchers can use the framework to identify blind-spot topics in their domain. Finally, regarding comprehensiveness, we used a random sample of six intelligent systems and conducted an applicability check to provide external validity."}],"author":[{"last_name":"Kucklick","full_name":"Kucklick, Jan-Peter","id":"77066","first_name":"Jan-Peter"}],"date_created":"2023-05-26T05:04:45Z","publisher":"Taylor & Francis","date_updated":"2023-05-26T05:08:36Z","main_file_link":[{"url":"https://www.tandfonline.com/doi/full/10.1080/12460125.2023.2207268"}],"doi":"10.1080/12460125.2023.2207268","title":"HIEF: a holistic interpretability and explainability framework","publication_status":"published","publication_identifier":{"issn":["1246-0125","2116-7052"]},"citation":{"mla":"Kucklick, Jan-Peter. “HIEF: A Holistic Interpretability and Explainability Framework.” <i>Journal of Decision Systems</i>, Taylor &#38; Francis, 2023, pp. 1–41, doi:<a href=\"https://doi.org/10.1080/12460125.2023.2207268\">10.1080/12460125.2023.2207268</a>.","short":"J.-P. Kucklick, Journal of Decision Systems (2023) 1–41.","bibtex":"@article{Kucklick_2023, title={HIEF: a holistic interpretability and explainability framework}, DOI={<a href=\"https://doi.org/10.1080/12460125.2023.2207268\">10.1080/12460125.2023.2207268</a>}, journal={Journal of Decision Systems}, publisher={Taylor &#38; Francis}, author={Kucklick, Jan-Peter}, year={2023}, pages={1–41} }","apa":"Kucklick, J.-P. (2023). HIEF: a holistic interpretability and explainability framework. <i>Journal of Decision Systems</i>, 1–41. <a href=\"https://doi.org/10.1080/12460125.2023.2207268\">https://doi.org/10.1080/12460125.2023.2207268</a>","chicago":"Kucklick, Jan-Peter. “HIEF: A Holistic Interpretability and Explainability Framework.” <i>Journal of Decision Systems</i>, 2023, 1–41. 
<a href=\"https://doi.org/10.1080/12460125.2023.2207268\">https://doi.org/10.1080/12460125.2023.2207268</a>.","ieee":"J.-P. Kucklick, “HIEF: a holistic interpretability and explainability framework,” <i>Journal of Decision Systems</i>, pp. 1–41, 2023, doi: <a href=\"https://doi.org/10.1080/12460125.2023.2207268\">10.1080/12460125.2023.2207268</a>.","ama":"Kucklick J-P. HIEF: a holistic interpretability and explainability framework. <i>Journal of Decision Systems</i>. Published online 2023:1-41. doi:<a href=\"https://doi.org/10.1080/12460125.2023.2207268\">10.1080/12460125.2023.2207268</a>"},"page":"1-41","year":"2023"},{"status":"public","abstract":[{"lang":"eng","text":"We describe a prototype of a Clinical Decision Support System (CDSS) that provides (counterfactual) explanations to support accurate medical diagnosis. The prototype is based on an inherently interpretable Bayesian network (BN). Our research aims to investigate which explanations are most useful for medical experts and whether co-constructing explanations can foster trust and acceptance of CDSS."}],"type":"conference","language":[{"iso":"eng"}],"keyword":["Explainable AI","Clinical decision support","Bayesian network","Counterfactual explanations"],"user_id":"93275","department":[{"_id":"660"}],"project":[{"_id":"128","name":"TRR 318 - C5: TRR 318 - Subproject C5"}],"_id":"56477","citation":{"ieee":"F. Liedeker and P. Cimiano, “A Prototype of an Interactive Clinical Decision Support System with Counterfactual Explanations,” presented at the xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), Lissabon, 2023.","chicago":"Liedeker, Felix, and Philipp Cimiano. “A Prototype of an Interactive Clinical Decision Support System with Counterfactual Explanations,” 2023.","ama":"Liedeker F, Cimiano P. A Prototype of an Interactive Clinical Decision Support System with Counterfactual Explanations. 
In: ; 2023.","mla":"Liedeker, Felix, and Philipp Cimiano. <i>A Prototype of an Interactive Clinical Decision Support System with Counterfactual Explanations</i>. 2023.","bibtex":"@inproceedings{Liedeker_Cimiano_2023, title={A Prototype of an Interactive Clinical Decision Support System with Counterfactual Explanations}, author={Liedeker, Felix and Cimiano, Philipp}, year={2023} }","short":"F. Liedeker, P. Cimiano, in: 2023.","apa":"Liedeker, F., &#38; Cimiano, P. (2023). <i>A Prototype of an Interactive Clinical Decision Support System with Counterfactual Explanations</i>. xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), Lissabon."},"year":"2023","conference":{"end_date":"2023-07-28","location":"Lissabon","name":"xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023)","start_date":"2023-07-26"},"title":"A Prototype of an Interactive Clinical Decision Support System with Counterfactual Explanations","author":[{"first_name":"Felix","full_name":"Liedeker, Felix","id":"93275","last_name":"Liedeker"},{"last_name":"Cimiano","full_name":"Cimiano, Philipp","first_name":"Philipp"}],"date_created":"2024-10-09T14:50:09Z","date_updated":"2024-10-09T15:04:53Z"},{"publication":"55th Annual Hawaii International Conference on System Sciences (HICSS-55)","type":"conference","status":"public","abstract":[{"lang":"eng","text":"Explainability for machine learning gets more and more important in high-stakes decisions like real estate appraisal. While traditional hedonic house pricing models are fed with hard information based on housing attributes, recently also soft information has been incorporated to increase the predictive performance. This soft information can be extracted from image data by complex models like Convolutional Neural Networks (CNNs). 
However, these are intransparent which excludes their use for high-stakes financial decisions. To overcome this limitation, we examine if a two-stage modeling approach can provide explainability. We combine visual interpretability by Regression Activation Maps (RAM) for the CNN and a linear regression for the overall prediction. Our experiments are based on 62.000 family homes in Philadelphia and the results indicate that the CNN learns aspects related to vegetation and quality aspects of the house from exterior images, improving the predictive accuracy of real estate appraisal by up to 5.4%."}],"department":[{"_id":"195"},{"_id":"196"}],"user_id":"77066","_id":"27506","language":[{"iso":"eng"}],"keyword":["Explainable Artificial Intelligence (XAI)","Regression Activation Maps","Real Estate Appraisal","Convolutional Block Attention Module","Computer Vision"],"citation":{"chicago":"Kucklick, Jan-Peter. “Visual Interpretability of Image-Based Real Estate Appraisal.” In <i>55th Annual Hawaii International Conference on System Sciences (HICSS-55)</i>, 2022.","ieee":"J.-P. Kucklick, “Visual Interpretability of Image-based Real Estate Appraisal,” presented at the Hawaii International Conference on System Science (HICSS), Virtual, 2022.","ama":"Kucklick J-P. Visual Interpretability of Image-based Real Estate Appraisal. In: <i>55th Annual Hawaii International Conference on System Sciences (HICSS-55)</i>. ; 2022.","short":"J.-P. Kucklick, in: 55th Annual Hawaii International Conference on System Sciences (HICSS-55), 2022.","bibtex":"@inproceedings{Kucklick_2022, title={Visual Interpretability of Image-based Real Estate Appraisal}, booktitle={55th Annual Hawaii International Conference on System Sciences (HICSS-55)}, author={Kucklick, Jan-Peter}, year={2022} }","mla":"Kucklick, Jan-Peter. “Visual Interpretability of Image-Based Real Estate Appraisal.” <i>55th Annual Hawaii International Conference on System Sciences (HICSS-55)</i>, 2022.","apa":"Kucklick, J.-P. (2022). 
Visual Interpretability of Image-based Real Estate Appraisal. <i>55th Annual Hawaii International Conference on System Sciences (HICSS-55)</i>. Hawaii International Conference on System Science (HICSS), Virtual."},"year":"2022","date_created":"2021-11-17T07:08:15Z","author":[{"first_name":"Jan-Peter","full_name":"Kucklick, Jan-Peter","id":"77066","last_name":"Kucklick"}],"date_updated":"2022-01-06T06:57:40Z","oa":"1","conference":{"location":"Virtual","end_date":"2022-01-07","start_date":"2022-01-03","name":"Hawaii International Conference on System Science (HICSS)"},"main_file_link":[{"url":"https://scholarspace.manoa.hawaii.edu/bitstream/10125/79519/0149.pdf","open_access":"1"}],"title":"Visual Interpretability of Image-based Real Estate Appraisal"},{"department":[{"_id":"195"},{"_id":"196"}],"user_id":"77066","_id":"29539","language":[{"iso":"eng"}],"keyword":["Explainable Artificial Intelligence","XAI","Interpretability","Decision Support Systems","Taxonomy"],"publication":"Wirtschaftsinformatik 2022 Proceedings","type":"conference","status":"public","abstract":[{"lang":"eng","text":"Explainable Artificial Intelligence (XAI) is currently an important topic for the application of Machine Learning (ML) in high-stakes decision scenarios. Related research focuses on evaluating ML algorithms in terms of interpretability. However, providing a human understandable explanation of an intelligent system does not only relate to the used ML algorithm. The data and features used also have a considerable impact on interpretability. In this paper, we develop a taxonomy for describing XAI systems based on aspects about the algorithm and data. 
The proposed taxonomy gives researchers and practitioners opportunities to describe and evaluate current XAI systems with respect to interpretability and guides the future development of this class of systems."}],"date_created":"2022-01-26T08:22:03Z","author":[{"first_name":"Jan-Peter","last_name":"Kucklick","id":"77066","full_name":"Kucklick, Jan-Peter"}],"date_updated":"2022-01-26T08:24:30Z","oa":"1","conference":{"location":"Nürnberg (online)","end_date":"2022-02-23","start_date":"2022-02-21","name":"Wirtschaftsinformatik 2022 (WI22)"},"main_file_link":[{"url":"https://aisel.aisnet.org/cgi/viewcontent.cgi?article=1056&context=wi2022","open_access":"1"}],"title":"Towards a model- and data-focused taxonomy of XAI systems","citation":{"apa":"Kucklick, J.-P. (2022). Towards a model- and data-focused taxonomy of XAI systems. <i>Wirtschaftsinformatik 2022 Proceedings</i>. Wirtschaftsinformatik 2022 (WI22), Nürnberg (online).","short":"J.-P. Kucklick, in: Wirtschaftsinformatik 2022 Proceedings, 2022.","bibtex":"@inproceedings{Kucklick_2022, title={Towards a model- and data-focused taxonomy of XAI systems}, booktitle={Wirtschaftsinformatik 2022 Proceedings}, author={Kucklick, Jan-Peter}, year={2022} }","mla":"Kucklick, Jan-Peter. “Towards a Model- and Data-Focused Taxonomy of XAI Systems.” <i>Wirtschaftsinformatik 2022 Proceedings</i>, 2022.","ieee":"J.-P. Kucklick, “Towards a model- and data-focused taxonomy of XAI systems,” presented at the Wirtschaftsinformatik 2022 (WI22), Nürnberg (online), 2022.","chicago":"Kucklick, Jan-Peter. “Towards a Model- and Data-Focused Taxonomy of XAI Systems.” In <i>Wirtschaftsinformatik 2022 Proceedings</i>, 2022.","ama":"Kucklick J-P. Towards a model- and data-focused taxonomy of XAI systems. In: <i>Wirtschaftsinformatik 2022 Proceedings</i>. 
; 2022."},"year":"2022"},{"status":"public","type":"journal_article","file_date_updated":"2023-11-20T16:33:51Z","article_type":"original","user_id":"42933","department":[{"_id":"603"},{"_id":"749"},{"_id":"424"},{"_id":"67"},{"_id":"574"},{"_id":"184"},{"_id":"757"},{"_id":"54"},{"_id":"178"}],"project":[{"grant_number":"438445824","_id":"109","name":"TRR 318: TRR 318 - Erklärbarkeit konstruieren"}],"_id":"24456","citation":{"ieee":"K. J. Rohlfing <i>et al.</i>, “Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems,” <i>IEEE Transactions on Cognitive and Developmental Systems</i>, vol. 13, no. 3, pp. 717–728, 2021, doi: <a href=\"https://doi.org/10.1109/tcds.2020.3044366\">10.1109/tcds.2020.3044366</a>.","chicago":"Rohlfing, Katharina J., Philipp Cimiano, Ingrid Scharlau, Tobias Matzner, Heike M. Buhl, Hendrik Buschmeier, Elena Esposito, et al. “Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems.” <i>IEEE Transactions on Cognitive and Developmental Systems</i> 13, no. 3 (2021): 717–28. <a href=\"https://doi.org/10.1109/tcds.2020.3044366\">https://doi.org/10.1109/tcds.2020.3044366</a>.","ama":"Rohlfing KJ, Cimiano P, Scharlau I, et al. Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems. <i>IEEE Transactions on Cognitive and Developmental Systems</i>. 2021;13(3):717-728. doi:<a href=\"https://doi.org/10.1109/tcds.2020.3044366\">10.1109/tcds.2020.3044366</a>","apa":"Rohlfing, K. J., Cimiano, P., Scharlau, I., Matzner, T., Buhl, H. M., Buschmeier, H., Esposito, E., Grimminger, A., Hammer, B., Haeb-Umbach, R., Horwath, I., Hüllermeier, E., Kern, F., Kopp, S., Thommes, K., Ngonga Ngomo, A.-C., Schulte, C., Wachsmuth, H., Wagner, P., &#38; Wrede, B. (2021). Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems. 
<i>IEEE Transactions on Cognitive and Developmental Systems</i>, <i>13</i>(3), 717–728. <a href=\"https://doi.org/10.1109/tcds.2020.3044366\">https://doi.org/10.1109/tcds.2020.3044366</a>","mla":"Rohlfing, Katharina J., et al. “Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems.” <i>IEEE Transactions on Cognitive and Developmental Systems</i>, vol. 13, no. 3, 2021, pp. 717–28, doi:<a href=\"https://doi.org/10.1109/tcds.2020.3044366\">10.1109/tcds.2020.3044366</a>.","bibtex":"@article{Rohlfing_Cimiano_Scharlau_Matzner_Buhl_Buschmeier_Esposito_Grimminger_Hammer_Haeb-Umbach_et al._2021, title={Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems}, volume={13}, DOI={<a href=\"https://doi.org/10.1109/tcds.2020.3044366\">10.1109/tcds.2020.3044366</a>}, number={3}, journal={IEEE Transactions on Cognitive and Developmental Systems}, author={Rohlfing, Katharina J. and Cimiano, Philipp and Scharlau, Ingrid and Matzner, Tobias and Buhl, Heike M. and Buschmeier, Hendrik and Esposito, Elena and Grimminger, Angela and Hammer, Barbara and Haeb-Umbach, Reinhold and et al.}, year={2021}, pages={717–728} }","short":"K.J. Rohlfing, P. Cimiano, I. Scharlau, T. Matzner, H.M. Buhl, H. Buschmeier, E. Esposito, A. Grimminger, B. Hammer, R. Haeb-Umbach, I. Horwath, E. Hüllermeier, F. Kern, S. Kopp, K. Thommes, A.-C. Ngonga Ngomo, C. Schulte, H. Wachsmuth, P. Wagner, B. 
Wrede, IEEE Transactions on Cognitive and Developmental Systems 13 (2021) 717–728."},"intvolume":"        13","page":"717-728","publication_status":"published","publication_identifier":{"issn":["2379-8920","2379-8939"]},"has_accepted_license":"1","doi":"10.1109/tcds.2020.3044366","author":[{"last_name":"Rohlfing","full_name":"Rohlfing, Katharina J.","id":"50352","first_name":"Katharina J."},{"last_name":"Cimiano","full_name":"Cimiano, Philipp","first_name":"Philipp"},{"full_name":"Scharlau, Ingrid","id":"451","last_name":"Scharlau","orcid":"0000-0003-2364-9489","first_name":"Ingrid"},{"id":"65695","full_name":"Matzner, Tobias","last_name":"Matzner","first_name":"Tobias"},{"last_name":"Buhl","id":"27152","full_name":"Buhl, Heike M.","first_name":"Heike M."},{"first_name":"Hendrik","full_name":"Buschmeier, Hendrik","last_name":"Buschmeier"},{"full_name":"Esposito, Elena","last_name":"Esposito","first_name":"Elena"},{"id":"57578","full_name":"Grimminger, Angela","last_name":"Grimminger","first_name":"Angela"},{"last_name":"Hammer","full_name":"Hammer, Barbara","first_name":"Barbara"},{"last_name":"Haeb-Umbach","id":"242","full_name":"Haeb-Umbach, Reinhold","first_name":"Reinhold"},{"first_name":"Ilona","full_name":"Horwath, Ilona","id":"68836","last_name":"Horwath"},{"last_name":"Hüllermeier","id":"48129","full_name":"Hüllermeier, Eyke","first_name":"Eyke"},{"first_name":"Friederike","full_name":"Kern, Friederike","last_name":"Kern"},{"first_name":"Stefan","full_name":"Kopp, Stefan","last_name":"Kopp"},{"first_name":"Kirsten","last_name":"Thommes","full_name":"Thommes, Kirsten","id":"72497"},{"last_name":"Ngonga Ngomo","id":"65716","full_name":"Ngonga Ngomo, Axel-Cyrille","first_name":"Axel-Cyrille"},{"first_name":"Carsten","last_name":"Schulte","full_name":"Schulte, Carsten","id":"60311"},{"first_name":"Henning","id":"3900","full_name":"Wachsmuth, Henning","last_name":"Wachsmuth"},{"full_name":"Wagner, 
Petra","last_name":"Wagner","first_name":"Petra"},{"last_name":"Wrede","full_name":"Wrede, Britta","first_name":"Britta"}],"volume":13,"oa":"1","date_updated":"2023-12-05T10:15:02Z","file":[{"date_updated":"2023-11-20T16:33:51Z","creator":"haebumb","date_created":"2023-11-20T16:33:51Z","file_size":626217,"file_name":"2020-12-01_explainability_final_version.pdf","file_id":"49081","access_level":"open_access","content_type":"application/pdf","relation":"main_file"}],"abstract":[{"lang":"eng","text":"One objective of current research in explainable intelligent systems is to implement social aspects in order to increase the relevance of explanations. In this paper, we argue that a novel conceptual framework is needed to overcome shortcomings of existing AI systems with little attention to processes of interaction and learning. Drawing from research in interaction and development, we first outline the novel conceptual framework that pushes the design of AI systems toward true interactivity with an emphasis on the role of the partner and social relevance. We propose that AI systems will be able to provide a meaningful and relevant explanation only if the process of explaining is extended to active contribution of both partners that brings about dynamics that is modulated by different levels of analysis. Accordingly, our conceptual framework comprises monitoring and scaffolding as key concepts and claims that the process of explaining is not only modulated by the interaction between explainee and explainer but is embedded into a larger social context in which conventionalized and routinized behaviors are established. 
We discuss our conceptual framework in relation to the established objectives of transparency and autonomy that are raised for the design of explainable AI systems currently."}],"publication":"IEEE Transactions on Cognitive and Developmental Systems","language":[{"iso":"eng"}],"ddc":["300"],"keyword":["Explainability","process of explaining and understanding","explainable artificial systems"],"year":"2021","issue":"3","quality_controlled":"1","title":"Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems","date_created":"2021-09-14T20:52:57Z"}]
