{"type":"preprint","publication":"arXiv:2301.05109","citation":{"bibtex":"@article{Sieger_Heindorf_Blübaum_Ngonga_Ngomo_2023, title={Counterfactual Explanations for Concepts in ELH}, journal={arXiv:2301.05109}, author={Sieger, Leonie Nora and Heindorf, Stefan and Blübaum, Lukas and Ngonga Ngomo, Axel-Cyrille}, year={2023} }","apa":"Sieger, L. N., Heindorf, S., Blübaum, L., & Ngonga Ngomo, A.-C. (2023). Counterfactual Explanations for Concepts in ELH. In arXiv:2301.05109.","chicago":"Sieger, Leonie Nora, Stefan Heindorf, Lukas Blübaum, and Axel-Cyrille Ngonga Ngomo. “Counterfactual Explanations for Concepts in ELH.” arXiv:2301.05109, 2023.","ama":"Sieger LN, Heindorf S, Blübaum L, Ngonga Ngomo A-C. Counterfactual Explanations for Concepts in ELH. arXiv:2301.05109. Published online 2023.","short":"L.N. Sieger, S. Heindorf, L. Blübaum, A.-C. Ngonga Ngomo, arXiv:2301.05109 (2023).","mla":"Sieger, Leonie Nora, et al. “Counterfactual Explanations for Concepts in ELH.” arXiv:2301.05109, 2023.","ieee":"L. N. Sieger, S. Heindorf, L. Blübaum, and A.-C. Ngonga Ngomo, “Counterfactual Explanations for Concepts in ELH,” arXiv:2301.05109. 2023."},"external_id":{"arxiv":["2301.05109"]},"department":[{"_id":"574"}],"_id":"37937","author":[{"id":"93402","full_name":"Sieger, Leonie Nora","first_name":"Leonie Nora","last_name":"Sieger"},{"first_name":"Stefan","orcid":"0000-0002-4525-6865","last_name":"Heindorf","id":"11871","full_name":"Heindorf, Stefan"},{"first_name":"Lukas","last_name":"Blübaum","full_name":"Blübaum, Lukas"},{"id":"65716","full_name":"Ngonga Ngomo, Axel-Cyrille","first_name":"Axel-Cyrille","last_name":"Ngonga Ngomo"}],"title":"Counterfactual Explanations for Concepts in ELH","abstract":[{"text":"Knowledge bases are widely used for information management on the web, enabling high-impact applications such as web search, question answering, and natural language processing. They also serve as the backbone for automatic decision systems, e.g. for medical diagnostics and credit scoring. As stakeholders affected by these decisions would like to understand their situation and verify fair decisions, a number of explanation approaches have been proposed using concepts in description logics. However, the learned concepts can become long and difficult to fathom for non-experts, even when verbalized. Moreover, long concepts do not immediately provide a clear path of action to change one's situation. Counterfactuals answering the question \"How must feature values be changed to obtain a different classification?\" have been proposed as short, human-friendly explanations for tabular data. In this paper, we transfer the notion of counterfactuals to description logics and propose the first algorithm for generating counterfactual explanations in the description logic $\mathcal{ELH}$. Counterfactual candidates are generated from concepts and the candidates with fewest feature changes are selected as counterfactuals. In case of multiple counterfactuals, we rank them according to the likeliness of their feature combinations. For evaluation, we conduct a user survey to investigate which of the generated counterfactual candidates are preferred for explanation by participants. In a second study, we explore possible use cases for counterfactual explanations.","lang":"eng"}],"date_created":"2023-01-22T19:36:01Z","status":"public","user_id":"11871","main_file_link":[{"url":"https://arxiv.org/pdf/2301.05109.pdf"}],"language":[{"iso":"eng"}],"year":"2023","date_updated":"2023-01-22T19:40:18Z"}