---
_id: '54450'
abstract:
- lang: eng
  text: In the last decade, there has been increasing interest in allowing users to
    understand how the predictions of machine-learned models come about, thus increasing
    transparency and empowering users to understand and potentially contest those
    decisions. Dialogue-based approaches, in contrast to traditional one-shot eXplainable
    Artificial Intelligence (XAI) methods, facilitate interactive, in-depth exploration
    through multi-turn dialogues, simulating expert conversations. This paper reviews
    the current state of dialogue-based XAI, presenting a systematic review of 1,339
    publications, narrowed down to 14 based on inclusion criteria. We explore theoretical
    foundations of the systems, propose key dimensions along which different solutions
    to dialogue-based XAI differ, and identify key use cases, target audiences, system
    components, and the types of supported queries and responses. Furthermore, we
    investigate the current paradigms by which systems are evaluated and highlight
    their key limitations. Key findings include the main use cases, objectives,
    and audiences targeted by dialogue-based XAI methods, and a summary of the main
    types of questions and information needs. Beyond discussing avenues for future
    work, we present a meta-architecture for these systems derived from existing
    literature and outline prevalent theoretical frameworks.
article_number: '81'
author:
- first_name: Dimitry
  full_name: Mindlin, Dimitry
  last_name: Mindlin
- first_name: Fabian
  full_name: Beer, Fabian
  last_name: Beer
- first_name: Leonie Nora
  full_name: Sieger, Leonie Nora
  id: '93402'
  last_name: Sieger
- first_name: Stefan
  full_name: Heindorf, Stefan
  id: '11871'
  last_name: Heindorf
  orcid: 0000-0002-4525-6865
- first_name: Philipp
  full_name: Cimiano, Philipp
  last_name: Cimiano
- first_name: Elena
  full_name: Esposito, Elena
  last_name: Esposito
- first_name: Axel-Cyrille
  full_name: Ngonga Ngomo, Axel-Cyrille
  id: '65716'
  last_name: Ngonga Ngomo
citation:
  ama: 'Mindlin D, Beer F, Sieger LN, et al. Beyond One-Shot Explanations: A Systematic
    Literature Review of Dialogue-Based XAI Approaches. <i>Artificial Intelligence
    Review</i>. 2025;58(3). doi:<a href="https://doi.org/10.1007/s10462-024-11007-7">10.1007/s10462-024-11007-7</a>'
  apa: 'Mindlin, D., Beer, F., Sieger, L. N., Heindorf, S., Cimiano, P., Esposito,
    E., &#38; Ngonga Ngomo, A.-C. (2025). Beyond One-Shot Explanations: A Systematic
    Literature Review of Dialogue-Based XAI Approaches. <i>Artificial Intelligence
    Review</i>, <i>58</i>(3), Article 81. <a href="https://doi.org/10.1007/s10462-024-11007-7">https://doi.org/10.1007/s10462-024-11007-7</a>'
  bibtex: '@article{Mindlin_Beer_Sieger_Heindorf_Cimiano_Esposito_Ngonga Ngomo_2025,
    title={Beyond One-Shot Explanations: A Systematic Literature Review of Dialogue-Based
    XAI Approaches}, volume={58}, DOI={<a href="https://doi.org/10.1007/s10462-024-11007-7">10.1007/s10462-024-11007-7</a>},
    number={3}, journal={Artificial Intelligence Review}, publisher={Springer},
    author={Mindlin, Dimitry and Beer, Fabian and Sieger, Leonie Nora and Heindorf,
    Stefan and Cimiano, Philipp and Esposito, Elena and Ngonga Ngomo, Axel-Cyrille},
    year={2025} }'
  chicago: 'Mindlin, Dimitry, Fabian Beer, Leonie Nora Sieger, Stefan Heindorf, Philipp
    Cimiano, Elena Esposito, and Axel-Cyrille Ngonga Ngomo. “Beyond One-Shot Explanations:
    A Systematic Literature Review of Dialogue-Based XAI Approaches.” <i>Artificial
    Intelligence Review</i> 58, no. 3 (2025). <a href="https://doi.org/10.1007/s10462-024-11007-7">https://doi.org/10.1007/s10462-024-11007-7</a>.'
  ieee: 'D. Mindlin <i>et al.</i>, “Beyond One-Shot Explanations: A Systematic Literature
    Review of Dialogue-Based XAI Approaches,” <i>Artificial Intelligence Review</i>,
    vol. 58, no. 3, Art. no. 81, 2025, doi: <a href="https://doi.org/10.1007/s10462-024-11007-7">10.1007/s10462-024-11007-7</a>.'
  mla: 'Mindlin, Dimitry, et al. “Beyond One-Shot Explanations: A Systematic Literature
    Review of Dialogue-Based XAI Approaches.” <i>Artificial Intelligence Review</i>,
    vol. 58, no. 3, 81, Springer, 2025, doi:<a href="https://doi.org/10.1007/s10462-024-11007-7">10.1007/s10462-024-11007-7</a>.'
  short: D. Mindlin, F. Beer, L.N. Sieger, S. Heindorf, P. Cimiano, E. Esposito, A.-C.
    Ngonga Ngomo, Artificial Intelligence Review 58 (2025).
date_created: 2024-05-26T18:55:58Z
date_updated: 2025-01-24T20:09:20Z
department:
- _id: '760'
- _id: '574'
doi: 10.1007/s10462-024-11007-7
intvolume: '58'
issue: '3'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://link.springer.com/article/10.1007/s10462-024-11007-7
oa: '1'
publication: Artificial Intelligence Review
publication_status: published
publisher: Springer
status: public
title: 'Beyond One-Shot Explanations: A Systematic Literature Review of Dialogue-Based
  XAI Approaches'
type: journal_article
user_id: '11871'
volume: 58
year: '2025'
...
---
_id: '37937'
abstract:
- lang: eng
  text: "Knowledge bases are widely used for information management on the web,
    enabling high-impact applications such as web search, question answering, and
    natural language processing. They also serve as the backbone for automatic
    decision systems, e.g. for medical diagnostics and credit scoring. As stakeholders
    affected by these decisions would like to understand their situation and verify
    fair decisions, a number of explanation approaches have been proposed using
    concepts in description logics. However, the learned concepts can become long
    and difficult to fathom for non-experts, even when verbalized. Moreover, long
    concepts do not immediately provide a clear path of action to change one's
    situation. Counterfactuals answering the question \"How must feature values be
    changed to obtain a different classification?\" have been proposed as short,
    human-friendly explanations for tabular data. In this paper, we transfer the
    notion of counterfactuals to description logics and propose the first algorithm
    for generating counterfactual explanations in the description logic
    $\\mathcal{ELH}$. Counterfactual candidates are generated from concepts and the
    candidates with fewest feature changes are selected as counterfactuals. In case
    of multiple counterfactuals, we rank them according to the likeliness of their
    feature combinations. For evaluation, we conduct a user survey to investigate
    which of the generated counterfactual candidates are preferred for explanation
    by participants. In a second study, we explore possible use cases for
    counterfactual explanations."
author:
- first_name: Leonie Nora
  full_name: Sieger, Leonie Nora
  id: '93402'
  last_name: Sieger
- first_name: Stefan
  full_name: Heindorf, Stefan
  id: '11871'
  last_name: Heindorf
  orcid: 0000-0002-4525-6865
- first_name: Lukas
  full_name: Blübaum, Lukas
  last_name: Blübaum
- first_name: Axel-Cyrille
  full_name: Ngonga Ngomo, Axel-Cyrille
  id: '65716'
  last_name: Ngonga Ngomo
citation:
  ama: Sieger LN, Heindorf S, Blübaum L, Ngonga Ngomo A-C. Explaining ELH Concept
    Descriptions through Counterfactual Reasoning. <i>arXiv:230105109</i>. Published
    online 2023.
  apa: Sieger, L. N., Heindorf, S., Blübaum, L., &#38; Ngonga Ngomo, A.-C. (2023).
    Explaining ELH Concept Descriptions through Counterfactual Reasoning. In <i>arXiv:2301.05109</i>.
  bibtex: '@article{Sieger_Heindorf_Blübaum_Ngonga Ngomo_2023, title={Explaining ELH
    Concept Descriptions through Counterfactual Reasoning}, journal={arXiv:2301.05109},
    author={Sieger, Leonie Nora and Heindorf, Stefan and Blübaum, Lukas and Ngonga
    Ngomo, Axel-Cyrille}, year={2023} }'
  chicago: Sieger, Leonie Nora, Stefan Heindorf, Lukas Blübaum, and Axel-Cyrille Ngonga
    Ngomo. “Explaining ELH Concept Descriptions through Counterfactual Reasoning.”
    <i>ArXiv:2301.05109</i>, 2023.
  ieee: L. N. Sieger, S. Heindorf, L. Blübaum, and A.-C. Ngonga Ngomo, “Explaining
    ELH Concept Descriptions through Counterfactual Reasoning,” <i>arXiv:2301.05109</i>.
    2023.
  mla: Sieger, Leonie Nora, et al. “Explaining ELH Concept Descriptions through Counterfactual
    Reasoning.” <i>ArXiv:2301.05109</i>, 2023.
  short: L.N. Sieger, S. Heindorf, L. Blübaum, A.-C. Ngonga Ngomo, ArXiv:2301.05109
    (2023).
date_created: 2023-01-22T19:36:01Z
date_updated: 2024-05-26T19:03:44Z
department:
- _id: '574'
- _id: '760'
external_id:
  arxiv:
  - '2301.05109'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/pdf/2301.05109.pdf
oa: '1'
publication: arXiv:2301.05109
status: public
title: Explaining ELH Concept Descriptions through Counterfactual Reasoning
type: preprint
user_id: '11871'
year: '2023'
...
---
_id: '34674'
abstract:
- lang: eng
  text: 'Smart home systems contain plenty of features that enhance wellbeing in everyday
    life through artificial intelligence (AI). However, many users feel insecure because
    they do not understand the AI’s functionality and do not feel they are in control
    of it. Combining technical, psychological and philosophical views on AI, we rethink
    smart homes as interactive systems where users can partake in an intelligent agent’s
    learning. Parallel to the goals of explainable AI (XAI), we explored the possibility
    of user involvement in supervised learning of the smart home to have a first approach
    to improve acceptance, support subjective understanding and increase perceived
    control. In this work, we conducted two studies: In an online pre-study, we asked
    participants about their attitude towards teaching AI via a questionnaire. In
    the main study, we performed a Wizard of Oz laboratory experiment with human participants,
    where participants spent time in a prototypical smart home and taught activity
    recognition to the intelligent agent through supervised learning based on the
    user’s behaviour. We found that involvement in the AI’s learning phase enhanced
    the users’ feeling of control, perceived understanding and perceived usefulness
    of AI in general. The participants reported positive attitudes towards training
    a smart home AI and found the process understandable and controllable. We suggest
    that involving the user in the learning phase could lead to better personalisation
    and increased understanding and control by users of intelligent agents for smart
    home automation.'
alternative_title:
- Increasing Perceived Control and Understanding
author:
- first_name: Leonie Nora
  full_name: Sieger, Leonie Nora
  id: '93402'
  last_name: Sieger
- first_name: Julia
  full_name: Hermann, Julia
  last_name: Hermann
- first_name: Astrid
  full_name: Schomäcker, Astrid
  last_name: Schomäcker
- first_name: Stefan
  full_name: Heindorf, Stefan
  id: '11871'
  last_name: Heindorf
  orcid: 0000-0002-4525-6865
- first_name: Christian
  full_name: Meske, Christian
  last_name: Meske
- first_name: Celine-Chiara
  full_name: Hey, Celine-Chiara
  last_name: Hey
- first_name: Ayşegül
  full_name: Doğangün, Ayşegül
  last_name: Doğangün
citation:
  ama: 'Sieger LN, Hermann J, Schomäcker A, et al. User Involvement in Training Smart
    Home Agents. In: <i>International Conference on Human-Agent Interaction</i>. ACM;
    2022. doi:<a href="https://doi.org/10.1145/3527188.3561914">10.1145/3527188.3561914</a>'
  apa: 'Sieger, L. N., Hermann, J., Schomäcker, A., Heindorf, S., Meske, C., Hey,
    C.-C., &#38; Doğangün, A. (2022). User Involvement in Training Smart Home Agents.
    <i>International Conference on Human-Agent Interaction</i>. HAI ’22: International
    Conference on Human-Agent Interaction, Christchurch, New Zealand. <a href="https://doi.org/10.1145/3527188.3561914">https://doi.org/10.1145/3527188.3561914</a>'
  bibtex: '@inproceedings{Sieger_Hermann_Schomäcker_Heindorf_Meske_Hey_Doğangün_2022,
    title={User Involvement in Training Smart Home Agents}, DOI={<a href="https://doi.org/10.1145/3527188.3561914">10.1145/3527188.3561914</a>},
    booktitle={International Conference on Human-Agent Interaction}, publisher={ACM},
    author={Sieger, Leonie Nora and Hermann, Julia and Schomäcker, Astrid and Heindorf,
    Stefan and Meske, Christian and Hey, Celine-Chiara and Doğangün, Ayşegül}, year={2022}
    }'
  chicago: Sieger, Leonie Nora, Julia Hermann, Astrid Schomäcker, Stefan Heindorf,
    Christian Meske, Celine-Chiara Hey, and Ayşegül Doğangün. “User Involvement in
    Training Smart Home Agents.” In <i>International Conference on Human-Agent Interaction</i>.
    ACM, 2022. <a href="https://doi.org/10.1145/3527188.3561914">https://doi.org/10.1145/3527188.3561914</a>.
  ieee: 'L. N. Sieger <i>et al.</i>, “User Involvement in Training Smart Home Agents,”
    presented at the HAI ’22: International Conference on Human-Agent Interaction,
    Christchurch, New Zealand, 2022, doi: <a href="https://doi.org/10.1145/3527188.3561914">10.1145/3527188.3561914</a>.'
  mla: Sieger, Leonie Nora, et al. “User Involvement in Training Smart Home Agents.”
    <i>International Conference on Human-Agent Interaction</i>, ACM, 2022, doi:<a
    href="https://doi.org/10.1145/3527188.3561914">10.1145/3527188.3561914</a>.
  short: 'L.N. Sieger, J. Hermann, A. Schomäcker, S. Heindorf, C. Meske, C.-C. Hey,
    A. Doğangün, in: International Conference on Human-Agent Interaction, ACM, 2022.'
conference:
  end_date: 2022-12-08
  location: Christchurch, New Zealand
  name: 'HAI ''22: International Conference on Human-Agent Interaction'
  start_date: 2022-12-05
date_created: 2022-12-21T09:48:43Z
date_updated: 2024-05-30T18:04:45Z
ddc:
- '000'
department:
- _id: '574'
- _id: '760'
doi: 10.1145/3527188.3561914
file:
- access_level: closed
  content_type: application/pdf
  creator: heindorf
  date_created: 2024-05-30T18:04:31Z
  date_updated: 2024-05-30T18:04:31Z
  file_id: '54524'
  file_name: User_Involvement_in_Training_Smart_Home_Agents_public.pdf
  file_size: 1151728
  relation: main_file
  success: 1
file_date_updated: 2024-05-30T18:04:31Z
has_accepted_license: '1'
keyword:
- human-agent interaction
- smart homes
- supervised learning
- participation
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://papers.dice-research.org/2022/HAI_SmartHome/User_Involvement_in_Training_Smart_Home_Agents_public.pdf
oa: '1'
project:
- _id: '121'
  grant_number: '438445824'
  name: 'TRR 318 - B1: TRR 318 - Subproject B1'
publication: International Conference on Human-Agent Interaction
publication_status: published
publisher: ACM
quality_controlled: '1'
status: public
title: User Involvement in Training Smart Home Agents
type: conference
user_id: '11871'
year: '2022'
...
