---
_id: '56479'
abstract:
- lang: eng
  text: 'While the importance of explainable artificial intelligence in high-stakes
    decision-making is widely recognized in existing literature, empirical studies
    assessing users'' perceived value of explanations are scarce. In this paper, we
    aim to address this shortcoming by conducting an empirical study focused on measuring
    the perceived value of the following types of explanations: plain explanations
    based on feature attribution, counterfactual explanations, and complex counterfactual
    explanations. We measure an explanation''s value using five dimensions: perceived
    accuracy, understandability, plausibility, sufficiency of detail, and user satisfaction.
    Our findings indicate a sweet spot of explanation complexity, with both dimensional
    and structural complexity positively impacting the perceived value up to a certain
    threshold.'
author:
- first_name: Felix
  full_name: Liedeker, Felix
  id: '93275'
  last_name: Liedeker
- first_name: Christoph
  full_name: Düsing, Christoph
  last_name: Düsing
- first_name: Marcel
  full_name: Nieveler, Marcel
  last_name: Nieveler
- first_name: Philipp
  full_name: Cimiano, Philipp
  last_name: Cimiano
citation:
  ama: 'Liedeker F, Düsing C, Nieveler M, Cimiano P. An Empirical Investigation of
    Users’ Assessment of XAI Explanations: Identifying the Sweet-Spot of Explanation
    Complexity. In: ; 2024.'
  apa: 'Liedeker, F., Düsing, C., Nieveler, M., &#38; Cimiano, P. (2024). <i>An Empirical
    Investigation of Users’ Assessment of XAI Explanations: Identifying the Sweet-Spot
    of Explanation Complexity</i>. 2nd World Conference on eXplainable Artificial
    Intelligence, Valletta, Malta.'
  bibtex: '@inproceedings{Liedeker_Düsing_Nieveler_Cimiano_2024, title={An Empirical
    Investigation of Users’ Assessment of XAI Explanations: Identifying the Sweet-Spot
    of Explanation Complexity}, author={Liedeker, Felix and Düsing, Christoph and
    Nieveler, Marcel and Cimiano, Philipp}, year={2024} }'
  chicago: 'Liedeker, Felix, Christoph Düsing, Marcel Nieveler, and Philipp Cimiano.
    “An Empirical Investigation of Users’ Assessment of XAI Explanations: Identifying
    the Sweet-Spot of Explanation Complexity,” 2024.'
  ieee: 'F. Liedeker, C. Düsing, M. Nieveler, and P. Cimiano, “An Empirical Investigation
    of Users’ Assessment of XAI Explanations: Identifying the Sweet-Spot of Explanation
    Complexity,” presented at the 2nd World Conference on eXplainable Artificial Intelligence,
    Valletta, Malta, 2024.'
  mla: 'Liedeker, Felix, et al. <i>An Empirical Investigation of Users’ Assessment
    of XAI Explanations: Identifying the Sweet-Spot of Explanation Complexity</i>.
    2024.'
  short: 'F. Liedeker, C. Düsing, M. Nieveler, P. Cimiano, in: 2024.'
conference:
  end_date: 2024-07-19
  location: Valletta, Malta
  name: 2nd World Conference on eXplainable Artificial Intelligence
  start_date: 2024-07-17
date_created: 2024-10-09T14:57:49Z
date_updated: 2024-10-09T15:06:00Z
department:
- _id: '660'
keyword:
- XAI
- Explanation Complexity
- User Perception
language:
- iso: eng
project:
- _id: '128'
  name: 'TRR 318 - C5: TRR 318 - Subproject C5'
status: public
title: 'An Empirical Investigation of Users'' Assessment of XAI Explanations: Identifying
  the Sweet-Spot of Explanation Complexity'
type: conference
user_id: '93275'
year: '2024'
...
---
_id: '56660'
abstract:
- lang: eng
  text: In a successful dialogue in general and a successful explanation in particular,
    partners need to account for both the task model (what is relevant for the task)
    and the partner model (what one can contribute). The phenomenon of coupling between
    the task and the partner model becomes especially interesting in the context
    of Human–Robot Interaction, where humans have to deal with unknown capabilities
    of the robot, which can momentarily be perceived when the robot is unable to contribute
    to the task. Following research on the path over manner prominence in an action
    [31–33], a robot explained actions to a human by emphasizing two aspects – the
    path ("where" component) and the manner ("how" component). On critical trials,
    the robot occasionally omitted one of these components, whereupon participants sought
    the missing information for the path or the manner. Participants’ information-seeking
    and gaze behaviour were analysed. The analysis confirms the initial predictions for
    a) the task model (path over manner prominence), i.e., earlier information-seeking
    for path-missing than manner-missing trials, and b) the partner model, i.e., while
    information-seeking is predominantly tied to attention on the robot’s face,
    when the robot fails to provide resolution, attention shifts more often towards its
    torso – a behaviour likely to indicate an exploration of the robot’s capabilities.
    An individual-level analysis further confirms that the intra-individual variation
    in the task model is partly influenced by the perceived capability of the robot.
author:
- first_name: Amit
  full_name: Singh, Amit
  id: '91018'
  last_name: Singh
  orcid: 0000-0002-7789-1521
- first_name: Katharina J.
  full_name: Rohlfing, Katharina J.
  id: '50352'
  last_name: Rohlfing
citation:
  ama: 'Singh A, Rohlfing KJ. Coupling of Task and Partner Model: Investigating the
    Intra-Individual Variability in Gaze during Human–Robot Explanatory Dialogue.
    In: <i>Proceedings of 26th ACM International Conference on Multimodal Interaction
    (ICMI 2024)</i>. ; 2024. doi:<a href="https://doi.org/10.1145/3686215.3689202">10.1145/3686215.3689202</a>'
  apa: 'Singh, A., &#38; Rohlfing, K. J. (2024). Coupling of Task and Partner Model:
    Investigating the Intra-Individual Variability in Gaze during Human–Robot Explanatory
    Dialogue. <i>Proceedings of 26th ACM International Conference on Multimodal Interaction
    (ICMI 2024)</i>. 26th ACM International Conference on Multimodal Interaction (ICMI
    2024), San Jose, Costa Rica. <a href="https://doi.org/10.1145/3686215.3689202">https://doi.org/10.1145/3686215.3689202</a>'
  bibtex: '@inproceedings{Singh_Rohlfing_2024, title={Coupling of Task and Partner
    Model: Investigating the Intra-Individual Variability in Gaze during Human–Robot
    Explanatory Dialogue}, DOI={<a href="https://doi.org/10.1145/3686215.3689202">10.1145/3686215.3689202</a>},
    booktitle={Proceedings of 26th ACM International Conference on Multimodal Interaction
    (ICMI 2024)}, author={Singh, Amit and Rohlfing, Katharina J.}, year={2024} }'
  chicago: 'Singh, Amit, and Katharina J. Rohlfing. “Coupling of Task and Partner
    Model: Investigating the Intra-Individual Variability in Gaze during Human–Robot
    Explanatory Dialogue.” In <i>Proceedings of 26th ACM International Conference
    on Multimodal Interaction (ICMI 2024)</i>, 2024. <a href="https://doi.org/10.1145/3686215.3689202">https://doi.org/10.1145/3686215.3689202</a>.'
  ieee: 'A. Singh and K. J. Rohlfing, “Coupling of Task and Partner Model: Investigating
    the Intra-Individual Variability in Gaze during Human–Robot Explanatory Dialogue,”
    presented at the 26th ACM International Conference on Multimodal Interaction (ICMI
    2024), San Jose, Costa Rica, 2024, doi: <a href="https://doi.org/10.1145/3686215.3689202">10.1145/3686215.3689202</a>.'
  mla: 'Singh, Amit, and Katharina J. Rohlfing. “Coupling of Task and Partner Model:
    Investigating the Intra-Individual Variability in Gaze during Human–Robot Explanatory
    Dialogue.” <i>Proceedings of 26th ACM International Conference on Multimodal Interaction
    (ICMI 2024)</i>, 2024, doi:<a href="https://doi.org/10.1145/3686215.3689202">10.1145/3686215.3689202</a>.'
  short: 'A. Singh, K.J. Rohlfing, in: Proceedings of 26th ACM International Conference
    on Multimodal Interaction (ICMI 2024), 2024.'
conference:
  location: San Jose, Costa Rica
  name: 26th ACM International Conference on Multimodal Interaction (ICMI 2024)
date_created: 2024-10-17T09:35:32Z
date_updated: 2024-11-06T10:56:34Z
ddc:
- '410'
department:
- _id: '749'
- _id: '660'
doi: 10.1145/3686215.3689202
has_accepted_license: '1'
keyword:
- Explanation
- Scaffolding
- Eyetracking
- Partner Model
- HRI
language:
- iso: eng
project:
- _id: '115'
  grant_number: '438445824'
  name: 'TRR 318 - A05: TRR 318 - Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog
    (Teilprojekt A05)'
publication: Proceedings of 26th ACM International Conference on Multimodal Interaction
  (ICMI 2024)
status: public
title: 'Coupling of Task and Partner Model: Investigating the Intra-Individual Variability
  in Gaze during Human–Robot Explanatory Dialogue'
type: conference
user_id: '91018'
year: '2024'
...
---
_id: '57204'
abstract:
- lang: eng
  text: In this study on the use of gesture deixis during explanations, a sample of
    24 video-recorded dyadic interactions of a board game explanation was analyzed.
    The relation between the use of gesture deixis by different explainers and their
    interpretation of explainees' understanding was investigated. In addition, we
    describe explainers' intra-individual variations related to their interactions
    with three different explainees consecutively. While we did not find a relation
    between interpretations of explainees' complete understanding and a decrease in
    explainers' use of gesture deixis, we demonstrated that the overall use of gesture
    deixis is related to the process of interactional monitoring and the attendance
    of a different explainee.
author:
- first_name: Stefan Teodorov
  full_name: Lazarov, Stefan Teodorov
  id: '90345'
  last_name: Lazarov
  orcid: 0009-0009-0892-9483
- first_name: Angela
  full_name: Grimminger, Angela
  id: '57578'
  last_name: Grimminger
citation:
  ama: 'Lazarov ST, Grimminger A. Variations in explainers’ gesture deixis in explanations
    related to the monitoring of explainees’ understanding. In: <i>Proceedings of
    the Annual Meeting of the Cognitive Science Society</i>. Vol 46. ; 2024.'
  apa: Lazarov, S. T., &#38; Grimminger, A. (2024). Variations in explainers’ gesture
    deixis in explanations related to the monitoring of explainees’ understanding.
    <i>Proceedings of the Annual Meeting of the Cognitive Science Society</i>, <i>46</i>.
  bibtex: '@inproceedings{Lazarov_Grimminger_2024, title={Variations in explainers’
    gesture deixis in explanations related to the monitoring of explainees’ understanding},
    volume={46}, booktitle={Proceedings of the Annual Meeting of the Cognitive Science
    Society}, author={Lazarov, Stefan Teodorov and Grimminger, Angela}, year={2024}
    }'
  chicago: Lazarov, Stefan Teodorov, and Angela Grimminger. “Variations in Explainers’
    Gesture Deixis in Explanations Related to the Monitoring of Explainees’ Understanding.”
    In <i>Proceedings of the Annual Meeting of the Cognitive Science Society</i>,
    Vol. 46, 2024.
  ieee: S. T. Lazarov and A. Grimminger, “Variations in explainers’ gesture deixis
    in explanations related to the monitoring of explainees’ understanding,” in <i>Proceedings
    of the Annual Meeting of the Cognitive Science Society</i>, Rotterdam, 2024, vol.
    46.
  mla: Lazarov, Stefan Teodorov, and Angela Grimminger. “Variations in Explainers’
    Gesture Deixis in Explanations Related to the Monitoring of Explainees’ Understanding.”
    <i>Proceedings of the Annual Meeting of the Cognitive Science Society</i>, vol.
    46, 2024.
  short: 'S.T. Lazarov, A. Grimminger, in: Proceedings of the Annual Meeting of the
    Cognitive Science Society, 2024.'
conference:
  end_date: 2024-07-27
  location: Rotterdam
  name: Cognitive Science Society
  start_date: 2024-07-24
date_created: 2024-11-18T13:40:09Z
date_updated: 2024-11-18T13:40:39Z
department:
- _id: '660'
intvolume: '        46'
keyword:
- explanation
- gesture deixis
- monitoring
- understanding
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://escholarship.org/uc/item/7dz8n8tf
oa: '1'
project:
- _id: '112'
  grant_number: '438445824'
  name: 'TRR 318 - A02: TRR 318 - Verstehensprozess einer Erklärung beobachten und
    auswerten (Teilprojekt A02)'
publication: Proceedings of the Annual Meeting of the Cognitive Science Society
publication_status: published
status: public
title: Variations in explainers’ gesture deixis in explanations related to the monitoring
  of explainees’ understanding
type: conference
user_id: '90345'
volume: 46
year: '2024'
...
---
_id: '58109'
abstract:
- lang: eng
  text: The present study aims to understand how metaphors are used in explanations.
    According to many current theories, metaphors have a conceptual function for the
    understanding of abstract objects. From this theoretical assumption, we derived
    the hypothesis that the lower the expertise of the addressee of an explanation,
    the more metaphors should be used. We tested this hypothesis on a relatively natural
    data set of 24 published videos with close to 100,000 words overall in which experts
    explain abstract, mostly scientific concepts to persons of different expertise,
    varying from minimal (children) to profound (expert). Contrary to our expectations,
    the frequency of metaphors did not decrease with expertise, but actually increased.
    This increase could be statistically substantiated with higher differences in
    expertise. The study contributes to a better understanding of the use of metaphors
    in actual explanatory processes and how metaphor use depends on contextual factors.
    It thus supports the expansion of the conceptual and linguistic perspective on
    metaphors to include the aspect of how metaphors are used by speakers.
article_type: original
author:
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
- first_name: Miriam
  full_name: Körber, Miriam
  last_name: Körber
- first_name: Meghdut
  full_name: Sengupta, Meghdut
  last_name: Sengupta
- first_name: Henning
  full_name: Wachsmuth, Henning
  last_name: Wachsmuth
citation:
  ama: 'Scharlau I, Körber M, Sengupta M, Wachsmuth H. When to use a metaphor: Metaphors
    in dialogical explanations with addressees of different expertise. <i>Frontiers
    in Language Sciences</i>. 2024;3:1474924.'
  apa: 'Scharlau, I., Körber, M., Sengupta, M., &#38; Wachsmuth, H. (2024). When to
    use a metaphor: Metaphors in dialogical explanations with addressees of different
    expertise. <i>Frontiers in Language Sciences</i>, <i>3</i>, 1474924.'
  bibtex: '@article{Scharlau_Körber_Sengupta_Wachsmuth_2024, title={When to use a
    metaphor: Metaphors in dialogical explanations with addressees of different expertise},
    volume={3}, journal={Frontiers in Language Sciences}, author={Scharlau, Ingrid
    and Körber, Miriam and Sengupta, Meghdut and Wachsmuth, Henning}, year={2024},
    pages={1474924} }'
  chicago: 'Scharlau, Ingrid, Miriam Körber, Meghdut Sengupta, and Henning Wachsmuth.
    “When to Use a Metaphor: Metaphors in Dialogical Explanations with Addressees
    of Different Expertise.” <i>Frontiers in Language Sciences</i> 3 (2024): 1474924.'
  ieee: 'I. Scharlau, M. Körber, M. Sengupta, and H. Wachsmuth, “When to use a metaphor:
    Metaphors in dialogical explanations with addressees of different expertise,”
    <i>Frontiers in Language Sciences</i>, vol. 3, p. 1474924, 2024.'
  mla: 'Scharlau, Ingrid, et al. “When to Use a Metaphor: Metaphors in Dialogical
    Explanations with Addressees of Different Expertise.” <i>Frontiers in Language
    Sciences</i>, vol. 3, 2024, p. 1474924.'
  short: I. Scharlau, M. Körber, M. Sengupta, H. Wachsmuth, Frontiers in Language
    Sciences 3 (2024) 1474924.
date_created: 2025-01-08T11:59:24Z
date_updated: 2025-01-08T11:59:34Z
department:
- _id: '660'
funded_apc: '1'
intvolume: '         3'
keyword:
- metaphor
- conceptual metaphor
- conceptual metaphor theory
- metaphor usage
- explaining
- explanation
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://www.frontiersin.org/journals/language-sciences/articles/10.3389/flang.2024.1474924/full
oa: '1'
page: '1474924'
project:
- _id: '127'
  name: 'TRR 318 - C4: TRR 318 - Subproject C4 - Metaphern als Werkzeug des Erklärens'
publication: Frontiers in Language Sciences
quality_controlled: '1'
status: public
title: 'When to use a metaphor: Metaphors in dialogical explanations with addressees
  of different expertise'
type: journal_article
user_id: '451'
volume: 3
year: '2024'
...
---
_id: '61273'
abstract:
- lang: eng
  text: "In human-machine explanation interactions, such as tutoring systems or customer
    support chatbots, it is important for the machine explainer to infer the human
    user's understanding. Nonverbal signals play an important role for expressing
    mental states like understanding and confusion in these interactions. However,
    an individual's expressions may vary depending on other factors. In cases where
    these factors are unknown, machine learning methods that infer understanding from
    nonverbal cues become unreliable. Stress for example has been shown to affect
    human expression, but it is not clear from the current research how stress affects
    the expression of understanding.\r\nTo address this gap, we design a paradigm
    that induces understanding and confusion through game rule explanations. During
    the explanations, self-perceived understanding and confusion are annotated by
    the participants. A stress condition is also introduced to enable the investigation
    of changes in the expression of social signals under stress.\r\nWe conducted a
    study to validate the stress induction and participants reported a statistically
    significant increase in stress during the stress condition compared to the neutral
    control condition. \r\nAdditionally, feedback from participants shows that the
    paradigm is effective in inducing understanding and confusion. \r\nThis paradigm
    paves the way for further studies investigating social signals of understanding
    to improve human-machine explanation interactions for varying contexts."
author:
- first_name: Jonas
  full_name: Paletschek, Jonas
  id: '98941'
  last_name: Paletschek
citation:
  ama: 'Paletschek J. A Paradigm to Investigate Social Signals of Understanding and
    Their Susceptibility to Stress. In: <i>12th International Conference on Affective
    Computing &#38; Intelligent Interaction</i>. IEEE; 2024. doi:<a href="https://doi.org/10.1109/ACII63134.2024.00040">10.1109/ACII63134.2024.00040</a>'
  apa: Paletschek, J. (2024). A Paradigm to Investigate Social Signals of Understanding
    and Their Susceptibility to Stress. <i>12th International Conference on Affective
    Computing &#38; Intelligent Interaction</i>. 12th International Conference on 
    Affective Computing &#38; Intelligent Interaction, Glasgow. <a href="https://doi.org/10.1109/ACII63134.2024.00040">https://doi.org/10.1109/ACII63134.2024.00040</a>
  bibtex: '@inproceedings{Paletschek_2024, title={A Paradigm to Investigate Social
    Signals of Understanding and Their Susceptibility to Stress}, DOI={<a href="https://doi.org/10.1109/ACII63134.2024.00040">10.1109/ACII63134.2024.00040</a>},
    booktitle={12th International Conference on Affective Computing &#38; Intelligent
    Interaction}, publisher={IEEE}, author={Paletschek, Jonas}, year={2024} }'
  chicago: Paletschek, Jonas. “A Paradigm to Investigate Social Signals of Understanding
    and Their Susceptibility to Stress.” In <i>12th International Conference on Affective
    Computing &#38; Intelligent Interaction</i>. IEEE, 2024. <a href="https://doi.org/10.1109/ACII63134.2024.00040">https://doi.org/10.1109/ACII63134.2024.00040</a>.
  ieee: 'J. Paletschek, “A Paradigm to Investigate Social Signals of Understanding
    and Their Susceptibility to Stress,” presented at the 12th International Conference
    on Affective Computing &#38; Intelligent Interaction, Glasgow, 2024, doi: <a
    href="https://doi.org/10.1109/ACII63134.2024.00040">10.1109/ACII63134.2024.00040</a>.'
  mla: Paletschek, Jonas. “A Paradigm to Investigate Social Signals of Understanding
    and Their Susceptibility to Stress.” <i>12th International Conference on Affective
    Computing &#38; Intelligent Interaction</i>, IEEE, 2024, doi:<a href="https://doi.org/10.1109/ACII63134.2024.00040">10.1109/ACII63134.2024.00040</a>.
  short: 'J. Paletschek, in: 12th International Conference on Affective Computing
    &#38; Intelligent Interaction, IEEE, 2024.'
conference:
  end_date: 2024-09-18
  location: Glasgow
  name: 12th International Conference on Affective Computing & Intelligent Interaction
  start_date: 2024-09-15
date_created: 2025-09-15T11:24:56Z
date_updated: 2025-09-16T07:57:53Z
ddc:
- '150'
department:
- _id: '660'
doi: 10.1109/ACII63134.2024.00040
file:
- access_level: closed
  content_type: application/pdf
  creator: paletsch
  date_created: 2025-09-15T11:18:01Z
  date_updated: 2025-09-15T11:18:01Z
  file_id: '61274'
  file_name: ACII2024_Camera_Ready.pdf
  file_size: 8807478
  relation: main_file
  success: 1
file_date_updated: 2025-09-15T11:18:01Z
has_accepted_license: '1'
keyword:
- Understanding
- Nonverbal Social Signals
- Stress Induction
- Explanation
- Machine Learning Bias
language:
- iso: eng
project:
- _id: '1200'
  name: TRR 318 - Teilprojekt A6 - Inklusive Ko-Konstruktion sozialer Signale des
    Verstehens
publication: 12th International Conference on Affective Computing & Intelligent Interaction
publication_status: published
publisher: IEEE
status: public
title: A Paradigm to Investigate Social Signals of Understanding and Their Susceptibility
  to Stress
type: conference
user_id: '98941'
year: '2024'
...
---
_id: '51368'
abstract:
- lang: eng
  text: Dealing with opaque algorithms, the frequent overlap between transparency
    and explainability produces seemingly unsolvable dilemmas, such as the much-discussed
    trade-off between model performance and model transparency. Referring to Niklas
    Luhmann's notion of communication, the paper argues that explainability does not
    necessarily require transparency and proposes an alternative approach. Explanations
    as communicative processes do not imply any disclosure of thoughts or neural processes,
    but only reformulations that provide the partners with additional elements and
    enable them to understand (from their perspective) what has been done and why.
    Recent computational approaches aiming at post-hoc explainability reproduce what
    happens in communication, producing explanations of the working of algorithms
    that can be different from the processes of the algorithms.
author:
- first_name: Elena
  full_name: Esposito, Elena
  last_name: Esposito
citation:
  ama: Esposito E. Does Explainability Require Transparency? <i>Sociologica</i>. 2023;16(3):17-27.
    doi:<a href="https://doi.org/10.6092/ISSN.1971-8853/15804">10.6092/ISSN.1971-8853/15804</a>
  apa: Esposito, E. (2023). Does Explainability Require Transparency? <i>Sociologica</i>,
    <i>16</i>(3), 17–27. <a href="https://doi.org/10.6092/ISSN.1971-8853/15804">https://doi.org/10.6092/ISSN.1971-8853/15804</a>
  bibtex: '@article{Esposito_2023, title={Does Explainability Require Transparency?},
    volume={16}, DOI={<a href="https://doi.org/10.6092/ISSN.1971-8853/15804">10.6092/ISSN.1971-8853/15804</a>},
    number={3}, journal={Sociologica}, author={Esposito, Elena}, year={2023}, pages={17–27}
    }'
  chicago: 'Esposito, Elena. “Does Explainability Require Transparency?” <i>Sociologica</i>
    16, no. 3 (2023): 17–27. <a href="https://doi.org/10.6092/ISSN.1971-8853/15804">https://doi.org/10.6092/ISSN.1971-8853/15804</a>.'
  ieee: 'E. Esposito, “Does Explainability Require Transparency?,” <i>Sociologica</i>,
    vol. 16, no. 3, pp. 17–27, 2023, doi: <a href="https://doi.org/10.6092/ISSN.1971-8853/15804">10.6092/ISSN.1971-8853/15804</a>.'
  mla: Esposito, Elena. “Does Explainability Require Transparency?” <i>Sociologica</i>,
    vol. 16, no. 3, 2023, pp. 17–27, doi:<a href="https://doi.org/10.6092/ISSN.1971-8853/15804">10.6092/ISSN.1971-8853/15804</a>.
  short: E. Esposito, Sociologica 16 (2023) 17–27.
date_created: 2024-02-18T10:16:43Z
date_updated: 2024-02-26T08:46:26Z
department:
- _id: '660'
doi: 10.6092/ISSN.1971-8853/15804
intvolume: '        16'
issue: '3'
keyword:
- Explainable AI
- Transparency
- Explanation
- Communication
- Sociological systems theory
language:
- iso: eng
page: 17-27
project:
- _id: '121'
  grant_number: '438445824'
  name: 'TRR 318 - B01: TRR 318 - Ein dialogbasierter Ansatz zur Erklärung von Modellen
    des maschinellen Lernens (Teilprojekt B01)'
publication: Sociologica
status: public
title: Does Explainability Require Transparency?
type: journal_article
user_id: '54779'
volume: 16
year: '2023'
...
---
_id: '51369'
abstract:
- lang: eng
  text: This short introduction presents the symposium ‘Explaining Machines’. It locates
    the debate about Explainable AI in the history of reflection on AI and
    outlines the issues discussed in the contributions.
author:
- first_name: Elena
  full_name: Esposito, Elena
  last_name: Esposito
citation:
  ama: 'Esposito E. Explaining Machines: Social Management of Incomprehensible Algorithms.
    Introduction. <i>Sociologica</i>. 2023;16(3):1-4. doi:<a href="https://doi.org/10.6092/ISSN.1971-8853/16265">10.6092/ISSN.1971-8853/16265</a>'
  apa: 'Esposito, E. (2023). Explaining Machines: Social Management of Incomprehensible
    Algorithms. Introduction. <i>Sociologica</i>, <i>16</i>(3), 1–4. <a href="https://doi.org/10.6092/ISSN.1971-8853/16265">https://doi.org/10.6092/ISSN.1971-8853/16265</a>'
  bibtex: '@article{Esposito_2023, title={Explaining Machines: Social Management of
    Incomprehensible Algorithms. Introduction}, volume={16}, DOI={<a href="https://doi.org/10.6092/ISSN.1971-8853/16265">10.6092/ISSN.1971-8853/16265</a>},
    number={3}, journal={Sociologica}, author={Esposito, Elena}, year={2023}, pages={1–4}
    }'
  chicago: 'Esposito, Elena. “Explaining Machines: Social Management of Incomprehensible
    Algorithms. Introduction.” <i>Sociologica</i> 16, no. 3 (2023): 1–4. <a href="https://doi.org/10.6092/ISSN.1971-8853/16265">https://doi.org/10.6092/ISSN.1971-8853/16265</a>.'
  ieee: 'E. Esposito, “Explaining Machines: Social Management of Incomprehensible
    Algorithms. Introduction,” <i>Sociologica</i>, vol. 16, no. 3, pp. 1–4, 2023,
    doi: <a href="https://doi.org/10.6092/ISSN.1971-8853/16265">10.6092/ISSN.1971-8853/16265</a>.'
  mla: 'Esposito, Elena. “Explaining Machines: Social Management of Incomprehensible
    Algorithms. Introduction.” <i>Sociologica</i>, vol. 16, no. 3, 2023, pp. 1–4,
    doi:<a href="https://doi.org/10.6092/ISSN.1971-8853/16265">10.6092/ISSN.1971-8853/16265</a>.'
  short: E. Esposito, Sociologica 16 (2023) 1–4.
date_created: 2024-02-18T10:23:23Z
date_updated: 2024-02-26T08:45:56Z
department:
- _id: '660'
doi: 10.6092/ISSN.1971-8853/16265
intvolume: '        16'
issue: '3'
keyword:
- Explainable AI
- Inexplicability
- Transparency
- Explanation
- Opacity
- Contestability
language:
- iso: eng
page: 1-4
project:
- _id: '121'
  grant_number: '438445824'
  name: 'TRR 318 - B01: TRR 318 - Ein dialogbasierter Ansatz zur Erklärung von Modellen
    des maschinellen Lernens (Teilprojekt B01)'
publication: Sociologica
status: public
title: 'Explaining Machines: Social Management of Incomprehensible Algorithms. Introduction'
type: journal_article
user_id: '54779'
volume: 16
year: '2023'
...
