---
_id: '51372'
abstract:
- lang: eng
  text: Machine learning is frequently used in affective computing, but presents challenges
    due to the opacity of state-of-the-art machine learning methods. Because of the impact
    affective machine learning systems may have on an individual's life, it is important
    that models be made transparent to detect and mitigate biased decision making.
    In this regard, affective machine learning could benefit from the recent advancements
    in explainable artificial intelligence (XAI) research. We perform a structured
    literature review to examine the use of interpretability in the context of affective
    machine learning. We focus on studies using audio, visual, or audiovisual data
    for model training and identify 29 research articles. Our findings show an emergence
    of the use of interpretability methods in the last five years. However, their
    use is currently limited in the range of methods used, the depth of evaluations,
    and the consideration of use-cases. We outline the main gaps in the research and
    provide recommendations for researchers that aim to implement interpretable methods
    for affective machine learning.
author:
- first_name: David
  full_name: Johnson, David
  last_name: Johnson
- first_name: Olya
  full_name: Hakobyan, Olya
  last_name: Hakobyan
- first_name: Hanna
  full_name: Drimalla, Hanna
  last_name: Drimalla
citation:
  ama: 'Johnson D, Hakobyan O, Drimalla H. Towards Interpretability in Audio and Visual
    Affective Machine Learning: A Review. Published online 2023.'
  apa: 'Johnson, D., Hakobyan, O., &#38; Drimalla, H. (2023). <i>Towards Interpretability
    in Audio and Visual Affective Machine Learning: A Review</i>.'
  bibtex: '@article{Johnson_Hakobyan_Drimalla_2023, title={Towards Interpretability
    in Audio and Visual Affective Machine Learning: A Review}, author={Johnson, David and
    Hakobyan, Olya and Drimalla, Hanna}, year={2023} }'
  chicago: 'Johnson, David, Olya Hakobyan, and Hanna Drimalla. “Towards Interpretability
    in Audio and Visual Affective Machine Learning: A Review,” 2023.'
  ieee: 'D. Johnson, O. Hakobyan, and H. Drimalla, “Towards Interpretability in Audio
    and Visual Affective Machine Learning: A Review.” 2023.'
  mla: 'Johnson, David, et al. <i>Towards Interpretability in Audio and Visual Affective
    Machine Learning: A Review</i>. 2023.'
  short: D. Johnson, O. Hakobyan, H. Drimalla, (2023).
date_created: 2024-02-18T10:52:36Z
date_updated: 2024-02-26T08:43:01Z
department:
- _id: '660'
language:
- iso: eng
project:
- _id: '110'
  name: 'TRR 318 - A: TRR 318 - Project Area A'
status: public
title: 'Towards Interpretability in Audio and Visual Affective Machine Learning: A
  Review'
type: preprint
user_id: '54779'
year: '2023'
...
---
_id: '51371'
abstract:
- lang: eng
  text: In this paper, we investigate the effect of distractions and hesitations
    as a scaffolding strategy. Recent research points to the potential beneficial
    effects of a speaker’s hesitations on the listeners’ comprehension of utterances,
    although results from studies on this issue indicate that humans do not make strategic
    use of them. The role of hesitations and their communicative function in human-human
    interaction is a much-discussed topic in current research. To better understand
    the underlying cognitive processes, we developed a human–robot interaction (HRI)
    setup that allows the measurement of the electroencephalogram (EEG) signals of
    a human participant while interacting with a robot. We thereby address the research
    question of whether we find effects on single-trial EEG based on the distraction
    and the corresponding robot’s hesitation scaffolding strategy. To carry out the
    experiments, we leverage our LabLinking method, which enables interdisciplinary
    joint research between remote labs. This study could not have been conducted without
    LabLinking, as the two involved labs needed to combine their individual expertise
    and equipment to achieve the goal together. The results of our study indicate
    that the EEG correlates in the distracted condition are different from the baseline
    condition without distractions. Furthermore, we could differentiate the EEG correlates
    of distraction with and without a hesitation scaffolding strategy. This proof-of-concept
    study shows that LabLinking makes it possible to conduct collaborative HRI studies
    in remote laboratories and lays the first foundation for more in-depth research
    into robotic scaffolding strategies.
article_number: '37'
author:
- first_name: Birte
  full_name: Richter, Birte
  last_name: Richter
- first_name: Felix
  full_name: Putze, Felix
  last_name: Putze
- first_name: Gabriel
  full_name: Ivucic, Gabriel
  last_name: Ivucic
- first_name: Mara
  full_name: Brandt, Mara
  last_name: Brandt
- first_name: Christian
  full_name: Schütze, Christian
  last_name: Schütze
- first_name: Rafael
  full_name: Reisenhofer, Rafael
  last_name: Reisenhofer
- first_name: Britta
  full_name: Wrede, Britta
  last_name: Wrede
- first_name: Tanja
  full_name: Schultz, Tanja
  last_name: Schultz
citation:
  ama: 'Richter B, Putze F, Ivucic G, et al. EEG Correlates of Distractions and Hesitations
    in Human–Robot Interaction: A LabLinking Pilot Study. <i>Multimodal Technologies
    and Interaction</i>. 2023;7(4). doi:<a href="https://doi.org/10.3390/mti7040037">10.3390/mti7040037</a>'
  apa: 'Richter, B., Putze, F., Ivucic, G., Brandt, M., Schütze, C., Reisenhofer,
    R., Wrede, B., &#38; Schultz, T. (2023). EEG Correlates of Distractions and Hesitations
    in Human–Robot Interaction: A LabLinking Pilot Study. <i>Multimodal Technologies
    and Interaction</i>, <i>7</i>(4), Article 37. <a href="https://doi.org/10.3390/mti7040037">https://doi.org/10.3390/mti7040037</a>'
  bibtex: '@article{Richter_Putze_Ivucic_Brandt_Schütze_Reisenhofer_Wrede_Schultz_2023,
    title={EEG Correlates of Distractions and Hesitations in Human–Robot Interaction:
    A LabLinking Pilot Study}, volume={7}, DOI={<a href="https://doi.org/10.3390/mti7040037">10.3390/mti7040037</a>},
    number={4}, journal={Multimodal Technologies and Interaction}, publisher={MDPI
    AG}, author={Richter, Birte and Putze, Felix and Ivucic, Gabriel and Brandt, Mara
    and Schütze, Christian and Reisenhofer, Rafael and Wrede, Britta and Schultz,
    Tanja}, year={2023} }'
  chicago: 'Richter, Birte, Felix Putze, Gabriel Ivucic, Mara Brandt, Christian Schütze,
    Rafael Reisenhofer, Britta Wrede, and Tanja Schultz. “EEG Correlates of Distractions
    and Hesitations in Human–Robot Interaction: A LabLinking Pilot Study.” <i>Multimodal
    Technologies and Interaction</i> 7, no. 4 (2023). <a href="https://doi.org/10.3390/mti7040037">https://doi.org/10.3390/mti7040037</a>.'
  ieee: 'B. Richter <i>et al.</i>, “EEG Correlates of Distractions and Hesitations
    in Human–Robot Interaction: A LabLinking Pilot Study,” <i>Multimodal Technologies
    and Interaction</i>, vol. 7, no. 4, Art. no. 37, 2023, doi: <a href="https://doi.org/10.3390/mti7040037">10.3390/mti7040037</a>.'
  mla: 'Richter, Birte, et al. “EEG Correlates of Distractions and Hesitations in
    Human–Robot Interaction: A LabLinking Pilot Study.” <i>Multimodal Technologies
    and Interaction</i>, vol. 7, no. 4, 37, MDPI AG, 2023, doi:<a href="https://doi.org/10.3390/mti7040037">10.3390/mti7040037</a>.'
  short: B. Richter, F. Putze, G. Ivucic, M. Brandt, C. Schütze, R. Reisenhofer, B.
    Wrede, T. Schultz, Multimodal Technologies and Interaction 7 (2023).
date_created: 2024-02-18T10:45:53Z
date_updated: 2024-02-26T08:44:32Z
department:
- _id: '660'
doi: 10.3390/mti7040037
intvolume: '7'
issue: '4'
keyword:
- Computer Networks and Communications
- Computer Science Applications
- Human-Computer Interaction
- Neuroscience (miscellaneous)
language:
- iso: eng
project:
- _id: '113'
  name: 'TRR 318 - A3: TRR 318 - Subproject A3'
- _id: '115'
  grant_number: '438445824'
  name: 'TRR 318 - A05: TRR 318 - Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog
    (Teilprojekt A05)'
publication: Multimodal Technologies and Interaction
publication_identifier:
  issn:
  - 2414-4088
publication_status: published
publisher: MDPI AG
status: public
title: 'EEG Correlates of Distractions and Hesitations in Human–Robot Interaction:
  A LabLinking Pilot Study'
type: journal_article
user_id: '54779'
volume: 7
year: '2023'
...
---
_id: '51370'
author:
- first_name: Leonie
  full_name: Dyck, Leonie
  last_name: Dyck
- first_name: Helen
  full_name: Beierling, Helen
  last_name: Beierling
- first_name: Robin
  full_name: Helmert, Robin
  last_name: Helmert
- first_name: Anna-Lisa
  full_name: Vollmer, Anna-Lisa
  last_name: Vollmer
citation:
  ama: 'Dyck L, Beierling H, Helmert R, Vollmer A-L. Technical Transparency for Robot
    Navigation Through AR Visualizations. In: <i>Companion of the 2023 ACM/IEEE International
    Conference on Human-Robot Interaction</i>. ACM; 2023:720-724. doi:<a href="https://doi.org/10.1145/3568294.3580181">10.1145/3568294.3580181</a>'
  apa: Dyck, L., Beierling, H., Helmert, R., &#38; Vollmer, A.-L. (2023). Technical
    Transparency for Robot Navigation Through AR Visualizations. <i>Companion of the
    2023 ACM/IEEE International Conference on Human-Robot Interaction</i>, 720–724.
    <a href="https://doi.org/10.1145/3568294.3580181">https://doi.org/10.1145/3568294.3580181</a>
  bibtex: '@inproceedings{Dyck_Beierling_Helmert_Vollmer_2023, title={Technical Transparency
    for Robot Navigation Through AR Visualizations}, DOI={<a href="https://doi.org/10.1145/3568294.3580181">10.1145/3568294.3580181</a>},
    booktitle={Companion of the 2023 ACM/IEEE International Conference on Human-Robot
    Interaction}, publisher={ACM}, author={Dyck, Leonie and Beierling, Helen and Helmert,
    Robin and Vollmer, Anna-Lisa}, year={2023}, pages={720–724} }'
  chicago: Dyck, Leonie, Helen Beierling, Robin Helmert, and Anna-Lisa Vollmer. “Technical
    Transparency for Robot Navigation Through AR Visualizations.” In <i>Companion
    of the 2023 ACM/IEEE International Conference on Human-Robot Interaction</i>,
    720–24. ACM, 2023. <a href="https://doi.org/10.1145/3568294.3580181">https://doi.org/10.1145/3568294.3580181</a>.
  ieee: 'L. Dyck, H. Beierling, R. Helmert, and A.-L. Vollmer, “Technical Transparency
    for Robot Navigation Through AR Visualizations,” in <i>Companion of the 2023 ACM/IEEE
    International Conference on Human-Robot Interaction</i>, Stockholm, 2023, pp.
    720–724, doi: <a href="https://doi.org/10.1145/3568294.3580181">10.1145/3568294.3580181</a>.'
  mla: Dyck, Leonie, et al. “Technical Transparency for Robot Navigation Through AR
    Visualizations.” <i>Companion of the 2023 ACM/IEEE International Conference on
    Human-Robot Interaction</i>, ACM, 2023, pp. 720–24, doi:<a href="https://doi.org/10.1145/3568294.3580181">10.1145/3568294.3580181</a>.
  short: 'L. Dyck, H. Beierling, R. Helmert, A.-L. Vollmer, in: Companion of the 2023
    ACM/IEEE International Conference on Human-Robot Interaction, ACM, 2023, pp. 720–724.'
conference:
  end_date: 2023-03-16
  location: Stockholm
  name: 'HRI ''23: ACM/IEEE International Conference on Human-Robot Interaction'
  start_date: 2023-03-13
date_created: 2024-02-18T10:30:36Z
date_updated: 2024-02-26T08:45:06Z
department:
- _id: '660'
doi: 10.1145/3568294.3580181
language:
- iso: eng
page: 720-724
project:
- _id: '123'
  name: 'TRR 318 - B5: TRR 318 - Subproject B5'
publication: Companion of the 2023 ACM/IEEE International Conference on Human-Robot
  Interaction
publication_status: published
publisher: ACM
status: public
title: Technical Transparency for Robot Navigation Through AR Visualizations
type: conference
user_id: '54779'
year: '2023'
...
---
_id: '51368'
abstract:
- lang: eng
  text: Dealing with opaque algorithms, the frequent overlap between transparency
    and explainability produces seemingly unsolvable dilemmas, such as the much-discussed
    trade-off between model performance and model transparency. Referring to Niklas
    Luhmann's notion of communication, the paper argues that explainability does not
    necessarily require transparency and proposes an alternative approach. Explanations
    as communicative processes do not imply any disclosure of thoughts or neural processes,
    but only reformulations that provide the partners with additional elements and
    enable them to understand (from their perspective) what has been done and why.
    Recent computational approaches aiming at post-hoc explainability reproduce what
    happens in communication, producing explanations of the working of algorithms
    that can be different from the processes of the algorithms.
author:
- first_name: Elena
  full_name: Esposito, Elena
  last_name: Esposito
citation:
  ama: Esposito E. Does Explainability Require Transparency? <i>Sociologica</i>. 2023;16(3):17-27.
    doi:<a href="https://doi.org/10.6092/ISSN.1971-8853/15804">10.6092/ISSN.1971-8853/15804</a>
  apa: Esposito, E. (2023). Does Explainability Require Transparency? <i>Sociologica</i>,
    <i>16</i>(3), 17–27. <a href="https://doi.org/10.6092/ISSN.1971-8853/15804">https://doi.org/10.6092/ISSN.1971-8853/15804</a>
  bibtex: '@article{Esposito_2023, title={Does Explainability Require Transparency?},
    volume={16}, DOI={<a href="https://doi.org/10.6092/ISSN.1971-8853/15804">10.6092/ISSN.1971-8853/15804</a>},
    number={3}, journal={Sociologica}, author={Esposito, Elena}, year={2023}, pages={17–27}
    }'
  chicago: 'Esposito, Elena. “Does Explainability Require Transparency?” <i>Sociologica</i>
    16, no. 3 (2023): 17–27. <a href="https://doi.org/10.6092/ISSN.1971-8853/15804">https://doi.org/10.6092/ISSN.1971-8853/15804</a>.'
  ieee: 'E. Esposito, “Does Explainability Require Transparency?,” <i>Sociologica</i>,
    vol. 16, no. 3, pp. 17–27, 2023, doi: <a href="https://doi.org/10.6092/ISSN.1971-8853/15804">10.6092/ISSN.1971-8853/15804</a>.'
  mla: Esposito, Elena. “Does Explainability Require Transparency?” <i>Sociologica</i>,
    vol. 16, no. 3, 2023, pp. 17–27, doi:<a href="https://doi.org/10.6092/ISSN.1971-8853/15804">10.6092/ISSN.1971-8853/15804</a>.
  short: E. Esposito, Sociologica 16 (2023) 17–27.
date_created: 2024-02-18T10:16:43Z
date_updated: 2024-02-26T08:46:26Z
department:
- _id: '660'
doi: 10.6092/ISSN.1971-8853/15804
intvolume: '16'
issue: '3'
keyword:
- Explainable AI
- Transparency
- Explanation
- Communication
- Sociological systems theory
language:
- iso: eng
page: 17-27
project:
- _id: '121'
  grant_number: '438445824'
  name: 'TRR 318 - B01: TRR 318 - Ein dialogbasierter Ansatz zur Erklärung von Modellen
    des maschinellen Lernens (Teilprojekt B01)'
publication: Sociologica
status: public
title: Does Explainability Require Transparency?
type: journal_article
user_id: '54779'
volume: 16
year: '2023'
...
---
_id: '51369'
abstract:
- lang: eng
  text: This short introduction presents the symposium ‘Explaining Machines’. It locates
    the debate about Explainable AI in the history of the reflection about AI and
    outlines the issues discussed in the contributions.
author:
- first_name: Elena
  full_name: Esposito, Elena
  last_name: Esposito
citation:
  ama: 'Esposito E. Explaining Machines: Social Management of Incomprehensible Algorithms.
    Introduction. <i>Sociologica</i>. 2023;16(3):1-4. doi:<a href="https://doi.org/10.6092/ISSN.1971-8853/16265">10.6092/ISSN.1971-8853/16265</a>'
  apa: 'Esposito, E. (2023). Explaining Machines: Social Management of Incomprehensible
    Algorithms. Introduction. <i>Sociologica</i>, <i>16</i>(3), 1–4. <a href="https://doi.org/10.6092/ISSN.1971-8853/16265">https://doi.org/10.6092/ISSN.1971-8853/16265</a>'
  bibtex: '@article{Esposito_2023, title={Explaining Machines: Social Management of
    Incomprehensible Algorithms. Introduction}, volume={16}, DOI={<a href="https://doi.org/10.6092/ISSN.1971-8853/16265">10.6092/ISSN.1971-8853/16265</a>},
    number={3}, journal={Sociologica}, author={Esposito, Elena}, year={2023}, pages={1–4}
    }'
  chicago: 'Esposito, Elena. “Explaining Machines: Social Management of Incomprehensible
    Algorithms. Introduction.” <i>Sociologica</i> 16, no. 3 (2023): 1–4. <a href="https://doi.org/10.6092/ISSN.1971-8853/16265">https://doi.org/10.6092/ISSN.1971-8853/16265</a>.'
  ieee: 'E. Esposito, “Explaining Machines: Social Management of Incomprehensible
    Algorithms. Introduction,” <i>Sociologica</i>, vol. 16, no. 3, pp. 1–4, 2023,
    doi: <a href="https://doi.org/10.6092/ISSN.1971-8853/16265">10.6092/ISSN.1971-8853/16265</a>.'
  mla: 'Esposito, Elena. “Explaining Machines: Social Management of Incomprehensible
    Algorithms. Introduction.” <i>Sociologica</i>, vol. 16, no. 3, 2023, pp. 1–4,
    doi:<a href="https://doi.org/10.6092/ISSN.1971-8853/16265">10.6092/ISSN.1971-8853/16265</a>.'
  short: E. Esposito, Sociologica 16 (2023) 1–4.
date_created: 2024-02-18T10:23:23Z
date_updated: 2024-02-26T08:45:56Z
department:
- _id: '660'
doi: 10.6092/ISSN.1971-8853/16265
intvolume: '16'
issue: '3'
keyword:
- Explainable AI
- Inexplicability
- Transparency
- Explanation
- Opacity
- Contestability
language:
- iso: eng
page: 1-4
project:
- _id: '121'
  grant_number: '438445824'
  name: 'TRR 318 - B01: TRR 318 - Ein dialogbasierter Ansatz zur Erklärung von Modellen
    des maschinellen Lernens (Teilprojekt B01)'
publication: Sociologica
status: public
title: 'Explaining Machines: Social Management of Incomprehensible Algorithms. Introduction'
type: journal_article
user_id: '54779'
volume: 16
year: '2023'
...
---
_id: '44849'
author:
- first_name: Frederik
  full_name: Rautenberg, Frederik
  id: '72602'
  last_name: Rautenberg
- first_name: Michael
  full_name: Kuhlmann, Michael
  id: '49871'
  last_name: Kuhlmann
- first_name: Janek
  full_name: Ebbers, Janek
  id: '34851'
  last_name: Ebbers
- first_name: Jana
  full_name: Wiechmann, Jana
  last_name: Wiechmann
- first_name: Fritz
  full_name: Seebauer, Fritz
  last_name: Seebauer
- first_name: Petra
  full_name: Wagner, Petra
  last_name: Wagner
- first_name: Reinhold
  full_name: Haeb-Umbach, Reinhold
  id: '242'
  last_name: Haeb-Umbach
citation:
  ama: 'Rautenberg F, Kuhlmann M, Ebbers J, et al. Speech Disentanglement for Analysis
    and Modification of Acoustic and Perceptual Speaker Characteristics. In: <i>Fortschritte
    Der Akustik - DAGA 2023</i>. 2023:1409-1412.'
  apa: Rautenberg, F., Kuhlmann, M., Ebbers, J., Wiechmann, J., Seebauer, F., Wagner,
    P., &#38; Haeb-Umbach, R. (2023). Speech Disentanglement for Analysis and Modification
    of Acoustic and Perceptual Speaker Characteristics. <i>Fortschritte Der Akustik
    - DAGA 2023</i>, 1409–1412.
  bibtex: '@inproceedings{Rautenberg_Kuhlmann_Ebbers_Wiechmann_Seebauer_Wagner_Haeb-Umbach_2023,
    title={Speech Disentanglement for Analysis and Modification of Acoustic and Perceptual
    Speaker Characteristics}, booktitle={Fortschritte der Akustik - DAGA 2023}, author={Rautenberg,
    Frederik and Kuhlmann, Michael and Ebbers, Janek and Wiechmann, Jana and Seebauer,
    Fritz and Wagner, Petra and Haeb-Umbach, Reinhold}, year={2023}, pages={1409–1412}
    }'
  chicago: Rautenberg, Frederik, Michael Kuhlmann, Janek Ebbers, Jana Wiechmann, Fritz
    Seebauer, Petra Wagner, and Reinhold Haeb-Umbach. “Speech Disentanglement for
    Analysis and Modification of Acoustic and Perceptual Speaker Characteristics.”
    In <i>Fortschritte Der Akustik - DAGA 2023</i>, 1409–12, 2023.
  ieee: F. Rautenberg <i>et al.</i>, “Speech Disentanglement for Analysis and Modification
    of Acoustic and Perceptual Speaker Characteristics,” in <i>Fortschritte der Akustik
    - DAGA 2023</i>, Hamburg, 2023, pp. 1409–1412.
  mla: Rautenberg, Frederik, et al. “Speech Disentanglement for Analysis and Modification
    of Acoustic and Perceptual Speaker Characteristics.” <i>Fortschritte Der Akustik
    - DAGA 2023</i>, 2023, pp. 1409–12.
  short: 'F. Rautenberg, M. Kuhlmann, J. Ebbers, J. Wiechmann, F. Seebauer, P. Wagner,
    R. Haeb-Umbach, in: Fortschritte Der Akustik - DAGA 2023, 2023, pp. 1409–1412.'
conference:
  end_date: 2023-03-09
  location: Hamburg
  name: DAGA 2023 - 49. Jahrestagung für Akustik
  start_date: 2023-03-06
date_created: 2023-05-15T08:48:54Z
date_updated: 2024-02-29T17:05:16Z
ddc:
- '000'
department:
- _id: '54'
- _id: '660'
file:
- access_level: open_access
  content_type: application/pdf
  creator: frra
  date_created: 2024-02-29T16:15:12Z
  date_updated: 2024-02-29T16:15:12Z
  file_id: '52221'
  file_name: Daga_2023_Rautenberg_Paper.pdf
  file_size: 289493
  relation: main_file
file_date_updated: 2024-02-29T16:15:12Z
has_accepted_license: '1'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://pub.dega-akustik.de/DAGA_2023/data/articles/000105.pdf
oa: '1'
page: 1409-1412
project:
- _id: '129'
  grant_number: '438445824'
  name: 'TRR 318 - C06: TRR 318 - Technisch unterstütztes Erklären von Stimmcharakteristika
    (Teilprojekt C06)'
publication: Fortschritte der Akustik - DAGA 2023
publication_status: published
status: public
title: Speech Disentanglement for Analysis and Modification of Acoustic and Perceptual
  Speaker Characteristics
type: conference
user_id: '72602'
year: '2023'
...
---
_id: '54909'
author:
- first_name: Jonas Manuel
  full_name: Hanselle, Jonas Manuel
  id: '43980'
  last_name: Hanselle
  orcid: 0000-0002-1231-4985
- first_name: Johannes
  full_name: Fürnkranz, Johannes
  last_name: Fürnkranz
- first_name: Eyke
  full_name: Hüllermeier, Eyke
  id: '48129'
  last_name: Hüllermeier
citation:
  ama: 'Hanselle JM, Fürnkranz J, Hüllermeier E. Probabilistic Scoring Lists for Interpretable
    Machine Learning. In: <i>Discovery Science</i>. Springer Nature Switzerland; 2023.
    doi:<a href="https://doi.org/10.1007/978-3-031-45275-8_13">10.1007/978-3-031-45275-8_13</a>'
  apa: Hanselle, J. M., Fürnkranz, J., &#38; Hüllermeier, E. (2023). Probabilistic
    Scoring Lists for Interpretable Machine Learning. In <i>Discovery Science</i>.
    Springer Nature Switzerland. <a href="https://doi.org/10.1007/978-3-031-45275-8_13">https://doi.org/10.1007/978-3-031-45275-8_13</a>
  bibtex: '@inbook{Hanselle_Fürnkranz_Hüllermeier_2023, place={Cham}, title={Probabilistic
    Scoring Lists for Interpretable Machine Learning}, DOI={<a href="https://doi.org/10.1007/978-3-031-45275-8_13">10.1007/978-3-031-45275-8_13</a>},
    booktitle={Discovery Science}, publisher={Springer Nature Switzerland}, author={Hanselle,
    Jonas Manuel and Fürnkranz, Johannes and Hüllermeier, Eyke}, year={2023} }'
  chicago: 'Hanselle, Jonas Manuel, Johannes Fürnkranz, and Eyke Hüllermeier. “Probabilistic
    Scoring Lists for Interpretable Machine Learning.” In <i>Discovery Science</i>.
    Cham: Springer Nature Switzerland, 2023. <a href="https://doi.org/10.1007/978-3-031-45275-8_13">https://doi.org/10.1007/978-3-031-45275-8_13</a>.'
  ieee: 'J. M. Hanselle, J. Fürnkranz, and E. Hüllermeier, “Probabilistic Scoring
    Lists for Interpretable Machine Learning,” in <i>Discovery Science</i>, Cham:
    Springer Nature Switzerland, 2023.'
  mla: Hanselle, Jonas Manuel, et al. “Probabilistic Scoring Lists for Interpretable
    Machine Learning.” <i>Discovery Science</i>, Springer Nature Switzerland, 2023,
    doi:<a href="https://doi.org/10.1007/978-3-031-45275-8_13">10.1007/978-3-031-45275-8_13</a>.
  short: 'J.M. Hanselle, J. Fürnkranz, E. Hüllermeier, in: Discovery Science, Springer
    Nature Switzerland, Cham, 2023.'
date_created: 2024-06-26T14:24:29Z
date_updated: 2024-06-26T14:25:50Z
department:
- _id: '660'
doi: 10.1007/978-3-031-45275-8_13
language:
- iso: eng
place: Cham
project:
- _id: '125'
  name: 'TRR 318 - C2: TRR 318 - Subproject C2'
publication: Discovery Science
publication_identifier:
  isbn:
  - '9783031452741'
  - '9783031452758'
  issn:
  - 0302-9743
  - 1611-3349
publication_status: published
publisher: Springer Nature Switzerland
status: public
title: Probabilistic Scoring Lists for Interpretable Machine Learning
type: book_chapter
user_id: '72497'
year: '2023'
...
---
_id: '55155'
author:
- first_name: Amelie
  full_name: Robrecht, Amelie
  id: '91982'
  last_name: Robrecht
  orcid: 0000-0001-5622-8248
- first_name: Stefan
  full_name: Kopp, Stefan
  last_name: Kopp
citation:
  ama: 'Robrecht A, Kopp S. SNAPE: A Sequential Non-Stationary Decision Process Model
    for Adaptive Explanation Generation. In: <i>Proceedings of the 15th International
    Conference on Agents and Artificial Intelligence</i>. SCITEPRESS - Science and
    Technology Publications; 2023. doi:<a href="https://doi.org/10.5220/0011671300003393">10.5220/0011671300003393</a>'
  apa: 'Robrecht, A., &#38; Kopp, S. (2023). SNAPE: A Sequential Non-Stationary Decision
    Process Model for Adaptive Explanation Generation. <i>Proceedings of the 15th
    International Conference on Agents and Artificial Intelligence</i>. <a href="https://doi.org/10.5220/0011671300003393">https://doi.org/10.5220/0011671300003393</a>'
  bibtex: '@inproceedings{Robrecht_Kopp_2023, title={SNAPE: A Sequential Non-Stationary
    Decision Process Model for Adaptive Explanation Generation}, DOI={<a href="https://doi.org/10.5220/0011671300003393">10.5220/0011671300003393</a>},
    booktitle={Proceedings of the 15th International Conference on Agents and Artificial
    Intelligence}, publisher={SCITEPRESS - Science and Technology Publications}, author={Robrecht,
    Amelie and Kopp, Stefan}, year={2023} }'
  chicago: 'Robrecht, Amelie, and Stefan Kopp. “SNAPE: A Sequential Non-Stationary
    Decision Process Model for Adaptive Explanation Generation.” In <i>Proceedings
    of the 15th International Conference on Agents and Artificial Intelligence</i>.
    SCITEPRESS - Science and Technology Publications, 2023. <a href="https://doi.org/10.5220/0011671300003393">https://doi.org/10.5220/0011671300003393</a>.'
  ieee: 'A. Robrecht and S. Kopp, “SNAPE: A Sequential Non-Stationary Decision Process
    Model for Adaptive Explanation Generation,” 2023, doi: <a href="https://doi.org/10.5220/0011671300003393">10.5220/0011671300003393</a>.'
  mla: 'Robrecht, Amelie, and Stefan Kopp. “SNAPE: A Sequential Non-Stationary Decision
    Process Model for Adaptive Explanation Generation.” <i>Proceedings of the 15th
    International Conference on Agents and Artificial Intelligence</i>, SCITEPRESS
    - Science and Technology Publications, 2023, doi:<a href="https://doi.org/10.5220/0011671300003393">10.5220/0011671300003393</a>.'
  short: 'A. Robrecht, S. Kopp, in: Proceedings of the 15th International Conference
    on Agents and Artificial Intelligence, SCITEPRESS - Science and Technology Publications,
    2023.'
date_created: 2024-07-10T11:05:25Z
date_updated: 2024-07-16T09:38:25Z
department:
- _id: '660'
doi: 10.5220/0011671300003393
language:
- iso: eng
project:
- _id: '111'
  grant_number: '438445824'
  name: 'TRR 318 - A01: TRR 318 - Adaptives Erklären (Teilprojekt A01)'
publication: Proceedings of the 15th International Conference on Agents and Artificial
  Intelligence
publication_status: published
publisher: SCITEPRESS - Science and Technology Publications
status: public
title: 'SNAPE: A Sequential Non-Stationary Decision Process Model for Adaptive Explanation
  Generation'
type: conference
user_id: '91982'
year: '2023'
...
---
_id: '55152'
author:
- first_name: Amelie
  full_name: Robrecht, Amelie
  id: '91982'
  last_name: Robrecht
  orcid: 0000-0001-5622-8248
- first_name: Markus
  full_name: Rothgänger, Markus
  last_name: Rothgänger
- first_name: Stefan
  full_name: Kopp, Stefan
  last_name: Kopp
citation:
  ama: 'Robrecht A, Rothgänger M, Kopp S. A Study on the Benefits and Drawbacks of
    Adaptivity in AI-generated Explanations. In: <i>Proceedings of the 23rd ACM International
    Conference on Intelligent Virtual Agents</i>. ACM; 2023. doi:<a href="https://doi.org/10.1145/3570945.3607339">10.1145/3570945.3607339</a>'
  apa: Robrecht, A., Rothgänger, M., &#38; Kopp, S. (2023). A Study on the Benefits
    and Drawbacks of Adaptivity in AI-generated Explanations. <i>Proceedings of the
    23rd ACM International Conference on Intelligent Virtual Agents</i>. <a href="https://doi.org/10.1145/3570945.3607339">https://doi.org/10.1145/3570945.3607339</a>
  bibtex: '@inproceedings{Robrecht_Rothgänger_Kopp_2023, title={A Study on the Benefits
    and Drawbacks of Adaptivity in AI-generated Explanations}, DOI={<a href="https://doi.org/10.1145/3570945.3607339">10.1145/3570945.3607339</a>},
    booktitle={Proceedings of the 23rd ACM International Conference on Intelligent
    Virtual Agents}, publisher={ACM}, author={Robrecht, Amelie and Rothgänger, Markus
    and Kopp, Stefan}, year={2023} }'
  chicago: Robrecht, Amelie, Markus Rothgänger, and Stefan Kopp. “A Study on the Benefits
    and Drawbacks of Adaptivity in AI-Generated Explanations.” In <i>Proceedings of
    the 23rd ACM International Conference on Intelligent Virtual Agents</i>. ACM,
    2023. <a href="https://doi.org/10.1145/3570945.3607339">https://doi.org/10.1145/3570945.3607339</a>.
  ieee: 'A. Robrecht, M. Rothgänger, and S. Kopp, “A Study on the Benefits and Drawbacks
    of Adaptivity in AI-generated Explanations,” 2023, doi: <a href="https://doi.org/10.1145/3570945.3607339">10.1145/3570945.3607339</a>.'
  mla: Robrecht, Amelie, et al. “A Study on the Benefits and Drawbacks of Adaptivity
    in AI-Generated Explanations.” <i>Proceedings of the 23rd ACM International Conference
    on Intelligent Virtual Agents</i>, ACM, 2023, doi:<a href="https://doi.org/10.1145/3570945.3607339">10.1145/3570945.3607339</a>.
  short: 'A. Robrecht, M. Rothgänger, S. Kopp, in: Proceedings of the 23rd ACM International
    Conference on Intelligent Virtual Agents, ACM, 2023.'
date_created: 2024-07-10T11:03:09Z
date_updated: 2024-07-16T09:38:13Z
department:
- _id: '660'
doi: 10.1145/3570945.3607339
language:
- iso: eng
project:
- _id: '111'
  grant_number: '438445824'
  name: 'TRR 318 - A01: TRR 318 - Adaptives Erklären (Teilprojekt A01)'
publication: Proceedings of the 23rd ACM International Conference on Intelligent Virtual
  Agents
publication_status: published
publisher: ACM
status: public
title: A Study on the Benefits and Drawbacks of Adaptivity in AI-generated Explanations
type: conference
user_id: '91982'
year: '2023'
...
---
_id: '55406'
abstract:
- lang: eng
  text: Metaphorical language, such as “spending time together”, projects meaning
    from a source domain (here, money) to a target domain (time). Thereby, it
    highlights certain aspects of the target domain, such as the effort behind the
    time investment. Highlighting aspects with metaphors (while hiding others) bridges
    the two domains and is the core of metaphorical meaning construction. For metaphor
    interpretation, linguistic theories stress that identifying the highlighted aspects
    is important for a better understanding of metaphors. However, metaphor research
    in NLP has not yet dealt with the phenomenon of highlighting. In this paper, we
    introduce the task of identifying the main aspect highlighted in a metaphorical
    sentence. Given the inherent interaction of source domains and highlighted aspects,
    we propose two multitask approaches - a joint learning approach and a continual
    learning approach - based on a finetuned contrastive learning model to jointly
    predict highlighted aspects and source domains. We further investigate whether
    (predicted) information about a source domain leads to better performance in predicting
    the highlighted aspects, and vice versa. Our experiments on an existing corpus
    suggest that, given information about one of the two, model accuracy in predicting
    the other improves notably over the single-task baselines, for both highlighted
    aspects and source domains.
author:
- first_name: Meghdut
  full_name: Sengupta, Meghdut
  id: '99459'
  last_name: Sengupta
- first_name: Milad
  full_name: Alshomary, Milad
  id: '73059'
  last_name: Alshomary
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
- first_name: Henning
  full_name: Wachsmuth, Henning
  id: '3900'
  last_name: Wachsmuth
citation:
  ama: 'Sengupta M, Alshomary M, Scharlau I, Wachsmuth H. Modeling Highlighting of
    Metaphors in Multitask Contrastive Learning Paradigms. In: Bouamor H, Pino J,
    Bali K, eds. <i>Findings of the Association for Computational Linguistics: EMNLP
    2023</i>. Association for Computational Linguistics; 2023:4636–4659. doi:<a href="https://doi.org/10.18653/v1/2023.findings-emnlp.308">10.18653/v1/2023.findings-emnlp.308</a>'
  apa: 'Sengupta, M., Alshomary, M., Scharlau, I., &#38; Wachsmuth, H. (2023). Modeling
    Highlighting of Metaphors in Multitask Contrastive Learning Paradigms. In H. Bouamor,
    J. Pino, &#38; K. Bali (Eds.), <i>Findings of the Association for Computational
    Linguistics: EMNLP 2023</i> (pp. 4636–4659). Association for Computational Linguistics.
    <a href="https://doi.org/10.18653/v1/2023.findings-emnlp.308">https://doi.org/10.18653/v1/2023.findings-emnlp.308</a>'
  bibtex: '@inproceedings{Sengupta_Alshomary_Scharlau_Wachsmuth_2023, place={Singapore},
    title={Modeling Highlighting of Metaphors in Multitask Contrastive Learning Paradigms},
    DOI={<a href="https://doi.org/10.18653/v1/2023.findings-emnlp.308">10.18653/v1/2023.findings-emnlp.308</a>},
    booktitle={Findings of the Association for Computational Linguistics: EMNLP 2023},
    publisher={Association for Computational Linguistics}, author={Sengupta, Meghdut
    and Alshomary, Milad and Scharlau, Ingrid and Wachsmuth, Henning}, editor={Bouamor,
    Houda and Pino, Juan and Bali, Kalika}, year={2023}, pages={4636–4659} }'
  chicago: 'Sengupta, Meghdut, Milad Alshomary, Ingrid Scharlau, and Henning Wachsmuth.
    “Modeling Highlighting of Metaphors in Multitask Contrastive Learning Paradigms.”
    In <i>Findings of the Association for Computational Linguistics: EMNLP 2023</i>,
    edited by Houda Bouamor, Juan Pino, and Kalika Bali, 4636–4659. Singapore: Association
    for Computational Linguistics, 2023. <a href="https://doi.org/10.18653/v1/2023.findings-emnlp.308">https://doi.org/10.18653/v1/2023.findings-emnlp.308</a>.'
  ieee: 'M. Sengupta, M. Alshomary, I. Scharlau, and H. Wachsmuth, “Modeling Highlighting
    of Metaphors in Multitask Contrastive Learning Paradigms,” in <i>Findings of the
    Association for Computational Linguistics: EMNLP 2023</i>, 2023, pp. 4636–4659,
    doi: <a href="https://doi.org/10.18653/v1/2023.findings-emnlp.308">10.18653/v1/2023.findings-emnlp.308</a>.'
  mla: 'Sengupta, Meghdut, et al. “Modeling Highlighting of Metaphors in Multitask
    Contrastive Learning Paradigms.” <i>Findings of the Association for Computational
    Linguistics: EMNLP 2023</i>, edited by Houda Bouamor et al., Association for Computational
    Linguistics, 2023, pp. 4636–4659, doi:<a href="https://doi.org/10.18653/v1/2023.findings-emnlp.308">10.18653/v1/2023.findings-emnlp.308</a>.'
  short: 'M. Sengupta, M. Alshomary, I. Scharlau, H. Wachsmuth, in: H. Bouamor, J.
    Pino, K. Bali (Eds.), Findings of the Association for Computational Linguistics:
    EMNLP 2023, Association for Computational Linguistics, Singapore, 2023, pp. 4636–4659.'
date_created: 2024-07-26T13:09:20Z
date_updated: 2024-07-26T13:19:53Z
department:
- _id: '600'
- _id: '660'
doi: 10.18653/v1/2023.findings-emnlp.308
editor:
- first_name: Houda
  full_name: Bouamor, Houda
  last_name: Bouamor
- first_name: Juan
  full_name: Pino, Juan
  last_name: Pino
- first_name: Kalika
  full_name: Bali, Kalika
  last_name: Bali
language:
- iso: eng
page: 4636–4659
place: Singapore
project:
- _id: '127'
  name: 'TRR 318 - C4: TRR 318 - Subproject C4 - Metaphern als Werkzeug des Erklärens'
publication: 'Findings of the Association for Computational Linguistics: EMNLP 2023'
publisher: Association for Computational Linguistics
status: public
title: Modeling Highlighting of Metaphors in Multitask Contrastive Learning Paradigms
type: conference
user_id: '3900'
year: '2023'
...
---
_id: '51767'
author:
- first_name: Fabian
  full_name: Beer, Fabian
  last_name: Beer
- first_name: Christian
  full_name: Schulz, Christian
  id: '72684'
  last_name: Schulz
citation:
  ama: 'Beer F, Schulz C. The Return of Black Box Theory in Explainable AI. In: <i>4S
    Conference (Society for the Social Studies of Science), Honolulu/Hawaii, November
    9</i>.'
  apa: Beer, F., &#38; Schulz, C. (n.d.). The Return of Black Box Theory in Explainable
    AI. <i>4S Conference (Society for the Social Studies of Science), Honolulu/Hawaii,
    November 9</i>.
  bibtex: '@inproceedings{Beer_Schulz, title={The Return of Black Box Theory in Explainable
    AI}, booktitle={4S Conference (Society for the Social Studies of Science), Honolulu/Hawaii,
    November 9}, author={Beer, Fabian and Schulz, Christian} }'
  chicago: Beer, Fabian, and Christian Schulz. “The Return of Black Box Theory in
    Explainable AI.” In <i>4S Conference (Society for the Social Studies of Science),
    Honolulu/Hawaii, November 9</i>, n.d.
  ieee: F. Beer and C. Schulz, “The Return of Black Box Theory in Explainable AI.”
  mla: Beer, Fabian, and Christian Schulz. “The Return of Black Box Theory in Explainable
    AI.” <i>4S Conference (Society for the Social Studies of Science), Honolulu/Hawaii,
    November 9</i>.
  short: 'F. Beer, C. Schulz, in: 4S Conference (Society for the Social Studies of
    Science), Honolulu/Hawaii, November 9, n.d.'
date_created: 2024-02-22T15:15:20Z
date_updated: 2024-08-14T06:15:04Z
department:
- _id: '660'
language:
- iso: eng
project:
- _id: '109'
  grant_number: '438445824'
  name: 'TRR 318: TRR 318 - Erklärbarkeit konstruieren'
publication: 4S Conference (Society for the Social Studies of Science), Honolulu/Hawaii,
  November 9
publication_status: unpublished
status: public
title: The Return of Black Box Theory in Explainable AI
type: conference
user_id: '72684'
year: '2023'
...
---
_id: '51766'
author:
- first_name: Christian
  full_name: Schulz, Christian
  id: '72684'
  last_name: Schulz
- first_name: Annedore
  full_name: Wilmes, Annedore
  last_name: Wilmes
citation:
  ama: Schulz C, Wilmes A. Vernacular Metaphors of AI.
  apa: Schulz, C., &#38; Wilmes, A. (n.d.). <i>Vernacular Metaphors of AI</i>.
  bibtex: '@inproceedings{Schulz_Wilmes, place={ICA Preconference Workshop “History
    of Digital Metaphors”, University of Toronto, May 25}, title={Vernacular Metaphors
    of AI}, author={Schulz, Christian and Wilmes, Annedore} }'
  chicago: Schulz, Christian, and Annedore Wilmes. “Vernacular Metaphors of AI.”
    ICA Preconference Workshop “History of Digital Metaphors”, University of Toronto,
    May 25, n.d.
  ieee: C. Schulz and A. Wilmes, “Vernacular Metaphors of AI.”
  mla: Schulz, Christian, and Annedore Wilmes. <i>Vernacular Metaphors of AI</i>.
  short: 'C. Schulz, A. Wilmes, in: ICA Preconference Workshop “History of Digital
    Metaphors”, University of Toronto, May 25, n.d.'
date_created: 2024-02-22T15:11:29Z
date_updated: 2024-08-14T06:04:55Z
department:
- _id: '660'
language:
- iso: eng
place: 'ICA Preconference Workshop "History of Digital Metaphors", University of Toronto,
  May 25'
project:
- _id: '122'
  name: 'TRR 318 - B3: TRR 318 - Subproject B3'
publication_status: unpublished
status: public
title: Vernacular Metaphors of AI
type: conference
user_id: '72684'
year: '2023'
...
---
_id: '46067'
abstract:
- lang: eng
  text: '<p>The study investigates two different ways of guiding the addressee of
    an explanation (the explainee) through action demonstration: contrastive and non-contrastive.
    Their effect was tested on attention to specific action elements (goal) as well
    as on event memory. In an eye-tracking experiment, participants were shown different
    motion videos that were either contrastive or non-contrastive with respect to
    the segments of movement presentation. Given that everyday action demonstration
    is often multimodal, the stimuli were created with respect to their visual and
    verbal presentation. For visual presentation, a video combined two movements in
    a contrastive (e.g., Up-motion following a Down-motion) or non-contrastive way
    (e.g., two Up-motions following each other). For verbal presentation, each video
    was combined with a sequence of instruction descriptions in the form of negative
    (i.e., contrastive) or assertive (i.e., non-contrastive) guidance. It was found
    that a) attention to the event goal increased for this condition in the later
    time window, and b) participants’ recall of the event was facilitated when a visually
    contrastive motion was combined with a verbal contrast.</p>'
author:
- first_name: Amit
  full_name: Singh, Amit
  id: '91018'
  last_name: Singh
  orcid: 0000-0002-7789-1521
- first_name: Katharina J.
  full_name: Rohlfing, Katharina J.
  id: '50352'
  last_name: Rohlfing
citation:
  ama: 'Singh A, Rohlfing KJ. Contrastiveness in the context of action demonstration:
    an eye-tracking study on its effects on action perception and action recall. In:
    <i>Proceedings of the Annual Meeting of the Cognitive Science Society 45 (45)</i>.
    Cognitive Science Society; 2023.'
  apa: 'Singh, A., &#38; Rohlfing, K. J. (2023). Contrastiveness in the context of
    action demonstration: an eye-tracking study on its effects on action perception
    and action recall. <i>Proceedings of the Annual Meeting of the Cognitive Science
    Society 45 (45)</i>. 45th Annual Conference of the Cognitive Science Society,
    Sydney.'
  bibtex: '@inproceedings{Singh_Rohlfing_2023, place={Sydney, Australia}, title={Contrastiveness
    in the context of action demonstration: an eye-tracking study on its effects on
    action perception and action recall}, booktitle={Proceedings of the Annual Meeting
    of the Cognitive Science Society 45 (45)}, publisher={Cognitive Science Society},
    author={Singh, Amit and Rohlfing, Katharina J.}, year={2023} }'
  chicago: 'Singh, Amit, and Katharina J. Rohlfing. “Contrastiveness in the Context
    of Action Demonstration: An Eye-Tracking Study on Its Effects on Action Perception
    and Action Recall.” In <i>Proceedings of the Annual Meeting of the Cognitive Science
    Society 45 (45)</i>. Sydney, Australia: Cognitive Science Society, 2023.'
  ieee: 'A. Singh and K. J. Rohlfing, “Contrastiveness in the context of action demonstration:
    an eye-tracking study on its effects on action perception and action recall,”
    presented at the 45th Annual Conference of the Cognitive Science Society, Sydney,
    2023.'
  mla: 'Singh, Amit, and Katharina J. Rohlfing. “Contrastiveness in the Context of
    Action Demonstration: An Eye-Tracking Study on Its Effects on Action Perception
    and Action Recall.” <i>Proceedings of the Annual Meeting of the Cognitive Science
    Society 45 (45)</i>, Cognitive Science Society, 2023.'
  short: 'A. Singh, K.J. Rohlfing, in: Proceedings of the Annual Meeting of the Cognitive
    Science Society 45 (45), Cognitive Science Society, Sydney, Australia, 2023.'
conference:
  location: Sydney
  name: 45th Annual Conference of the Cognitive Science Society
date_created: 2023-07-15T12:16:42Z
date_updated: 2023-09-27T13:51:42Z
department:
- _id: '749'
- _id: '660'
keyword:
- Attention
- negation
- contrastive guidance
- eye-movements
- action understanding
- event representation
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://escholarship.org/uc/item/2w94t4cv
oa: '1'
place: Sydney, Australia
popular_science: '1'
project:
- _id: '115'
  grant_number: '438445824'
  name: 'TRR 318 - A05: TRR 318 - Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog
    (Teilprojekt A05)'
publication: Proceedings of the Annual Meeting of the Cognitive Science Society 45
  (45)
publication_status: published
publisher: Cognitive Science Society
quality_controlled: '1'
status: public
title: 'Contrastiveness in the context of action demonstration: an eye-tracking study
  on its effects on action perception and action recall'
type: conference
user_id: '91018'
year: '2023'
...
---
_id: '56477'
abstract:
- lang: eng
  text: We describe a prototype of a Clinical Decision Support System (CDSS) that
    provides (counterfactual) explanations to support accurate medical diagnosis.
    The prototype is based on an inherently interpretable Bayesian network (BN). Our
    research aims to investigate which explanations are most useful for medical experts
    and whether co-constructing explanations can foster trust and acceptance of CDSS.
author:
- first_name: Felix
  full_name: Liedeker, Felix
  id: '93275'
  last_name: Liedeker
- first_name: Philipp
  full_name: Cimiano, Philipp
  last_name: Cimiano
citation:
  ama: 'Liedeker F, Cimiano P. A Prototype of an Interactive Clinical Decision Support
    System with Counterfactual Explanations. In: ; 2023.'
  apa: Liedeker, F., &#38; Cimiano, P. (2023). <i>A Prototype of an Interactive Clinical
    Decision Support System with Counterfactual Explanations</i>. xAI-2023 Late-breaking
    Work, Demos and Doctoral Consortium co-located with the 1st World Conference on
    eXplainable Artificial Intelligence (xAI-2023), Lisbon.
  bibtex: '@inproceedings{Liedeker_Cimiano_2023, title={A Prototype of an Interactive
    Clinical Decision Support System with Counterfactual Explanations}, author={Liedeker,
    Felix and Cimiano, Philipp}, year={2023} }'
  chicago: Liedeker, Felix, and Philipp Cimiano. “A Prototype of an Interactive Clinical
    Decision Support System with Counterfactual Explanations,” 2023.
  ieee: F. Liedeker and P. Cimiano, “A Prototype of an Interactive Clinical Decision
    Support System with Counterfactual Explanations,” presented at the xAI-2023 Late-breaking
    Work, Demos and Doctoral Consortium co-located with the 1st World Conference on
    eXplainable Artificial Intelligence (xAI-2023), Lisbon, 2023.
  mla: Liedeker, Felix, and Philipp Cimiano. <i>A Prototype of an Interactive Clinical
    Decision Support System with Counterfactual Explanations</i>. 2023.
  short: 'F. Liedeker, P. Cimiano, in: 2023.'
conference:
  end_date: 2023-07-28
  location: Lisbon
  name: xAI-2023 Late-breaking Work, Demos and Doctoral Consortium co-located with
    the 1st World Conference on eXplainable Artificial Intelligence (xAI-2023)
  start_date: 2023-07-26
date_created: 2024-10-09T14:50:09Z
date_updated: 2024-10-09T15:04:53Z
department:
- _id: '660'
keyword:
- Explainable AI
- Clinical decision support
- Bayesian network
- Counterfactual explanations
language:
- iso: eng
project:
- _id: '128'
  name: 'TRR 318 - C5: TRR 318 - Subproject C5'
status: public
title: A Prototype of an Interactive Clinical Decision Support System with Counterfactual
  Explanations
type: conference
user_id: '93275'
year: '2023'
...
---
_id: '56478'
author:
- first_name: Felix
  full_name: Liedeker, Felix
  id: '93275'
  last_name: Liedeker
- first_name: Philipp
  full_name: Cimiano, Philipp
  last_name: Cimiano
citation:
  ama: 'Liedeker F, Cimiano P. Dynamic Feature Selection in AI-based Diagnostic Decision
    Support for Epilepsy. In: ; 2023.'
  apa: Liedeker, F., &#38; Cimiano, P. (2023). <i>Dynamic Feature Selection in AI-based
    Diagnostic Decision Support for Epilepsy</i>. 1st International Conference on
    Artificial Intelligence in Epilepsy and Neurological Disorders, Breckenridge,
    CO, USA.
  bibtex: '@inproceedings{Liedeker_Cimiano_2023, title={Dynamic Feature Selection
    in AI-based Diagnostic Decision Support for Epilepsy}, author={Liedeker, Felix
    and Cimiano, Philipp}, year={2023} }'
  chicago: Liedeker, Felix, and Philipp Cimiano. “Dynamic Feature Selection in AI-Based
    Diagnostic Decision Support for Epilepsy,” 2023.
  ieee: F. Liedeker and P. Cimiano, “Dynamic Feature Selection in AI-based Diagnostic
    Decision Support for Epilepsy,” presented at the 1st International Conference
    on Artificial Intelligence in Epilepsy and Neurological Disorders, Breckenridge,
    CO, USA, 2023.
  mla: Liedeker, Felix, and Philipp Cimiano. <i>Dynamic Feature Selection in AI-Based
    Diagnostic Decision Support for Epilepsy</i>. 2023.
  short: 'F. Liedeker, P. Cimiano, in: 2023.'
conference:
  end_date: 2023-03-10
  location: Breckenridge, CO, USA
  name: 1st International Conference on Artificial Intelligence in Epilepsy and Neurological
    Disorders
  start_date: 2023-03-07
date_created: 2024-10-09T14:53:45Z
date_updated: 2024-10-09T15:06:47Z
department:
- _id: '660'
language:
- iso: eng
project:
- _id: '128'
  name: 'TRR 318 - C5: TRR 318 - Subproject C5'
status: public
title: Dynamic Feature Selection in AI-based Diagnostic Decision Support for Epilepsy
type: conference_abstract
user_id: '93275'
year: '2023'
...
---
_id: '56663'
abstract:
- lang: eng
  text: |-
    Explainability has become an important topic in computer science and
    artificial intelligence, leading to a subfield called Explainable Artificial
    Intelligence (XAI). The goal of providing or seeking explanations is to achieve
    (better) 'understanding' on the part of the explainee. However, what it means
    to 'understand' is still not clearly defined, and the concept itself is rarely
    the subject of scientific investigation. This conceptual article aims to
    present a model of forms of understanding in the context of XAI and beyond.
    From an interdisciplinary perspective bringing together computer science,
    linguistics, sociology, and psychology, a definition of understanding and its
    forms, assessment, and dynamics during the process of giving everyday
    explanations are explored. Two types of understanding are considered as
    possible outcomes of explanations, namely enabledness, 'knowing how' to do or
    decide something, and comprehension, 'knowing that' -- both in different
    degrees (from shallow to deep). Explanations regularly start with shallow
    understanding in a specific domain and can lead to deep comprehension and
    enabledness of the explanandum, which we see as a prerequisite for human users
    to gain agency. In this process, the increase of comprehension and enabledness
    are highly interdependent. Against the background of this systematization,
    special challenges of understanding in XAI are discussed.
author:
- first_name: Hendrik
  full_name: Buschmeier, Hendrik
  last_name: Buschmeier
- first_name: Heike M.
  full_name: Buhl, Heike M.
  last_name: Buhl
- first_name: Friederike
  full_name: Kern, Friederike
  last_name: Kern
- first_name: Angela
  full_name: Grimminger, Angela
  last_name: Grimminger
- first_name: Helen
  full_name: Beierling, Helen
  last_name: Beierling
- first_name: Josephine
  full_name: Fisher, Josephine
  last_name: Fisher
- first_name: André
  full_name: Groß, André
  last_name: Groß
- first_name: Ilona
  full_name: Horwath, Ilona
  last_name: Horwath
- first_name: Nils
  full_name: Klowait, Nils
  last_name: Klowait
- first_name: Stefan
  full_name: Lazarov, Stefan
  last_name: Lazarov
- first_name: Michael
  full_name: Lenke, Michael
  last_name: Lenke
- first_name: Vivien
  full_name: Lohmer, Vivien
  last_name: Lohmer
- first_name: Katharina
  full_name: Rohlfing, Katharina
  last_name: Rohlfing
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  last_name: Scharlau
- first_name: Amit
  full_name: Singh, Amit
  last_name: Singh
- first_name: Lutz
  full_name: Terfloth, Lutz
  last_name: Terfloth
- first_name: Anna-Lisa
  full_name: Vollmer, Anna-Lisa
  last_name: Vollmer
- first_name: Yu
  full_name: Wang, Yu
  last_name: Wang
- first_name: Annedore
  full_name: Wilmes, Annedore
  last_name: Wilmes
- first_name: Britta
  full_name: Wrede, Britta
  last_name: Wrede
citation:
  ama: Buschmeier H, Buhl HM, Kern F, et al. Forms of Understanding of XAI-Explanations.
    <i>arXiv:2311.08760</i>. Published online 2023.
  apa: Buschmeier, H., Buhl, H. M., Kern, F., Grimminger, A., Beierling, H., Fisher,
    J., Groß, A., Horwath, I., Klowait, N., Lazarov, S., Lenke, M., Lohmer, V., Rohlfing,
    K., Scharlau, I., Singh, A., Terfloth, L., Vollmer, A.-L., Wang, Y., Wilmes, A.,
    &#38; Wrede, B. (2023). Forms of Understanding of XAI-Explanations. In <i>arXiv:2311.08760</i>.
  bibtex: '@article{Buschmeier_Buhl_Kern_Grimminger_Beierling_Fisher_Groß_Horwath_Klowait_Lazarov_et
    al._2023, title={Forms of Understanding of XAI-Explanations}, journal={arXiv:2311.08760},
    author={Buschmeier, Hendrik and Buhl, Heike M. and Kern, Friederike and Grimminger,
    Angela and Beierling, Helen and Fisher, Josephine and Groß, André and Horwath,
    Ilona and Klowait, Nils and Lazarov, Stefan and et al.}, year={2023} }'
  chicago: Buschmeier, Hendrik, Heike M. Buhl, Friederike Kern, Angela Grimminger,
    Helen Beierling, Josephine Fisher, André Groß, et al. “Forms of Understanding
    of XAI-Explanations.” <i>ArXiv:2311.08760</i>, 2023.
  ieee: H. Buschmeier <i>et al.</i>, “Forms of Understanding of XAI-Explanations,”
    <i>arXiv:2311.08760</i>. 2023.
  mla: Buschmeier, Hendrik, et al. “Forms of Understanding of XAI-Explanations.” <i>ArXiv:2311.08760</i>,
    2023.
  short: H. Buschmeier, H.M. Buhl, F. Kern, A. Grimminger, H. Beierling, J. Fisher,
    A. Groß, I. Horwath, N. Klowait, S. Lazarov, M. Lenke, V. Lohmer, K. Rohlfing,
    I. Scharlau, A. Singh, L. Terfloth, A.-L. Vollmer, Y. Wang, A. Wilmes, B. Wrede,
    ArXiv:2311.08760 (2023).
date_created: 2024-10-17T10:09:39Z
date_updated: 2024-10-31T09:24:20Z
department:
- _id: '749'
- _id: '660'
external_id:
  arxiv:
  - '2311.08760'
publication: arXiv:2311.08760
status: public
title: Forms of Understanding of XAI-Explanations
type: preprint
user_id: '91018'
year: '2023'
...
---
_id: '51367'
author:
- first_name: Amelie
  full_name: Robrecht, Amelie
  id: '91982'
  last_name: Robrecht
  orcid: 0000-0001-5622-8248
- first_name: Stefan
  full_name: Kopp, Stefan
  last_name: Kopp
citation:
  ama: 'Robrecht A, Kopp S. SNAPE: A Sequential Non-Stationary Decision Process Model
    for Adaptive Explanation Generation. In: <i>Proceedings of the 15th International
    Conference on Agents and Artificial Intelligence</i>. SCITEPRESS - Science and
    Technology Publications; 2023:48-58. doi:<a href="https://doi.org/10.5220/0011671300003393">10.5220/0011671300003393</a>'
  apa: 'Robrecht, A., &#38; Kopp, S. (2023). SNAPE: A Sequential Non-Stationary Decision
    Process Model for Adaptive Explanation Generation. <i>Proceedings of the 15th
    International Conference on Agents and Artificial Intelligence</i>, 48–58. <a
    href="https://doi.org/10.5220/0011671300003393">https://doi.org/10.5220/0011671300003393</a>'
  bibtex: '@inproceedings{Robrecht_Kopp_2023, title={SNAPE: A Sequential Non-Stationary
    Decision Process Model for Adaptive Explanation Generation}, DOI={<a href="https://doi.org/10.5220/0011671300003393">10.5220/0011671300003393</a>},
    booktitle={Proceedings of the 15th International Conference on Agents and Artificial
    Intelligence}, publisher={SCITEPRESS - Science and Technology Publications}, author={Robrecht,
    Amelie and Kopp, Stefan}, year={2023}, pages={48–58} }'
  chicago: 'Robrecht, Amelie, and Stefan Kopp. “SNAPE: A Sequential Non-Stationary
    Decision Process Model for Adaptive Explanation Generation.” In <i>Proceedings
    of the 15th International Conference on Agents and Artificial Intelligence</i>,
    48–58. SCITEPRESS - Science and Technology Publications, 2023. <a href="https://doi.org/10.5220/0011671300003393">https://doi.org/10.5220/0011671300003393</a>.'
  ieee: 'A. Robrecht and S. Kopp, “SNAPE: A Sequential Non-Stationary Decision Process
    Model for Adaptive Explanation Generation,” in <i>Proceedings of the 15th International
    Conference on Agents and Artificial Intelligence</i>, Lisbon, 2023, pp. 48–58,
    doi: <a href="https://doi.org/10.5220/0011671300003393">10.5220/0011671300003393</a>.'
  mla: 'Robrecht, Amelie, and Stefan Kopp. “SNAPE: A Sequential Non-Stationary Decision
    Process Model for Adaptive Explanation Generation.” <i>Proceedings of the 15th
    International Conference on Agents and Artificial Intelligence</i>, SCITEPRESS
    - Science and Technology Publications, 2023, pp. 48–58, doi:<a href="https://doi.org/10.5220/0011671300003393">10.5220/0011671300003393</a>.'
  short: 'A. Robrecht, S. Kopp, in: Proceedings of the 15th International Conference
    on Agents and Artificial Intelligence, SCITEPRESS - Science and Technology Publications,
    2023, pp. 48–58.'
conference:
  end_date: 2023-02-24
  location: Lisbon
  name: 15th International Conference on Agents and Artificial Intelligence
  start_date: 2023-02-22
date_created: 2024-02-18T10:05:42Z
date_updated: 2025-01-15T13:46:38Z
department:
- _id: '660'
doi: 10.5220/0011671300003393
language:
- iso: eng
page: 48-58
project:
- _id: '111'
  grant_number: '438445824'
  name: 'TRR 318 - A01: TRR 318 - Adaptives Erklären (Teilprojekt A01)'
- _id: '117'
  name: 'TRR 318 - C: TRR 318 - Project Area C'
publication: Proceedings of the 15th International Conference on Agents and Artificial
  Intelligence
publication_identifier:
  isbn:
  - 978-989-758-623-1
publication_status: published
publisher: SCITEPRESS - Science and Technology Publications
status: public
title: 'SNAPE: A Sequential Non-Stationary Decision Process Model for Adaptive Explanation
  Generation'
type: conference
user_id: '55908'
year: '2023'
...
---
_id: '55156'
author:
- first_name: Josephine Beryl
  full_name: Fisher, Josephine Beryl
  id: '56345'
  last_name: Fisher
- first_name: Amelie
  full_name: Robrecht, Amelie
  id: '91982'
  last_name: Robrecht
  orcid: 0000-0001-5622-8248
- first_name: Stefan
  full_name: Kopp, Stefan
  last_name: Kopp
- first_name: Katharina J.
  full_name: Rohlfing, Katharina J.
  id: '50352'
  last_name: Rohlfing
  orcid: 0000-0002-5676-8233
citation:
  ama: 'Fisher JB, Robrecht A, Kopp S, Rohlfing KJ. Exploring the Semantic Dialogue
    Patterns of Explanations – a Case Study of Game Explanations. In: <i>Proceedings
    of the 27th Workshop on the Semantics and Pragmatics of Dialogue</i>; 2023.'
  apa: Fisher, J. B., Robrecht, A., Kopp, S., &#38; Rohlfing, K. J. (2023). Exploring
    the Semantic Dialogue Patterns of Explanations – a Case Study of Game Explanations.
    <i>Proceedings of the 27th Workshop on the Semantics and Pragmatics of Dialogue</i>.
    Semdial, Maribor.
  bibtex: '@inproceedings{Fisher_Robrecht_Kopp_Rohlfing_2023, title={Exploring the
    Semantic Dialogue Patterns of Explanations – a Case Study of Game Explanations},
    booktitle={Proceedings of the 27th Workshop on the Semantics and Pragmatics of
    Dialogue}, author={Fisher, Josephine Beryl and Robrecht, Amelie and Kopp, Stefan
    and Rohlfing, Katharina J.}, year={2023} }'
  chicago: Fisher, Josephine Beryl, Amelie Robrecht, Stefan Kopp, and Katharina J.
    Rohlfing. “Exploring the Semantic Dialogue Patterns of Explanations – a Case Study
    of Game Explanations.” In <i>Proceedings of the 27th Workshop on the Semantics
    and Pragmatics of Dialogue</i>, 2023.
  ieee: J. B. Fisher, A. Robrecht, S. Kopp, and K. J. Rohlfing, “Exploring the Semantic
    Dialogue Patterns of Explanations – a Case Study of Game Explanations,” presented
    at the Semdial, Maribor, 2023.
  mla: Fisher, Josephine Beryl, et al. “Exploring the Semantic Dialogue Patterns of
    Explanations – a Case Study of Game Explanations.” <i>Proceedings of the 27th
    Workshop on the Semantics and Pragmatics of Dialogue</i>, 2023.
  short: 'J.B. Fisher, A. Robrecht, S. Kopp, K.J. Rohlfing, in: Proceedings of the
    27th Workshop on the Semantics and Pragmatics of Dialogue, 2023.'
conference:
  location: Maribor
  name: Semdial
date_created: 2024-07-10T11:07:51Z
date_updated: 2025-01-15T13:57:35Z
department:
- _id: '660'
language:
- iso: eng
project:
- _id: '111'
  grant_number: '438445824'
  name: 'TRR 318 - A01: TRR 318 - Adaptives Erklären (Teilprojekt A01)'
publication: Proceedings of the 27th Workshop on the Semantics and Pragmatics of
  Dialogue
status: public
title: Exploring the Semantic Dialogue Patterns of Explanations – a Case Study of
  Game Explanations
type: conference
user_id: '55908'
year: '2023'
...
---
_id: '50262'
abstract:
- lang: eng
  text: Explainable artificial intelligence
    has mainly focused on static learning scenarios so far. We are interested in dynamic
    scenarios where data is sampled progressively, and learning is done in an incremental
    rather than a batch mode. We seek efficient incremental algorithms for computing
    feature importance (FI). Permutation feature importance (PFI) is a well-established
    model-agnostic measure to obtain global FI based on feature marginalization of
    absent features. We propose an efficient, model-agnostic algorithm called iPFI
    to estimate this measure incrementally and under dynamic modeling conditions including
    concept drift. We prove theoretical guarantees on the approximation quality in
    terms of expectation and variance. To validate our theoretical findings and the
    efficacy of our approaches in incremental scenarios dealing with streaming data
    rather than traditional batch settings, we conduct multiple experimental studies
    on benchmark data with and without concept drift.
author:
- first_name: Fabian
  full_name: Fumagalli, Fabian
  last_name: Fumagalli
- first_name: Maximilian
  full_name: Muschalik, Maximilian
  last_name: Muschalik
- first_name: Eyke
  full_name: Hüllermeier, Eyke
  last_name: Hüllermeier
- first_name: Barbara
  full_name: Hammer, Barbara
  last_name: Hammer
citation:
  ama: 'Fumagalli F, Muschalik M, Hüllermeier E, Hammer B. Incremental permutation
    feature importance (iPFI): towards online explanations on data streams. <i>Machine
    Learning</i>. 2023;112(12):4863-4903. doi:<a href="https://doi.org/10.1007/s10994-023-06385-y">10.1007/s10994-023-06385-y</a>'
  apa: 'Fumagalli, F., Muschalik, M., Hüllermeier, E., &#38; Hammer, B. (2023). Incremental
    permutation feature importance (iPFI): towards online explanations on data streams.
    <i>Machine Learning</i>, <i>112</i>(12), 4863–4903. <a href="https://doi.org/10.1007/s10994-023-06385-y">https://doi.org/10.1007/s10994-023-06385-y</a>'
  bibtex: '@article{Fumagalli_Muschalik_Hüllermeier_Hammer_2023, title={Incremental
    permutation feature importance (iPFI): towards online explanations on data streams},
    volume={112}, DOI={<a href="https://doi.org/10.1007/s10994-023-06385-y">10.1007/s10994-023-06385-y</a>},
    number={12}, journal={Machine Learning}, publisher={Springer Science and Business
    Media LLC}, author={Fumagalli, Fabian and Muschalik, Maximilian and Hüllermeier,
    Eyke and Hammer, Barbara}, year={2023}, pages={4863–4903} }'
  chicago: 'Fumagalli, Fabian, Maximilian Muschalik, Eyke Hüllermeier, and Barbara
    Hammer. “Incremental Permutation Feature Importance (IPFI): Towards Online Explanations
    on Data Streams.” <i>Machine Learning</i> 112, no. 12 (2023): 4863–4903. <a href="https://doi.org/10.1007/s10994-023-06385-y">https://doi.org/10.1007/s10994-023-06385-y</a>.'
  ieee: 'F. Fumagalli, M. Muschalik, E. Hüllermeier, and B. Hammer, “Incremental permutation
    feature importance (iPFI): towards online explanations on data streams,” <i>Machine
    Learning</i>, vol. 112, no. 12, pp. 4863–4903, 2023, doi: <a href="https://doi.org/10.1007/s10994-023-06385-y">10.1007/s10994-023-06385-y</a>.'
  mla: 'Fumagalli, Fabian, et al. “Incremental Permutation Feature Importance (IPFI):
    Towards Online Explanations on Data Streams.” <i>Machine Learning</i>, vol. 112,
    no. 12, Springer Science and Business Media LLC, 2023, pp. 4863–903, doi:<a href="https://doi.org/10.1007/s10994-023-06385-y">10.1007/s10994-023-06385-y</a>.'
  short: F. Fumagalli, M. Muschalik, E. Hüllermeier, B. Hammer, Machine Learning 112
    (2023) 4863–4903.
date_created: 2024-01-05T21:52:28Z
date_updated: 2025-01-16T16:20:12Z
department:
- _id: '660'
doi: 10.1007/s10994-023-06385-y
intvolume: '112'
issue: '12'
keyword:
- Artificial Intelligence
- Software
language:
- iso: eng
page: 4863-4903
project:
- _id: '126'
  name: 'TRR 318 - C3: TRR 318 - Subproject C3'
- _id: '117'
  name: 'TRR 318 - C: TRR 318 - Project Area C'
- _id: '109'
  grant_number: '438445824'
  name: 'TRR 318: TRR 318 - Erklärbarkeit konstruieren'
publication: Machine Learning
publication_identifier:
  issn:
  - 0885-6125
  - 1573-0565
publication_status: published
publisher: Springer Science and Business Media LLC
status: public
title: 'Incremental permutation feature importance (iPFI): towards online explanations
  on data streams'
type: journal_article
user_id: '93420'
volume: 112
year: '2023'
...
---
_id: '58723'
abstract:
- lang: eng
  text: In real-world debates, the most common way to counter an argument is to reason
    against its main point, that is, its conclusion. Existing work on the automatic
    generation of natural language counter-arguments does not address the relation
    to the conclusion, possibly because many arguments leave their conclusion implicit.
    In this paper, we hypothesize that the key to effective counter-argument generation
    is to explicitly model the argument’s conclusion and to ensure that the stance
    of the generated counter is opposite to that conclusion. In particular, we propose
    a multitask approach that jointly learns to generate both the conclusion and the
    counter of an input argument. The approach employs a stance-based ranking component
    that selects the counter from a diverse set of generated candidates whose stance
    best opposes the generated conclusion. In both automatic and manual evaluation,
    we provide evidence that our approach generates more relevant and stance-adhering
    counters than strong baselines.
author:
- first_name: Milad
  full_name: Alshomary, Milad
  id: '73059'
  last_name: Alshomary
- first_name: Henning
  full_name: Wachsmuth, Henning
  id: '3900'
  last_name: Wachsmuth
citation:
  ama: 'Alshomary M, Wachsmuth H. Conclusion-based Counter-Argument Generation. In:
    Vlachos A, Augenstein I, eds. <i>Proceedings of the 17th Conference of the European
    Chapter of the Association for Computational Linguistics</i>. Association for
    Computational Linguistics; 2023:957–967. doi:<a href="https://doi.org/10.18653/v1/2023.eacl-main.67">10.18653/v1/2023.eacl-main.67</a>'
  apa: Alshomary, M., &#38; Wachsmuth, H. (2023). Conclusion-based Counter-Argument
    Generation. In A. Vlachos &#38; I. Augenstein (Eds.), <i>Proceedings of the 17th
    Conference of the European Chapter of the Association for Computational Linguistics</i>
    (pp. 957–967). Association for Computational Linguistics. <a href="https://doi.org/10.18653/v1/2023.eacl-main.67">https://doi.org/10.18653/v1/2023.eacl-main.67</a>
  bibtex: '@inproceedings{Alshomary_Wachsmuth_2023, place={Dubrovnik, Croatia}, title={Conclusion-based
    Counter-Argument Generation}, DOI={<a href="https://doi.org/10.18653/v1/2023.eacl-main.67">10.18653/v1/2023.eacl-main.67</a>},
    booktitle={Proceedings of the 17th Conference of the European Chapter of the Association
    for Computational Linguistics}, publisher={Association for Computational Linguistics},
    author={Alshomary, Milad and Wachsmuth, Henning}, editor={Vlachos, Andreas and
    Augenstein, Isabelle}, year={2023}, pages={957–967} }'
  chicago: 'Alshomary, Milad, and Henning Wachsmuth. “Conclusion-Based Counter-Argument
    Generation.” In <i>Proceedings of the 17th Conference of the European Chapter
    of the Association for Computational Linguistics</i>, edited by Andreas Vlachos
    and Isabelle Augenstein, 957–967. Dubrovnik, Croatia: Association for Computational
    Linguistics, 2023. <a href="https://doi.org/10.18653/v1/2023.eacl-main.67">https://doi.org/10.18653/v1/2023.eacl-main.67</a>.'
  ieee: 'M. Alshomary and H. Wachsmuth, “Conclusion-based Counter-Argument Generation,”
    in <i>Proceedings of the 17th Conference of the European Chapter of the Association
    for Computational Linguistics</i>, 2023, pp. 957–967, doi: <a href="https://doi.org/10.18653/v1/2023.eacl-main.67">10.18653/v1/2023.eacl-main.67</a>.'
  mla: Alshomary, Milad, and Henning Wachsmuth. “Conclusion-Based Counter-Argument
    Generation.” <i>Proceedings of the 17th Conference of the European Chapter of
    the Association for Computational Linguistics</i>, edited by Andreas Vlachos and
    Isabelle Augenstein, Association for Computational Linguistics, 2023, pp. 957–967,
    doi:<a href="https://doi.org/10.18653/v1/2023.eacl-main.67">10.18653/v1/2023.eacl-main.67</a>.
  short: 'M. Alshomary, H. Wachsmuth, in: A. Vlachos, I. Augenstein (Eds.), Proceedings
    of the 17th Conference of the European Chapter of the Association for Computational
    Linguistics, Association for Computational Linguistics, Dubrovnik, Croatia, 2023,
    pp. 957–967.'
date_created: 2025-02-20T08:20:35Z
date_updated: 2025-02-20T08:21:41Z
department:
- _id: '600'
- _id: '660'
doi: 10.18653/v1/2023.eacl-main.67
editor:
- first_name: Andreas
  full_name: Vlachos, Andreas
  last_name: Vlachos
- first_name: Isabelle
  full_name: Augenstein, Isabelle
  last_name: Augenstein
language:
- iso: eng
page: 957–967
place: Dubrovnik, Croatia
project:
- _id: '118'
  name: 'TRR 318 - INF: TRR 318 - Project Area INF'
publication: Proceedings of the 17th Conference of the European Chapter of the Association
  for Computational Linguistics
publisher: Association for Computational Linguistics
status: public
title: Conclusion-based Counter-Argument Generation
type: conference
user_id: '3900'
year: '2023'
...
