---
_id: '61323'
author:
- first_name: Britta
  full_name: Wrede, Britta
  last_name: Wrede
- first_name: Hendrik
  full_name: Buschmeier, Hendrik
  last_name: Buschmeier
- first_name: Katharina Justine
  full_name: Rohlfing, Katharina Justine
  id: '50352'
  last_name: Rohlfing
  orcid: 0000-0002-5676-8233
- first_name: Meisam
  full_name: Booshehri, Meisam
  last_name: Booshehri
- first_name: Angela
  full_name: Grimminger, Angela
  id: '57578'
  last_name: Grimminger
citation:
  ama: 'Wrede B, Buschmeier H, Rohlfing KJ, Booshehri M, Grimminger A. Incremental
    communication. In: Rohlfing KJ, Främling K, Alpsancar S, Thommes K, Lim BY, eds.
    <i>Social Explainable AI</i>. Springer; 2026:227-245. doi:<a href="https://doi.org/10.1007/978-981-96-5290-7_12">10.1007/978-981-96-5290-7_12</a>'
  apa: Wrede, B., Buschmeier, H., Rohlfing, K. J., Booshehri, M., &#38; Grimminger,
    A. (2026). Incremental communication. In K. J. Rohlfing, K. Främling, S. Alpsancar,
    K. Thommes, &#38; B. Y. Lim (Eds.), <i>Social Explainable AI</i> (pp. 227–245).
    Springer. <a href="https://doi.org/10.1007/978-981-96-5290-7_12">https://doi.org/10.1007/978-981-96-5290-7_12</a>
  bibtex: '@inbook{Wrede_Buschmeier_Rohlfing_Booshehri_Grimminger_2026, title={Incremental
    communication}, DOI={<a href="https://doi.org/10.1007/978-981-96-5290-7_12">10.1007/978-981-96-5290-7_12</a>},
    booktitle={Social Explainable AI}, publisher={Springer}, author={Wrede, Britta
    and Buschmeier, Hendrik and Rohlfing, Katharina Justine and Booshehri, Meisam
    and Grimminger, Angela}, editor={Rohlfing, Katharina J. and Främling, Kary and
    Alpsancar, Suzana and Thommes, Kirsten and Lim, Brian Y.}, year={2026}, pages={227–245}
    }'
  chicago: Wrede, Britta, Hendrik Buschmeier, Katharina Justine Rohlfing, Meisam Booshehri,
    and Angela Grimminger. “Incremental Communication.” In <i>Social Explainable AI</i>,
    edited by Katharina J. Rohlfing, Kary Främling, Suzana Alpsancar, Kirsten Thommes,
    and Brian Y. Lim, 227–45. Springer, 2026. <a href="https://doi.org/10.1007/978-981-96-5290-7_12">https://doi.org/10.1007/978-981-96-5290-7_12</a>.
  ieee: B. Wrede, H. Buschmeier, K. J. Rohlfing, M. Booshehri, and A. Grimminger,
    “Incremental communication,” in <i>Social Explainable AI</i>, K. J. Rohlfing,
    K. Främling, S. Alpsancar, K. Thommes, and B. Y. Lim, Eds. Springer, 2026, pp.
    227–245.
  mla: Wrede, Britta, et al. “Incremental Communication.” <i>Social Explainable AI</i>,
    edited by Katharina J. Rohlfing et al., Springer, 2026, pp. 227–45, doi:<a href="https://doi.org/10.1007/978-981-96-5290-7_12">10.1007/978-981-96-5290-7_12</a>.
  short: 'B. Wrede, H. Buschmeier, K.J. Rohlfing, M. Booshehri, A. Grimminger, in:
    K.J. Rohlfing, K. Främling, S. Alpsancar, K. Thommes, B.Y. Lim (Eds.), Social
    Explainable AI, Springer, 2026, pp. 227–245.'
date_created: 2025-09-17T10:16:36Z
date_updated: 2026-03-19T12:38:37Z
department:
- _id: '660'
doi: 10.1007/978-981-96-5290-7_12
editor:
- first_name: Katharina J.
  full_name: Rohlfing, Katharina J.
  last_name: Rohlfing
- first_name: Kary
  full_name: Främling, Kary
  last_name: Främling
- first_name: Suzana
  full_name: Alpsancar, Suzana
  last_name: Alpsancar
- first_name: Kirsten
  full_name: Thommes, Kirsten
  last_name: Thommes
- first_name: Brian Y.
  full_name: Lim, Brian Y.
  last_name: Lim
language:
- iso: eng
main_file_link:
- open_access: '1'
oa: '1'
page: 227-245
project:
- _id: '112'
  name: 'TRR 318; TP A02: Verstehensprozess einer Erklärung beobachten und auswerten'
- _id: '111'
  name: 'TRR 318; TP A01: Adaptives Erklären'
- _id: '115'
  name: 'TRR 318; TP A05: Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog'
- _id: '113'
  name: TRR 318 - Subproject A3
- _id: '118'
  name: 'TRR 318: Project Area INF'
publication: Social Explainable AI
publication_identifier:
  eisbn:
  - 978-981-96-5290-7
publication_status: epub_ahead
publisher: Springer
quality_controlled: '1'
related_material:
  link:
  - relation: original
    url: https://link.springer.com/chapter/10.1007/978-981-96-5290-7_12
status: public
title: Incremental communication
type: book_chapter
user_id: '57578'
year: '2026'
...
---
_id: '61112'
author:
- first_name: Katharina J.
  full_name: Rohlfing, Katharina J.
  id: '50352'
  last_name: Rohlfing
  orcid: 0000-0002-5676-8233
- first_name: Anna-Lisa
  full_name: Vollmer, Anna-Lisa
  last_name: Vollmer
- first_name: Angela
  full_name: Grimminger, Angela
  id: '57578'
  last_name: Grimminger
citation:
  ama: 'Rohlfing KJ, Vollmer A-L, Grimminger A. Practices: How to establish an explaining
    practice. In: Rohlfing K, Främling K, Thommes K, Alpsancar S, Lim BY, eds. <i>Social
    Explainable AI</i>. Springer; 2026. doi:<a href="https://doi.org/10.1007/978-981-96-5290-7_5">10.1007/978-981-96-5290-7_5</a>'
  apa: 'Rohlfing, K. J., Vollmer, A.-L., &#38; Grimminger, A. (2026). Practices: How
    to establish an explaining practice. In K. Rohlfing, K. Främling, K. Thommes,
    S. Alpsancar, &#38; B. Y. Lim (Eds.), <i>Social Explainable AI</i>. Springer.
    <a href="https://doi.org/10.1007/978-981-96-5290-7_5">https://doi.org/10.1007/978-981-96-5290-7_5</a>'
  bibtex: '@inbook{Rohlfing_Vollmer_Grimminger_2026, title={Practices: How to establish
    an explaining practice}, DOI={<a href="https://doi.org/10.1007/978-981-96-5290-7_5">10.1007/978-981-96-5290-7_5</a>},
    booktitle={Social Explainable AI}, publisher={Springer}, author={Rohlfing, Katharina
    J. and Vollmer, Anna-Lisa and Grimminger, Angela}, editor={Rohlfing, Katharina
    and Främling, Kary and Thommes, Kirsten and Alpsancar, Suzana and Lim, Brian Y.},
    year={2026} }'
  chicago: 'Rohlfing, Katharina J., Anna-Lisa Vollmer, and Angela Grimminger. “Practices:
    How to Establish an Explaining Practice.” In <i>Social Explainable AI</i>, edited
    by Katharina Rohlfing, Kary Främling, Kirsten Thommes, Suzana Alpsancar, and Brian
    Y. Lim. Springer, 2026. <a href="https://doi.org/10.1007/978-981-96-5290-7_5">https://doi.org/10.1007/978-981-96-5290-7_5</a>.'
  ieee: 'K. J. Rohlfing, A.-L. Vollmer, and A. Grimminger, “Practices: How to establish
    an explaining practice,” in <i>Social Explainable AI</i>, K. Rohlfing, K. Främling,
    K. Thommes, S. Alpsancar, and B. Y. Lim, Eds. Springer, 2026.'
  mla: 'Rohlfing, Katharina J., et al. “Practices: How to Establish an Explaining
    Practice.” <i>Social Explainable AI</i>, edited by Katharina Rohlfing et al.,
    Springer, 2026, doi:<a href="https://doi.org/10.1007/978-981-96-5290-7_5">10.1007/978-981-96-5290-7_5</a>.'
  short: 'K.J. Rohlfing, A.-L. Vollmer, A. Grimminger, in: K. Rohlfing, K. Främling,
    K. Thommes, S. Alpsancar, B.Y. Lim (Eds.), Social Explainable AI, Springer, 2026.'
date_created: 2025-09-02T14:33:16Z
date_updated: 2026-03-20T09:11:58Z
department:
- _id: '660'
doi: 10.1007/978-981-96-5290-7_5
editor:
- first_name: Katharina
  full_name: Rohlfing, Katharina
  last_name: Rohlfing
- first_name: Kary
  full_name: Främling, Kary
  last_name: Främling
- first_name: Kirsten
  full_name: Thommes, Kirsten
  last_name: Thommes
- first_name: Suzana
  full_name: Alpsancar, Suzana
  last_name: Alpsancar
- first_name: Brian Y.
  full_name: Lim, Brian Y.
  last_name: Lim
language:
- iso: eng
main_file_link:
- open_access: '1'
oa: '1'
project:
- _id: '112'
  name: 'TRR 318; TP A02: Verstehensprozess einer Erklärung beobachten und auswerten'
- _id: '111'
  name: 'TRR 318; TP A01: Adaptives Erklären'
- _id: '115'
  name: 'TRR 318; TP A05: Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog'
- _id: '123'
  name: TRR 318 - Subproject B5
publication: Social Explainable AI
publication_identifier:
  eisbn:
  - 978-981-96-5290-7
publication_status: epub_ahead
publisher: Springer
quality_controlled: '1'
related_material:
  link:
  - relation: original
    url: https://link.springer.com/chapter/10.1007/978-981-96-5290-7_5
status: public
title: 'Practices: How to establish an explaining practice'
type: book_chapter
user_id: '57578'
year: '2026'
...
---
_id: '65083'
author:
- first_name: Heike M.
  full_name: Buhl, Heike M.
  id: '27152'
  last_name: Buhl
- first_name: Britta
  full_name: Wrede, Britta
  last_name: Wrede
- first_name: Josephine Beryl
  full_name: Fisher, Josephine Beryl
  id: '56345'
  last_name: Fisher
  orcid: 0000-0002-9997-9241
- first_name: Marco
  full_name: Matarese, Marco
  last_name: Matarese
citation:
  ama: 'Buhl HM, Wrede B, Fisher JB, Matarese M. Adaptation. In: Rohlfing KJ, Främling
    K, Lim B, Alpsancar S, Thommes K, eds. <i>Social Explainable AI</i>. Springer;
    2026:247-267. doi:<a href="https://doi.org/10.1007/978-981-96-5290-7_13">10.1007/978-981-96-5290-7_13</a>'
  apa: Buhl, H. M., Wrede, B., Fisher, J. B., &#38; Matarese, M. (2026). Adaptation.
    In K. J. Rohlfing, K. Främling, B. Lim, S. Alpsancar, &#38; K. Thommes (Eds.),
    <i>Social Explainable AI</i> (pp. 247–267). Springer. <a href="https://doi.org/10.1007/978-981-96-5290-7_13">https://doi.org/10.1007/978-981-96-5290-7_13</a>
  bibtex: '@inbook{Buhl_Wrede_Fisher_Matarese_2026, title={Adaptation}, DOI={<a href="https://doi.org/10.1007/978-981-96-5290-7_13">10.1007/978-981-96-5290-7_13</a>},
    booktitle={Social Explainable AI}, publisher={Springer}, author={Buhl, Heike M.
    and Wrede, Britta and Fisher, Josephine Beryl and Matarese, Marco}, editor={Rohlfing,
    Katharina J. and Främling, Kary and Lim, Brian and Alpsancar, Suzana and Thommes,
    Kirsten}, year={2026}, pages={247–267} }'
  chicago: Buhl, Heike M., Britta Wrede, Josephine Beryl Fisher, and Marco Matarese.
    “Adaptation.” In <i>Social Explainable AI</i>, edited by Katharina J. Rohlfing,
    Kary Främling, Brian Lim, Suzana Alpsancar, and Kirsten Thommes, 247–67. Springer,
    2026. <a href="https://doi.org/10.1007/978-981-96-5290-7_13">https://doi.org/10.1007/978-981-96-5290-7_13</a>.
  ieee: H. M. Buhl, B. Wrede, J. B. Fisher, and M. Matarese, “Adaptation,” in <i>Social
    Explainable AI</i>, K. J. Rohlfing, K. Främling, B. Lim, S. Alpsancar, and K.
    Thommes, Eds. Springer, 2026, pp. 247–267.
  mla: Buhl, Heike M., et al. “Adaptation.” <i>Social Explainable AI</i>, edited by
    Katharina J. Rohlfing et al., Springer, 2026, pp. 247–67, doi:<a href="https://doi.org/10.1007/978-981-96-5290-7_13">10.1007/978-981-96-5290-7_13</a>.
  short: 'H.M. Buhl, B. Wrede, J.B. Fisher, M. Matarese, in: K.J. Rohlfing, K. Främling,
    B. Lim, S. Alpsancar, K. Thommes (Eds.), Social Explainable AI, Springer, 2026,
    pp. 247–267.'
date_created: 2026-03-23T07:55:56Z
date_updated: 2026-03-23T18:25:34Z
department:
- _id: '427'
- _id: '660'
doi: 10.1007/978-981-96-5290-7_13
editor:
- first_name: Katharina J.
  full_name: Rohlfing, Katharina J.
  last_name: Rohlfing
- first_name: Kary
  full_name: Främling, Kary
  last_name: Främling
- first_name: Brian
  full_name: Lim, Brian
  last_name: Lim
- first_name: Suzana
  full_name: Alpsancar, Suzana
  last_name: Alpsancar
- first_name: Kirsten
  full_name: Thommes, Kirsten
  last_name: Thommes
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://link.springer.com/chapter/10.1007/978-981-96-5290-7_13
oa: '1'
page: 247-267
project:
- _id: '111'
  name: 'TRR 318; TP A01: Adaptives Erklären'
- _id: '114'
  name: 'TRR 318; TP A04: Integration des technischen Modells in das Partnermodell
    bei der Erklärung von digitalen Artefakten'
- _id: '115'
  name: 'TRR 318; TP A05: Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog'
- _id: '118'
  name: 'TRR 318: Project Area INF'
publication: Social Explainable AI
publication_identifier:
  eisbn:
  - 978-981-96-5290-7
publisher: Springer
related_material:
  link:
  - relation: confirmation
    url: https://link.springer.com/chapter/10.1007/978-981-96-5290-7_13
status: public
title: Adaptation
type: book_chapter
user_id: '90826'
year: '2026'
...
---
_id: '60935'
abstract:
- lang: eng
  text: Research suggests that presenting an action via multimodal stimulation (verbal
    and visual) enhances its perception. To highlight this, in most studies, assertive
    instructions are generally presented before the occurrence of the visual subevent(s).
    However, verbal instructions need not always be assertive; they can also include
    negation to contrast the present event with a prior one, thereby facilitating
    processing—a phenomenon known as contextual facilitation. In our study, we investigated
    whether using negation to guide an action sequence facilitates action perception,
    particularly when two consecutive subactions contrast with each other. Stimuli
    from previous studies on action demonstration were used to create (non)contrastive
    actions, that is, a ball following noncontrastive and identical (Over–Over or
    Under–Under) versus contrastive and opposite paths (Over–Under or Under–Over)
    before terminating at a goal location. In Experiment 1, either an assertive or
    a negative instruction was provided as verbal guidance before the onset of each path.
    Analyzing data from 35 participants, we found that, whereas assertive instructions
    facilitate overall action recall, negating the later path for contrastive actions
    is equally facilitative. Given that the action goal is the most salient aspect in
    event memory due to goal-path bias in attention, a second experiment was conducted
    to test the effect of multimodal synchrony on goal attention and action memory.
    Experiment 2 revealed that when instructions overlap with actions, they become
    more tailored—assertive instructions effectively guide noncontrastive actions,
    while assertive–negative instructions particularly guide contrastive actions.
    Both studies suggest that increased attention to the goal leads to coarser perception
    of midevents, with action-instruction synchrony modulating goal bias in real-time
    event apprehension to serve distinct purposes for action conceptualization. Whereas
    presenting instructions before subactions attenuates goal attention, overlapping
    instructions increase goal attention and reveal the selective roles of assertive
    and negative instructions in guiding contrastive and noncontrastive actions.
article_number: e70096
article_type: original
author:
- first_name: Amit
  full_name: Singh, Amit
  id: '91018'
  last_name: Singh
  orcid: 0000-0002-7789-1521
- first_name: Katharina J.
  full_name: Rohlfing, Katharina J.
  id: '50352'
  last_name: Rohlfing
  orcid: 0000-0002-5676-8233
citation:
  ama: 'Singh A, Rohlfing KJ. Contrastive Verbal Guidance: A Beneficial Context for
    Attention To Events and Their Memory? <i>Cognitive Science</i>. 2025;49(8). doi:<a
    href="https://doi.org/10.1111/cogs.70096">10.1111/cogs.70096</a>'
  apa: 'Singh, A., &#38; Rohlfing, K. J. (2025). Contrastive Verbal Guidance: A Beneficial
    Context for Attention To Events and Their Memory? <i>Cognitive Science</i>, <i>49</i>(8),
    Article e70096. <a href="https://doi.org/10.1111/cogs.70096">https://doi.org/10.1111/cogs.70096</a>'
  bibtex: '@article{Singh_Rohlfing_2025, title={Contrastive Verbal Guidance: A Beneficial
    Context for Attention To Events and Their Memory?}, volume={49}, DOI={<a href="https://doi.org/10.1111/cogs.70096">10.1111/cogs.70096</a>},
    number={8}, journal={Cognitive Science}, publisher={Wiley}, author={Singh,
    Amit and Rohlfing, Katharina J.}, year={2025} }'
  chicago: 'Singh, Amit, and Katharina J. Rohlfing. “Contrastive Verbal Guidance:
    A Beneficial Context for Attention To Events and Their Memory?” <i>Cognitive Science</i>
    49, no. 8 (2025). <a href="https://doi.org/10.1111/cogs.70096">https://doi.org/10.1111/cogs.70096</a>.'
  ieee: 'A. Singh and K. J. Rohlfing, “Contrastive Verbal Guidance: A Beneficial Context
    for Attention To Events and Their Memory?,” <i>Cognitive Science</i>, vol. 49,
    no. 8, Art. no. e70096, 2025, doi: <a href="https://doi.org/10.1111/cogs.70096">10.1111/cogs.70096</a>.'
  mla: 'Singh, Amit, and Katharina J. Rohlfing. “Contrastive Verbal Guidance: A Beneficial
    Context for Attention To Events and Their Memory?” <i>Cognitive Science</i>, vol.
    49, no. 8, e70096, Wiley, 2025, doi:<a href="https://doi.org/10.1111/cogs.70096">10.1111/cogs.70096</a>.'
  short: A. Singh, K.J. Rohlfing, Cognitive Science 49 (2025).
date_created: 2025-08-18T08:30:30Z
date_updated: 2025-08-18T08:31:04Z
department:
- _id: '749'
- _id: '660'
doi: 10.1111/cogs.70096
external_id:
  pmid:
  - '40810767'
intvolume: '49'
issue: '8'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://pubmed.ncbi.nlm.nih.gov/40810767/
oa: '1'
pmid: '1'
project:
- _id: '115'
  name: 'TRR 318; TP A05: Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog'
publication: Cognitive Science
publication_status: published
publisher: Wiley
quality_controlled: '1'
status: public
title: 'Contrastive Verbal Guidance: A Beneficial Context for Attention To Events
  and Their Memory?'
type: journal_article
user_id: '91018'
volume: 49
year: '2025'
...
---
_id: '61119'
abstract:
- lang: eng
  text: '<p>The present article offers an assessment of intra-individual variability
    in visual attention using the Theory of Visual Attention, which provides a formal
    framework for quantifying attentional components. We specifically investigated
    overall attentional capacity – that is, the available processing speed – and its
    distribution, the relative attentional weight. By reanalyzing a large existing
    dataset from Tünnermann and Scharlau (2021), we found that across multiple testing
    days, participants either remained stable within a 20 Hz margin or showed consistent
    improvements in capacity – in some cases tripling their initial capacity. The
    weights in response to salient stimuli were remarkably consistent. To determine
    whether increases in capacity reflect pure test-retest effects or are facilitated
    by consolidation between days, and to quantify within-day variability, we conducted
    a second study in which participants completed five self-administered sessions
    within a single day. Capacities remained within the same magnitude and did not
    show a consistent directional trend. The relative weights exhibited comparatively
    little variation in most participants, akin to the previously analyzed dataset.
    Further, estimation uncertainty increased with higher capacity values. These results
    suggest that capacity may be subject to training effects, but that such improvements
    appear to depend on longer breaks between sessions. This has important implications
    for individualized assessment: A personal prior could be estimated from a single
    session to accelerate future estimations, as long as subsequent sessions occur
    on the same day. Participants with higher capacities may require tailored experimentation
    methods when small to medium effects are of interest, due to increased uncertainty.</p>'
author:
- first_name: Ngoc Chi
  full_name: Banh, Ngoc Chi
  last_name: Banh
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
citation:
  ama: 'Banh NC, Scharlau I. Intra-individual variability in TVA attentional capacity
    and weight distribution: A reanalysis across days and an experiment within-day.
    Published online 2025.'
  apa: 'Banh, N. C., &#38; Scharlau, I. (2025). <i>Intra-individual variability in
    TVA attentional capacity and weight distribution: A reanalysis across days and
    an experiment within-day</i>. Center for Open Science.'
  bibtex: '@article{Banh_Scharlau_2025, title={Intra-individual variability in TVA
    attentional capacity and weight distribution: A reanalysis across days and an
    experiment within-day}, publisher={Center for Open Science}, author={Banh, Ngoc
    Chi and Scharlau, Ingrid}, year={2025} }'
  chicago: 'Banh, Ngoc Chi, and Ingrid Scharlau. “Intra-Individual Variability in
    TVA Attentional Capacity and Weight Distribution: A Reanalysis across Days and
    an Experiment within-Day.” Center for Open Science, 2025.'
  ieee: 'N. C. Banh and I. Scharlau, “Intra-individual variability in TVA attentional
    capacity and weight distribution: A reanalysis across days and an experiment within-day.”
    Center for Open Science, 2025.'
  mla: 'Banh, Ngoc Chi, and Ingrid Scharlau. <i>Intra-Individual Variability in TVA
    Attentional Capacity and Weight Distribution: A Reanalysis across Days and an
    Experiment within-Day</i>. Center for Open Science, 2025.'
  short: N.C. Banh, I. Scharlau, (2025).
date_created: 2025-09-03T11:30:48Z
date_updated: 2025-09-09T12:04:43Z
department:
- _id: '424'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://osf.io/preprints/psyarxiv/fzvph
oa: '1'
project:
- _id: '115'
  name: 'TRR 318; TP A05: Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog'
publication_status: published
publisher: Center for Open Science
status: public
title: 'Intra-individual variability in TVA attentional capacity and weight distribution:
  A reanalysis across days and an experiment within-day'
type: preprint
user_id: '38219'
year: '2025'
...
---
_id: '61432'
abstract:
- lang: eng
  text: 'This study investigated how action histories – unfolding sequences of actions
    with objects – provide a context for both attentional allocation and linguistic
    repair strategies. Building on theories of enactive cognition and sensorimotor
    contingency theory, we experimentally manipulated action sequences (action history)
    to create either simple or rich “situational models,” and investigated how these
    models interact with attention and reflect in linguistic processes during human–robot
    interaction. Participants (N = 30) engaged in a controlled object placement task
    with a humanoid robot, where the action (manner) information was either provided
    or omitted. The omission elicited repair behaviors in participants that were the
    focus of our investigation. For rich models (competing action possibilities),
    participants demonstrated (a) increased attentional reorientation, reflecting
    active engagement with the situational model, and (b) a preference for restricted
    repairs, targeting the specific source of trouble in action selection. Conversely,
    a simple situational model led to more generalized attention patterns and open
    repair strategies, suggesting weaker constraints on internal processing. These
    findings highlight how situational structures emerge externally to scaffold internal
    cognitive processes, with action histories serving as a crucial context for the
    interface between perception, action, and language. We discuss how to implement
    such a tight loop in an assistance system.'
author:
- first_name: Amit
  full_name: Singh, Amit
  id: '91018'
  last_name: Singh
  orcid: 0000-0002-7789-1521
- first_name: Katharina J.
  full_name: Rohlfing, Katharina J.
  id: '50352'
  last_name: Rohlfing
  orcid: 0000-0002-5676-8233
citation:
  ama: 'Singh A, Rohlfing KJ. Manners Matter: Action history guides attention and
    repair choices during interaction. In: <i>IEEE International Conference on Development
    and Learning (ICDL)</i>. ; 2025. doi:<a href="https://doi.org/10.31234/osf.io/yn2we_v1">10.31234/osf.io/yn2we_v1</a>'
  apa: 'Singh, A., &#38; Rohlfing, K. J. (2025). Manners Matter: Action history guides
    attention and repair choices during interaction. <i>IEEE International Conference
    on Development and Learning (ICDL)</i>. IEEE International Conference on Development
    and Learning (ICDL), Prague. <a href="https://doi.org/10.31234/osf.io/yn2we_v1">https://doi.org/10.31234/osf.io/yn2we_v1</a>'
  bibtex: '@inproceedings{Singh_Rohlfing_2025, place={Prague}, title={Manners Matter:
    Action history guides attention and repair choices during interaction}, DOI={<a
    href="https://doi.org/10.31234/osf.io/yn2we_v1">10.31234/osf.io/yn2we_v1</a>},
    booktitle={IEEE International Conference on Development and Learning (ICDL)},
    author={Singh, Amit and Rohlfing, Katharina J.}, year={2025} }'
  chicago: 'Singh, Amit, and Katharina J. Rohlfing. “Manners Matter: Action History
    Guides Attention and Repair Choices during Interaction.” In <i>IEEE International
    Conference on Development and Learning (ICDL)</i>. Prague, 2025. <a href="https://doi.org/10.31234/osf.io/yn2we_v1">https://doi.org/10.31234/osf.io/yn2we_v1</a>.'
  ieee: 'A. Singh and K. J. Rohlfing, “Manners Matter: Action history guides attention
    and repair choices during interaction,” presented at the IEEE International Conference
    on Development and Learning (ICDL), Prague, 2025, doi: <a href="https://doi.org/10.31234/osf.io/yn2we_v1">10.31234/osf.io/yn2we_v1</a>.'
  mla: 'Singh, Amit, and Katharina J. Rohlfing. “Manners Matter: Action History Guides
    Attention and Repair Choices during Interaction.” <i>IEEE International Conference
    on Development and Learning (ICDL)</i>, 2025, doi:<a href="https://doi.org/10.31234/osf.io/yn2we_v1">10.31234/osf.io/yn2we_v1</a>.'
  short: 'A. Singh, K.J. Rohlfing, in: IEEE International Conference on Development
    and Learning (ICDL), Prague, 2025.'
conference:
  end_date: 2025-09-19
  location: Prague
  name: IEEE International Conference on Development and Learning (ICDL)
  start_date: 2025-09-15
date_created: 2025-09-24T12:32:52Z
date_updated: 2025-09-24T12:39:25Z
department:
- _id: '749'
- _id: '660'
doi: 10.31234/osf.io/yn2we_v1
keyword:
- Attention
- Action
- Repairs
- Task model
- HRI
- Eye movement
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.31234/osf.io/yn2we_v1
oa: '1'
place: Prague
project:
- _id: '115'
  name: 'TRR 318; TP A05: Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog'
publication: IEEE International Conference on Development and Learning (ICDL)
publication_status: published
quality_controlled: '1'
status: public
title: 'Manners Matter: Action history guides attention and repair choices during
  interaction'
type: conference
user_id: '91018'
year: '2025'
...
---
_id: '61401'
abstract:
- lang: eng
  text: "We introduce a method to study online language processes in a human–robot
    interactive setup. In this interaction, language-mediated eye movements can be
    studied as the dialogue unfolds between a human and a robot. Traditionally, real-time
    linguistic processes are studied using visual world paradigms (VWP), where either
    comprehension or production tasks are implemented on screens for controlled investigations.
    Going beyond this traditional and unidirectional approach, we bring together the
    production–comprehension loop with the help of a humanoid robot to preserve interactivity
    in an ecologically valid yet controlled setup. We discuss the potential of such
    setups for designing and evaluating findings from the language–vision interplay
    in psycholinguistics. Our setup shows a potential to depart from traditional screen-based
    experiments, balancing the dynamics of the interaction with control of human behaviors."
author:
- first_name: Amit
  full_name: Singh, Amit
  id: '91018'
  last_name: Singh
  orcid: 0000-0002-7789-1521
- first_name: Katharina J.
  full_name: Rohlfing, Katharina J.
  id: '50352'
  last_name: Rohlfing
  orcid: 0000-0002-5676-8233
citation:
  ama: 'Singh A, Rohlfing KJ. Embedding Psycholinguistics: An Interactive Framework
    for Studying Language in Action. In: <i>6th Biannual Conference of the German
    Society for Cognitive Science, Bochum, Germany</i>. ; 2025. doi:<a href="https://doi.org/10.17605/OSF.IO/8PR23">10.17605/OSF.IO/8PR23</a>'
  apa: 'Singh, A., &#38; Rohlfing, K. J. (2025). Embedding Psycholinguistics: An Interactive
    Framework for Studying Language in Action. <i>6th Biannual Conference of the German
    Society for Cognitive Science, Bochum, Germany</i>. 6th Biannual Conference of
    the German Society for Cognitive Science, Bochum, Germany, Bochum. <a href="https://doi.org/10.17605/OSF.IO/8PR23">https://doi.org/10.17605/OSF.IO/8PR23</a>'
  bibtex: '@inproceedings{Singh_Rohlfing_2025, place={Bochum}, title={Embedding Psycholinguistics:
    An Interactive Framework for Studying Language in Action}, DOI={<a href="https://doi.org/10.17605/OSF.IO/8PR23">10.17605/OSF.IO/8PR23</a>},
    booktitle={6th Biannual Conference of the German Society for Cognitive Science,
    Bochum, Germany}, author={Singh, Amit and Rohlfing, Katharina J.}, year={2025}
    }'
  chicago: 'Singh, Amit, and Katharina J. Rohlfing. “Embedding Psycholinguistics:
    An Interactive Framework for Studying Language in Action.” In <i>6th Biannual
    Conference of the German Society for Cognitive Science, Bochum, Germany</i>. Bochum,
    2025. <a href="https://doi.org/10.17605/OSF.IO/8PR23">https://doi.org/10.17605/OSF.IO/8PR23</a>.'
  ieee: 'A. Singh and K. J. Rohlfing, “Embedding Psycholinguistics: An Interactive
    Framework for Studying Language in Action,” presented at the 6th Biannual Conference
    of the German Society for Cognitive Science, Bochum, Germany, Bochum, 2025, doi:
    <a href="https://doi.org/10.17605/OSF.IO/8PR23">10.17605/OSF.IO/8PR23</a>.'
  mla: 'Singh, Amit, and Katharina J. Rohlfing. “Embedding Psycholinguistics: An Interactive
    Framework for Studying Language in Action.” <i>6th Biannual Conference of the
    German Society for Cognitive Science, Bochum, Germany</i>, 2025, doi:<a href="https://doi.org/10.17605/OSF.IO/8PR23">10.17605/OSF.IO/8PR23</a>.'
  short: 'A. Singh, K.J. Rohlfing, in: 6th Biannual Conference of the German Society
    for Cognitive Science, Bochum, Germany, Bochum, 2025.'
conference:
  end_date: 2025-09-03
  location: Bochum
  name: 6th Biannual Conference of the German Society for Cognitive Science, Bochum,
    Germany
  start_date: 2025-09-01
date_created: 2025-09-23T09:04:40Z
date_updated: 2025-09-24T12:47:47Z
department:
- _id: '749'
- _id: '660'
doi: 10.17605/OSF.IO/8PR23
language:
- iso: eng
main_file_link:
- url: https://osf.io/ghymr
place: Bochum
project:
- _id: '115'
  name: 'TRR 318; TP A05: Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog'
publication: 6th Biannual Conference of the German Society for Cognitive Science,
  Bochum, Germany
publication_status: published
quality_controlled: '1'
status: public
title: 'Embedding Psycholinguistics: An Interactive Framework for Studying Language
  in Action'
type: conference
user_id: '91018'
year: '2025'
...
---
_id: '61156'
abstract:
- lang: eng
  text: Explainability has become an important topic in computer science and artificial
    intelligence, leading to a subfield called Explainable Artificial Intelligence
    (XAI). The goal of providing or seeking explanations is to achieve (better) ‘understanding’
    on the part of the explainee. However, what it means to ‘understand’ is still
    not clearly defined, and the concept itself is rarely the subject of scientific
    investigation. This conceptual article aims to present a model of forms of understanding
    for XAI-explanations and beyond. From an interdisciplinary perspective bringing
    together computer science, linguistics, sociology, philosophy and psychology,
    a definition of understanding and its forms, assessment, and dynamics during the
    process of giving everyday explanations are explored. Two types of understanding
    are considered as possible outcomes of explanations, namely enabledness, ‘knowing
    how’ to do or decide something, and comprehension, ‘knowing that’ – both in different
    degrees (from shallow to deep). Explanations regularly start with shallow understanding
    in a specific domain and can lead to deep comprehension and enabledness of the
    explanandum, which we see as a prerequisite for human users to gain agency. In
    this process, the increase of comprehension and enabledness are highly interdependent.
    Against the background of this systematization, special challenges of understanding
    in XAI are discussed.
article_number: '101419'
article_type: original
author:
- first_name: Hendrik
  full_name: Buschmeier, Hendrik
  id: '76456'
  last_name: Buschmeier
  orcid: 0000-0002-9613-5713
- first_name: Heike M.
  full_name: Buhl, Heike M.
  id: '27152'
  last_name: Buhl
- first_name: Friederike
  full_name: Kern, Friederike
  last_name: Kern
- first_name: Angela
  full_name: Grimminger, Angela
  id: '57578'
  last_name: Grimminger
- first_name: Helen
  full_name: Beierling, Helen
  id: '50995'
  last_name: Beierling
- first_name: Josephine Beryl
  full_name: Fisher, Josephine Beryl
  id: '56345'
  last_name: Fisher
  orcid: 0000-0002-9997-9241
- first_name: André
  full_name: Groß, André
  id: '93405'
  last_name: Groß
  orcid: 0000-0002-9593-7220
- first_name: Ilona
  full_name: Horwath, Ilona
  id: '68836'
  last_name: Horwath
- first_name: Nils
  full_name: Klowait, Nils
  id: '98454'
  last_name: Klowait
  orcid: 0000-0002-7347-099X
- first_name: Stefan Teodorov
  full_name: Lazarov, Stefan Teodorov
  id: '90345'
  last_name: Lazarov
  orcid: 0009-0009-0892-9483
- first_name: Michael
  full_name: Lenke, Michael
  last_name: Lenke
- first_name: Vivien
  full_name: Lohmer, Vivien
  last_name: Lohmer
- first_name: Katharina
  full_name: Rohlfing, Katharina
  id: '50352'
  last_name: Rohlfing
  orcid: 0000-0002-5676-8233
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
- first_name: Amit
  full_name: Singh, Amit
  id: '91018'
  last_name: Singh
  orcid: 0000-0002-7789-1521
- first_name: Lutz
  full_name: Terfloth, Lutz
  id: '37320'
  last_name: Terfloth
- first_name: Anna-Lisa
  full_name: Vollmer, Anna-Lisa
  id: '86589'
  last_name: Vollmer
- first_name: Yu
  full_name: Wang, Yu
  last_name: Wang
- first_name: Annedore
  full_name: Wilmes, Annedore
  last_name: Wilmes
- first_name: Britta
  full_name: Wrede, Britta
  last_name: Wrede
citation:
  ama: Buschmeier H, Buhl HM, Kern F, et al. Forms of Understanding for XAI-Explanations.
    <i>Cognitive Systems Research</i>. 2025;94. doi:<a href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>
  apa: Buschmeier, H., Buhl, H. M., Kern, F., Grimminger, A., Beierling, H., Fisher,
    J. B., Groß, A., Horwath, I., Klowait, N., Lazarov, S. T., Lenke, M., Lohmer,
    V., Rohlfing, K., Scharlau, I., Singh, A., Terfloth, L., Vollmer, A.-L., Wang,
    Y., Wilmes, A., &#38; Wrede, B. (2025). Forms of Understanding for XAI-Explanations.
    <i>Cognitive Systems Research</i>, <i>94</i>, Article 101419. <a href="https://doi.org/10.1016/j.cogsys.2025.101419">https://doi.org/10.1016/j.cogsys.2025.101419</a>
  bibtex: '@article{Buschmeier_Buhl_Kern_Grimminger_Beierling_Fisher_Groß_Horwath_Klowait_Lazarov_et
    al._2025, title={Forms of Understanding for XAI-Explanations}, volume={94}, DOI={<a
    href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>},
    number={101419}, journal={Cognitive Systems Research}, author={Buschmeier, Hendrik
    and Buhl, Heike M. and Kern, Friederike and Grimminger, Angela and Beierling,
    Helen and Fisher, Josephine Beryl and Groß, André and Horwath, Ilona and Klowait,
    Nils and Lazarov, Stefan Teodorov and et al.}, year={2025} }'
  chicago: Buschmeier, Hendrik, Heike M. Buhl, Friederike Kern, Angela Grimminger,
    Helen Beierling, Josephine Beryl Fisher, André Groß, et al. “Forms of Understanding
    for XAI-Explanations.” <i>Cognitive Systems Research</i> 94 (2025). <a href="https://doi.org/10.1016/j.cogsys.2025.101419">https://doi.org/10.1016/j.cogsys.2025.101419</a>.
  ieee: 'H. Buschmeier <i>et al.</i>, “Forms of Understanding for XAI-Explanations,”
    <i>Cognitive Systems Research</i>, vol. 94, Art. no. 101419, 2025, doi: <a href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>.'
  mla: Buschmeier, Hendrik, et al. “Forms of Understanding for XAI-Explanations.”
    <i>Cognitive Systems Research</i>, vol. 94, 101419, 2025, doi:<a href="https://doi.org/10.1016/j.cogsys.2025.101419">10.1016/j.cogsys.2025.101419</a>.
  short: H. Buschmeier, H.M. Buhl, F. Kern, A. Grimminger, H. Beierling, J.B. Fisher,
    A. Groß, I. Horwath, N. Klowait, S.T. Lazarov, M. Lenke, V. Lohmer, K. Rohlfing,
    I. Scharlau, A. Singh, L. Terfloth, A.-L. Vollmer, Y. Wang, A. Wilmes, B. Wrede,
    Cognitive Systems Research 94 (2025).
date_created: 2025-09-08T14:24:32Z
date_updated: 2025-12-05T15:32:25Z
ddc:
- '006'
department:
- _id: '660'
doi: 10.1016/j.cogsys.2025.101419
file:
- access_level: closed
  content_type: application/pdf
  creator: hbuschme
  date_created: 2025-12-01T21:02:20Z
  date_updated: 2025-12-01T21:02:20Z
  file_id: '62730'
  file_name: Buschmeier-etal-2025-COGSYS.pdf
  file_size: 10114981
  relation: main_file
  success: 1
file_date_updated: 2025-12-01T21:02:20Z
has_accepted_license: '1'
intvolume: '        94'
keyword:
- understanding
- explaining
- explanations
- explainable
- AI
- interdisciplinarity
- comprehension
- enabledness
- agency
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://www.sciencedirect.com/science/article/pii/S1389041725000993?via%3Dihub
oa: '1'
project:
- _id: '111'
  name: 'TRR 318; TP A01: Adaptives Erklären'
- _id: '112'
  name: 'TRR 318; TP A02: Verstehensprozess einer Erklärung beobachten und auswerten'
- _id: '113'
  name: TRR 318 - Subproject A3
- _id: '114'
  name: 'TRR 318; TP A04: Integration des technischen Modells in das Partnermodell
    bei der Erklärung von digitalen Artefakten'
- _id: '115'
  name: 'TRR 318; TP A05: Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog'
- _id: '122'
  name: TRR 318 - Subproject B3
- _id: '123'
  name: TRR 318 - Subproject B5
- _id: '119'
  name: TRR 318 - Project Area Ö
publication: Cognitive Systems Research
publication_status: published
quality_controlled: '1'
status: public
title: Forms of Understanding for XAI-Explanations
type: journal_article
user_id: '57578'
volume: 94
year: '2025'
...
---
_id: '53069'
author:
- first_name: Ngoc Chi
  full_name: Banh, Ngoc Chi
  id: '38219'
  last_name: Banh
  orcid: 0000-0002-5946-4542
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
citation:
  ama: 'Banh NC, Scharlau I. Effects of task difficulty on visual processing speed.
    In: ; 2024.'
  apa: Banh, N. C., &#38; Scharlau, I. (2024). <i>Effects of task difficulty on visual
    processing speed</i>. Tagung experimentell arbeitender Psycholog:innen (TeaP),
    Regensburg.
  bibtex: '@inproceedings{Banh_Scharlau_2024, title={Effects of task difficulty on
    visual processing speed}, author={Banh, Ngoc Chi and Scharlau, Ingrid}, year={2024}
    }'
  chicago: Banh, Ngoc Chi, and Ingrid Scharlau. “Effects of Task Difficulty on Visual
    Processing Speed,” 2024.
  ieee: N. C. Banh and I. Scharlau, “Effects of task difficulty on visual processing
    speed,” presented at the Tagung experimentell arbeitender Psycholog:innen (TeaP),
    Regensburg, 2024.
  mla: Banh, Ngoc Chi, and Ingrid Scharlau. <i>Effects of Task Difficulty on Visual
    Processing Speed</i>. 2024.
  short: 'N.C. Banh, I. Scharlau, in: 2024.'
conference:
  end_date: 2024-03-20
  location: Regensburg
  name: Tagung experimentell arbeitender Psycholog:innen (TeaP)
  start_date: 2024-03-17
date_created: 2024-03-27T11:43:51Z
date_updated: 2024-06-26T08:02:07Z
ddc:
- '150'
department:
- _id: '424'
- _id: '660'
file:
- access_level: closed
  content_type: application/pdf
  creator: ncbanh
  date_created: 2024-03-27T11:42:20Z
  date_updated: 2024-03-27T11:42:20Z
  file_id: '53070'
  file_name: Banh & Scharlau (2024) - Effects of task difficulty on visual processing
    speed.pdf
  file_size: 1237859
  relation: main_file
  success: 1
file_date_updated: 2024-03-27T11:42:20Z
has_accepted_license: '1'
language:
- iso: eng
main_file_link:
- open_access: '1'
oa: '1'
project:
- _id: '115'
  grant_number: '438445824'
  name: 'TRR 318 - A05: TRR 318 - Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog
    (Teilprojekt A05)'
- _id: '52'
  name: 'PC2: Computing Resources Provided by the Paderborn Center for Parallel Computing'
quality_controlled: '1'
status: public
title: Effects of task difficulty on visual processing speed
type: conference_abstract
user_id: '38219'
year: '2024'
...
---
_id: '56660'
abstract:
- lang: eng
  text: In a successful dialogue in general and a successful explanation in specific,
    partners need to account for both the task model (what is relevant for the task)
    and the partner model (what one can contribute). The phenomenon of coupling between
    the task and the partner model becomes especially interesting in the context of
    Human–Robot Interaction, where humans have to deal with unknown capabilities of
    the robot, which can momentarily be perceived when the robot is unable to contribute
    to the task. Following research on path over manner prominence in an action [31–33],
    a robot explained actions to a human by emphasizing two aspects – the path ("where"
    component) and the manner ("how" component). On critical trials, the robot occasionally
    omitted one of these components, whereupon participants sought the missing information
    for the path or the manner. Participants’ information-seeking and gaze behaviour
    were analysed. The analysis confirms the initial predictions for a) the task model
    (path over manner prominence), i.e., earlier information-seeking for path-missing
    than manner-missing trials, and b) the partner model, i.e., while information-seeking
    is predominantly tied to attention on the robot’s face, when the robot fails to
    provide resolution, attention shifts more often towards its torso – a behaviour
    likely to indicate an exploration of the robot’s capabilities. An individual-level
    analysis further confirms that the intra-individual variation in the task model
    is partly influenced by the perceived capability of the robot.
author:
- first_name: Amit
  full_name: Singh, Amit
  id: '91018'
  last_name: Singh
  orcid: 0000-0002-7789-1521
- first_name: Katharina J.
  full_name: Rohlfing, Katharina J.
  id: '50352'
  last_name: Rohlfing
citation:
  ama: 'Singh A, Rohlfing KJ. Coupling of Task and Partner Model: Investigating the
    Intra-Individual Variability in Gaze during Human–Robot Explanatory Dialogue.
    In: <i>Proceedings of 26th ACM International Conference on Multimodal Interaction
    (ICMI 2024)</i>. ; 2024. doi:<a href="https://doi.org/10.1145/3686215.3689202">10.1145/3686215.3689202</a>'
  apa: 'Singh, A., &#38; Rohlfing, K. J. (2024). Coupling of Task and Partner Model:
    Investigating the Intra-Individual Variability in Gaze during Human–Robot Explanatory
    Dialogue. <i>Proceedings of 26th ACM International Conference on Multimodal Interaction
    (ICMI 2024)</i>. 26th ACM International Conference on Multimodal Interaction (ICMI
    2024), San Jose, Costa Rica. <a href="https://doi.org/10.1145/3686215.3689202">https://doi.org/10.1145/3686215.3689202</a>'
  bibtex: '@inproceedings{Singh_Rohlfing_2024, title={Coupling of Task and Partner
    Model: Investigating the Intra-Individual Variability in Gaze during Human–Robot
    Explanatory Dialogue}, DOI={<a href="https://doi.org/10.1145/3686215.3689202">10.1145/3686215.3689202</a>},
    booktitle={Proceedings of 26th ACM International Conference on Multimodal Interaction
    (ICMI 2024)}, author={Singh, Amit and Rohlfing, Katharina J.}, year={2024} }'
  chicago: 'Singh, Amit, and Katharina J. Rohlfing. “Coupling of Task and Partner
    Model: Investigating the Intra-Individual Variability in Gaze during Human–Robot
    Explanatory Dialogue.” In <i>Proceedings of 26th ACM International Conference
    on Multimodal Interaction (ICMI 2024)</i>, 2024. <a href="https://doi.org/10.1145/3686215.3689202">https://doi.org/10.1145/3686215.3689202</a>.'
  ieee: 'A. Singh and K. J. Rohlfing, “Coupling of Task and Partner Model: Investigating
    the Intra-Individual Variability in Gaze during Human–Robot Explanatory Dialogue,”
    presented at the 26th ACM International Conference on Multimodal Interaction (ICMI
    2024), San Jose, Costa Rica, 2024, doi: <a href="https://doi.org/10.1145/3686215.3689202">10.1145/3686215.3689202</a>.'
  mla: 'Singh, Amit, and Katharina J. Rohlfing. “Coupling of Task and Partner Model:
    Investigating the Intra-Individual Variability in Gaze during Human–Robot Explanatory
    Dialogue.” <i>Proceedings of 26th ACM International Conference on Multimodal Interaction
    (ICMI 2024)</i>, 2024, doi:<a href="https://doi.org/10.1145/3686215.3689202">10.1145/3686215.3689202</a>.'
  short: 'A. Singh, K.J. Rohlfing, in: Proceedings of 26th ACM International Conference
    on Multimodal Interaction (ICMI 2024), 2024.'
conference:
  location: San Jose, Costa Rica
  name: 26th ACM International Conference on Multimodal Interaction (ICMI 2024)
date_created: 2024-10-17T09:35:32Z
date_updated: 2024-11-06T10:56:34Z
ddc:
- '410'
department:
- _id: '749'
- _id: '660'
doi: 10.1145/3686215.3689202
has_accepted_license: '1'
keyword:
- Explanation
- Scaffolding
- Eyetracking
- Partner Model
- HRI
language:
- iso: eng
project:
- _id: '115'
  grant_number: '438445824'
  name: 'TRR 318 - A05: TRR 318 - Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog
    (Teilprojekt A05)'
publication: Proceedings of 26th ACM International Conference on Multimodal Interaction
  (ICMI 2024)
status: public
title: 'Coupling of Task and Partner Model: Investigating the Intra-Individual Variability
  in Gaze during Human–Robot Explanatory Dialogue'
type: conference
user_id: '91018'
year: '2024'
...
---
_id: '53072'
abstract:
- lang: eng
  text: Negated statements require more processing effort than assertions. However,
    in certain contexts, repeated negations undergo adaptation, which over time mitigates
    the effort. Here, we ask whether negations hamper visual processing and whether
    consecutive repetitions mitigate their influence. We assessed the overall attentional
    capacity and its distribution, the relative weight, quantitatively using the formal
    Theory of Visual Attention (TVA). We employed a very simple form of negation,
    binary negations. Negated instructions, expressing the only alternative to the
    core supposition, were cognitively demanding, resulting in a loss of attentional
    capacity in three experiments. The overall attentional capacity recovered gradually
    but stagnated at a lower level than with assertions, even after many repetitions.
    Additionally, negations distributed attention equally between the target and the
    reference stimulus. Repetitions slightly increased the reference’s share of attention.
    Assertions, on the other hand, shifted the attentional weight towards the target.
    Few repetitions slightly decreased the bias towards the target; many repetitions
    increased it.
article_type: original
author:
- first_name: Ngoc Chi
  full_name: Banh, Ngoc Chi
  id: '38219'
  last_name: Banh
  orcid: 0000-0002-5946-4542
- first_name: Jan
  full_name: Tünnermann, Jan
  last_name: Tünnermann
- first_name: Katharina J.
  full_name: Rohlfing, Katharina J.
  id: '50352'
  last_name: Rohlfing
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
citation:
  ama: Banh NC, Tünnermann J, Rohlfing KJ, Scharlau I. Benefiting from Binary Negations?
    Verbal Negations Decrease Visual Attention and Balance Its Distribution. <i>Frontiers
    in Psychology</i>. 2024;15. doi:<a href="https://doi.org/10.3389/fpsyg.2024.1451309">10.3389/fpsyg.2024.1451309</a>
  apa: Banh, N. C., Tünnermann, J., Rohlfing, K. J., &#38; Scharlau, I. (2024). Benefiting
    from Binary Negations? Verbal Negations Decrease Visual Attention and Balance
    Its Distribution. <i>Frontiers in Psychology</i>, <i>15</i>. <a href="https://doi.org/10.3389/fpsyg.2024.1451309">https://doi.org/10.3389/fpsyg.2024.1451309</a>
  bibtex: '@article{Banh_Tünnermann_Rohlfing_Scharlau_2024, title={Benefiting from
    Binary Negations? Verbal Negations Decrease Visual Attention and Balance Its Distribution},
    volume={15}, DOI={<a href="https://doi.org/10.3389/fpsyg.2024.1451309">10.3389/fpsyg.2024.1451309</a>},
    journal={Frontiers in Psychology}, author={Banh, Ngoc Chi and Tünnermann, Jan
    and Rohlfing, Katharina J. and Scharlau, Ingrid}, year={2024} }'
  chicago: Banh, Ngoc Chi, Jan Tünnermann, Katharina J. Rohlfing, and Ingrid Scharlau.
    “Benefiting from Binary Negations? Verbal Negations Decrease Visual Attention
    and Balance Its Distribution.” <i>Frontiers in Psychology</i> 15 (2024). <a href="https://doi.org/10.3389/fpsyg.2024.1451309">https://doi.org/10.3389/fpsyg.2024.1451309</a>.
  ieee: 'N. C. Banh, J. Tünnermann, K. J. Rohlfing, and I. Scharlau, “Benefiting from
    Binary Negations? Verbal Negations Decrease Visual Attention and Balance Its Distribution,”
    <i>Frontiers in Psychology</i>, vol. 15, 2024, doi: <a href="https://doi.org/10.3389/fpsyg.2024.1451309">10.3389/fpsyg.2024.1451309</a>.'
  mla: Banh, Ngoc Chi, et al. “Benefiting from Binary Negations? Verbal Negations
    Decrease Visual Attention and Balance Its Distribution.” <i>Frontiers in Psychology</i>,
    vol. 15, 2024, doi:<a href="https://doi.org/10.3389/fpsyg.2024.1451309">10.3389/fpsyg.2024.1451309</a>.
  short: N.C. Banh, J. Tünnermann, K.J. Rohlfing, I. Scharlau, Frontiers in Psychology
    15 (2024).
date_created: 2024-03-27T12:16:33Z
date_updated: 2024-12-02T09:41:36Z
department:
- _id: '424'
- _id: '660'
doi: 10.3389/fpsyg.2024.1451309
intvolume: '        15'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2024.1451309/abstract
oa: '1'
project:
- _id: '115'
  grant_number: '438445824'
  name: 'TRR 318 - A05: TRR 318 - Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog
    (Teilprojekt A05)'
- _id: '52'
  name: 'PC2: Computing Resources Provided by the Paderborn Center for Parallel Computing'
publication: Frontiers in Psychology
publication_status: published
status: public
title: Benefiting from Binary Negations? Verbal Negations Decrease Visual Attention
  and Balance Its Distribution
type: journal_article
user_id: '38219'
volume: 15
year: '2024'
...
---
_id: '49516'
abstract:
- lang: eng
  text: <jats:p>In this article, we present RISE—a <jats:bold>R</jats:bold>obotics
    <jats:bold>I</jats:bold>ntegration and <jats:bold>S</jats:bold>cenario-Management
    <jats:bold>E</jats:bold>xtensible-Architecture—for designing human–robot dialogs
    and conducting <jats:italic>Human–Robot Interaction</jats:italic> (HRI) studies.
    In current HRI research, interdisciplinarity in the creation and implementation
    of interaction studies is becoming increasingly important. In addition, there
    is a lack of reproducibility of the research results. With the presented open-source
    architecture, we aim to address these two topics. Therefore, we discuss the advantages
    and disadvantages of various existing tools from different sub-fields within robotics.
    Requirements for an architecture can be derived from this overview of the literature,
    which 1) supports interdisciplinary research, 2) allows reproducibility of the
    research, and 3) is accessible to other researchers in the field of HRI. With
    our architecture, we tackle these requirements by providing a <jats:italic>Graphical
    User Interface</jats:italic> which explains the robot behavior and allows introspection
    into the current state of the dialog. Additionally, it offers controlling possibilities
    to easily conduct <jats:italic>Wizard of Oz</jats:italic> studies. To achieve
    transparency, the dialog is modeled explicitly, and the robot behavior can be
    configured. Furthermore, the modular architecture offers an interface for external
    features and sensors and is expandable to new robots and modalities.</jats:p>
article_type: original
author:
- first_name: André
  full_name: Groß, André
  last_name: Groß
- first_name: Christian
  full_name: Schütze, Christian
  last_name: Schütze
- first_name: Mara
  full_name: Brandt, Mara
  last_name: Brandt
- first_name: Britta
  full_name: Wrede, Britta
  last_name: Wrede
- first_name: Birte
  full_name: Richter, Birte
  last_name: Richter
citation:
  ama: 'Groß A, Schütze C, Brandt M, Wrede B, Richter B. RISE: an open-source architecture
    for interdisciplinary and reproducible human–robot interaction research. <i>Frontiers
    in Robotics and AI</i>. 2023;10. doi:<a href="https://doi.org/10.3389/frobt.2023.1245501">10.3389/frobt.2023.1245501</a>'
  apa: 'Groß, A., Schütze, C., Brandt, M., Wrede, B., &#38; Richter, B. (2023). RISE:
    an open-source architecture for interdisciplinary and reproducible human–robot
    interaction research. <i>Frontiers in Robotics and AI</i>, <i>10</i>. <a href="https://doi.org/10.3389/frobt.2023.1245501">https://doi.org/10.3389/frobt.2023.1245501</a>'
  bibtex: '@article{Groß_Schütze_Brandt_Wrede_Richter_2023, title={RISE: an open-source
    architecture for interdisciplinary and reproducible human–robot interaction research},
    volume={10}, DOI={<a href="https://doi.org/10.3389/frobt.2023.1245501">10.3389/frobt.2023.1245501</a>},
    journal={Frontiers in Robotics and AI}, publisher={Frontiers Media SA}, author={Groß,
    André and Schütze, Christian and Brandt, Mara and Wrede, Britta and Richter, Birte},
    year={2023} }'
  chicago: 'Groß, André, Christian Schütze, Mara Brandt, Britta Wrede, and Birte Richter.
    “RISE: An Open-Source Architecture for Interdisciplinary and Reproducible Human–Robot
    Interaction Research.” <i>Frontiers in Robotics and AI</i> 10 (2023). <a href="https://doi.org/10.3389/frobt.2023.1245501">https://doi.org/10.3389/frobt.2023.1245501</a>.'
  ieee: 'A. Groß, C. Schütze, M. Brandt, B. Wrede, and B. Richter, “RISE: an open-source
    architecture for interdisciplinary and reproducible human–robot interaction research,”
    <i>Frontiers in Robotics and AI</i>, vol. 10, 2023, doi: <a href="https://doi.org/10.3389/frobt.2023.1245501">10.3389/frobt.2023.1245501</a>.'
  mla: 'Groß, André, et al. “RISE: An Open-Source Architecture for Interdisciplinary
    and Reproducible Human–Robot Interaction Research.” <i>Frontiers in Robotics and
    AI</i>, vol. 10, Frontiers Media SA, 2023, doi:<a href="https://doi.org/10.3389/frobt.2023.1245501">10.3389/frobt.2023.1245501</a>.'
  short: A. Groß, C. Schütze, M. Brandt, B. Wrede, B. Richter, Frontiers in Robotics
    and AI 10 (2023).
date_created: 2023-12-07T09:17:09Z
date_updated: 2023-12-07T12:09:41Z
ddc:
- '000'
doi: 10.3389/frobt.2023.1245501
file:
- access_level: closed
  content_type: application/pdf
  creator: angross
  date_created: 2023-12-07T09:18:55Z
  date_updated: 2023-12-07T09:18:55Z
  file_id: '49517'
  file_name: frobt-10-1245501.pdf
  file_size: 40679118
  relation: main_file
  success: 1
file_date_updated: 2023-12-07T09:18:55Z
has_accepted_license: '1'
intvolume: '        10'
keyword:
- Artificial Intelligence
- Computer Science Applications
language:
- iso: eng
project:
- _id: '109'
  grant_number: '438445824'
  name: 'TRR 318: TRR 318 - Erklärbarkeit konstruieren'
- _id: '113'
  name: 'TRR 318 - A3: TRR 318 - Subproject A3'
- _id: '115'
  grant_number: '438445824'
  name: 'TRR 318 - A05: TRR 318 - Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog
    (Teilprojekt A05)'
publication: Frontiers in Robotics and AI
publication_identifier:
  issn:
  - 2296-9144
publication_status: published
publisher: Frontiers Media SA
status: public
title: 'RISE: an open-source architecture for interdisciplinary and reproducible human–robot
  interaction research'
type: journal_article
user_id: '93405'
volume: 10
year: '2023'
...
---
_id: '51371'
abstract:
- lang: eng
  text: <jats:p>In this paper, we investigate the effect of distractions and hesitations
    as a scaffolding strategy. Recent research points to the potential beneficial
    effects of a speaker’s hesitations on the listeners’ comprehension of utterances,
    although results from studies on this issue indicate that humans do not make strategic
    use of them. The role of hesitations and their communicative function in human-human
    interaction is a much-discussed topic in current research. To better understand
    the underlying cognitive processes, we developed a human–robot interaction (HRI)
    setup that allows the measurement of the electroencephalogram (EEG) signals of
    a human participant while interacting with a robot. We thereby address the research
    question of whether we find effects on single-trial EEG based on the distraction
    and the corresponding robot’s hesitation scaffolding strategy. To carry out the
    experiments, we leverage our LabLinking method, which enables interdisciplinary
    joint research between remote labs. This study could not have been conducted without
    LabLinking, as the two involved labs needed to combine their individual expertise
    and equipment to achieve the goal together. The results of our study indicate
    that the EEG correlates in the distracted condition are different from the baseline
    condition without distractions. Furthermore, we could differentiate the EEG correlates
    of distraction with and without a hesitation scaffolding strategy. This proof-of-concept
    study shows that LabLinking makes it possible to conduct collaborative HRI studies
    in remote laboratories and lays the first foundation for more in-depth research
    into robotic scaffolding strategies.</jats:p>
article_number: '37'
author:
- first_name: Birte
  full_name: Richter, Birte
  last_name: Richter
- first_name: Felix
  full_name: Putze, Felix
  last_name: Putze
- first_name: Gabriel
  full_name: Ivucic, Gabriel
  last_name: Ivucic
- first_name: Mara
  full_name: Brandt, Mara
  last_name: Brandt
- first_name: Christian
  full_name: Schütze, Christian
  last_name: Schütze
- first_name: Rafael
  full_name: Reisenhofer, Rafael
  last_name: Reisenhofer
- first_name: Britta
  full_name: Wrede, Britta
  last_name: Wrede
- first_name: Tanja
  full_name: Schultz, Tanja
  last_name: Schultz
citation:
  ama: 'Richter B, Putze F, Ivucic G, et al. EEG Correlates of Distractions and Hesitations
    in Human–Robot Interaction: A LabLinking Pilot Study. <i>Multimodal Technologies
    and Interaction</i>. 2023;7(4). doi:<a href="https://doi.org/10.3390/mti7040037">10.3390/mti7040037</a>'
  apa: 'Richter, B., Putze, F., Ivucic, G., Brandt, M., Schütze, C., Reisenhofer,
    R., Wrede, B., &#38; Schultz, T. (2023). EEG Correlates of Distractions and Hesitations
    in Human–Robot Interaction: A LabLinking Pilot Study. <i>Multimodal Technologies
    and Interaction</i>, <i>7</i>(4), Article 37. <a href="https://doi.org/10.3390/mti7040037">https://doi.org/10.3390/mti7040037</a>'
  bibtex: '@article{Richter_Putze_Ivucic_Brandt_Schütze_Reisenhofer_Wrede_Schultz_2023,
    title={EEG Correlates of Distractions and Hesitations in Human–Robot Interaction:
    A LabLinking Pilot Study}, volume={7}, DOI={<a href="https://doi.org/10.3390/mti7040037">10.3390/mti7040037</a>},
    number={37}, journal={Multimodal Technologies and Interaction}, publisher={MDPI
    AG}, author={Richter, Birte and Putze, Felix and Ivucic, Gabriel and Brandt, Mara
    and Schütze, Christian and Reisenhofer, Rafael and Wrede, Britta and Schultz,
    Tanja}, year={2023} }'
  chicago: 'Richter, Birte, Felix Putze, Gabriel Ivucic, Mara Brandt, Christian Schütze,
    Rafael Reisenhofer, Britta Wrede, and Tanja Schultz. “EEG Correlates of Distractions
    and Hesitations in Human–Robot Interaction: A LabLinking Pilot Study.” <i>Multimodal
    Technologies and Interaction</i> 7, no. 4 (2023). <a href="https://doi.org/10.3390/mti7040037">https://doi.org/10.3390/mti7040037</a>.'
  ieee: 'B. Richter <i>et al.</i>, “EEG Correlates of Distractions and Hesitations
    in Human–Robot Interaction: A LabLinking Pilot Study,” <i>Multimodal Technologies
    and Interaction</i>, vol. 7, no. 4, Art. no. 37, 2023, doi: <a href="https://doi.org/10.3390/mti7040037">10.3390/mti7040037</a>.'
  mla: 'Richter, Birte, et al. “EEG Correlates of Distractions and Hesitations in
    Human–Robot Interaction: A LabLinking Pilot Study.” <i>Multimodal Technologies
    and Interaction</i>, vol. 7, no. 4, 37, MDPI AG, 2023, doi:<a href="https://doi.org/10.3390/mti7040037">10.3390/mti7040037</a>.'
  short: B. Richter, F. Putze, G. Ivucic, M. Brandt, C. Schütze, R. Reisenhofer, B.
    Wrede, T. Schultz, Multimodal Technologies and Interaction 7 (2023).
date_created: 2024-02-18T10:45:53Z
date_updated: 2024-02-26T08:44:32Z
department:
- _id: '660'
doi: 10.3390/mti7040037
intvolume: '         7'
issue: '4'
keyword:
- Computer Networks and Communications
- Computer Science Applications
- Human-Computer Interaction
- Neuroscience (miscellaneous)
language:
- iso: eng
project:
- _id: '113'
  name: 'TRR 318 - A3: TRR 318 - Subproject A3'
- _id: '115'
  grant_number: '438445824'
  name: 'TRR 318 - A05: TRR 318 - Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog
    (Teilprojekt A05)'
publication: Multimodal Technologies and Interaction
publication_identifier:
  issn:
  - 2414-4088
publication_status: published
publisher: MDPI AG
status: public
title: 'EEG Correlates of Distractions and Hesitations in Human–Robot Interaction:
  A LabLinking Pilot Study'
type: journal_article
user_id: '54779'
volume: 7
year: '2023'
...
---
_id: '46283'
author:
- first_name: Ngoc Chi
  full_name: Banh, Ngoc Chi
  id: '38219'
  last_name: Banh
  orcid: 0000-0002-5946-4542
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
citation:
  ama: 'Banh NC, Scharlau I. First steps towards real-time assessment of attentional
    weights and capacity according to TVA. In: Merz S, Frings C, Leuchtenberg B, et
    al., eds. <i>Abstracts of the 65th TeaP</i>. ZPID (Leibniz Institute for Psychology);
    2023. doi:<a href="https://doi.org/10.23668/PSYCHARCHIVES.12945">10.23668/PSYCHARCHIVES.12945</a>'
  apa: Banh, N. C., &#38; Scharlau, I. (2023). First steps towards real-time assessment
    of attentional weights and capacity according to TVA. In S. Merz, C. Frings, B.
    Leuchtenberg, B. Moeller, S. Mueller, R. Neumann, B. Pastötter, L. Pingen, &#38;
    G. Schui (Eds.), <i>Abstracts of the 65th TeaP</i>. ZPID (Leibniz Institute for
    Psychology). <a href="https://doi.org/10.23668/PSYCHARCHIVES.12945">https://doi.org/10.23668/PSYCHARCHIVES.12945</a>
  bibtex: '@inproceedings{Banh_Scharlau_2023, title={First steps towards real-time
    assessment of attentional weights and capacity according to TVA}, DOI={<a href="https://doi.org/10.23668/PSYCHARCHIVES.12945">10.23668/PSYCHARCHIVES.12945</a>},
    booktitle={Abstracts of the 65th TeaP}, publisher={ZPID (Leibniz Institute for
    Psychology)}, author={Banh, Ngoc Chi and Scharlau, Ingrid}, editor={Merz, Simon
    and Frings, Christian and Leuchtenberg, Bettina and Moeller, Birte and Mueller,
    Stefanie and Neumann, Roland and Pastötter, Bernhard and Pingen, Leah and Schui,
    Gabriel}, year={2023} }'
  chicago: Banh, Ngoc Chi, and Ingrid Scharlau. “First Steps towards Real-Time Assessment
    of Attentional Weights and Capacity According to TVA.” In <i>Abstracts of the
    65th TeaP</i>, edited by Simon Merz, Christian Frings, Bettina Leuchtenberg, Birte
    Moeller, Stefanie Mueller, Roland Neumann, Bernhard Pastötter, Leah Pingen, and
    Gabriel Schui. ZPID (Leibniz Institute for Psychology), 2023. <a href="https://doi.org/10.23668/PSYCHARCHIVES.12945">https://doi.org/10.23668/PSYCHARCHIVES.12945</a>.
  ieee: 'N. C. Banh and I. Scharlau, “First steps towards real-time assessment of
    attentional weights and capacity according to TVA,” in <i>Abstracts of the 65th
    TeaP</i>, Trier, Germany, 2023, doi: <a href="https://doi.org/10.23668/PSYCHARCHIVES.12945">10.23668/PSYCHARCHIVES.12945</a>.'
  mla: Banh, Ngoc Chi, and Ingrid Scharlau. “First Steps towards Real-Time Assessment
    of Attentional Weights and Capacity According to TVA.” <i>Abstracts of the 65th
    TeaP</i>, edited by Simon Merz et al., ZPID (Leibniz Institute for Psychology),
    2023, doi:<a href="https://doi.org/10.23668/PSYCHARCHIVES.12945">10.23668/PSYCHARCHIVES.12945</a>.
  short: 'N.C. Banh, I. Scharlau, in: S. Merz, C. Frings, B. Leuchtenberg, B. Moeller,
    S. Mueller, R. Neumann, B. Pastötter, L. Pingen, G. Schui (Eds.), Abstracts of
    the 65th TeaP, ZPID (Leibniz Institute for Psychology), 2023.'
conference:
  end_date: 2023-03-29
  location: Trier, Germany
  name: Tagung experimentell arbeitender Psycholog:innen (TeaP)
  start_date: 2023-03-26
date_created: 2023-08-03T13:10:02Z
date_updated: 2024-03-27T10:41:59Z
department:
- _id: '424'
doi: 10.23668/PSYCHARCHIVES.12945
editor:
- first_name: Simon
  full_name: Merz, Simon
  last_name: Merz
- first_name: Christian
  full_name: Frings, Christian
  last_name: Frings
- first_name: Bettina
  full_name: Leuchtenberg, Bettina
  last_name: Leuchtenberg
- first_name: Birte
  full_name: Moeller, Birte
  last_name: Moeller
- first_name: Stefanie
  full_name: Mueller, Stefanie
  last_name: Mueller
- first_name: Roland
  full_name: Neumann, Roland
  last_name: Neumann
- first_name: Bernhard
  full_name: Pastötter, Bernhard
  last_name: Pastötter
- first_name: Leah
  full_name: Pingen, Leah
  last_name: Pingen
- first_name: Gabriel
  full_name: Schui, Gabriel
  last_name: Schui
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://pada.psycharchives.org/bitstream/3ec340a2-095e-42e2-b998-524856efce07
oa: '1'
project:
- _id: '115'
  grant_number: '438445824'
  name: 'TRR 318 - A05: TRR 318 - Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog
    (Teilprojekt A05)'
- _id: '52'
  name: 'PC2: Computing Resources Provided by the Paderborn Center for Parallel Computing'
publication: Abstracts of the 65th TeaP
publisher: ZPID (Leibniz Institute for Psychology)
quality_controlled: '1'
status: public
title: First steps towards real-time assessment of attentional weights and capacity
  according to TVA
type: conference_abstract
user_id: '38219'
year: '2023'
...
---
_id: '48543'
abstract:
- lang: eng
  text: Explanation has been identified as an important capability for AI-based systems,
    but research on systematic strategies for achieving understanding in interaction
    with such systems is still sparse. Negation is a linguistic strategy that is often
    used in explanations. It creates a contrast space between the affirmed and the
    negated item that enriches explaining processes with additional contextual information.
    While negation in human speech has been shown to lead to higher processing costs
    and worse task performance in terms of recall or action execution when used in
    isolation, it can decrease processing costs when used in context. So far, it has
    not been considered as a guiding strategy for explanations in human-robot interaction.
    We conducted an empirical study to investigate the use of negation as a guiding
    strategy in explanatory human-robot dialogue, in which a virtual robot explains
    tasks and possible actions to a human explainee to solve them in terms of gestures
    on a touchscreen. Our results show that negation vs. affirmation 1) increases
    processing costs measured as reaction time and 2) increases several aspects of
    task performance. While there was no significant effect of negation on the number
    of initially correctly executed gestures, we found a significantly lower number
    of attempts—measured as breaks in the finger movement data before the correct
    gesture was carried out—when being instructed through a negation. We further found
    that the gestures significantly resembled the presented prototype gesture more
    following an instruction with a negation as opposed to an affirmation. Also, the
    participants rated the benefit of contrastive vs. affirmative explanations significantly
    higher. Repeating the instructions decreased the effects of negation, yielding
    similar processing costs and task performance measures for negation and affirmation
    after several iterations. We discuss our results with respect to possible effects
    of negation on linguistic processing of explanations and limitations of our study.
article_type: original
author:
- first_name: A.
  full_name: Groß, A.
  last_name: Groß
- first_name: Amit
  full_name: Singh, Amit
  id: '91018'
  last_name: Singh
  orcid: 0000-0002-7789-1521
- first_name: Ngoc Chi
  full_name: Banh, Ngoc Chi
  id: '38219'
  last_name: Banh
  orcid: 0000-0002-5946-4542
- first_name: B.
  full_name: Richter, B.
  last_name: Richter
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
- first_name: Katharina J.
  full_name: Rohlfing, Katharina J.
  id: '50352'
  last_name: Rohlfing
- first_name: B.
  full_name: Wrede, B.
  last_name: Wrede
citation:
  ama: Groß A, Singh A, Banh NC, et al. Scaffolding the human partner by contrastive
    guidance in an explanatory human-robot dialogue. <i>Frontiers in Robotics and
    AI</i>. 2023;10. doi:<a href="https://doi.org/10.3389/frobt.2023.1236184">10.3389/frobt.2023.1236184</a>
  apa: Groß, A., Singh, A., Banh, N. C., Richter, B., Scharlau, I., Rohlfing, K. J.,
    &#38; Wrede, B. (2023). Scaffolding the human partner by contrastive guidance
    in an explanatory human-robot dialogue. <i>Frontiers in Robotics and AI</i>, <i>10</i>.
    <a href="https://doi.org/10.3389/frobt.2023.1236184">https://doi.org/10.3389/frobt.2023.1236184</a>
  bibtex: '@article{Groß_Singh_Banh_Richter_Scharlau_Rohlfing_Wrede_2023, title={Scaffolding
    the human partner by contrastive guidance in an explanatory human-robot dialogue},
    volume={10}, DOI={<a href="https://doi.org/10.3389/frobt.2023.1236184">10.3389/frobt.2023.1236184</a>},
    journal={Frontiers in Robotics and AI}, author={Groß, A. and Singh, Amit and Banh,
    Ngoc Chi and Richter, B. and Scharlau, Ingrid and Rohlfing, Katharina J. and Wrede,
    B.}, year={2023} }'
  chicago: Groß, A., Amit Singh, Ngoc Chi Banh, B. Richter, Ingrid Scharlau, Katharina
    J. Rohlfing, and B. Wrede. “Scaffolding the Human Partner by Contrastive Guidance
    in an Explanatory Human-Robot Dialogue.” <i>Frontiers in Robotics and AI</i> 10
    (2023). <a href="https://doi.org/10.3389/frobt.2023.1236184">https://doi.org/10.3389/frobt.2023.1236184</a>.
  ieee: 'A. Groß <i>et al.</i>, “Scaffolding the human partner by contrastive guidance
    in an explanatory human-robot dialogue,” <i>Frontiers in Robotics and AI</i>,
    vol. 10, 2023, doi: <a href="https://doi.org/10.3389/frobt.2023.1236184">10.3389/frobt.2023.1236184</a>.'
  mla: Groß, A., et al. “Scaffolding the Human Partner by Contrastive Guidance in
    an Explanatory Human-Robot Dialogue.” <i>Frontiers in Robotics and AI</i>, vol.
    10, 2023, doi:<a href="https://doi.org/10.3389/frobt.2023.1236184">10.3389/frobt.2023.1236184</a>.
  short: A. Groß, A. Singh, N.C. Banh, B. Richter, I. Scharlau, K.J. Rohlfing, B.
    Wrede, Frontiers in Robotics and AI 10 (2023).
date_created: 2023-10-30T09:29:16Z
date_updated: 2024-06-26T08:01:50Z
department:
- _id: '749'
doi: 10.3389/frobt.2023.1236184
funded_apc: '1'
intvolume: '        10'
keyword:
- HRI
- XAI
- negation
- understanding
- explaining
- touch interaction
- gesture
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://www.frontiersin.org/articles/10.3389/frobt.2023.1236184/full
oa: '1'
project:
- _id: '115'
  grant_number: '438445824'
  name: 'TRR 318 - A05: TRR 318 - Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog
    (Teilprojekt A05)'
publication: Frontiers in Robotics and AI
publication_status: published
quality_controlled: '1'
status: public
title: Scaffolding the human partner by contrastive guidance in an explanatory human-robot
  dialogue
type: journal_article
user_id: '38219'
volume: 10
year: '2023'
...
---
_id: '46067'
abstract:
- lang: eng
  text: '<p>The study investigates two different ways of guiding the addressee of
    an explanation (an explainee) through action demonstration: contrastive and non-contrastive.
    Their effect was tested on attention to specific action elements (goal) as well
    as on event memory. In an eye-tracking experiment, participants were shown different
    motion videos that were either contrastive or non-contrastive with respect to
    the segments of movement presentation. Given that everyday action demonstration
    is often multimodal, the stimuli were created with respect to their visual and
    verbal presentation. For visual presentation, a video combined two movements in
    a contrastive (e.g., Up-motion following a Down-motion) or non-contrastive way
    (e.g., two Up-motions following each other). For verbal presentation, each video
    was combined with a sequence of instruction descriptions in the form of negative
    (i.e., contrastive) or assertive (i.e., non-contrastive) guidance. It was found
    that a) attention to the event goal increased for this condition in the later
    time window, and b) participants’ recall of the event was facilitated when a visually
    contrastive motion was combined with a verbal contrast.</p>'
author:
- first_name: Amit
  full_name: Singh, Amit
  id: '91018'
  last_name: Singh
  orcid: 0000-0002-7789-1521
- first_name: Katharina J.
  full_name: Rohlfing, Katharina J.
  id: '50352'
  last_name: Rohlfing
citation:
  ama: 'Singh A, Rohlfing KJ. Contrastiveness in the context of action demonstration:
    an eye-tracking study on its effects on action perception and action recall. In:
    <i>Proceedings of the Annual Meeting of the Cognitive Science Society 45 (45)</i>.
    Cognitive Science Society; 2023.'
  apa: 'Singh, A., &#38; Rohlfing, K. J. (2023). Contrastiveness in the context of
    action demonstration: an eye-tracking study on its effects on action perception
    and action recall. <i>Proceedings of the Annual Meeting of the Cognitive Science
    Society 45 (45)</i>. 45th Annual Conference of the Cognitive Science Society,
    Sydney.'
  bibtex: '@inproceedings{Singh_Rohlfing_2023, place={Sydney, Australia}, title={Contrastiveness
    in the context of action demonstration: an eye-tracking study on its effects on
    action perception and action recall}, booktitle={Proceedings of the Annual Meeting
    of the Cognitive Science Society 45 (45)}, publisher={Cognitive Science Society},
    author={Singh, Amit and Rohlfing, Katharina J.}, year={2023} }'
  chicago: 'Singh, Amit, and Katharina J. Rohlfing. “Contrastiveness in the Context
    of Action Demonstration: An Eye-Tracking Study on Its Effects on Action Perception
    and Action Recall.” In <i>Proceedings of the Annual Meeting of the Cognitive Science
    Society 45 (45)</i>. Sydney, Australia: Cognitive Science Society, 2023.'
  ieee: 'A. Singh and K. J. Rohlfing, “Contrastiveness in the context of action demonstration:
    an eye-tracking study on its effects on action perception and action recall,”
    presented at the 45th Annual Conference of the Cognitive Science Society, Sydney,
    2023.'
  mla: 'Singh, Amit, and Katharina J. Rohlfing. “Contrastiveness in the Context of
    Action Demonstration: An Eye-Tracking Study on Its Effects on Action Perception
    and Action Recall.” <i>Proceedings of the Annual Meeting of the Cognitive Science
    Society 45 (45)</i>, Cognitive Science Society, 2023.'
  short: 'A. Singh, K.J. Rohlfing, in: Proceedings of the Annual Meeting of the Cognitive
    Science Society 45 (45), Cognitive Science Society, Sydney, Australia, 2023.'
conference:
  location: Sydney
  name: 45th Annual Conference of the Cognitive Science Society
date_created: 2023-07-15T12:16:42Z
date_updated: 2023-09-27T13:51:42Z
department:
- _id: '749'
- _id: '660'
keyword:
- Attention
- negation
- contrastive guidance
- eye-movements
- action understanding
- event representation
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://escholarship.org/uc/item/2w94t4cv
oa: '1'
place: Sydney, Australia
popular_science: '1'
project:
- _id: '115'
  grant_number: '438445824'
  name: 'TRR 318 - A05: TRR 318 - Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog
    (Teilprojekt A05)'
publication: Proceedings of the Annual Meeting of the Cognitive Science Society 45
  (45)
publication_status: published
publisher: Cognitive Science Society
quality_controlled: '1'
related_material:
  record:
  - id: '46067'
    relation: contains
    status: public
status: public
title: 'Contrastiveness in the context of action demonstration: an eye-tracking study
  on its effects on action perception and action recall'
type: conference
user_id: '91018'
year: '2023'
...
---
_id: '51348'
abstract:
- lang: eng
  text: <jats:title>Abstract</jats:title><jats:p>With the perspective on applications
    of AI-technology, especially data intensive deep learning approaches, the need
    for methods to control and understand such models has been recognized and gave
    rise to a new research domain labeled explainable artificial intelligence (XAI).
    In this overview paper we give an interim appraisal of what has been achieved
    so far and where there are still gaps in the research. We take an interdisciplinary
    perspective to identify challenges on XAI research and point to open questions
    with respect to the quality of the explanations regarding faithfulness and consistency
    of explanations. On the other hand we see a need regarding the interaction between
    XAI and user to allow for adaptability to specific information needs and explanatory
    dialog for informed decision making as well as the possibility to correct models
    and explanations by interaction. This endeavor requires an integrated interdisciplinary
    perspective and rigorous approaches to empirical evaluation based on psychological,
    linguistic and even sociological theories.</jats:p>
alternative_title:
- An Interdisciplinary Perspective
author:
- first_name: Ute
  full_name: Schmid, Ute
  last_name: Schmid
- first_name: Britta
  full_name: Wrede, Britta
  last_name: Wrede
citation:
  ama: Schmid U, Wrede B. What is Missing in XAI So Far? <i>KI - Künstliche Intelligenz</i>.
    2022;36(3-4):303-315. doi:<a href="https://doi.org/10.1007/s13218-022-00786-2">10.1007/s13218-022-00786-2</a>
  apa: Schmid, U., &#38; Wrede, B. (2022). What is Missing in XAI So Far? <i>KI -
    Künstliche Intelligenz</i>, <i>36</i>(3–4), 303–315. <a href="https://doi.org/10.1007/s13218-022-00786-2">https://doi.org/10.1007/s13218-022-00786-2</a>
  bibtex: '@article{Schmid_Wrede_2022, title={What is Missing in XAI So Far?}, volume={36},
    DOI={<a href="https://doi.org/10.1007/s13218-022-00786-2">10.1007/s13218-022-00786-2</a>},
    number={3–4}, journal={KI - Künstliche Intelligenz}, publisher={Springer Science
    and Business Media LLC}, author={Schmid, Ute and Wrede, Britta}, year={2022},
    pages={303–315} }'
  chicago: 'Schmid, Ute, and Britta Wrede. “What Is Missing in XAI So Far?” <i>KI
    - Künstliche Intelligenz</i> 36, no. 3–4 (2022): 303–15. <a href="https://doi.org/10.1007/s13218-022-00786-2">https://doi.org/10.1007/s13218-022-00786-2</a>.'
  ieee: 'U. Schmid and B. Wrede, “What is Missing in XAI So Far?,” <i>KI - Künstliche
    Intelligenz</i>, vol. 36, no. 3–4, pp. 303–315, 2022, doi: <a href="https://doi.org/10.1007/s13218-022-00786-2">10.1007/s13218-022-00786-2</a>.'
  mla: Schmid, Ute, and Britta Wrede. “What Is Missing in XAI So Far?” <i>KI - Künstliche
    Intelligenz</i>, vol. 36, no. 3–4, Springer Science and Business Media LLC, 2022,
    pp. 303–15, doi:<a href="https://doi.org/10.1007/s13218-022-00786-2">10.1007/s13218-022-00786-2</a>.
  short: U. Schmid, B. Wrede, KI - Künstliche Intelligenz 36 (2022) 303–315.
date_created: 2024-02-14T09:41:56Z
date_updated: 2024-02-26T08:48:49Z
department:
- _id: '660'
doi: 10.1007/s13218-022-00786-2
intvolume: '        36'
issue: 3-4
keyword:
- Artificial Intelligence
language:
- iso: eng
page: 303-315
project:
- _id: '113'
  name: 'TRR 318 - A3: TRR 318 - Subproject A3'
- _id: '115'
  grant_number: '438445824'
  name: 'TRR 318 - A05: TRR 318 - Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog
    (Teilprojekt A05)'
publication: KI - Künstliche Intelligenz
publication_identifier:
  issn:
  - 0933-1875
  - 1610-1987
publication_status: published
publisher: Springer Science and Business Media LLC
status: public
title: What is Missing in XAI So Far?
type: journal_article
user_id: '54779'
volume: 36
year: '2022'
...
---
_id: '51366'
author:
- first_name: Ute
  full_name: Schmid, Ute
  last_name: Schmid
- first_name: Britta
  full_name: Wrede, Britta
  last_name: Wrede
citation:
  ama: Schmid U, Wrede B. Explainable AI. <i>KI - Künstliche Intelligenz</i>. 2022;36(3-4):207-210.
    doi:<a href="https://doi.org/10.1007/s13218-022-00788-0">10.1007/s13218-022-00788-0</a>
  apa: Schmid, U., &#38; Wrede, B. (2022). Explainable AI. <i>KI - Künstliche Intelligenz</i>,
    <i>36</i>(3–4), 207–210. <a href="https://doi.org/10.1007/s13218-022-00788-0">https://doi.org/10.1007/s13218-022-00788-0</a>
  bibtex: '@article{Schmid_Wrede_2022, title={Explainable AI}, volume={36}, DOI={<a
    href="https://doi.org/10.1007/s13218-022-00788-0">10.1007/s13218-022-00788-0</a>},
    number={3–4}, journal={KI - Künstliche Intelligenz}, publisher={Springer Science
    and Business Media LLC}, author={Schmid, Ute and Wrede, Britta}, year={2022},
    pages={207–210} }'
  chicago: 'Schmid, Ute, and Britta Wrede. “Explainable AI.” <i>KI - Künstliche Intelligenz</i>
    36, no. 3–4 (2022): 207–10. <a href="https://doi.org/10.1007/s13218-022-00788-0">https://doi.org/10.1007/s13218-022-00788-0</a>.'
  ieee: 'U. Schmid and B. Wrede, “Explainable AI,” <i>KI - Künstliche Intelligenz</i>,
    vol. 36, no. 3–4, pp. 207–210, 2022, doi: <a href="https://doi.org/10.1007/s13218-022-00788-0">10.1007/s13218-022-00788-0</a>.'
  mla: Schmid, Ute, and Britta Wrede. “Explainable AI.” <i>KI - Künstliche Intelligenz</i>,
    vol. 36, no. 3–4, Springer Science and Business Media LLC, 2022, pp. 207–10, doi:<a
    href="https://doi.org/10.1007/s13218-022-00788-0">10.1007/s13218-022-00788-0</a>.
  short: U. Schmid, B. Wrede, KI - Künstliche Intelligenz 36 (2022) 207–210.
date_created: 2024-02-18T10:03:11Z
date_updated: 2024-02-26T08:48:00Z
department:
- _id: '660'
doi: 10.1007/s13218-022-00788-0
intvolume: '        36'
issue: 3-4
keyword:
- Artificial Intelligence
language:
- iso: eng
page: 207-210
project:
- _id: '113'
  name: 'TRR 318 - A3: TRR 318 - Subproject A3'
- _id: '115'
  grant_number: '438445824'
  name: 'TRR 318 - A05: TRR 318 - Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog
    (Teilprojekt A05)'
publication: KI - Künstliche Intelligenz
publication_identifier:
  issn:
  - 0933-1875
  - 1610-1987
publication_status: published
publisher: Springer Science and Business Media LLC
status: public
title: Explainable AI
type: journal_article
user_id: '54779'
volume: 36
year: '2022'
...
---
_id: '51344'
abstract:
- lang: eng
  text: <jats:p>Modified action demonstration—dubbed <jats:italic>motionese—</jats:italic>has
    been proposed as a way to help children recognize the structure and meaning of
    actions. However, until now, it has been investigated only in young infants. This
    brief research report presents findings from a cross-sectional study of parental
    action demonstrations to three groups of 8–11, 12–23, and 24–30-month-old children
    that applied seven motionese parameters; a second study investigated the youngest
    group of participants longitudinally to corroborate the cross-sectional results.
    Results of both studies suggested that four motionese parameters (Motion Pauses,
    Pace, Velocity, Acceleration) seem to structure the action by organizing it in
    motion pauses. Whereas these parameters persist over different ages, three other
    parameters (Demonstration Length, Roundness, and Range) occur predominantly in
    the younger group and seem to serve to organize infants' attention on the basis
    of movement. Results are discussed in terms of facilitative vs. pedagogical learning.</jats:p>
author:
- first_name: Katharina
  full_name: Rohlfing, Katharina
  id: '50352'
  last_name: Rohlfing
- first_name: Anna-Lisa
  full_name: Vollmer, Anna-Lisa
  last_name: Vollmer
- first_name: Jannik
  full_name: Fritsch, Jannik
  last_name: Fritsch
- first_name: Britta
  full_name: Wrede, Britta
  last_name: Wrede
citation:
  ama: Rohlfing K, Vollmer A-L, Fritsch J, Wrede B. Which “motionese” parameters change
    with children’s age? Disentangling attention-getting from action-structuring modifications.
    <i>Frontiers in Communication</i>. 2022;7. doi:<a href="https://doi.org/10.3389/fcomm.2022.922405">10.3389/fcomm.2022.922405</a>
  apa: Rohlfing, K., Vollmer, A.-L., Fritsch, J., &#38; Wrede, B. (2022). Which “motionese”
    parameters change with children’s age? Disentangling attention-getting from action-structuring
    modifications. <i>Frontiers in Communication</i>, <i>7</i>. <a href="https://doi.org/10.3389/fcomm.2022.922405">https://doi.org/10.3389/fcomm.2022.922405</a>
  bibtex: '@article{Rohlfing_Vollmer_Fritsch_Wrede_2022, title={Which “motionese”
    parameters change with children’s age? Disentangling attention-getting from action-structuring
    modifications}, volume={7}, DOI={<a href="https://doi.org/10.3389/fcomm.2022.922405">10.3389/fcomm.2022.922405</a>},
    journal={Frontiers in Communication}, publisher={Frontiers Media SA}, author={Rohlfing,
    Katharina and Vollmer, Anna-Lisa and Fritsch, Jannik and Wrede, Britta}, year={2022}
    }'
  chicago: Rohlfing, Katharina, Anna-Lisa Vollmer, Jannik Fritsch, and Britta Wrede.
    “Which ‘Motionese’ Parameters Change with Children’s Age? Disentangling Attention-Getting
    from Action-Structuring Modifications.” <i>Frontiers in Communication</i> 7 (2022).
    <a href="https://doi.org/10.3389/fcomm.2022.922405">https://doi.org/10.3389/fcomm.2022.922405</a>.
  ieee: 'K. Rohlfing, A.-L. Vollmer, J. Fritsch, and B. Wrede, “Which ‘motionese’
    parameters change with children’s age? Disentangling attention-getting from action-structuring
    modifications,” <i>Frontiers in Communication</i>, vol. 7, 2022, doi: <a href="https://doi.org/10.3389/fcomm.2022.922405">10.3389/fcomm.2022.922405</a>.'
  mla: Rohlfing, Katharina, et al. “Which ‘Motionese’ Parameters Change with Children’s
    Age? Disentangling Attention-Getting from Action-Structuring Modifications.” <i>Frontiers
    in Communication</i>, vol. 7, Frontiers Media SA, 2022, doi:<a href="https://doi.org/10.3389/fcomm.2022.922405">10.3389/fcomm.2022.922405</a>.
  short: K. Rohlfing, A.-L. Vollmer, J. Fritsch, B. Wrede, Frontiers in Communication
    7 (2022).
date_created: 2024-02-14T09:07:53Z
date_updated: 2024-02-26T08:53:33Z
department:
- _id: '660'
doi: 10.3389/fcomm.2022.922405
intvolume: '         7'
keyword:
- Social Sciences (miscellaneous)
- Communication
language:
- iso: eng
project:
- _id: '111'
  grant_number: '438445824'
  name: 'TRR 318 - A01: TRR 318 - Adaptives Erklären (Teilprojekt A01)'
- _id: '113'
  name: 'TRR 318 - A3: TRR 318 - Subproject A3'
- _id: '115'
  grant_number: '438445824'
  name: 'TRR 318 - A05: TRR 318 - Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog
    (Teilprojekt A05)'
publication: Frontiers in Communication
publication_identifier:
  issn:
  - 2297-900X
publication_status: published
publisher: Frontiers Media SA
status: public
title: Which “motionese” parameters change with children's age? Disentangling attention-getting
  from action-structuring modifications
type: journal_article
user_id: '54779'
volume: 7
year: '2022'
...
---
_id: '51346'
author:
- first_name: André
  full_name: Groß, André
  id: '93405'
  last_name: Groß
  orcid: 0000-0002-9593-7220
- first_name: Christian
  full_name: Schütze, Christian
  last_name: Schütze
- first_name: Britta
  full_name: Wrede, Britta
  last_name: Wrede
- first_name: Birte
  full_name: Richter, Birte
  last_name: Richter
citation:
  ama: 'Groß A, Schütze C, Wrede B, Richter B. An Architecture Supporting Configurable
    Autonomous Multimodal Joint-Attention-Therapy for Various Robotic Systems. In:
    <i>INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION</i>. ACM; 2022:154-159.
    doi:<a href="https://doi.org/10.1145/3536220.3558070">10.1145/3536220.3558070</a>'
  apa: Groß, A., Schütze, C., Wrede, B., &#38; Richter, B. (2022). An Architecture
    Supporting Configurable Autonomous Multimodal Joint-Attention-Therapy for Various
    Robotic Systems. <i>INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION</i>, 154–159.
    <a href="https://doi.org/10.1145/3536220.3558070">https://doi.org/10.1145/3536220.3558070</a>
  bibtex: '@inproceedings{Groß_Schütze_Wrede_Richter_2022, title={An Architecture
    Supporting Configurable Autonomous Multimodal Joint-Attention-Therapy for Various
    Robotic Systems}, DOI={<a href="https://doi.org/10.1145/3536220.3558070">10.1145/3536220.3558070</a>},
    booktitle={INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION}, publisher={ACM},
    author={Groß, André and Schütze, Christian and Wrede, Britta and Richter, Birte},
    year={2022}, pages={154–159} }'
  chicago: Groß, André, Christian Schütze, Britta Wrede, and Birte Richter. “An Architecture
    Supporting Configurable Autonomous Multimodal Joint-Attention-Therapy for Various
    Robotic Systems.” In <i>INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION</i>,
    154–59. ACM, 2022. <a href="https://doi.org/10.1145/3536220.3558070">https://doi.org/10.1145/3536220.3558070</a>.
  ieee: 'A. Groß, C. Schütze, B. Wrede, and B. Richter, “An Architecture Supporting
    Configurable Autonomous Multimodal Joint-Attention-Therapy for Various Robotic
    Systems,” in <i>INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION</i>, 2022,
    pp. 154–159, doi: <a href="https://doi.org/10.1145/3536220.3558070">10.1145/3536220.3558070</a>.'
  mla: Groß, André, et al. “An Architecture Supporting Configurable Autonomous Multimodal
    Joint-Attention-Therapy for Various Robotic Systems.” <i>INTERNATIONAL CONFERENCE
    ON MULTIMODAL INTERACTION</i>, ACM, 2022, pp. 154–59, doi:<a href="https://doi.org/10.1145/3536220.3558070">10.1145/3536220.3558070</a>.
  short: 'A. Groß, C. Schütze, B. Wrede, B. Richter, in: INTERNATIONAL CONFERENCE
    ON MULTIMODAL INTERACTION, ACM, 2022, pp. 154–159.'
date_created: 2024-02-14T09:28:57Z
date_updated: 2024-02-26T08:52:52Z
department:
- _id: '660'
doi: 10.1145/3536220.3558070
language:
- iso: eng
page: 154-159
project:
- _id: '115'
  grant_number: '438445824'
  name: 'TRR 318 - A05: TRR 318 - Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog
    (Teilprojekt A05)'
- _id: '113'
  name: 'TRR 318 - A3: TRR 318 - Subproject A3'
publication: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION
publication_status: published
publisher: ACM
status: public
title: An Architecture Supporting Configurable Autonomous Multimodal Joint-Attention-Therapy
  for Various Robotic Systems
type: conference
user_id: '54779'
year: '2022'
...
