---
_id: '57204'
abstract:
- lang: eng
  text: In this study on the use of gesture deixis during explanations, a sample of
    24 videorecorded dyadic interactions of a board game explanation was analyzed.
    The relation between the use of gesture deixis by different explainers and their
    interpretation of explainees' understanding was investigated. In addition, we
    describe explainers' intra-individual variations related to their interactions
    with three different explainees consecutively. While we did not find a relation
    between interpretations of explainees' complete understanding and a decrease in
    explainers' use of gesture deixis, we demonstrated that the overall use of gesture
    deixis is related to the process of interactional monitoring and the attendance
    of a different explainee.
author:
- first_name: Stefan Teodorov
  full_name: Lazarov, Stefan Teodorov
  id: '90345'
  last_name: Lazarov
  orcid: 0009-0009-0892-9483
- first_name: Angela
  full_name: Grimminger, Angela
  id: '57578'
  last_name: Grimminger
citation:
  ama: 'Lazarov ST, Grimminger A. Variations in explainers’ gesture deixis in explanations
    related to the monitoring of explainees’ understanding. In: <i>Proceedings of
    the Annual Meeting of the Cognitive Science Society</i>. Vol 46. ; 2024.'
  apa: Lazarov, S. T., &#38; Grimminger, A. (2024). Variations in explainers’ gesture
    deixis in explanations related to the monitoring of explainees’ understanding.
    <i>Proceedings of the Annual Meeting of the Cognitive Science Society</i>, <i>46</i>.
  bibtex: '@inproceedings{Lazarov_Grimminger_2024, title={Variations in explainers’
    gesture deixis in explanations related to the monitoring of explainees’ understanding},
    volume={46}, booktitle={Proceedings of the Annual Meeting of the Cognitive Science
    Society}, author={Lazarov, Stefan Teodorov and Grimminger, Angela}, year={2024}
    }'
  chicago: Lazarov, Stefan Teodorov, and Angela Grimminger. “Variations in Explainers’
    Gesture Deixis in Explanations Related to the Monitoring of Explainees’ Understanding.”
    In <i>Proceedings of the Annual Meeting of the Cognitive Science Society</i>,
    Vol. 46, 2024.
  ieee: S. T. Lazarov and A. Grimminger, “Variations in explainers’ gesture deixis
    in explanations related to the monitoring of explainees’ understanding,” in <i>Proceedings
    of the Annual Meeting of the Cognitive Science Society</i>, Rotterdam, 2024, vol.
    46.
  mla: Lazarov, Stefan Teodorov, and Angela Grimminger. “Variations in Explainers’
    Gesture Deixis in Explanations Related to the Monitoring of Explainees’ Understanding.”
    <i>Proceedings of the Annual Meeting of the Cognitive Science Society</i>, vol.
    46, 2024.
  short: 'S.T. Lazarov, A. Grimminger, in: Proceedings of the Annual Meeting of the
    Cognitive Science Society, 2024.'
conference:
  end_date: 2024-07-27
  location: Rotterdam
  name: Cognitive Science Society
  start_date: 2024-07-24
date_created: 2024-11-18T13:40:09Z
date_updated: 2024-11-18T13:40:39Z
department:
- _id: '660'
intvolume: '        46'
keyword:
- explanation
- gesture deixis
- monitoring
- understanding
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://escholarship.org/uc/item/7dz8n8tf
oa: '1'
project:
- _id: '112'
  grant_number: '438445824'
  name: 'TRR 318 - A02: TRR 318 - Verstehensprozess einer Erklärung beobachten und
    auswerten (Teilprojekt A02)'
publication: Proceedings of the Annual Meeting of the Cognitive Science Society
publication_status: published
status: public
title: Variations in explainers’ gesture deixis in explanations related to the monitoring
  of explainees’ understanding
type: conference
user_id: '90345'
volume: 46
year: '2024'
...
---
_id: '61403'
author:
- first_name: Vivien
  full_name: Lohmer, Vivien
  last_name: Lohmer
- first_name: Friederike
  full_name: Kern, Friederike
  last_name: Kern
citation:
  ama: 'Lohmer V, Kern F. The role of interactive gestures in explanatory interactions.
    In: <i>Second International Multimodal Communication Symposium (MMSYM) - Book
    of Abstract</i>. ; 2024.'
  apa: Lohmer, V., &#38; Kern, F. (2024). The role of interactive gestures in explanatory
    interactions. <i>Second International Multimodal Communication Symposium (MMSYM)
    - Book of Abstract</i>. 2nd International Multimodal Communication Symposium,
    Goethe-Universität Frankfurt, Deutschland.
  bibtex: '@inproceedings{Lohmer_Kern_2024, title={The role of interactive gestures
    in explanatory interactions}, booktitle={Second International Multimodal Communication
    Symposium (MMSYM) - Book of Abstract}, author={Lohmer, Vivien and Kern, Friederike},
    year={2024} }'
  chicago: Lohmer, Vivien, and Friederike Kern. “The Role of Interactive Gestures
    in Explanatory Interactions.” In <i>Second International Multimodal Communication
    Symposium (MMSYM) - Book of Abstract</i>, 2024.
  ieee: V. Lohmer and F. Kern, “The role of interactive gestures in explanatory interactions,”
    presented at the 2nd International Multimodal Communication Symposium, Goethe-Universität
    Frankfurt, Deutschland, 2024.
  mla: Lohmer, Vivien, and Friederike Kern. “The Role of Interactive Gestures in Explanatory
    Interactions.” <i>Second International Multimodal Communication Symposium (MMSYM)
    - Book of Abstract</i>, 2024.
  short: 'V. Lohmer, F. Kern, in: Second International Multimodal Communication Symposium
    (MMSYM) - Book of Abstract, 2024.'
conference:
  end_date: 2024-09-27
  location: Goethe-Universität Frankfurt, Deutschland
  name: 2nd International Multimodal Communication Symposium
  start_date: 2024-09-25
date_created: 2025-09-23T09:53:51Z
date_updated: 2025-09-23T10:13:59Z
keyword:
- gesture
- explanations
- conversation analysis
language:
- iso: eng
project:
- _id: '114'
  name: 'TRR 318; TP A04: Integration des technischen Modells in das Partnermodell
    bei der Erklärung von digitalen Artefakten'
publication: Second International Multimodal Communication Symposium (MMSYM) - Book
  of Abstract
publication_status: published
status: public
title: The role of interactive gestures in explanatory interactions
type: conference_abstract
user_id: '99097'
year: '2024'
...
---
_id: '43437'
abstract:
- lang: eng
  text: 'In virtual reality (VR), participants may not always have hands,
    bodies, eyes, or even voices—using VR helmets and two controllers, participants
    control an avatar through virtual worlds that do not necessarily obey familiar
    laws of physics; moreover, the avatar’s bodily characteristics may not neatly
    match our bodies in the physical world. Despite these limitations and specificities,
    humans get things done through collaboration and the creative use of the environment.
    While multiuser interactive VR is attracting greater numbers of participants,
    there are currently few attempts to analyze the in situ interaction systematically.
    This paper proposes a video-analytic detail-oriented methodological framework
    for studying virtual reality interaction. Using multimodal conversation analysis,
    the paper investigates a nonverbal, embodied, two-person interaction: two players
    in a survival game strive to gesturally resolve a misunderstanding regarding an
    in-game mechanic—however, both of their microphones are turned off for the duration
    of play. The players’ inability to resort to complex language to resolve this
    issue results in a dense sequence of back-and-forth activity involving gestures,
    object manipulation, gaze, and body work. Most crucially, timing and modified
    repetitions of previously produced actions turn out to be the key to overcome
    both technical and communicative challenges. The paper analyzes these action sequences,
    demonstrates how they generate intended outcomes, and proposes a vocabulary to
    speak about these types of interaction more generally. The findings demonstrate
    the viability of multimodal analysis of VR interaction, shed light on unique challenges
    of analyzing interaction in virtual reality, and generate broader methodological
    insights about the study of nonverbal action.'
article_type: original
author:
- first_name: Nils
  full_name: Klowait, Nils
  id: '98454'
  last_name: Klowait
  orcid: 0000-0002-7347-099X
citation:
  ama: Klowait N. On the Multimodal Resolution of a Search Sequence in Virtual Reality.
    <i>Human Behavior and Emerging Technologies</i>. 2023;2023:1-15. doi:<a href="https://doi.org/10.1155/2023/8417012">10.1155/2023/8417012</a>
  apa: Klowait, N. (2023). On the Multimodal Resolution of a Search Sequence in Virtual
    Reality. <i>Human Behavior and Emerging Technologies</i>, <i>2023</i>, 1–15. <a
    href="https://doi.org/10.1155/2023/8417012">https://doi.org/10.1155/2023/8417012</a>
  bibtex: '@article{Klowait_2023, title={On the Multimodal Resolution of a Search
    Sequence in Virtual Reality}, volume={2023}, DOI={<a href="https://doi.org/10.1155/2023/8417012">10.1155/2023/8417012</a>},
    journal={Human Behavior and Emerging Technologies}, publisher={Hindawi Limited},
    author={Klowait, Nils}, year={2023}, pages={1–15} }'
  chicago: 'Klowait, Nils. “On the Multimodal Resolution of a Search Sequence in Virtual
    Reality.” <i>Human Behavior and Emerging Technologies</i> 2023 (2023): 1–15. <a
    href="https://doi.org/10.1155/2023/8417012">https://doi.org/10.1155/2023/8417012</a>.'
  ieee: 'N. Klowait, “On the Multimodal Resolution of a Search Sequence in Virtual
    Reality,” <i>Human Behavior and Emerging Technologies</i>, vol. 2023, pp. 1–15,
    2023, doi: <a href="https://doi.org/10.1155/2023/8417012">10.1155/2023/8417012</a>.'
  mla: Klowait, Nils. “On the Multimodal Resolution of a Search Sequence in Virtual
    Reality.” <i>Human Behavior and Emerging Technologies</i>, vol. 2023, Hindawi
    Limited, 2023, pp. 1–15, doi:<a href="https://doi.org/10.1155/2023/8417012">10.1155/2023/8417012</a>.
  short: N. Klowait, Human Behavior and Emerging Technologies 2023 (2023) 1–15.
date_created: 2023-04-06T10:57:28Z
date_updated: 2024-03-26T09:40:53Z
ddc:
- '300'
department:
- _id: '9'
doi: 10.1155/2023/8417012
file:
- access_level: closed
  content_type: application/pdf
  creator: nklowait
  date_created: 2023-04-06T11:00:01Z
  date_updated: 2023-04-06T11:00:01Z
  file_id: '43438'
  file_name: Klowait_2023a.pdf
  file_size: 2877385
  relation: main_file
  success: 1
file_date_updated: 2023-04-06T11:00:01Z
funded_apc: '1'
has_accepted_license: '1'
intvolume: '      2023'
keyword:
- Human-Computer Interaction
- General Social Sciences
- Social Psychology
- Virtual Reality
- Multimodality
- Nonverbal Interaction
- Search Sequence
- Gesture
- Co-Operative Action
- Goodwin
- Ethnomethodology
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.1155/2023/8417012
oa: '1'
page: 1-15
project:
- _id: '119'
  name: 'TRR 318 - Ö: TRR 318 - Project Area Ö'
publication: Human Behavior and Emerging Technologies
publication_identifier:
  issn:
  - 2578-1863
publication_status: published
publisher: Hindawi Limited
quality_controlled: '1'
status: public
title: On the Multimodal Resolution of a Search Sequence in Virtual Reality
type: journal_article
user_id: '98454'
volume: 2023
year: '2023'
...
---
_id: '48543'
abstract:
- lang: eng
  text: Explanation has been identified as an important capability for AI-based systems,
    but research on systematic strategies for achieving understanding in interaction
    with such systems is still sparse. Negation is a linguistic strategy that is often
    used in explanations. It creates a contrast space between the affirmed and the
    negated item that enriches explaining processes with additional contextual information.
    While negation in human speech has been shown to lead to higher processing costs
    and worse task performance in terms of recall or action execution when used in
    isolation, it can decrease processing costs when used in context. So far, it has
    not been considered as a guiding strategy for explanations in human-robot interaction.
    We conducted an empirical study to investigate the use of negation as a guiding
    strategy in explanatory human-robot dialogue, in which a virtual robot explains
    tasks and possible actions to a human explainee to solve them in terms of gestures
    on a touchscreen. Our results show that negation vs. affirmation 1) increases
    processing costs measured as reaction time and 2) increases several aspects of
    task performance. While there was no significant effect of negation on the number
    of initially correctly executed gestures, we found a significantly lower number
    of attempts—measured as breaks in the finger movement data before the correct
    gesture was carried out—when being instructed through a negation. We further found
    that the gestures significantly resembled the presented prototype gesture more
    following an instruction with a negation as opposed to an affirmation. Also, the
    participants rated the benefit of contrastive vs. affirmative explanations significantly
    higher. Repeating the instructions decreased the effects of negation, yielding
    similar processing costs and task performance measures for negation and affirmation
    after several iterations. We discuss our results with respect to possible effects
    of negation on linguistic processing of explanations and limitations of our study.
article_type: original
author:
- first_name: A.
  full_name: Groß, A.
  last_name: Groß
- first_name: Amit
  full_name: Singh, Amit
  id: '91018'
  last_name: Singh
  orcid: 0000-0002-7789-1521
- first_name: Ngoc Chi
  full_name: Banh, Ngoc Chi
  id: '38219'
  last_name: Banh
  orcid: 0000-0002-5946-4542
- first_name: B.
  full_name: Richter, B.
  last_name: Richter
- first_name: Ingrid
  full_name: Scharlau, Ingrid
  id: '451'
  last_name: Scharlau
  orcid: 0000-0003-2364-9489
- first_name: Katharina J.
  full_name: Rohlfing, Katharina J.
  id: '50352'
  last_name: Rohlfing
- first_name: B.
  full_name: Wrede, B.
  last_name: Wrede
citation:
  ama: Groß A, Singh A, Banh NC, et al. Scaffolding the human partner by contrastive
    guidance in an explanatory human-robot dialogue. <i>Frontiers in Robotics and
    AI</i>. 2023;10. doi:<a href="https://doi.org/10.3389/frobt.2023.1236184">10.3389/frobt.2023.1236184</a>
  apa: Groß, A., Singh, A., Banh, N. C., Richter, B., Scharlau, I., Rohlfing, K. J.,
    &#38; Wrede, B. (2023). Scaffolding the human partner by contrastive guidance
    in an explanatory human-robot dialogue. <i>Frontiers in Robotics and AI</i>, <i>10</i>.
    <a href="https://doi.org/10.3389/frobt.2023.1236184">https://doi.org/10.3389/frobt.2023.1236184</a>
  bibtex: '@article{Groß_Singh_Banh_Richter_Scharlau_Rohlfing_Wrede_2023, title={Scaffolding
    the human partner by contrastive guidance in an explanatory human-robot dialogue},
    volume={10}, DOI={<a href="https://doi.org/10.3389/frobt.2023.1236184">10.3389/frobt.2023.1236184</a>},
    journal={Frontiers in Robotics and AI}, author={Groß, A. and Singh, Amit and Banh,
    Ngoc Chi and Richter, B. and Scharlau, Ingrid and Rohlfing, Katharina J. and Wrede,
    B.}, year={2023} }'
  chicago: Groß, A., Amit Singh, Ngoc Chi Banh, B. Richter, Ingrid Scharlau, Katharina
    J. Rohlfing, and B. Wrede. “Scaffolding the Human Partner by Contrastive Guidance
    in an Explanatory Human-Robot Dialogue.” <i>Frontiers in Robotics and AI</i> 10
    (2023). <a href="https://doi.org/10.3389/frobt.2023.1236184">https://doi.org/10.3389/frobt.2023.1236184</a>.
  ieee: 'A. Groß <i>et al.</i>, “Scaffolding the human partner by contrastive guidance
    in an explanatory human-robot dialogue,” <i>Frontiers in Robotics and AI</i>,
    vol. 10, 2023, doi: <a href="https://doi.org/10.3389/frobt.2023.1236184">10.3389/frobt.2023.1236184</a>.'
  mla: Groß, A., et al. “Scaffolding the Human Partner by Contrastive Guidance in
    an Explanatory Human-Robot Dialogue.” <i>Frontiers in Robotics and AI</i>, vol.
    10, 2023, doi:<a href="https://doi.org/10.3389/frobt.2023.1236184">10.3389/frobt.2023.1236184</a>.
  short: A. Groß, A. Singh, N.C. Banh, B. Richter, I. Scharlau, K.J. Rohlfing, B.
    Wrede, Frontiers in Robotics and AI 10 (2023).
date_created: 2023-10-30T09:29:16Z
date_updated: 2024-06-26T08:01:50Z
department:
- _id: '749'
doi: 10.3389/frobt.2023.1236184
funded_apc: '1'
intvolume: '        10'
keyword:
- HRI
- XAI
- negation
- understanding
- explaining
- touch interaction
- gesture
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://www.frontiersin.org/articles/10.3389/frobt.2023.1236184/full
oa: '1'
project:
- _id: '115'
  grant_number: '438445824'
  name: 'TRR 318 - A05: TRR 318 - Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog
    (Teilprojekt A05)'
publication: Frontiers in Robotics and AI
publication_status: published
quality_controlled: '1'
status: public
title: Scaffolding the human partner by contrastive guidance in an explanatory human-robot
  dialogue
type: journal_article
user_id: '38219'
volume: 10
year: '2023'
...
---
_id: '61402'
author:
- first_name: Vivien
  full_name: Lohmer, Vivien
  last_name: Lohmer
- first_name: Lutz
  full_name: Terfloth, Lutz
  last_name: Terfloth
- first_name: Friederike
  full_name: Kern, Friederike
  last_name: Kern
citation:
  ama: 'Lohmer V, Terfloth L, Kern F. Explaining the Technical Artifact Quarto!: How
    Gestures are used in Everyday Explanations. In: <i>First International Multimodal
    Communication Symposium - Book of Abstract</i>. ; 2023.'
  apa: 'Lohmer, V., Terfloth, L., &#38; Kern, F. (2023). Explaining the Technical
    Artifact Quarto!: How Gestures are used in Everyday Explanations. <i>First International
    Multimodal Communication Symposium - Book of Abstract</i>. 1st International Multimodal
    Communication Symposium, Universität Pompeu Fabra, Barcelona.'
  bibtex: '@inproceedings{Lohmer_Terfloth_Kern_2023, title={Explaining the Technical
    Artifact Quarto!: How Gestures are used in Everyday Explanations}, booktitle={First
    International Multimodal Communication Symposium - Book of Abstract}, author={Lohmer,
    Vivien and Terfloth, Lutz and Kern, Friederike}, year={2023} }'
  chicago: 'Lohmer, Vivien, Lutz Terfloth, and Friederike Kern. “Explaining the Technical
    Artifact Quarto!: How Gestures Are Used in Everyday Explanations.” In <i>First
    International Multimodal Communication Symposium - Book of Abstract</i>, 2023.'
  ieee: 'V. Lohmer, L. Terfloth, and F. Kern, “Explaining the Technical Artifact Quarto!:
    How Gestures are used in Everyday Explanations,” presented at the 1st International
    Multimodal Communication Symposium, Universität Pompeu Fabra, Barcelona, 2023.'
  mla: 'Lohmer, Vivien, et al. “Explaining the Technical Artifact Quarto!: How Gestures
    Are Used in Everyday Explanations.” <i>First International Multimodal Communication
    Symposium - Book of Abstract</i>, 2023.'
  short: 'V. Lohmer, L. Terfloth, F. Kern, in: First International Multimodal Communication
    Symposium - Book of Abstract, 2023.'
conference:
  end_date: 2023-04-28
  location: Universität Pompeu Fabra, Barcelona
  name: 1st International Multimodal Communication Symposium
  start_date: 2023-04-26
date_created: 2025-09-23T09:50:20Z
date_updated: 2025-09-23T10:13:42Z
keyword:
- gesture
- dual nature
- explanations
- architecture
- relevance
language:
- iso: eng
project:
- _id: '114'
  name: 'TRR 318; TP A04: Integration des technischen Modells in das Partnermodell
    bei der Erklärung von digitalen Artefakten'
publication: First International Multimodal Communication Symposium - Book of Abstract
publication_status: published
status: public
title: 'Explaining the Technical Artifact Quarto!: How Gestures are used in Everyday
  Explanations'
type: conference_abstract
user_id: '99097'
year: '2023'
...
---
_id: '17557'
abstract:
- lang: eng
  text: 'Previous work by [1] studied gesture-speech interaction in adults. [1] focussed
    on temporal and semantic coordination of gesture and speech and found that while
    adult speech is mostly coordinated (or redundant) with gestures, semantic coordination
    increases the temporal synchrony. These observations do not necessarily hold for
    children (in particular with respect to iconic gestures, see [2]), where the speech
    and gesture systems are still under development. We studied the semantic and temporal
    coordination of speech and gesture in 4-year old children using a corpus of 40
    children producing action descriptions in task oriented dialogues. In particular,
    we examined what kinds of information are transmitted verbally vs. non-verbally
    and how they are related. To account for this, we extended the semantic features
    (SFs) developed in [3] for object descriptions in order to include the semantics
    of actions. We coded the SFs on the children’s speech and gestures separately
    using video data. In our presentation, we will focus on the quantitative distribution
    of SFs across gesture and speech. Our results indicate that speech and gestures
    of 4-year olds are less integrated than those of the adults, although there is
    a large variability among the children. We will discuss the results with respect
    to the cognitive processes (e.g., visual memory, language) underlying children’s
    abilities at this stage of development. Our work paves the way for the cognitive
    architecture of speech-gesture interaction in preschoolers which to our knowledge
    is missing so far. '
author:
- first_name: Olga
  full_name: Abramov, Olga
  last_name: Abramov
- first_name: Stefan
  full_name: Kopp, Stefan
  last_name: Kopp
- first_name: Anne
  full_name: Nemeth, Anne
  last_name: Nemeth
- first_name: Friederike
  full_name: Kern, Friederike
  last_name: Kern
- first_name: Ulrich
  full_name: Mertens, Ulrich
  last_name: Mertens
- first_name: Katharina
  full_name: Rohlfing, Katharina
  id: '50352'
  last_name: Rohlfing
citation:
  ama: 'Abramov O, Kopp S, Nemeth A, Kern F, Mertens U, Rohlfing K. Towards a Computational
    Model of Child Gesture-Speech Production. In: <i>KOGWIS2018: Computational Approaches
    to Cognitive Science</i>. ; 2018.'
  apa: 'Abramov, O., Kopp, S., Nemeth, A., Kern, F., Mertens, U., &#38; Rohlfing,
    K. (2018). Towards a Computational Model of Child Gesture-Speech Production. <i>KOGWIS2018:
    Computational Approaches to Cognitive Science</i>.'
  bibtex: '@inproceedings{Abramov_Kopp_Nemeth_Kern_Mertens_Rohlfing_2018, title={Towards
    a Computational Model of Child Gesture-Speech Production}, booktitle={KOGWIS2018:
    Computational Approaches to Cognitive Science}, author={Abramov, Olga and Kopp,
    Stefan and Nemeth, Anne and Kern, Friederike and Mertens, Ulrich and Rohlfing,
    Katharina}, year={2018} }'
  chicago: 'Abramov, Olga, Stefan Kopp, Anne Nemeth, Friederike Kern, Ulrich Mertens,
    and Katharina Rohlfing. “Towards a Computational Model of Child Gesture-Speech
    Production.” In <i>KOGWIS2018: Computational Approaches to Cognitive Science</i>,
    2018.'
  ieee: O. Abramov, S. Kopp, A. Nemeth, F. Kern, U. Mertens, and K. Rohlfing, “Towards
    a Computational Model of Child Gesture-Speech Production,” 2018.
  mla: 'Abramov, Olga, et al. “Towards a Computational Model of Child Gesture-Speech
    Production.” <i>KOGWIS2018: Computational Approaches to Cognitive Science</i>,
    2018.'
  short: 'O. Abramov, S. Kopp, A. Nemeth, F. Kern, U. Mertens, K. Rohlfing, in: KOGWIS2018:
    Computational Approaches to Cognitive Science, 2018.'
date_created: 2020-08-03T11:00:54Z
date_updated: 2023-02-01T12:50:21Z
department:
- _id: '749'
keyword:
- Speech-gesture integration
- semantic features
language:
- iso: eng
publication: 'KOGWIS2018: Computational Approaches to Cognitive Science'
status: public
title: Towards a Computational Model of Child Gesture-Speech Production
type: conference
user_id: '14931'
year: '2018'
...
---
_id: '17184'
abstract:
- lang: eng
  text: There is ongoing discussion on the function of the early production of gestures
    with regard to whether they reduce children's cognitive demands and free their
    capacity to perform other tasks (e.g., Goldin-Meadow & Wagner, 2005) or whether
    young children point in order to share their interest or to elicit information
    from their caregivers (e.g., Begus & Southgate, 2012; Liszkowski, Carpenter, Henning,
    Striano & Tomasello, 2004). The different assumptions lead to diverse predictions
    regarding infants' gestural or multimodal behavior in recurring situations, in
    which some objects are familiar and others are unfamiliar. To examine these different
    predictions, we observed 14 children aged between 14 and 16 months biweekly in
    a semi-experimental situation with a caregiver and explored how children's verbal
    and gestural behaviors change as a function of their familiarization with objects.
    We split the children into two groups based on their reported vocabulary size
    at 21 months of age (larger vs. smaller vocabulary). We found that children with
    a larger vocabulary at 21 months had an increase in their pointing with words
    toward unfamiliar objects as well as in their total amount of words, whereas for
    children with smaller vocabularies we did not find differences in relation to
    their familiarization with objects. We discuss these findings in terms of a social-pragmatic
    use of pointing gestures.
author:
- first_name: Angela
  full_name: Grimminger, Angela
  id: '57578'
  last_name: Grimminger
- first_name: Carina
  full_name: Lüke, Carina
  last_name: Lüke
- first_name: Ute
  full_name: Ritterfeld, Ute
  last_name: Ritterfeld
- first_name: Ulf
  full_name: Liszkowski, Ulf
  last_name: Liszkowski
- first_name: Katharina
  full_name: Rohlfing, Katharina
  id: '50352'
  last_name: Rohlfing
citation:
  ama: Grimminger A, Lüke C, Ritterfeld U, Liszkowski U, Rohlfing K. Effekte von Objekt-Familiarisierung
    auf die frühe gestische Kommunikation. Individuelle Unterschiede in Hinblick auf
    den späteren Wortschatz. <i>Frühe Bildung</i>. 2016;5(2):91-97. doi:<a href="https://doi.org/10.1026/2191-9186/a000257">10.1026/2191-9186/a000257</a>
  apa: Grimminger, A., Lüke, C., Ritterfeld, U., Liszkowski, U., &#38; Rohlfing, K.
    (2016). Effekte von Objekt-Familiarisierung auf die frühe gestische Kommunikation.
    Individuelle Unterschiede in Hinblick auf den späteren Wortschatz. <i>Frühe Bildung</i>,
    <i>5</i>(2), 91–97. <a href="https://doi.org/10.1026/2191-9186/a000257">https://doi.org/10.1026/2191-9186/a000257</a>
  bibtex: '@article{Grimminger_Lüke_Ritterfeld_Liszkowski_Rohlfing_2016, title={Effekte
    von Objekt-Familiarisierung auf die frühe gestische Kommunikation. Individuelle
    Unterschiede in Hinblick auf den späteren Wortschatz}, volume={5}, DOI={<a href="https://doi.org/10.1026/2191-9186/a000257">10.1026/2191-9186/a000257</a>},
    number={2}, journal={Frühe Bildung}, publisher={Hogrefe &#38; Huber Publishers},
    author={Grimminger, Angela and Lüke, Carina and Ritterfeld, Ute and Liszkowski,
    Ulf and Rohlfing, Katharina}, year={2016}, pages={91–97} }'
  chicago: 'Grimminger, Angela, Carina Lüke, Ute Ritterfeld, Ulf Liszkowski, and Katharina
    Rohlfing. “Effekte von Objekt-Familiarisierung Auf Die Frühe Gestische Kommunikation.
    Individuelle Unterschiede in Hinblick Auf Den Späteren Wortschatz.” <i>Frühe Bildung</i>
    5, no. 2 (2016): 91–97. <a href="https://doi.org/10.1026/2191-9186/a000257">https://doi.org/10.1026/2191-9186/a000257</a>.'
  ieee: 'A. Grimminger, C. Lüke, U. Ritterfeld, U. Liszkowski, and K. Rohlfing, “Effekte
    von Objekt-Familiarisierung auf die frühe gestische Kommunikation. Individuelle
    Unterschiede in Hinblick auf den späteren Wortschatz,” <i>Frühe Bildung</i>, vol.
    5, no. 2, pp. 91–97, 2016, doi: <a href="https://doi.org/10.1026/2191-9186/a000257">10.1026/2191-9186/a000257</a>.'
  mla: Grimminger, Angela, et al. “Effekte von Objekt-Familiarisierung Auf Die Frühe
    Gestische Kommunikation. Individuelle Unterschiede in Hinblick Auf Den Späteren
    Wortschatz.” <i>Frühe Bildung</i>, vol. 5, no. 2, Hogrefe &#38; Huber Publishers,
    2016, pp. 91–97, doi:<a href="https://doi.org/10.1026/2191-9186/a000257">10.1026/2191-9186/a000257</a>.
  short: A. Grimminger, C. Lüke, U. Ritterfeld, U. Liszkowski, K. Rohlfing, Frühe
    Bildung 5 (2016) 91–97.
date_created: 2020-06-24T13:01:00Z
date_updated: 2023-02-01T16:05:30Z
department:
- _id: '749'
doi: 10.1026/2191-9186/a000257
intvolume: '5'
issue: '2'
keyword:
- gesture
- pointing
- familiarity
- individual differences
language:
- iso: eng
page: 91-97
publication: Frühe Bildung
publication_identifier:
  issn:
  - 2191-9194
publisher: Hogrefe & Huber Publishers
status: public
title: Effekte von Objekt-Familiarisierung auf die frühe gestische Kommunikation.
  Individuelle Unterschiede in Hinblick auf den späteren Wortschatz
type: journal_article
user_id: '14931'
volume: 5
year: '2016'
...
---
_id: '17200'
abstract:
- lang: eng
  text: This research investigated infants’ online perception of give-me gestures
    during observation of a social interaction. In the first experiment, goal-directed
    eye movements of 12-month-olds were recorded as they observed a give-and-take
    interaction in which an object is passed from one individual to another. Infants’
    gaze shifts from the passing hand to the receiving hand were significantly faster
    when the receiving hand formed a give-me gesture relative to when it was presented
    as an inverted hand shape. Experiment 2 revealed that infants’ goal-directed gaze
    shifts were not based on different affordances of the two receiving hands. Two
    additional control experiments further demonstrated that differences in infants’
    online gaze behavior were not mediated by an attentional preference for the give-me
    gesture. Together, our findings provide evidence that properties of social action
    goals influence infants’ online gaze during action observation. The current studies
    demonstrate that infants have expectations about well-formed object transfer actions
    between social agents. We suggest that 12-month-olds are sensitive to social goals
    within the context of give-and-take interactions while observing from a third-party
    perspective.
author:
- first_name: Claudia
  full_name: Elsner, Claudia
  last_name: Elsner
- first_name: Marta
  full_name: Bakker, Marta
  last_name: Bakker
- first_name: Katharina
  full_name: Rohlfing, Katharina
  id: '50352'
  last_name: Rohlfing
- first_name: Gustaf
  full_name: Gredebäck, Gustaf
  last_name: Gredebäck
citation:
  ama: Elsner C, Bakker M, Rohlfing K, Gredebäck G. Infants’ online perception of
    give-and-take interactions. <i>Journal of Experimental Child Psychology</i>. 2014;126:280-294.
    doi:<a href="https://doi.org/10.1016/j.jecp.2014.05.007">10.1016/j.jecp.2014.05.007</a>
  apa: Elsner, C., Bakker, M., Rohlfing, K., &#38; Gredebäck, G. (2014). Infants’
    online perception of give-and-take interactions. <i>Journal of Experimental Child
    Psychology</i>, <i>126</i>, 280–294. <a href="https://doi.org/10.1016/j.jecp.2014.05.007">https://doi.org/10.1016/j.jecp.2014.05.007</a>
  bibtex: '@article{Elsner_Bakker_Rohlfing_Gredebäck_2014, title={Infants’ online
    perception of give-and-take interactions}, volume={126}, DOI={<a href="https://doi.org/10.1016/j.jecp.2014.05.007">10.1016/j.jecp.2014.05.007</a>},
    journal={Journal of Experimental Child Psychology}, publisher={Elsevier BV}, author={Elsner,
    Claudia and Bakker, Marta and Rohlfing, Katharina and Gredebäck, Gustaf}, year={2014},
    pages={280–294} }'
  chicago: 'Elsner, Claudia, Marta Bakker, Katharina Rohlfing, and Gustaf Gredebäck.
    “Infants’ Online Perception of Give-and-Take Interactions.” <i>Journal of Experimental
    Child Psychology</i> 126 (2014): 280–94. <a href="https://doi.org/10.1016/j.jecp.2014.05.007">https://doi.org/10.1016/j.jecp.2014.05.007</a>.'
  ieee: 'C. Elsner, M. Bakker, K. Rohlfing, and G. Gredebäck, “Infants’ online perception
    of give-and-take interactions,” <i>Journal of Experimental Child Psychology</i>,
    vol. 126, pp. 280–294, 2014, doi: <a href="https://doi.org/10.1016/j.jecp.2014.05.007">10.1016/j.jecp.2014.05.007</a>.'
  mla: Elsner, Claudia, et al. “Infants’ Online Perception of Give-and-Take Interactions.”
    <i>Journal of Experimental Child Psychology</i>, vol. 126, Elsevier BV, 2014,
    pp. 280–94, doi:<a href="https://doi.org/10.1016/j.jecp.2014.05.007">10.1016/j.jecp.2014.05.007</a>.
  short: C. Elsner, M. Bakker, K. Rohlfing, G. Gredebäck, Journal of Experimental
    Child Psychology 126 (2014) 280–294.
date_created: 2020-06-24T13:01:19Z
date_updated: 2023-02-01T16:11:16Z
department:
- _id: '749'
doi: 10.1016/j.jecp.2014.05.007
intvolume: '126'
keyword:
- Give-me gesture
- Infant
- Anticipation
- Eye movement
- Gesture
- Social interaction
language:
- iso: eng
page: 280-294
publication: Journal of Experimental Child Psychology
publication_identifier:
  issn:
  - 0022-0965
publisher: Elsevier BV
status: public
title: Infants' online perception of give-and-take interactions
type: journal_article
user_id: '14931'
volume: 126
year: '2014'
...
---
_id: '17259'
abstract:
- lang: eng
  text: Learning is a social endeavor in which the learner generally receives support
    from his/her social partner(s). In developmental research, even though tutors’/adults’
    behavior modifications in their speech, gestures and motions have been extensively
    studied, studies barely consider the recipient’s (i.e. the child’s) perspective
    in the analysis of the adult’s presentation. In addition, the variability in parental
    behavior, i.e. the fact that not every parent modifies her/his behavior in the
    same way, has received less fine-grained analysis. In contrast, in this paper, we
    adopt an interactional perspective, investigating the loop between the tutor’s and
    the learner’s actions. With this approach, we aim both at discovering the levels and
    features of variability and at achieving a better understanding of how they come
    about within the course of the interaction. For our analysis, we used a combination
    of (1) qualitative investigation derived from ethnomethodological Conversation
    Analysis (CA), (2) semi-automatic computational 2D hand tracking and (3) a mathematically
    based visualization of the data. Our analysis reveals that tutors not only shape
    their demonstrations differently with regard to the intended recipient per se
    (adult-directed vs. child-directed), but most importantly that the learner’s feedback
    during the presentation is consequential for the concrete ways in which the presentation
    is carried out.
author:
- first_name: Karola
  full_name: Pitsch, Karola
  last_name: Pitsch
- first_name: Anna-Lisa
  full_name: Vollmer, Anna-Lisa
  last_name: Vollmer
- first_name: Jannik
  full_name: Fritsch, Jannik
  last_name: Fritsch
- first_name: Britta
  full_name: Wrede, Britta
  last_name: Wrede
- first_name: Katharina
  full_name: Rohlfing, Katharina
  id: '50352'
  last_name: Rohlfing
- first_name: Gerhard
  full_name: Sagerer, Gerhard
  last_name: Sagerer
citation:
  ama: 'Pitsch K, Vollmer A-L, Fritsch J, Wrede B, Rohlfing K, Sagerer G. On the loop
    of action modification and the recipient’s gaze in adult-child interaction. In:
    <i>Gesture and Speech in Interaction</i>. ; 2009.'
  apa: Pitsch, K., Vollmer, A.-L., Fritsch, J., Wrede, B., Rohlfing, K., &#38; Sagerer,
    G. (2009). On the loop of action modification and the recipient’s gaze in adult-child
    interaction. <i>Gesture and Speech in Interaction</i>.
  bibtex: '@inproceedings{Pitsch_Vollmer_Fritsch_Wrede_Rohlfing_Sagerer_2009, title={On
    the loop of action modification and the recipient’s gaze in adult-child interaction},
    booktitle={Gesture and Speech in Interaction}, author={Pitsch, Karola and Vollmer,
    Anna-Lisa and Fritsch, Jannik and Wrede, Britta and Rohlfing, Katharina and Sagerer,
    Gerhard}, year={2009} }'
  chicago: Pitsch, Karola, Anna-Lisa Vollmer, Jannik Fritsch, Britta Wrede, Katharina
    Rohlfing, and Gerhard Sagerer. “On the Loop of Action Modification and the Recipient’s
    Gaze in Adult-Child Interaction.” In <i>Gesture and Speech in Interaction</i>,
    2009.
  ieee: K. Pitsch, A.-L. Vollmer, J. Fritsch, B. Wrede, K. Rohlfing, and G. Sagerer,
    “On the loop of action modification and the recipient’s gaze in adult-child interaction,”
    2009.
  mla: Pitsch, Karola, et al. “On the Loop of Action Modification and the Recipient’s
    Gaze in Adult-Child Interaction.” <i>Gesture and Speech in Interaction</i>, 2009.
  short: 'K. Pitsch, A.-L. Vollmer, J. Fritsch, B. Wrede, K. Rohlfing, G. Sagerer,
    in: Gesture and Speech in Interaction, 2009.'
date_created: 2020-06-24T13:02:27Z
date_updated: 2023-02-01T13:02:31Z
department:
- _id: '749'
keyword:
- gaze
- gesture
- Multimodal
- adult-child interaction
language:
- iso: eng
publication: Gesture and Speech in Interaction
status: public
title: On the loop of action modification and the recipient's gaze in adult-child
  interaction
type: conference
user_id: '14931'
year: '2009'
...
---
_id: '17278'
abstract:
- lang: eng
  text: This paper investigates the influence of feedback provided by an autonomous
    robot (BIRON) on users’ discursive behavior. A user study is described during
    which users show objects to the robot. The results of the experiment indicate
    that the robot’s verbal feedback utterances cause the humans to adapt their own
    way of speaking. The changes in users’ verbal behavior are due to their beliefs
    about the robot’s knowledge and abilities. In this paper, these beliefs are identified
    and grouped. Moreover, the data implies variations in user behavior regarding gestures.
    Unlike speech, the robot was not able to give feedback with gestures. Due to the
    lack of feedback, users did not seem to have a consistent mental representation
    of the robot’s abilities to recognize gestures. As a result, changes between different
    gestures are interpreted to be unconscious variations accompanying speech.
author:
- first_name: Manja
  full_name: Lohse, Manja
  last_name: Lohse
- first_name: Katharina
  full_name: Rohlfing, Katharina
  id: '50352'
  last_name: Rohlfing
- first_name: Britta
  full_name: Wrede, Britta
  last_name: Wrede
- first_name: Gerhard
  full_name: Sagerer, Gerhard
  last_name: Sagerer
citation:
  ama: 'Lohse M, Rohlfing K, Wrede B, Sagerer G. “Try something else!” — When users
    change their discursive behavior in human-robot interaction. In: ; 2008:3481-3486.
    doi:<a href="https://doi.org/10.1109/ROBOT.2008.4543743">10.1109/ROBOT.2008.4543743</a>'
  apa: Lohse, M., Rohlfing, K., Wrede, B., &#38; Sagerer, G. (2008). <i>“Try something
    else!” — When users change their discursive behavior in human-robot interaction</i>.
    3481–3486. <a href="https://doi.org/10.1109/ROBOT.2008.4543743">https://doi.org/10.1109/ROBOT.2008.4543743</a>
  bibtex: '@inproceedings{Lohse_Rohlfing_Wrede_Sagerer_2008, title={“Try something
    else!” — When users change their discursive behavior in human-robot interaction},
    DOI={<a href="https://doi.org/10.1109/ROBOT.2008.4543743">10.1109/ROBOT.2008.4543743</a>},
    author={Lohse, Manja and Rohlfing, Katharina and Wrede, Britta and Sagerer, Gerhard},
    year={2008}, pages={3481–3486} }'
  chicago: Lohse, Manja, Katharina Rohlfing, Britta Wrede, and Gerhard Sagerer. “‘Try
    Something Else!’ — When Users Change Their Discursive Behavior in Human-Robot
    Interaction,” 3481–86, 2008. <a href="https://doi.org/10.1109/ROBOT.2008.4543743">https://doi.org/10.1109/ROBOT.2008.4543743</a>.
  ieee: 'M. Lohse, K. Rohlfing, B. Wrede, and G. Sagerer, “‘Try something else!’ —
    When users change their discursive behavior in human-robot interaction,” 2008,
    pp. 3481–3486, doi: <a href="https://doi.org/10.1109/ROBOT.2008.4543743">10.1109/ROBOT.2008.4543743</a>.'
  mla: Lohse, Manja, et al. <i>“Try Something Else!” — When Users Change Their Discursive
    Behavior in Human-Robot Interaction</i>. 2008, pp. 3481–86, doi:<a href="https://doi.org/10.1109/ROBOT.2008.4543743">10.1109/ROBOT.2008.4543743</a>.
  short: 'M. Lohse, K. Rohlfing, B. Wrede, G. Sagerer, in: 2008, pp. 3481–3486.'
date_created: 2020-06-24T13:02:49Z
date_updated: 2023-02-01T13:08:20Z
department:
- _id: '749'
doi: 10.1109/ROBOT.2008.4543743
keyword:
- discursive behavior
- autonomous robot
- BIRON
- man-machine systems
- robot abilities
- robot knowledge
- user gestures
- robot verbal feedback utterance
- speech processing
- user verbal behavior
- service robots
- human-robot interaction
- human computer interaction
- gesture recognition
language:
- iso: eng
page: 3481-3486
publication_identifier:
  issn:
  - 1050-4729
status: public
title: “Try something else!” — When users change their discursive behavior in human-robot
  interaction
type: conference
user_id: '14931'
year: '2008'
...
