---
_id: '48543'
abstract:
- lang: eng
text: Explanation has been identified as an important capability for AI-based systems,
but research on systematic strategies for achieving understanding in interaction
with such systems is still sparse. Negation is a linguistic strategy that is often
used in explanations. It creates a contrast space between the affirmed and the
negated item that enriches explaining processes with additional contextual information.
While negation in human speech has been shown to lead to higher processing costs
and worse task performance in terms of recall or action execution when used in
isolation, it can decrease processing costs when used in context. So far, it has
not been considered as a guiding strategy for explanations in human-robot interaction.
We conducted an empirical study to investigate the use of negation as a guiding
strategy in explanatory human-robot dialogue, in which a virtual robot explains
tasks, and the possible actions to solve them, to a human explainee in the form of
gestures on a touchscreen. Our results show that negation vs. affirmation 1) increases
processing costs measured as reaction time and 2) increases several aspects of
task performance. While there was no significant effect of negation on the number
of initially correctly executed gestures, we found a significantly lower number
of attempts—measured as breaks in the finger movement data before the correct
gesture was carried out—when participants were instructed through a negation. We further found
that the gestures significantly resembled the presented prototype gesture more
following an instruction with a negation as opposed to an affirmation. Also, the
participants rated the benefit of contrastive vs. affirmative explanations significantly
higher. Repeating the instructions decreased the effects of negation, yielding
similar processing costs and task performance measures for negation and affirmation
after several iterations. We discuss our results with respect to possible effects
of negation on linguistic processing of explanations and limitations of our study.
article_type: original
author:
- first_name: A.
full_name: Groß, A.
last_name: Groß
- first_name: Amit
full_name: Singh, Amit
id: '91018'
last_name: Singh
orcid: 0000-0002-7789-1521
- first_name: Ngoc Chi
full_name: Banh, Ngoc Chi
id: '38219'
last_name: Banh
orcid: 0000-0002-5946-4542
- first_name: B.
full_name: Richter, B.
last_name: Richter
- first_name: Ingrid
full_name: Scharlau, Ingrid
id: '451'
last_name: Scharlau
orcid: 0000-0003-2364-9489
- first_name: Katharina J.
full_name: Rohlfing, Katharina J.
id: '50352'
last_name: Rohlfing
- first_name: B.
full_name: Wrede, B.
last_name: Wrede
citation:
ama: Groß A, Singh A, Banh NC, et al. Scaffolding the human partner by contrastive
guidance in an explanatory human-robot dialogue. Frontiers in Robotics and
AI. 2023;10. doi:10.3389/frobt.2023.1236184
apa: Groß, A., Singh, A., Banh, N. C., Richter, B., Scharlau, I., Rohlfing, K. J.,
& Wrede, B. (2023). Scaffolding the human partner by contrastive guidance
in an explanatory human-robot dialogue. Frontiers in Robotics and AI, 10.
https://doi.org/10.3389/frobt.2023.1236184
bibtex: '@article{Groß_Singh_Banh_Richter_Scharlau_Rohlfing_Wrede_2023, title={Scaffolding
the human partner by contrastive guidance in an explanatory human-robot dialogue},
volume={10}, DOI={10.3389/frobt.2023.1236184},
journal={Frontiers in Robotics and AI}, author={Groß, A. and Singh, Amit and Banh,
Ngoc Chi and Richter, B. and Scharlau, Ingrid and Rohlfing, Katharina J. and Wrede,
B.}, year={2023} }'
chicago: Groß, A., Amit Singh, Ngoc Chi Banh, B. Richter, Ingrid Scharlau, Katharina
J. Rohlfing, and B. Wrede. “Scaffolding the Human Partner by Contrastive Guidance
in an Explanatory Human-Robot Dialogue.” Frontiers in Robotics and AI 10
(2023). https://doi.org/10.3389/frobt.2023.1236184.
ieee: 'A. Groß et al., “Scaffolding the human partner by contrastive guidance
in an explanatory human-robot dialogue,” Frontiers in Robotics and AI,
vol. 10, 2023, doi: 10.3389/frobt.2023.1236184.'
mla: Groß, A., et al. “Scaffolding the Human Partner by Contrastive Guidance in
an Explanatory Human-Robot Dialogue.” Frontiers in Robotics and AI, vol.
10, 2023, doi:10.3389/frobt.2023.1236184.
short: A. Groß, A. Singh, N.C. Banh, B. Richter, I. Scharlau, K.J. Rohlfing, B.
Wrede, Frontiers in Robotics and AI 10 (2023).
date_created: 2023-10-30T09:29:16Z
date_updated: 2023-10-30T09:43:47Z
department:
- _id: '749'
doi: 10.3389/frobt.2023.1236184
funded_apc: '1'
intvolume: ' 10'
keyword:
- HRI
- XAI
- negation
- understanding
- explaining
- touch interaction
- gesture
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://www.frontiersin.org/articles/10.3389/frobt.2023.1236184/full
oa: '1'
project:
- _id: '115'
grant_number: '438445824'
name: 'TRR 318 - A05: TRR 318 - Echtzeitmessung der Aufmerksamkeit im Mensch-Roboter-Erklärdialog
(Teilprojekt A05)'
publication: Frontiers in Robotics and AI
publication_status: published
status: public
title: Scaffolding the human partner by contrastive guidance in an explanatory human-robot
dialogue
type: journal_article
user_id: '91018'
volume: 10
year: '2023'
...
---
_id: '43437'
abstract:
- lang: eng
text: 'In virtual reality (VR), participants may not always have hands,
bodies, eyes, or even voices—using VR helmets and two controllers, participants
control an avatar through virtual worlds that do not necessarily obey familiar
laws of physics; moreover, the avatar’s bodily characteristics may not neatly
match our bodies in the physical world. Despite these limitations and specificities,
humans get things done through collaboration and the creative use of the environment.
While multiuser interactive VR is attracting greater numbers of participants,
there are currently few attempts to analyze the in situ interaction systematically.
This paper proposes a video-analytic detail-oriented methodological framework
for studying virtual reality interaction. Using multimodal conversation analysis,
the paper investigates a nonverbal, embodied, two-person interaction: two players
in a survival game strive to gesturally resolve a misunderstanding regarding an
in-game mechanic—however, both of their microphones are turned off for the duration
of play. The players’ inability to resort to complex language to resolve this
issue results in a dense sequence of back-and-forth activity involving gestures,
object manipulation, gaze, and body work. Most crucially, timing and modified
repetitions of previously produced actions turn out to be the key to overcome
both technical and communicative challenges. The paper analyzes these action sequences,
demonstrates how they generate intended outcomes, and proposes a vocabulary to
speak about these types of interaction more generally. The findings demonstrate
the viability of multimodal analysis of VR interaction, shed light on unique challenges
of analyzing interaction in virtual reality, and generate broader methodological
insights about the study of nonverbal action.'
article_type: original
author:
- first_name: Nils
full_name: Klowait, Nils
id: '98454'
last_name: Klowait
orcid: 0000-0002-7347-099X
citation:
ama: Klowait N. On the Multimodal Resolution of a Search Sequence in Virtual Reality.
Human Behavior and Emerging Technologies. 2023;2023:1-15. doi:10.1155/2023/8417012
apa: Klowait, N. (2023). On the Multimodal Resolution of a Search Sequence in Virtual
Reality. Human Behavior and Emerging Technologies, 2023, 1–15. https://doi.org/10.1155/2023/8417012
bibtex: '@article{Klowait_2023, title={On the Multimodal Resolution of a Search
Sequence in Virtual Reality}, volume={2023}, DOI={10.1155/2023/8417012},
journal={Human Behavior and Emerging Technologies}, publisher={Hindawi Limited},
author={Klowait, Nils}, year={2023}, pages={1–15} }'
chicago: 'Klowait, Nils. “On the Multimodal Resolution of a Search Sequence in Virtual
Reality.” Human Behavior and Emerging Technologies 2023 (2023): 1–15. https://doi.org/10.1155/2023/8417012.'
ieee: 'N. Klowait, “On the Multimodal Resolution of a Search Sequence in Virtual
Reality,” Human Behavior and Emerging Technologies, vol. 2023, pp. 1–15,
2023, doi: 10.1155/2023/8417012.'
mla: Klowait, Nils. “On the Multimodal Resolution of a Search Sequence in Virtual
Reality.” Human Behavior and Emerging Technologies, vol. 2023, Hindawi
Limited, 2023, pp. 1–15, doi:10.1155/2023/8417012.
short: N. Klowait, Human Behavior and Emerging Technologies 2023 (2023) 1–15.
date_created: 2023-04-06T10:57:28Z
date_updated: 2024-03-26T09:40:53Z
ddc:
- '300'
department:
- _id: '9'
doi: 10.1155/2023/8417012
file:
- access_level: closed
content_type: application/pdf
creator: nklowait
date_created: 2023-04-06T11:00:01Z
date_updated: 2023-04-06T11:00:01Z
file_id: '43438'
file_name: Klowait_2023a.pdf
file_size: 2877385
relation: main_file
success: 1
file_date_updated: 2023-04-06T11:00:01Z
funded_apc: '1'
has_accepted_license: '1'
intvolume: ' 2023'
keyword:
- Human-Computer Interaction
- General Social Sciences
- Social Psychology
- Virtual Reality
- Multimodality
- Nonverbal Interaction
- Search Sequence
- Gesture
- Co-Operative Action
- Goodwin
- Ethnomethodology
language:
- iso: eng
main_file_link:
- open_access: '1'
url: https://doi.org/10.1155/2023/8417012
oa: '1'
page: 1-15
project:
- _id: '119'
name: 'TRR 318 - Ö: TRR 318 - Project Area Ö'
publication: Human Behavior and Emerging Technologies
publication_identifier:
issn:
- 2578-1863
publication_status: published
publisher: Hindawi Limited
quality_controlled: '1'
status: public
title: On the Multimodal Resolution of a Search Sequence in Virtual Reality
type: journal_article
user_id: '98454'
volume: 2023
year: '2023'
...
---
_id: '17557'
abstract:
- lang: eng
text: 'Previous work by [1] studied gesture-speech interaction in adults. [1] focussed
on temporal and semantic coordination of gesture and speech and found that while
adult speech is mostly coordinated (or redundant) with gestures, semantic coordination
increases the temporal synchrony. These observations do not necessarily hold for
children (in particular with respect to iconic gestures, see [2]), where the speech
and gesture systems are still under development. We studied the semantic and temporal
coordination of speech and gesture in 4-year old children using a corpus of 40
children producing action descriptions in task oriented dialogues. In particular,
we examined what kinds of information are transmitted verbally vs. non-verbally
and how they are related. To account for this, we extended the semantic features
(SFs) developed in [3] for object descriptions in order to include the semantics
of actions. We coded the SFs on the children’s speech and gestures separately
using video data. In our presentation, we will focus on the quantitative distribution
of SFs across gesture and speech. Our results indicate that speech and gestures
of 4-year-olds are less integrated than those of the adults, although there is
a large variability among the children. We will discuss the results with respect
to the cognitive processes (e.g., visual memory, language) underlying children’s
abilities at this stage of development. Our work paves the way for the cognitive
architecture of speech-gesture interaction in preschoolers, which to our knowledge
is missing so far. '
author:
- first_name: Olga
full_name: Abramov, Olga
last_name: Abramov
- first_name: Stefan
full_name: Kopp, Stefan
last_name: Kopp
- first_name: Anne
full_name: Nemeth, Anne
last_name: Nemeth
- first_name: Friederike
full_name: Kern, Friederike
last_name: Kern
- first_name: Ulrich
full_name: Mertens, Ulrich
last_name: Mertens
- first_name: Katharina
full_name: Rohlfing, Katharina
id: '50352'
last_name: Rohlfing
citation:
ama: 'Abramov O, Kopp S, Nemeth A, Kern F, Mertens U, Rohlfing K. Towards a Computational
Model of Child Gesture-Speech Production. In: KOGWIS2018: Computational Approaches
to Cognitive Science. ; 2018.'
apa: 'Abramov, O., Kopp, S., Nemeth, A., Kern, F., Mertens, U., & Rohlfing,
K. (2018). Towards a Computational Model of Child Gesture-Speech Production. KOGWIS2018:
Computational Approaches to Cognitive Science.'
bibtex: '@inproceedings{Abramov_Kopp_Nemeth_Kern_Mertens_Rohlfing_2018, title={Towards
a Computational Model of Child Gesture-Speech Production}, booktitle={KOGWIS2018:
Computational Approaches to Cognitive Science}, author={Abramov, Olga and Kopp,
Stefan and Nemeth, Anne and Kern, Friederike and Mertens, Ulrich and Rohlfing,
Katharina}, year={2018} }'
chicago: 'Abramov, Olga, Stefan Kopp, Anne Nemeth, Friederike Kern, Ulrich Mertens,
and Katharina Rohlfing. “Towards a Computational Model of Child Gesture-Speech
Production.” In KOGWIS2018: Computational Approaches to Cognitive Science,
2018.'
ieee: O. Abramov, S. Kopp, A. Nemeth, F. Kern, U. Mertens, and K. Rohlfing, “Towards
a Computational Model of Child Gesture-Speech Production,” 2018.
mla: 'Abramov, Olga, et al. “Towards a Computational Model of Child Gesture-Speech
Production.” KOGWIS2018: Computational Approaches to Cognitive Science,
2018.'
short: 'O. Abramov, S. Kopp, A. Nemeth, F. Kern, U. Mertens, K. Rohlfing, in: KOGWIS2018:
Computational Approaches to Cognitive Science, 2018.'
date_created: 2020-08-03T11:00:54Z
date_updated: 2023-02-01T12:50:21Z
department:
- _id: '749'
keyword:
- Speech-gesture integration
- semantic features
language:
- iso: eng
publication: 'KOGWIS2018: Computational Approaches to Cognitive Science'
status: public
title: Towards a Computational Model of Child Gesture-Speech Production
type: conference
user_id: '14931'
year: '2018'
...
---
_id: '17184'
abstract:
- lang: eng
text: There is ongoing discussion on the function of the early production of gestures
with regard to whether they reduce children's cognitive demands and free their
capacity to perform other tasks (e.g., Goldin-Meadow & Wagner, 2005) or whether
young children point in order to share their interest or to elicit information
from their caregivers (e.g., Begus & Southgate, 2012; Liszkowski, Carpenter, Henning,
Striano & Tomasello, 2004). The different assumptions lead to diverse predictions
regarding infants' gestural or multimodal behavior in recurring situations, in
which some objects are familiar and others are unfamiliar. To examine these different
predictions, we observed 14 children aged between 14 and 16 months biweekly in
a semi-experimental situation with a caregiver and explored how children's verbal
and gestural behaviors change as a function of their familiarization with objects.
We split the children into two groups based on their reported vocabulary size
at 21 months of age (larger vs. smaller vocabulary). We found that children with
a larger vocabulary at 21 months had an increase in their pointing with words
toward unfamiliar objects as well as in their total number of words, whereas for
children with smaller vocabularies we did not find differences in relation to
their familiarization with objects. We discuss these findings in terms of a social-pragmatic
use of pointing gestures.
author:
- first_name: Angela
full_name: Grimminger, Angela
id: '57578'
last_name: Grimminger
- first_name: Carina
full_name: Lüke, Carina
last_name: Lüke
- first_name: Ute
full_name: Ritterfeld, Ute
last_name: Ritterfeld
- first_name: Ulf
full_name: Liszkowski, Ulf
last_name: Liszkowski
- first_name: Katharina
full_name: Rohlfing, Katharina
id: '50352'
last_name: Rohlfing
citation:
ama: Grimminger A, Lüke C, Ritterfeld U, Liszkowski U, Rohlfing K. Effekte von Objekt-Familiarisierung
auf die frühe gestische Kommunikation. Individuelle Unterschiede in Hinblick auf
den späteren Wortschatz. Frühe Bildung. 2016;5(2):91-97. doi:10.1026/2191-9186/a000257
apa: Grimminger, A., Lüke, C., Ritterfeld, U., Liszkowski, U., & Rohlfing, K.
(2016). Effekte von Objekt-Familiarisierung auf die frühe gestische Kommunikation.
Individuelle Unterschiede in Hinblick auf den späteren Wortschatz. Frühe Bildung,
5(2), 91–97. https://doi.org/10.1026/2191-9186/a000257
bibtex: '@article{Grimminger_Lüke_Ritterfeld_Liszkowski_Rohlfing_2016, title={Effekte
von Objekt-Familiarisierung auf die frühe gestische Kommunikation. Individuelle
Unterschiede in Hinblick auf den späteren Wortschatz}, volume={5}, DOI={10.1026/2191-9186/a000257},
number={2}, journal={Frühe Bildung}, publisher={Hogrefe & Huber Publishers},
author={Grimminger, Angela and Lüke, Carina and Ritterfeld, Ute and Liszkowski,
Ulf and Rohlfing, Katharina}, year={2016}, pages={91–97} }'
chicago: 'Grimminger, Angela, Carina Lüke, Ute Ritterfeld, Ulf Liszkowski, and Katharina
Rohlfing. “Effekte von Objekt-Familiarisierung Auf Die Frühe Gestische Kommunikation.
Individuelle Unterschiede in Hinblick Auf Den Späteren Wortschatz.” Frühe Bildung
5, no. 2 (2016): 91–97. https://doi.org/10.1026/2191-9186/a000257.'
ieee: 'A. Grimminger, C. Lüke, U. Ritterfeld, U. Liszkowski, and K. Rohlfing, “Effekte
von Objekt-Familiarisierung auf die frühe gestische Kommunikation. Individuelle
Unterschiede in Hinblick auf den späteren Wortschatz,” Frühe Bildung, vol.
5, no. 2, pp. 91–97, 2016, doi: 10.1026/2191-9186/a000257.'
mla: Grimminger, Angela, et al. “Effekte von Objekt-Familiarisierung Auf Die Frühe
Gestische Kommunikation. Individuelle Unterschiede in Hinblick Auf Den Späteren
Wortschatz.” Frühe Bildung, vol. 5, no. 2, Hogrefe & Huber Publishers,
2016, pp. 91–97, doi:10.1026/2191-9186/a000257.
short: A. Grimminger, C. Lüke, U. Ritterfeld, U. Liszkowski, K. Rohlfing, Frühe
Bildung 5 (2016) 91–97.
date_created: 2020-06-24T13:01:00Z
date_updated: 2023-02-01T16:05:30Z
department:
- _id: '749'
doi: 10.1026/2191-9186/a000257
intvolume: ' 5'
issue: '2'
keyword:
- gesture
- pointing
- familiarity
- individual differences
language:
- iso: eng
page: 91-97
publication: Frühe Bildung
publication_identifier:
issn:
- 2191-9194
publisher: Hogrefe & Huber Publishers
status: public
title: Effekte von Objekt-Familiarisierung auf die frühe gestische Kommunikation.
Individuelle Unterschiede in Hinblick auf den späteren Wortschatz
type: journal_article
user_id: '14931'
volume: 5
year: '2016'
...
---
_id: '17200'
abstract:
- lang: eng
text: This research investigated infants’ online perception of give-me gestures
during observation of a social interaction. In the first experiment, goal-directed
eye movements of 12-month-olds were recorded as they observed a give-and-take
interaction in which an object is passed from one individual to another. Infants’
gaze shifts from the passing hand to the receiving hand were significantly faster
when the receiving hand formed a give-me gesture relative to when it was presented
as an inverted hand shape. Experiment 2 revealed that infants’ goal-directed gaze
shifts were not based on different affordances of the two receiving hands. Two
additional control experiments further demonstrated that differences in infants’
online gaze behavior were not mediated by an attentional preference for the give-me
gesture. Together, our findings provide evidence that properties of social action
goals influence infants’ online gaze during action observation. The current studies
demonstrate that infants have expectations about well-formed object transfer actions
between social agents. We suggest that 12-month-olds are sensitive to social goals
within the context of give-and-take interactions while observing from a third-party
perspective.
author:
- first_name: Claudia
full_name: Elsner, Claudia
last_name: Elsner
- first_name: Marta
full_name: Bakker, Marta
last_name: Bakker
- first_name: Katharina
full_name: Rohlfing, Katharina
id: '50352'
last_name: Rohlfing
- first_name: Gustaf
full_name: Gredebäck, Gustaf
last_name: Gredebäck
citation:
ama: Elsner C, Bakker M, Rohlfing K, Gredebäck G. Infants’ online perception of
give-and-take interactions. Journal of Experimental Child Psychology. 2014;126:280-294.
doi:10.1016/j.jecp.2014.05.007
apa: Elsner, C., Bakker, M., Rohlfing, K., & Gredebäck, G. (2014). Infants’
online perception of give-and-take interactions. Journal of Experimental Child
Psychology, 126, 280–294. https://doi.org/10.1016/j.jecp.2014.05.007
bibtex: '@article{Elsner_Bakker_Rohlfing_Gredebäck_2014, title={Infants’ online
perception of give-and-take interactions}, volume={126}, DOI={10.1016/j.jecp.2014.05.007},
journal={Journal of Experimental Child Psychology}, publisher={Elsevier BV}, author={Elsner,
Claudia and Bakker, Marta and Rohlfing, Katharina and Gredebäck, Gustaf}, year={2014},
pages={280–294} }'
chicago: 'Elsner, Claudia, Marta Bakker, Katharina Rohlfing, and Gustaf Gredebäck.
“Infants’ Online Perception of Give-and-Take Interactions.” Journal of Experimental
Child Psychology 126 (2014): 280–94. https://doi.org/10.1016/j.jecp.2014.05.007.'
ieee: 'C. Elsner, M. Bakker, K. Rohlfing, and G. Gredebäck, “Infants’ online perception
of give-and-take interactions,” Journal of Experimental Child Psychology,
vol. 126, pp. 280–294, 2014, doi: 10.1016/j.jecp.2014.05.007.'
mla: Elsner, Claudia, et al. “Infants’ Online Perception of Give-and-Take Interactions.”
Journal of Experimental Child Psychology, vol. 126, Elsevier BV, 2014,
pp. 280–94, doi:10.1016/j.jecp.2014.05.007.
short: C. Elsner, M. Bakker, K. Rohlfing, G. Gredebäck, Journal of Experimental
Child Psychology 126 (2014) 280–294.
date_created: 2020-06-24T13:01:19Z
date_updated: 2023-02-01T16:11:16Z
department:
- _id: '749'
doi: 10.1016/j.jecp.2014.05.007
intvolume: ' 126'
keyword:
- Give-me gesture
- Infant
- Anticipation
- Eye movement
- Gesture
- Social interaction
language:
- iso: eng
page: 280-294
publication: Journal of Experimental Child Psychology
publication_identifier:
issn:
- 0022-0965
publisher: Elsevier BV
status: public
title: Infants' online perception of give-and-take interactions
type: journal_article
user_id: '14931'
volume: 126
year: '2014'
...
---
_id: '17259'
abstract:
- lang: eng
text: Learning is a social endeavor, in which the learner generally receives support
from his/her social partner(s). In developmental research, even though tutors'/adults'
behavior modifications in their speech, gestures, and motions have been extensively
studied, studies barely consider the recipient's (i.e. the child's) perspective
in the analysis of the adult's presentation. In addition, the variability in parental
behavior, i.e. the fact that not every parent modifies her/his behavior in the
same way, has received less fine-grained analysis. In contrast, in this paper, we assume
an interactional perspective, investigating the loop between the tutor's and the
learner’s actions. With this approach, we aim both at discovering the levels and
features of variability and at achieving a better understanding of how they come
about within the course of the interaction. For our analysis, we used a combination
of (1) qualitative investigation derived from ethnomethodological Conversation
Analysis (CA), (2) semi-automatic computational 2D hand tracking and (3) a mathematically
based visualization of the data. Our analysis reveals that tutors not only shape
their demonstrations differently with regard to the intended recipient per se
(adult-directed vs. child-directed), but most importantly that the learner’s feedback
during the presentation is consequential for the concrete ways in which the presentation
is carried out.
author:
- first_name: Karola
full_name: Pitsch, Karola
last_name: Pitsch
- first_name: Anna-Lisa
full_name: Vollmer, Anna-Lisa
last_name: Vollmer
- first_name: Jannik
full_name: Fritsch, Jannik
last_name: Fritsch
- first_name: Britta
full_name: Wrede, Britta
last_name: Wrede
- first_name: Katharina
full_name: Rohlfing, Katharina
id: '50352'
last_name: Rohlfing
- first_name: Gerhard
full_name: Sagerer, Gerhard
last_name: Sagerer
citation:
ama: 'Pitsch K, Vollmer A-L, Fritsch J, Wrede B, Rohlfing K, Sagerer G. On the loop
of action modification and the recipient’s gaze in adult-child interaction. In:
Gesture and Speech in Interaction. ; 2009.'
apa: Pitsch, K., Vollmer, A.-L., Fritsch, J., Wrede, B., Rohlfing, K., & Sagerer,
G. (2009). On the loop of action modification and the recipient’s gaze in adult-child
interaction. Gesture and Speech in Interaction.
bibtex: '@inproceedings{Pitsch_Vollmer_Fritsch_Wrede_Rohlfing_Sagerer_2009, title={On
the loop of action modification and the recipient’s gaze in adult-child interaction},
booktitle={Gesture and Speech in Interaction}, author={Pitsch, Karola and Vollmer,
Anna-Lisa and Fritsch, Jannik and Wrede, Britta and Rohlfing, Katharina and Sagerer,
Gerhard}, year={2009} }'
chicago: Pitsch, Karola, Anna-Lisa Vollmer, Jannik Fritsch, Britta Wrede, Katharina
Rohlfing, and Gerhard Sagerer. “On the Loop of Action Modification and the Recipient’s
Gaze in Adult-Child Interaction.” In Gesture and Speech in Interaction,
2009.
ieee: K. Pitsch, A.-L. Vollmer, J. Fritsch, B. Wrede, K. Rohlfing, and G. Sagerer,
“On the loop of action modification and the recipient’s gaze in adult-child interaction,”
2009.
mla: Pitsch, Karola, et al. “On the Loop of Action Modification and the Recipient’s
Gaze in Adult-Child Interaction.” Gesture and Speech in Interaction, 2009.
short: 'K. Pitsch, A.-L. Vollmer, J. Fritsch, B. Wrede, K. Rohlfing, G. Sagerer,
in: Gesture and Speech in Interaction, 2009.'
date_created: 2020-06-24T13:02:27Z
date_updated: 2023-02-01T13:02:31Z
department:
- _id: '749'
keyword:
- gaze
- gesture
- Multimodal
- adult-child interaction
language:
- iso: eng
publication: Gesture and Speech in Interaction
status: public
title: On the loop of action modification and the recipient's gaze in adult-child
interaction
type: conference
user_id: '14931'
year: '2009'
...
---
_id: '17278'
abstract:
- lang: eng
text: This paper investigates the influence of feedback provided by an autonomous
robot (BIRON) on users’ discursive behavior. A user study is described during
which users show objects to the robot. The results of the experiment indicate
that the robot's verbal feedback utterances cause the humans to adapt their own
way of speaking. The changes in users' verbal behavior are due to their beliefs
about the robot's knowledge and abilities. In this paper they are identified and
grouped. Moreover, the data implies variations in user behavior regarding gestures.
Unlike speech, the robot was not able to give feedback with gestures. Due to the
lack of feedback, users did not seem to have a consistent mental representation
of the robot’s abilities to recognize gestures. As a result, changes between different
gestures are interpreted to be unconscious variations accompanying speech.
author:
- first_name: Manja
full_name: Lohse, Manja
last_name: Lohse
- first_name: Katharina
full_name: Rohlfing, Katharina
id: '50352'
last_name: Rohlfing
- first_name: Britta
full_name: Wrede, Britta
last_name: Wrede
- first_name: Gerhard
full_name: Sagerer, Gerhard
last_name: Sagerer
citation:
ama: 'Lohse M, Rohlfing K, Wrede B, Sagerer G. “Try something else!” — When users
change their discursive behavior in human-robot interaction. In: ; 2008:3481-3486.
doi:10.1109/ROBOT.2008.4543743'
apa: Lohse, M., Rohlfing, K., Wrede, B., & Sagerer, G. (2008). “Try something
else!” — When users change their discursive behavior in human-robot interaction.
3481–3486. https://doi.org/10.1109/ROBOT.2008.4543743
bibtex: '@inproceedings{Lohse_Rohlfing_Wrede_Sagerer_2008, title={“Try something
else!” — When users change their discursive behavior in human-robot interaction},
DOI={10.1109/ROBOT.2008.4543743},
author={Lohse, Manja and Rohlfing, Katharina and Wrede, Britta and Sagerer, Gerhard},
year={2008}, pages={3481–3486} }'
chicago: Lohse, Manja, Katharina Rohlfing, Britta Wrede, and Gerhard Sagerer. “‘Try
Something Else!’ — When Users Change Their Discursive Behavior in Human-Robot
Interaction,” 3481–86, 2008. https://doi.org/10.1109/ROBOT.2008.4543743.
ieee: 'M. Lohse, K. Rohlfing, B. Wrede, and G. Sagerer, “‘Try something else!’ —
When users change their discursive behavior in human-robot interaction,” 2008,
pp. 3481–3486, doi: 10.1109/ROBOT.2008.4543743.'
mla: Lohse, Manja, et al. “Try Something Else!” — When Users Change Their Discursive
Behavior in Human-Robot Interaction. 2008, pp. 3481–86, doi:10.1109/ROBOT.2008.4543743.
short: 'M. Lohse, K. Rohlfing, B. Wrede, G. Sagerer, in: 2008, pp. 3481–3486.'
date_created: 2020-06-24T13:02:49Z
date_updated: 2023-02-01T13:08:20Z
department:
- _id: '749'
doi: 10.1109/ROBOT.2008.4543743
keyword:
- discursive behavior
- autonomous robot
- BIRON
- man-machine systems
- robot abilities
- robot knowledge
- user gestures
- robot verbal feedback utterance
- speech processing
- user verbal behavior
- service robots
- human-robot interaction
- human computer interaction
- gesture recognition
language:
- iso: eng
page: 3481-3486
publication_identifier:
isbn:
- 1050-4729
status: public
title: “Try something else!” — When users change their discursive behavior in human-robot
interaction
type: conference
user_id: '14931'
year: '2008'
...