{"department":[{"_id":"749"}],"citation":{"short":"M. Rolf, M. Hanheide, K. Rohlfing, IEEE Transactions on Autonomous Mental Development 1 (2009) 55–67.","ama":"Rolf M, Hanheide M, Rohlfing K. Attention via synchrony. Making use of multimodal cues in social learning. IEEE Transactions on Autonomous Mental Development. 2009;1(1):55-67. doi:10.1109/TAMD.2009.2021091","chicago":"Rolf, Matthias, Marc Hanheide, and Katharina Rohlfing. “Attention via Synchrony. Making Use of Multimodal Cues in Social Learning.” IEEE Transactions on Autonomous Mental Development 1, no. 1 (2009): 55–67. https://doi.org/10.1109/TAMD.2009.2021091.","bibtex":"@article{Rolf_Hanheide_Rohlfing_2009, title={Attention via synchrony. Making use of multimodal cues in social learning}, volume={1}, DOI={10.1109/TAMD.2009.2021091}, number={1}, journal={IEEE Transactions on Autonomous Mental Development}, publisher={Institute of Electrical & Electronics Engineers (IEEE)}, author={Rolf, Matthias and Hanheide, Marc and Rohlfing, Katharina}, year={2009}, pages={55–67} }","mla":"Rolf, Matthias, et al. “Attention via Synchrony. Making Use of Multimodal Cues in Social Learning.” IEEE Transactions on Autonomous Mental Development, vol. 1, no. 1, Institute of Electrical & Electronics Engineers (IEEE), 2009, pp. 55–67, doi:10.1109/TAMD.2009.2021091.","ieee":"M. Rolf, M. Hanheide, and K. Rohlfing, “Attention via synchrony. Making use of multimodal cues in social learning,” IEEE Transactions on Autonomous Mental Development, vol. 1, no. 1, pp. 55–67, 2009, doi: 10.1109/TAMD.2009.2021091.","apa":"Rolf, M., Hanheide, M., & Rohlfing, K. (2009). Attention via synchrony. Making use of multimodal cues in social learning. IEEE Transactions on Autonomous Mental Development, 1(1), 55–67. https://doi.org/10.1109/TAMD.2009.2021091"},"publisher":"Institute of Electrical & Electronics Engineers (IEEE)","user_id":"14931","title":"Attention via synchrony. Making use of multimodal cues in social learning","date_updated":"2023-02-01T13:05:47Z","language":[{"iso":"eng"}],"doi":"10.1109/TAMD.2009.2021091","abstract":[{"text":"Infants learning about their environment are confronted with many stimuli of different modalities. Therefore, a crucial problem is how to discover which stimuli are related, for instance, in learning words. In making these multimodal \"bindings,\" infants depend on social interaction with a caregiver to guide their attention towards relevant stimuli. The caregiver might, for example, visually highlight an object by shaking it while vocalizing the object's name. These cues are known to help structure the continuous stream of stimuli. To detect and exploit them, we propose a model of bottom-up attention based on multimodal signal-level synchrony. We focus on the guidance of visual attention by audio-visual synchrony, informed by recent adult-infant interaction studies. We demonstrate that our model is receptive to parental cues during child-directed tutoring. The findings discussed in this paper are consistent with recent results from developmental psychology but are, for the first time, obtained with an objective, computational model. The presence of \"multimodal motherese\" is verified directly on the audio-visual signal. Lastly, we hypothesize how our computational model facilitates tutoring interaction and discuss its application in interactive learning scenarios, enabling social robots to benefit from adult-like tutoring.","lang":"eng"}],"volume":1,"type":"journal_article","page":"55-67","issue":"1","publication":"IEEE Transactions on Autonomous Mental Development","date_created":"2020-06-24T13:02:39Z","author":[{"last_name":"Rolf","full_name":"Rolf, Matthias","first_name":"Matthias"},{"last_name":"Hanheide","full_name":"Hanheide, Marc","first_name":"Marc"},{"full_name":"Rohlfing, Katharina","last_name":"Rohlfing","first_name":"Katharina","id":"50352"}],"status":"public","publication_identifier":{"issn":["1943-0612"]},"_id":"17269","intvolume":"1","year":"2009"}