{"citation":{"chicago":"Rohlfing, Katharina, Jannik Fritsch, Britta Wrede, and Tanja Jungmann. “How Can Multimodal Cues from Child-Directed Interaction Reduce Learning Complexity in Robots?” Advanced Robotics 20, no. 10 (2006): 1183–99. https://doi.org/10.1163/156855306778522532.","bibtex":"@article{Rohlfing_Fritsch_Wrede_Jungmann_2006, title={How can multimodal cues from child-directed interaction reduce learning complexity in robots?}, volume={20}, DOI={10.1163/156855306778522532}, number={10}, journal={Advanced Robotics}, publisher={VSP BV}, author={Rohlfing, Katharina and Fritsch, Jannik and Wrede, Britta and Jungmann, Tanja}, year={2006}, pages={1183–1199} }","ama":"Rohlfing K, Fritsch J, Wrede B, Jungmann T. How can multimodal cues from child-directed interaction reduce learning complexity in robots? Advanced Robotics. 2006;20(10):1183-1199. doi:10.1163/156855306778522532","apa":"Rohlfing, K., Fritsch, J., Wrede, B., & Jungmann, T. (2006). How can multimodal cues from child-directed interaction reduce learning complexity in robots? Advanced Robotics, 20(10), 1183–1199. https://doi.org/10.1163/156855306778522532","short":"K. Rohlfing, J. Fritsch, B. Wrede, T. Jungmann, Advanced Robotics 20 (2006) 1183–1199.","ieee":"K. Rohlfing, J. Fritsch, B. Wrede, and T. Jungmann, “How can multimodal cues from child-directed interaction reduce learning complexity in robots?,” Advanced Robotics, vol. 20, no. 10, pp. 1183–1199, 2006, doi: 10.1163/156855306778522532.","mla":"Rohlfing, Katharina, et al. “How Can Multimodal Cues from Child-Directed Interaction Reduce Learning Complexity in Robots?” Advanced Robotics, vol. 20, no. 10, VSP BV, 2006, pp. 
1183–99, doi:10.1163/156855306778522532."},"author":[{"id":"50352","full_name":"Rohlfing, Katharina","last_name":"Rohlfing","first_name":"Katharina"},{"full_name":"Fritsch, Jannik","last_name":"Fritsch","first_name":"Jannik"},{"full_name":"Wrede, Britta","last_name":"Wrede","first_name":"Britta"},{"full_name":"Jungmann, Tanja","first_name":"Tanja","last_name":"Jungmann"}],"volume":20,"year":"2006","date_updated":"2023-02-01T13:14:36Z","language":[{"iso":"eng"}],"publication_identifier":{"issn":["1568-5535"]},"doi":"10.1163/156855306778522532","department":[{"_id":"749"}],"intvolume":" 20","_id":"17289","page":"1183-1199","issue":"10","title":"How can multimodal cues from child-directed interaction reduce learning complexity in robots?","publication":"Advanced Robotics","user_id":"14931","date_created":"2020-06-24T13:03:02Z","status":"public","type":"journal_article","keyword":["multi-modal motherese","child-directed input","motionese","learning mechanisms"],"publisher":"VSP BV","abstract":[{"text":"Robots have to deal with an enormous amount of sensory stimuli. One way to make sense of them is to enable a robot system to actively search for cues that help structure the information. Studies with infants reveal that parents support the learning process by modifying their interaction style depending on their child's developmental age. In our study, in which parents demonstrated everyday actions to their preverbal children (8-11 months old), our aim was to identify objective parameters for multimodal action modification. Our results reveal two action parameters that are modified in adult-child interaction: roundness and pace. Furthermore, we found that language has the power to help children structure action sequences through synchrony and emphasis. These insights are discussed with respect to the built-in attention architecture of a socially interactive robot, which enables it to understand demonstrated actions. Our algorithmic approach towards automatically detecting the task structure in child-directed input demonstrates the potential impact of insights from developmental learning on robotics. The presented findings pave the way to automatically detecting when to imitate in a demonstration.","lang":"eng"}]}