Latest revision as of 00:13, 24 February 2021

Yamazaki-etal2013
BibType ARTICLE
Key Yamazaki-etal2013
Author(s) Akiko Yamazaki, Keiichi Yamazaki, Keiko Ikeda, Matthew Burdelski, Mihoko Fukushima, Tomoyuki Suzuki, Miyuki Kurihara, Yoshinori Kuno, Yoshinori Kobayashi
Title Interactions between a quiz robot and multiple participants: Focusing on speech, gaze and bodily conduct in Japanese and English speakers
Tag(s) EMCA, coordination of verbal and non-verbal actions, robot gaze, comparison between English and Japanese, human-robot interaction (HRI), transition relevance place (TRP), conversation analysis, AI reference list
Year 2013
Journal Interaction Studies
Volume 14
Number 3
Pages 366–389
URL https://www.jbe-platform.com/content/journals/10.1075/is.14.3.04yam
DOI 10.1075/is.14.3.04yam


Abstract

This paper reports on a quiz robot experiment in which we explore similarities and differences in human participant speech, gaze, and bodily conduct in responding to a robot’s speech, gaze, and bodily conduct across two languages. Our experiment involved three-person groups of Japanese and English-speaking participants who stood facing the robot and a projection screen that displayed pictures related to the robot’s questions. The robot was programmed so that its speech was coordinated with its gaze, body position, and gestures in relation to transition relevance places (TRPs), key words, and deictic words and expressions (e.g. this, this picture) in both languages. Contrary to findings on human interaction, we found that the frequency of English speakers’ head nodding was higher than that of Japanese speakers in human-robot interaction (HRI). Our findings suggest that the coordination of the robot’s verbal and non-verbal actions surrounding TRPs, key words, and deictic words and expressions is important for facilitating HRI irrespective of participants’ native language.
