Corti-Gillespie2016 | |
---|---|
BibType | ARTICLE |
Key | Corti-Gillespie2016 |
Author(s) | Kevin Corti, Alex Gillespie |
Title | Co-constructing intersubjectivity with artificial conversational agents: People are more likely to initiate repairs of misunderstandings with agents represented as human |
Editor(s) | |
Tag(s) | EMCA, HCI, Human-computer interaction, Repair, Intersubjectivity |
Publisher | |
Year | 2016 |
Language | |
City | |
Month | |
Journal | Computers in Human Behavior |
Volume | 58 |
Number | |
Pages | 431-442 |
URL | Link |
DOI | 10.1016/j.chb.2015.12.039 |
ISBN | |
Organization | |
Institution | |
School | |
Type | |
Edition | |
Series | |
Howpublished | |
Book title | |
Chapter | |
Abstract
This article explores whether people more frequently attempt to repair misunderstandings when speaking to an artificial conversational agent if it is represented as fully human. Interactants in dyadic conversations with an agent (the chat bot Cleverbot) spoke to either a text screen interface (agent's responses shown on a screen) or a human body interface (agent's responses vocalized by a human speech shadower via the echoborg method) and were either informed or not informed prior to interlocution that their interlocutor's responses would be agent-generated. Results show that an interactant is less likely to initiate repairs when an agent-interlocutor communicates via a text screen interface as well as when they explicitly know their interlocutor's words to be agent-generated. That is to say, people demonstrate the most “intersubjective effort” toward establishing common ground when they engage an agent under the same social psychological conditions as face-to-face human–human interaction (i.e., when they both encounter another human body and assume that they are speaking to an autonomously-communicating person). This article's methodology presents a novel means of benchmarking intersubjectivity and intersubjective effort in human-agent interaction.
Notes