Difference between revisions of "Corti-Gillespie2016"

From emcawiki

Latest revision as of 23:45, 23 February 2021

Corti-Gillespie2016
BibType ARTICLE
Key Corti-Gillespie2016
Author(s) Kevin Corti, Alex Gillespie
Title Co-constructing intersubjectivity with artificial conversational agents: People are more likely to initiate repairs of misunderstandings with agents represented as human
Editor(s)
Tag(s) EMCA, HCI, Human-computer interaction, Repair, Intersubjectivity, AI reference list
Publisher
Year 2016
Language English
City
Month
Journal Computers in Human Behavior
Volume 58
Number
Pages 431-442
URL http://www.sciencedirect.com/science/article/pii/S0747563215303101
DOI 10.1016/j.chb.2015.12.039
ISBN
Organization
Institution
School
Type
Edition
Series
Howpublished
Book title
Chapter
Abstract

This article explores whether people more frequently attempt to repair misunderstandings when speaking to an artificial conversational agent if it is represented as fully human. Interactants in dyadic conversations with an agent (the chat bot Cleverbot) spoke to either a text screen interface (agent's responses shown on a screen) or a human body interface (agent's responses vocalized by a human speech shadower via the echoborg method) and were either informed or not informed prior to interlocution that their interlocutor's responses would be agent-generated. Results show that an interactant is less likely to initiate repairs when an agent-interlocutor communicates via a text screen interface as well as when they explicitly know their interlocutor's words to be agent-generated. That is to say, people demonstrate the most “intersubjective effort” toward establishing common ground when they engage an agent under the same social psychological conditions as face-to-face human–human interaction (i.e., when they both encounter another human body and assume that they are speaking to an autonomously-communicating person). This article's methodology presents a novel means of benchmarking intersubjectivity and intersubjective effort in human-agent interaction.