Ivarsson2023

|BibType=ARTICLE
|Author(s)=Jonas Ivarsson; Oskar Lindwall;
|Title=Suspicious Minds: the Problem of Trust and Conversational Agents
|Tag(s)=EMCA; Conversation; Human-computer interaction; Natural language processing; Trust; Understanding; AI Reference List
|Key=Ivarsson2023
|Year=2023
|Language=English
|Journal=Computer Supported Cooperative Work (CSCW)
|Volume=32
|Number=3
|Pages=545–571
|URL=https://link.springer.com/article/10.1007/s10606-023-09465-8
|DOI=10.1007/s10606-023-09465-8
|Abstract=In recent years, the field of natural language processing has seen substantial developments, resulting in powerful voice-based interactive services. The quality of the voice and interactivity are sometimes so good that the artificial can no longer be differentiated from real persons. Thus, discerning whether an interactional partner is a human or an artificial agent is no longer merely a theoretical question but a practical problem society faces. Consequently, the ‘Turing test’ has moved from the laboratory into the wild. The passage from the theoretical to the practical domain also accentuates understanding as a topic of continued inquiry. When interactions are successful but the artificial agent has not been identified as such, can it also be said that the interlocutors have understood each other? In what ways does understanding figure in real-world human–computer interactions? Based on empirical observations, this study shows how we need two parallel conceptions of understanding to address these questions. By departing from ethnomethodology and conversation analysis, we illustrate how parties in a conversation regularly deploy two forms of analysis (categorial and sequential) to understand their interactional partners. The interplay between these forms of analysis shapes the developing sense of interactional exchanges and is crucial for established relations. Furthermore, outside of experimental settings, any problems in identifying and categorizing an interactional partner raise concerns regarding trust and suspicion. When suspicion is roused, shared understanding is disrupted. Therefore, this study concludes that the proliferation of conversational systems, fueled by artificial intelligence, may have unintended consequences, including impacts on human–human interactions.
}}

Notes