Mlynar2024

BibType: ARTICLE
Key: Mlynar2024
Author(s): Jakub Mlynář, Adrien Depeursinge, John O. Prior, Roger Schaer, Alexandre Martroye de Joly, Florian Evéquoz
Title: Making sense of radiomics: insights on human–AI collaboration in medical interaction from an observational user study
Tag(s): EMCA, Conversation analysis, Artificial intelligence, Ethnomethodology, Human-computer interaction, Oncology, Radiology, Radiomics, Social interaction, AI Reference List
Year: 2024
Language: English
Journal: Frontiers in Communication
Volume: 8
Pages: 1234987
DOI: 10.3389/fcomm.2023.1234987

Abstract

Technologies based on “artificial intelligence” (AI) are transforming every part of our society, including healthcare and medical institutions. An example of this trend is the novel field in oncology and radiology called radiomics, which involves the extraction and mining of large-scale quantitative features from medical imaging by machine-learning (ML) algorithms. This paper explores situated work with a radiomics software platform, QuantImage (v2), and interaction around it, in educationally framed hands-on trial sessions where pairs of novice users (physicians and medical radiology technicians) work with a co-present tutor on a radiomics task: developing a predictive ML model. Informed by ethnomethodology and conversation analysis (EM/CA), the results show that learning about radiomics in general and learning how to use this platform in particular are deeply intertwined. Common-sense knowledge (e.g., about the meanings of colors) can interfere with the visual representation standards established in the professional domain. Participants' skills in using the platform and knowledge of radiomics are routinely displayed in the assessment of performance measures of the resulting ML models, in the monitoring of the platform's pace of operation for possible problems, and in the ascribing of independent actions (e.g., related to algorithms) to the platform. The findings are relevant to current discussions about the explainability of AI in medicine as well as issues of machinic agency.