CHI2022-Explainability-in-action
This is a holding page for a proposed workshop for CHI 2022 (https://chi2022.acm.org/) on the theme of explainability (X-AI) in relation to concepts and research on human social action.
The workshop organizers will use this page to collate submission information and other resources until the workshop website is set up in time for submissions.
Contents
- Workshop overview
- Background
- Workshop objectives & themes
- Call for proposals
Workshop overview
Explainability in AI is a crucial aspect of interaction with and through machines. It has also been a key concern in HCI from early usability heuristics onwards. This workshop focuses on three under-explored aspects of explainability that can enrich and expand upon existing HCI research in explainable AI (X-AI).
These include:
- explanation as a joint, co-constructed action,
- explanation as a pervasive norm underpinning all social interaction, and
- the self-explanatory nature of human conduct in the social world.
This workshop brings together X-AI-oriented HCI researchers with ethnomethodologists, conversation analysts, discursive psychologists, and researchers from other fields that center on human social action, inviting them to present papers that explore the pragmatics of X-AI and help to unpack the social foundations of explanation. Alongside giving presentations, participants will be invited to co-author a position paper and to contribute to a proposal for a special issue of Transactions on Computer-Human Interaction (TOCHI) on the workshop theme.
Background
‘Explanation’ in X-AI usually refers to technical methods for auditing automated or augmented decision-making processes to highlight and eliminate potential sources of bias. However, this focus on technology misses that explaining is crucially tied to human practices of actively formulating and doing explanation [Miller 2019]. X-AI has tended to focus on the development of transparent machine learning models that can be used to explain the complex, proprietary black boxes used by ‘decision support’ systems to aid high-stakes decision-making in legal, financial, or diagnostic contexts [Rudin 2019]. However, as Rohlfing et al. [2021] point out, effective explanations (however technically accurate they may be) always involve processes of co-construction and mutual comprehension that require ongoing contributions from both the explainer and the explainee. Explanations involve at least two parties: the system and the user interacting with the system at a particular point in time. X-AI, in this sense, cannot offer context-free, one-size-fits-all explanations. It also involves research into, and the design of, explanation as a dynamic and context-sensitive situated social practice [Suchman 1987].

If we accept that explanations are not simply stand-alone statements of causal relation, it can be hard to identify what should ‘count’ as an explanation in interaction [Ingram et al. 2019]. Research into explanation in ordinary human-human conversation has shown that explanations can be achieved through various practices tied to the local context of production [cf. Schegloff 1997]. Moreover, explanations don’t just appear anywhere in an interaction: they are recurrently produced as responsive actions that fit an interactional ‘slot’ where someone has been called to account for something [Antaki 1996]. Sometimes explanations may also be produced as ‘initial’ moves in a sequence of action. In such cases, they are often designed to anticipate resistance and to deal with, e.g., the routine contingencies that people cite when refusing to comply with an instruction [Antaki and Kent 2012]. Explanations also perform and ‘talk into being’ social and institutional relationships such as doctor/patient or teacher/student [Heritage and Clayman 2010].

HCI research has long been occupied with how ‘visibility’ plays a role in usability [Nielsen 1994], as well as how systems may be made ‘accountable’ to social use [Button 2003]; so it seems natural that it would begin to examine how it might approach X-AI [Ehsan et al. 2021]. This workshop aims to bring a broader sociological orientation to this interest, to enhance those existing approaches, focusing on studying the interactional situations that X-AI will face in practice. The workshop also aims to survey a wide variety of explanatory practices and settings, and to identify how explanations are modulated by social and institutional roles, relationships, and constraints. If X-AI is to deal with real-world settings and fit into them, it is crucial for designers to obtain relevant insights into the organization of these settings and how explanations figure in them; understanding this might help to approach X-AI as a sociotechnical issue rather than a purely technical one.
Workshop objectives & themes
This workshop aims to expand and deepen our understanding of explainability as a social phenomenon, in order to broaden and support the conceptualisation of and approaches to X-AI in HCI. Beyond current HCI researchers and practitioners engaged directly with X-AI, we want to encourage participation by those conducting work informed by empirical and theoretical approaches that focus centrally on intersubjectivity and situated social action. These fields could include (but are not limited to) Ethnomethodology and Conversation Analysis [Garfinkel 1967, 2002; Sacks et al. 1974], Discursive Psychology [Edwards and Potter 1992; Wiggins 2016], and cognate fields such as Distributed Cognition [Hutchins 1995] and Enactivism [Di Paolo et al. 2018].
We invite participants to share presentations that deal with explainability in relation to the following three key themes:
Explanations as joint actions
As evidenced by various studies of social interaction, meaning and understanding are co-constructed [see e.g., Clark 1996; Goodwin 2017; Linell 2009]. For X-AI, this means that an explanation cannot be fully predefined by the AI designers; what matters crucially is how it is understood by those who interact with the AI in a specific context. This is directly connected to the ‘black-box’ trouble and to the fact that the ongoing display of system capabilities is explainable in some form. But what precisely that form could be remains contingent on what counts as a reasonable explanation in various contexts. Furthermore, an explanation can never be complete – it could always be elaborated [cf. Garfinkel 1967, pp. 73–75] – and the relevant criteria for a sufficient explanation are jointly established for each setting by its participants, often by reference to the tasks at hand.
The self-explanatory appearance of the social world
Explanations are not only explicitly formulated; they are also an inherent feature of the social world. Even without giving an explicit explanation, the design of an object provides ‘implicit’ explanations. Gibson’s [2014] concept of affordances, often used in system development, highlights that specific design features make specific actions relevant, e.g., a button that should be pressed or a lever that should be pulled [Norman 1990]. Situated social actions are also inherently recognizable [Levinson 2013], even when mediated through AI, as in autonomous driving systems [Stayton 2020]. A slowly driving car will be recognized as not being from the area (see e.g., [Stayton 2020]), and moving in certain ways can be recognized as, e.g., giving way to a pedestrian [Haddington and Rauniomaa 2014; Moore et al. 2019]. Without necessarily doing an explicit explanation, an AI’s behaviours can thus become recognizable as doing a specific social action.
Miscommunication and repair as interactional explainability
Joint action, more broadly, depends on interactional repair – the methods we use to recognize and deal with miscommunication (problems of speaking, hearing, and understanding) as they occur in everyday interaction [Schegloff et al. 1977]. Repair practices such as other-initiated repair constitute practical explanations in that they provide methods for identifying and dealing with breakdowns of mutual understanding. For example, when someone says “huh?” in response to a ‘trouble source’ turn in spoken conversation, the speaker usually repeats the entire prior turn [Dingemanse et al. 2013], whereas if the recipient had said “where?”, their response might only have solicited a repeat or reformulation of one specific part of the prior turn, i.e., the misheard place reference. These practices range from tacit displays of uncertainty and incomprehension, e.g., withholding acknowledgement tokens such as “mhmm” while another is speaking [Bavelas et al. 2000; Dingemanse et al. 2014], to explicit requests for clarification that solicit fully formed explanations and accounts in the following turns [Raymond and Sidnell 2019]. Miscommunication and repair thus provide participants in interaction with real-time indications of their state of mutual (mis)understanding, and with methods for re-establishing and securing at least a sufficient degree of mutual understanding to be able to proceed with their joint action.
Call for proposals
Explainability in AI is a crucial aspect of interaction with and through machines and a key concern in HCI. The goal of this workshop is to unpack the social foundations of explanation, both empirically and theoretically. We aim to bring together researchers from fields that focus on human social interaction, such as ethnomethodology, conversation analysis, discursive psychology, and distributed cognition, to explore explainability with a focus on intersubjectivity and situated action. The workshop will take place in a hybrid format.
Potential workshop participants are invited to submit a 4–8 page position paper (including references) in the single-column ACM Master Article Submission Templates format via the article submission portal linked from the workshop wiki (this page). The position paper should focus on explainability in relation to one or more of the three key themes:
- Explanations as joint actions
- The self-explanatory appearance of the social world
- Miscommunication and repair as interactional explainability
Empirical contributions based on audiovisual data that can be analyzed at the workshop are especially encouraged. The intended workshop outcomes include a collaborative position paper exploring the theme of explainability in relation to human social action, and a proposal for a special issue of Transactions on Computer-Human Interaction (TOCHI).
Submissions will be reviewed by the workshop organizers and external reviewers, and will be selected based on relevance, quality, and the diversity of disciplinary, theoretical, and empirical approaches. Accepted papers will be made available on the workshop webpage. For accepted position papers, at least one author must attend the workshop, and all participants must register for both the workshop and at least one day of the conference.
If you have any questions, please email s.b.albert@lboro.ac.uk.