McIlvenny2020a
| Field | Value |
|---|---|
| BibType | ARTICLE |
| Key | McIlvenny2020a |
| Author(s) | Paul McIlvenny |
| Title | The Future of ‘Video’ in Video-Based Qualitative Research Is Not ‘Dumb’ Flat Pixels! Exploring Volumetric Performance Capture and Immersive Performative Replay |
| Editor(s) | |
| Tag(s) | EMCA, Video research, Big Video, ethnomethodological conversation analysis, volumetric, virtual reality, immersive qualitative analytics |
| Publisher | |
| Year | 2020 |
| Language | English |
| City | |
| Month | |
| Journal | Qualitative Research |
| Volume | 20 |
| Number | 6 |
| Pages | 800–818 |
| URL | |
| DOI | 10.1177/1468794120905460 |
| ISBN | |
| Organization | |
| Institution | |
| School | |
| Type | |
| Edition | |
| Series | |
| Howpublished | |
| Book title | |
| Chapter | |
Abstract
Qualitative research that focuses on social interaction and talk has been increasingly based, for good reason, on collections of audiovisual recordings in which 2D flat-screen video and mono/stereo audio are the dominant recording media. This article argues that the future of ‘video’ in video-based qualitative studies will move away from ‘dumb’ flat pixels in a 2D screen. Instead, volumetric performance capture and immersive performative replay rely on a procedural camera/spectator-independent representation of a dynamic real or virtual volumetric space over time. It affords analytical practices of re-enactment – shadowing or redoing modes of seeing/listening as an active spectation for ‘another next first time’ – which play on the tense relationships between live performance, observability, spectatorship and documentation. Three examples illustrate how naturally occurring social interaction and settings can be captured volumetrically and re-enacted immersively in virtual reality (VR) and what this means for data integrity, evidential adequacy and qualitative analysis.
Notes