Latest revision as of 10:12, 17 January 2020

McIlvenny2019
BibType ARTICLE
Key McIlvenny2019
Author(s) Paul McIlvenny
Title Inhabiting spatial video and audio data: Towards a scenographic turn in the analysis of social interaction
Editor(s)
Tag(s) EMCA, Spatial audio, Evidential adequacy, Visualisation, Virtual reality, Conversation Analysis
Publisher
Year 2019
Language English
City
Month
Journal Social Interaction: Video-Based Studies of Human Sociality
Volume 2
Number 1
Pages
URL Link
DOI 10.7146/si.v2i1.110409
ISBN
Organization
Institution
School
Type
Edition
Series
Howpublished
Book title
Chapter

Abstract

Consumer versions of the passive 360° and stereoscopic omni-directional camera have recently come to market, generating new possibilities for qualitative video data collection. This paper discusses some of the methodological issues raised by collecting, manipulating and analysing complex video data recorded with 360° cameras and ambisonic microphones. It also reports on the development of a simple, yet powerful prototype to support focused engagement with such 360° recordings of a scene. The paper proposes that we ‘inhabit’ video through a tangible interface in virtual reality (VR) in order to explore complex spatial video and audio recordings of a single scene in which social interaction took place. The prototype is a software package called AVA360VR (‘Annotate, Visualise, Analyse 360° video in VR’). The paper is illustrated through a number of video clips, including a composite video of raw and semi-processed multi-cam recordings, a 360° video with spatial audio, a video comprising a sequence of static 360° screenshots of the AVA360VR interface, and a video comprising several screen capture clips of actual use of the tool. The paper discusses the prototype’s development and its analytical possibilities when inhabiting spatial video and audio footage as a complementary mode of re-presenting, engaging with, sharing and collaborating on interactional video data.

Notes