Big Video Sprint 2017

BVS2017
Type Conference
Categories (tags) Uncategorized
Dates 2017/11/22 - 2017/11/24
Link http://www.bigvideo.aau.dk/conference/big-video-sprint-2017/
Address Aalborg, DK
Geolocation 57° 2' 56", 9° 55' 18"
Abstract due 2017/05/30
Submission deadline 2017/05/30
Final version due
Notification date
Tweet CFP DL: 30 May 2017 Big Video Sprint conference on ethnographic/#EMCA video-analysis 22-24 Nov. 2017, Aalborg, DK,

Details:
This mini-conference on BIG VIDEO will take place at Aalborg University, Denmark, on 22-24 November 2017.

The keynote speakers are:

  • Anne Harris, Monash University, Australia
  • Robert Willim, Lund University, Sweden
  • Adam Fouse, Aptima, USA
  • Paul McIlvenny & Jacob Davidsen, Aalborg University


Dates: 22-24 November 2017

Location: Aalborg University, Aalborg, Denmark


This mini-conference is targeted at practitioners of qualitative video ethnography and ethnomethodological conversation analysis who are exploring new ways of collecting time-based records of social, material and embodied practices as live-action events in real or virtual worlds, and who may also be critically revisiting established methods. This approach will most likely involve crafting and sharing video data archives, as well as transcribing and visualising enhanced video data, in order to collect analytically adequate recordings and to do analysis in new ways. We feel that our collective research endeavour is at a critical juncture: both a leap forward, driven by new technologies that help collect richer and enhanced moving image and sound recordings in a variety of novel settings, and a critical reflection on the nature of video data and the praxiology of doing video-based research.

With the complexity of video recording scenarios, and the increasing use of computational tools and resources for qualitative analysis, we can see the beginnings of a BIG VIDEO programme. We use this glib term to suggest an alternative to the hype about quantitative big data analytics: big can mean both large datasets and more than just video. Thus, we argue that there is a need to develop an infrastructure for qualitative video analysis in four key areas: 1) capture, storage, archiving and access of enhanced digital video; 2) visualisation, transformation and presentation; 3) collaboration and sharing; and 4) software tools to support analysis. The mini-conference is organised as a series of keynotes, panel discussions, enhanced data sessions and method sprints aiming to elevate and ignite discussions of the future of Big Video.

With the development of new video recording and sensing technologies, fresh opportunities arise for data collection and analysis within the discourse and interaction studies paradigm. Technologies with potential include high-resolution and high-speed video cameras, 360° cameras, stereoscopic 3D cameras, thermal cameras, virtual cameras, spatial and ambisonic audio, video stitching and annotation, GPS and local positioning systems, lightfields and 3D scanning, mobile biosensing data (e.g. heart rate, galvanic skin response and EEG), motion/performance capture and mobile eye tracking – to name just a few. The opportunities these afford should be actively and critically explored. We therefore envisage that the following themes will be in focus at this mini-conference:

  • Enhanced qualitative video data collection methods
  • Complementary use of sensory data
  • Complementary use of spatial and environmental sensing data
  • Autonomous and manual drone video
  • Critical reflections on the ‘camera’, the ‘microphone’, the ‘frame’ and the ‘shot’ in data capture
  • Virtualisation of capture methods
  • ‘Found video’ and public video data archives
  • Re*sensing video and audio, e.g. haptic visuality
  • Video data collection in extreme situations and complex settings
  • Footprint recordings, omniscient frames and six degrees of freedom
  • Virtual immersion and stereoscopic/holographic realism
  • Algorithmic normativity and bias in video recording software and hardware
  • Developing and standardising transcription conventions for complex qualitative data sets
  • Transcription software development
  • Novel ways to visualise and analyse complex qualitative data sets
  • Best practice for digitally anonymising voices, bodies, semiotic landscapes, settings and objects
  • Enhanced ‘data sessions’
  • Inhabiting data with augmented and virtual reality
  • Re*enactment, plausibility and epistemic adequacy
  • Modding game engines, APIs, VSTs, CODECs, platforms and apps for live data capture and editing (DAWs and NLEs)
  • Archiving, rendering and sharing video data corpora beyond the cloud, e.g. fogs
  • Collaborative video repository and subversion issues
  • Design of software tools and practices to support collaboration on video data annotation and analysis
  • New modes for dissemination, presentation and publication of data and analysis
  • Aesthetics of video research methods
  • Emerging ethical and legacy issues
  • Theoretical and methodological reflections on data collection and transcription practices
  • Practical, methodological and theoretical perspectives on the relations between the concepts of the ‘Event’, the ‘Record’, ‘Data’, the ‘Transcript’, the ‘Analysis’, and the ‘Publication’

Please submit an abstract of 500 words to be considered for inclusion on the programme and to secure your participation in the conference. The deadline is 30 May 2017.

Full details can be found here: http://www.bigvideo.aau.dk/conference/big-video-sprint-2017/