How to explain conversation analysis to quantitative researchers


There are many aspects of EM/CA research that may be difficult to explain to researchers and teams more used to dealing with quantitative methods and data. This page is a list of tips and resources to help explain EM/CA methods in multi-methods research contexts.

FAQs

Here are some of the questions frequently asked of EM/CA researchers in multi-methods contexts, along with useful tips for formulating answers and links to further reading and resources.

General questions

General points to bear in mind

  • Stress that qualitative research is descriptive and specific in nature.
  • Bear in mind vocabulary differences. For sociometrics and quantitative research paradigms, 'data' are the controlled and already-processed 'measurements', tables, and statistical results, which are then interpreted. In these terms, EM/CA transcripts are not 'data' but seem uncontrolled and messy: 'just talk'.
  • Clarify that quality standards of dependability and content validity cannot be transferred easily across paradigms.

More detailed/philosophical discussion points

  • What do 'validity' and 'verifiability' mean in an EM/CA context?
    • Helpful points/ideas include: Verstehen (Weber, Schutz); the distinction between participants' concerns and scientists' concerns (Schutz, Garfinkel); the distinction between the "natural attitude" and the "scientific attitude" towards the social world (Husserl, Schutz); and the distinction between Formal Analysis (FA) and "praxeological validity" (Garfinkel).
    • See Graham Button (Ed.) (1991), Ethnomethodology and the Human Sciences, Cambridge: Cambridge University Press, for relevant discussions.

Practical suggestions / boundaries for multi-methods teamwork

If asked to provide data protocols, cross-coder verification for transcripts, or other data-processing formalities associated with quantitative/coding-based research methods, it can be more practical to comply to some extent rather than refuse:

  1. Have someone else transcribe a small portion of the same recording, then adjudicate any discrepancies to produce a 'formally' validated transcript (see the sketch after this list for one way to surface discrepancies).
  2. Initiate a debate about the difference between paradigms and practicalities of research methods (using resources described here).
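
For step 1, here is a minimal sketch of how discrepancies between two transcribers' versions of the same excerpt might be surfaced for adjudication. The file names are hypothetical, Python is used purely for illustration, and the transcripts are assumed to be plain text:

    import difflib

    # Hypothetical files: two independent transcriptions of the same excerpt.
    with open("transcript_pass1.txt") as f:
        version_a = f.read().splitlines()
    with open("transcript_pass2.txt") as f:
        version_b = f.read().splitlines()

    # Print each line where the two versions differ, so that discrepancies
    # can be adjudicated one by one.
    for line in difflib.unified_diff(version_a, version_b,
                                     fromfile="transcriber A",
                                     tofile="transcriber B", lineterm=""):
        print(line)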

If requested to cross-validate analytic results, it is probably best to focus on making the case that this approach does not necessarily make sense for EM/CA methods.

Answering the question of how much data is required

  • From a short conversation on Twitter (https://twitter.com/saritajoan/status/899422443575443456):
@saritajoan: EMCA folk - looking for a good citation for non-CA people about quantity of data. Suggestions?

@saul: You mean how many cases are conventionally used to support general claims? Collections often cite Schegloff's (1968) famous 500 phone calls.
Tbh there aren't rly standards & it depends on the kind of claim being made. @JPdeRuiter & I discuss related issues: https://t.co/7mYnxHmMBM
& If I haven't misunderstood the question, I'd be interested to know if others have thoughts/refs on current conventions on collection sizes
 
@rolsi_journal: Jeff Robinson warns against collections which mix up likely salient categories (e.g. keep psychiatrists' & GPs' interactions separate)
  
@joshraclaw: I don't think CA folks really use the term "saturation", but I think it fits a lot of the methodological descriptions of collection building
  I want to say that Sidnell actually gave a number once--in a paper or a talk or a workshop--of cases that would make a reliable collection.
  But quantification like that has issues. I think @ the CA summer intensive @ChaseWRaymond and the facilitators gave saturation-y definition.
  @elliotthoey and @kobin talk about building collections in a recent intro to CA paper I saw. What are your four cents?
 
@kobin: CA works best on large collections: Rossi on requests 500+, Hoey on lapses 500, K&D on recruitment 500+. Hard to find deviant cases otherwise
Consider how much data Sacks et al. (1974) must have examined to develop their model of turn-taking. 60 cases? 500? Thousands surely.

On data/transcript issues

  • How can EM/CA researchers guarantee that transcripts are an accurate representation of the recordings?
    • e.g. in some research contexts, inter-transcriber agreement might require that 5% of the data be transcribed a second time and the two versions compared (a rough sketch of such a comparison follows this list).
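
As a rough illustration of such a comparison, the sketch below computes a word-level similarity score between two transcriptions of the same sample. The file names are invented, and this ratio is an illustration only, not an established inter-transcriber agreement statistic:

    import difflib

    # Hypothetical files: two independent transcriptions of the same 5% sample.
    with open("sample_pass1.txt") as f:
        words_a = f.read().split()
    with open("sample_pass2.txt") as f:
        words_b = f.read().split()

    # Crude word-level similarity (0.0 to 1.0) between the two versions.
    ratio = difflib.SequenceMatcher(None, words_a, words_b).ratio()
    print(f"Word-level agreement: {ratio:.1%}")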

Possible answers

  • CA's 'quality control' is rooted in having data sessions and discussion amongst colleagues.
  • Silverman suggests (in Interpreting Qualitative Data, 1993, p. 149) that because the transcript follows standard conventions (which you can append), this ensures proper documentation. Regarding CA, he suggests that "we should not delude ourselves into seeking a perfect transcript", as this is illusory. You need a transcript which is adequate for the task at hand, i.e. the level of analysis and the kind of techniques you are using. Wherever possible, the data should be subjected to group data sessions, which he compares to inter-rater comparison.

FAQs on data analysis

  • How can EM/CA researchers guarantee that the analysis is independently repeatable?
    • e.g. can an independent party repeat the analysis on the video, to see if they come up with the same results?

Possible answers

  • Samples are not intended to be representative of a population; the examples used in extracts are illustrative, helping the reader understand the phenomenon in question, and are not results in and of themselves.

Recommended Reading

On data collection

  • Mondada, L. (2013). The conversation analytic approach to data collection. In J. Sidnell & T. Stivers (Eds.), The Handbook of Conversation Analysis (pp. 32–56). Wiley-Blackwell.

On reliability / validity / representativeness / transcription

  • Peräkylä, A. (2004). Reliability and validity in research based on naturally occurring social interaction. In D. Silverman (Ed.), Qualitative Research: Theory, Method and Practice (2nd ed., pp. 283-304). London: Sage. (or the 3rd edition)
  • Roberts, F., & Robinson, J. D. (2004). Inter-observer agreement on first-stage conversation analytic transcription. Human Communication Research, 30(3), 376-410.
  • Bilmes, J. (2014). Preference and the conversation analytic endeavor. Journal of Pragmatics, 64, 52-71.
  • Lynch, M. (1993). Scientific Practice and Ordinary Action: Ethnomethodology and Social Studies of Science. Cambridge: Cambridge University Press.
  • Hepburn, A., & Bolden, G. B. (2012). The conversation analytic approach to transcription. In J. Sidnell & T. Stivers (Eds.), The Handbook of Conversation Analysis (pp. 57-76). Wiley-Blackwell.
  • Jacobs, S. (1987). Commentary on Zimmerman: Evidence and inference in conversation analysis. Communication Yearbook, 11, 433-443.
  • Jacobs, S. (1990). On the especially nice fit between qualitative analysis and the known properties of conversation. Communication Monographs, 57(3), 243-249.
  • Robinson, J. D. (2007). The role of numbers and statistics within conversation analysis. Communication Methods and Measures, 1, 65-75.


On generalisation / synthesis / systematic review

In relation to specific disciplines

Related links / resources

Credits

This page is based on an original thread initiated by Mario Veen on the Languse mailing list, and responses by Ruth Parry, Israel Berger, Galina Bolden, Jacob Bilmes, Mats Andrén, Dennis Day, Sima Sadeghi, Julie Wilkes, Christian Nelson, Emo Gotsbachner, R. E. Sanders, David Woods, Jeffrey Robinson, Saul Albert and Rebecca Barnes (see the threaded list of replies in the list archives).