
Sandlund2020
BibType ARTICLE
Key Sandlund2020
Author(s) Erica Sandlund, Tim Greer
Title How do raters understand rubrics for assessing L2 interactional engagement? A comparative study of CA- and non-CA-formulated performance descriptors
Tag(s) EMCA, engagement, conversation analysis, paired discussion tests, interactional competence, English as a foreign language (EFL)
Year 2020
Language English
Journal Papers in Language Testing and Assessment
Volume 9
Number 1
Pages 128-163
URL http://www.altaanz.org/uploads/5/9/0/8/5908292/2020_9_1__5_sandlund_greer.pdf

Abstract

While paired student discussion tests in EFL contexts are often graded using rubrics with broad descriptors, an alternative approach constructs the rubric via extensive written descriptions of video-recorded exemplary cases at each performance level. With its long history of deeply descriptive observation of interaction, Conversation Analysis (CA) is one apt tool for constructing such exemplar-based rubrics; but to what extent are non-CA-specialist teacher-raters able to interpret a CA analysis in order to assess the test?

This study explores this issue by comparing two paired EFL discussion tests that use exemplar-based rubrics, one written by a CA specialist and the other by EFL test constructors not specialized in CA. The complete dataset consists of test recordings (university-level Japanese learners of English and secondary-level Swedish learners of English) and recordings of teacher-raters' interaction. Our analysis focuses on the ways experienced language educators perceive engagement while discussing their ratings of the video-recorded test talk in relation to the exemplars and descriptive rubrics. The study highlights differences in the ways teacher-raters display their understanding of the notion of engagement within the tests, and demonstrates how CA rubrics can facilitate a more emically grounded assessment.
