This study examines how native English-speaking (NES) and Korean non-native English-speaking (KNES) teachers assess L2 writing performance. More specifically, it investigates whether these two groups of raters evaluate writing samples differently when using different rating scales (holistic vs. analytic) and different task types (narrative vs. argumentative). Four NES and four KNES raters evaluated 78 narrative and 78 argumentative essays written by Korean EFL university students, using both holistic and analytic rating rubrics. The comparison between the two rater groups indicated that their scores differed significantly for both holistic and analytic ratings, regardless of task type. Overall, KNES teachers rated the essays more harshly than their NES counterparts, irrespective of task type and rating scale. Multiple regression analysis of five analytic sub-criteria revealed that the two rater groups showed similar patterns when assessing argumentative essays, whereas for narrative essays the relative influence of each analytic sub-criterion on overall writing quality differed between the two groups. Implications for L2 writing assessment are discussed.
This study explored how instructors teaching in the same EFL program rated content in compositions. Twenty-one writing samples written on three different topics were rated by five teacher-raters. The transcribed comments made by the teacher-raters showed that 1) their rating behaviors varied considerably in how they commented on and viewed the content of the compositions, 2) when a composition failed to address the assigned topic, this negatively affected the assessment of the overall quality of the writing, and 3) the teacher-raters made fewer comments on the content of the formal essays than on the content of the e-mails. Suggestions are presented regarding the need to establish written guidelines for the rating criteria applied to students' essays and to provide rater training.