Search Results

1 result

        1.
2017.12
Linguistic features that are indicative of higher writing proficiency levels can inform many aspects of language assessment, such as scoring rubrics, test items, and automated essay scoring (AES). Recent advances in computer algorithms that automatically calculate indices based on various linguistic features have made it possible to examine the relationship between linguistic features and writing proficiency on a larger scale. While the ability to use appropriate n-grams (recurring sequences of contiguous words) has been identified in the literature as a characteristic that differentiates proficiency levels, few studies have examined this relationship using computational indices. To this end, this study utilized the Tool for the Automatic Analysis of Lexical Sophistication (TAALES; Kyle & Crossley, 2015) to calculate eight n-gram-based indices from a stratified corpus of 360 argumentative essays written by Korean college-level learners. First, the indices from the training set of 240 essays were used to build a multinomial logistic regression model in order to identify indices that are significant predictors of writing proficiency levels. Subsequently, the regression model was applied to a test set of 120 essays to examine whether it could predict the proficiency levels of unseen essays. The results revealed that the mean bigram T, mean bigram Delta P, mean bigram-to-unigram Delta P, and proportion of the 30,000 most frequent trigrams indices were significant predictors of proficiency levels. Furthermore, the regression model based on the eight indices correctly classified 52.5% of the essays in the test set, demonstrating above-chance accuracy.
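For readers curious how such a design translates into code, the sketch below shows a minimal version in Python with pandas and scikit-learn (tools the abstract does not mention): a multinomial logistic regression fit on eight index columns from a 240-essay training file and evaluated on a 120-essay test file. The file names, column names, and the "level" label are illustrative assumptions, not the study's actual variables or TAALES output labels.

# Rough sketch (assumed, not from the paper): multinomial logistic regression
# over eight n-gram indices, mirroring the 240/120 train-test split described above.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Eight n-gram-based indices per essay, e.g. as exported from a TAALES run
# (these column names are placeholders, not TAALES's actual index names).
INDEX_COLUMNS = [
    "bigram_t_mean", "bigram_delta_p_mean", "bigram_to_unigram_delta_p_mean",
    "trigram_prop_30k", "bigram_prop_30k", "bigram_freq_mean",
    "trigram_freq_mean", "bigram_mi_mean",
]

train = pd.read_csv("train_essays.csv")  # 240 essays: indices + proficiency "level"
test = pd.read_csv("test_essays.csv")    # 120 held-out essays

# With a multi-class target, scikit-learn's default lbfgs solver fits a multinomial model.
model = LogisticRegression(max_iter=1000)
model.fit(train[INDEX_COLUMNS], train["level"])

# Classification accuracy on the unseen test set (the study reports 52.5% correct).
predictions = model.predict(test[INDEX_COLUMNS])
print(f"Test set accuracy: {accuracy_score(test['level'], predictions):.1%}")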