Search Results


3 results

1.
2021.06 | Subscription: free for subscribing institutions, paid for individual members
4,000 KRW
2.
2016.09 | KCI-listed | Subscription: free for subscribing institutions, paid for individual members
The current study aims to examine whether topic familiarity can be associated with, and can also substantially contribute to, the testlet effect often found among items sharing a common passage in reading comprehension tests, particularly by using the Multidimensional Item Response Theory (MIRT) bifactor model as the main data analytic scheme. Data analyzed in this study were item scores collected from 830 Korean high school students who took an 8-passage, 32-item reading comprehension test (4 items per passage) and a 40-item self-reported topic familiarity questionnaire (5 items per passage). The results of the analysis revealed that (a) the bifactor model best fit the test data; (b) the self-reported topic familiarity score was not correlated with the testlet factor scores at a statistically significant level; and (c) the overall topic familiarity score was, however, positively correlated with the general reading ability score. Implications of these major findings are discussed in terms of the usefulness of the bifactor model and the nature of testlet effects in reading tests, along with avenues for further investigation.
7,700 KRW
3.
2016.02 | KCI-listed | Subscription: free for subscribing institutions, paid for individual members
Automated error detection and feedback systems are becoming an important component of online writing practice services for ESL/EFL (English as a second/foreign language) learners. The main purposes of the study are to: (a) collect samples of essays written by ESL learners with different native language (or L1) backgrounds that are error-coded by an early version of an automated error-detection system (Critique™) and by trained human coders; and (b) identify some unique patterns of writing errors for different first language (L1) groups. Data analyzed in this study included 18,439 TOEFL® CBT essays error-coded by Critique™ and a much smaller, combined sample of 480 TOEFL® CBT/TOEFL iBT® essays error-coded by trained human coders. A comparison of error rates across five different language groups showed some unique patterns: (a) the Arabic and Spanish groups had the highest rates of both spelling and punctuation errors; (b) the Korean and Japanese groups had the highest frequency of article errors; and (c) the Chinese group had the highest number of errors related to verb conjugations or adjective and noun inflections. The implications of these findings are discussed in terms of understanding the nature of L1-related writing errors and enhancing automated error detection and feedback systems.
5,800 KRW