Search Results


        13 results

        2.
        2014.02 · KCI-indexed
        This study compares the effects of different test-item types based on the same text, and of test-takers' topic familiarity, on English reading comprehension scores. A total of 312 third-grade Korean high school students took three different English tests whose item types were adapted from the Korean College Scholastic Ability Test (CSAT). Each test used the same texts, taken from exam preparation books published by EBS in Korea, but different question types. Two surveys were carried out to estimate the students' topic familiarity. Repeated-measures ANOVAs revealed significant score differences among items that used the same text with different item types, whereas topic familiarity had no statistically significant effect. The results indicate that item type affects scores on a reading comprehension test more than topic familiarity does when the text is held constant. Based on these findings, it is suggested that English teachers in Korea manage their reading classes with various question types in order to develop students' reading ability and prepare them for the CSAT.
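The repeated-measures ANOVA reported in the abstract above can be sketched in a few lines. Everything below is a simulation under assumed numbers (312 test-takers, three item types, invented effect sizes and noise levels), not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical data: scores of n test-takers on k item types built from the
# same text (the study used 312 students and three CSAT-style tests).
rng = np.random.default_rng(0)
n, k = 312, 3
subject_ability = rng.normal(70, 8, size=(n, 1))   # assumed between-subject spread
type_effect = np.array([0.0, -3.0, 2.0])           # assumed item-type effects
scores = subject_ability + type_effect + rng.normal(0, 4, size=(n, k))

def rm_anova(x):
    """One-way repeated-measures ANOVA for an (n subjects x k conditions) array."""
    n, k = x.shape
    grand = x.mean()
    ss_cond = n * ((x.mean(axis=0) - grand) ** 2).sum()    # between-conditions
    ss_subj = k * ((x.mean(axis=1) - grand) ** 2).sum()    # between-subjects
    ss_total = ((x - grand) ** 2).sum()
    ss_err = ss_total - ss_cond - ss_subj                  # residual
    df_cond, df_err = k - 1, (k - 1) * (n - 1)
    f = (ss_cond / df_cond) / (ss_err / df_err)
    p = stats.f.sf(f, df_cond, df_err)
    return f, p

f, p = rm_anova(scores)
print(f"F({k - 1}, {(k - 1) * (n - 1)}) = {f:.2f}, p = {p:.4g}")
```

Partitioning out the between-subjects sum of squares is what makes the design "repeated measures": each student's overall level is removed before testing the item-type effect.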
        3.
        2010.12 · KCI-indexed · SCOPUS
        The researchers’ experiential knowledge suggests that predicting and controlling the difficulty of the multiple-choice items on the College Scholastic Ability Test (CSAT) for English is left largely to the subjective judgment of experienced item writers. The present study accordingly recognizes a need to identify item-difficulty predictors and to build an item-difficulty prediction model to address this issue. Focusing on the multiple-choice reading subset of the CSAT, the study identified candidate predictor variables from previous research and validated them via questionnaires in which highly experienced teacher-raters analyzed reading items from the English reading subset of the preliminary CSAT (i.e., yun-hap-hak-ryuk-pyung-ga) administered in March 2009. An item-difficulty prediction model was then generated using multiple regression and maximum likelihood estimation. To check the validity and applicability of the prediction model, it was tested on a subsequent version of the test administered in September 2009. This type of model building is expected to guide test developers in designing item pools for special needs, such as constructing multiple test forms with similar mean difficulties.
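The model-building idea in the abstract above, fitting a difficulty-prediction regression on one administration and checking it on a subsequent one, can be illustrated with simulated data. The predictor names, coefficients, and item counts below are invented for illustration; the study used maximum likelihood estimation, while this sketch uses ordinary least squares, which yields the same coefficients under a normal-error model:

```python
import numpy as np

# Hypothetical illustration: regress observed item difficulty (e.g., the
# proportion of examinees answering incorrectly) on candidate predictor
# variables rated by teachers. Predictors and coefficients are invented.
rng = np.random.default_rng(1)
n_items = 48
# columns: text length (z-score), vocabulary level (z), inference demand (z)
X = rng.normal(size=(n_items, 3))
true_beta = np.array([0.10, 0.15, 0.20])
difficulty = 0.45 + X @ true_beta + rng.normal(0, 0.05, n_items)

# Fit by ordinary least squares on the first administration.
A = np.column_stack([np.ones(n_items), X])
beta, *_ = np.linalg.lstsq(A, difficulty, rcond=None)

# Validate on a held-out "subsequent administration", as the study did.
X_new = rng.normal(size=(12, 3))
d_new = 0.45 + X_new @ true_beta + rng.normal(0, 0.05, 12)
pred = np.column_stack([np.ones(12), X_new]) @ beta
rmse = np.sqrt(np.mean((pred - d_new) ** 2))
print("coefficients:", np.round(beta, 3))
print("hold-out RMSE:", round(rmse, 3))
```

A low hold-out RMSE is the analogue of the study's check on the September 2009 test: the model must predict difficulties for items it never saw during fitting.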
        4.
        2010.12 · KCI-indexed · SCOPUS
        The purpose of this study is to present underlying principles and criteria for designing and developing the National English Ability Test (NEAT). To this end, the study presents five principles (Connection, Authenticity, Interactiveness, IBT Compatibility, and Washback Effectiveness), derived from test usefulness (Bachman & Palmer, 1996), that guide and constrain the development and selection of item types appropriate for the listening and reading tests of NEAT. It also proposes four criteria (Complementarity, Integration, Pivotality, and Hierarchicality) that serve as levers in designing and developing the two tests, based on empirical item analyses of the Foreign Language (English) domain of the College Scholastic Ability Tests (2005-2010 school years). Finally, it discusses implications and applications of the principles and criteria for a theoretical blueprint covering an item-development system and a test-implementation system, criteria for validating item types, a modular type of test specification, and a complementary type of test specification for the two ability-level listening and reading tests (the 2nd and 3rd rank) of NEAT.
        5.
        2009.06 · KCI-indexed
        6.
        2006.09 · KCI-indexed · SCOPUS
        7.
        2004.12 · KCI-indexed · SCOPUS
        8.
        2004.06 · KCI-indexed · SCOPUS
        9.
        2004.03 · KCI-indexed
        The purpose of this study was to develop a statistical model for predicting item difficulty on the English reading test of the College Scholastic Ability Test (CSAT). In the initial phase, the study identified variables significantly correlated with the item difficulty of English reading items. Using these variables, an instrument was designed to gather data on the difficulty of each item on the English reading tests of the 2002 and 2003 CSAT. Regression analysis was then conducted to obtain models that could predict item difficulty. To validate the results, the models were applied to the September tests of the 2003 and 2004 CSAT. The 2003 linear regression model showed an acceptable level of stability and predictability. The study also revealed that the statistically significant predictors were grammar, inference, and options, and that the proportion of variance accounted for by each predictor differed. Some implications for CSAT design and further research were drawn from the findings.
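The first phase described in the abstract above, screening candidate variables by their correlation with item difficulty, can be sketched as follows. The predictor names (grammar, inference, options) come from the abstract, but all data here are simulated under invented effect sizes:

```python
import numpy as np
from scipy import stats

# Hypothetical screening step: correlate each candidate predictor with
# observed item difficulty and keep the significant ones. Simulated data only.
rng = np.random.default_rng(2)
n_items = 200
predictors = {
    "grammar": rng.normal(size=n_items),
    "inference": rng.normal(size=n_items),
    "options": rng.normal(size=n_items),
}
# Simulate difficulty as driven mainly by inference demand (assumed weights).
difficulty = (0.5 * predictors["inference"]
              + 0.2 * predictors["grammar"]
              + rng.normal(0, 0.2, n_items))

correlations = {}
for name, x in predictors.items():
    r, p = stats.pearsonr(x, difficulty)
    correlations[name] = (r, p)
    print(f"{name}: r = {r:+.2f}, p = {p:.3g}")
```

Predictors that survive this screen would then feed the regression model the abstract describes; a variable with a near-zero r (like "options" in this simulation) would be a candidate for exclusion.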
        10.
        2002.12 · KCI-indexed · SCOPUS