This study investigated whether the difficulty of instructional classroom English in primary English teacher guidebooks is adequately adjusted to learner proficiency. Corpora of classroom English were compiled from 20 guidebooks from five publishers, all approved under the 2015 Revised National Curriculum of Korea. Materials extracted from grades three and four were compared with those from grades five and six to observe variations in difficulty. Coh-Metrix, a software application that computes an extensive range of measures of cohesion and language, was used for the analyses. Based on evidence-based assessments of (psycho)linguistic features and patterns of classroom English, we report results both congruent and incongruent with the expectation that difficulty should increase as learners become more proficient. Overall, although partial difficulty adjustments between the two levels were noted, inconsistent results and invariances were also observed, revealing much room for improvement in the classroom English of the guidebooks. Implications for teacher-guidebook development, particularly regarding classroom English, are suggested.
While learners may have access to reference tools during second language (L2) writing, the latest developments in machine translation (MT), such as Google Translate, call for examination of how using such tools may factor into L2 learners' writing products. To this end, the purpose of this study was to examine how MT may affect L2 learners' writing products relative to when writers wrote directly in L2 or translated a text from Korean into English. EFL university learners were asked to write in response to prompts that were counterbalanced across three writing modes and three writing topics. The learners' writing products were analyzed with Coh-Metrix to provide information on text characteristics at multiple levels. The results indicate that MT could help learners improve fluency and cohesion, produce syntactically complex sentences, and use concrete words to express their target messages. Pedagogical implications are provided for how MT can be used to improve the quality of L2 learners' writing products.
Since the introduction of criterion-referenced evaluation into the English Section of the 2018 College Scholastic Aptitude Test (CSAT), it has become more important than ever to maintain the same level of difficulty each year. This study was designed to examine whether the 2017 and 2018 CSAT reading passages differ in text difficulty. Both unidimensional and multidimensional indices provided by Coh-Metrix (Version 3.0) were used. Descriptive indices included the respective counts and lengths of sentences and words. As unidimensional indices, the Coh-Metrix L2 Readability score as well as the Flesch Reading Ease and Flesch-Kincaid Grade Level scores were measured. Indices of text easability at different discourse levels (the surface structure, the textbase, the situation model, and the genre and rhetorical structure) were also included. Results showed no significant differences between the 2017 and 2018 passages. Compared with the other easability components (narrativity, syntactic simplicity, word concreteness, and deep cohesion), referential cohesion was rather low in both 2017 and 2018. Some pedagogical implications are also provided.
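The two classic readability formulas named above have standard published coefficients based on average sentence length and average syllables per word. The following minimal Python sketch shows those formulas; the counting of words, sentences, and syllables is assumed to be done beforehand, as Coh-Metrix and other tools differ in how they tokenize.

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease: higher scores indicate easier text (roughly 0-100)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid Grade Level: maps the same two ratios to an
    approximate U.S. school grade."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Example: a 100-word passage with 5 sentences and 130 syllables.
print(flesch_reading_ease(100, 5, 130))   # fairly easy text
print(flesch_kincaid_grade(100, 5, 130))  # mid-grade-school level
```

Note that the two scores run in opposite directions: a longer average sentence or word lowers Reading Ease but raises the Grade Level.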
This study aimed to identify the continuity between 6th grade elementary school English textbooks and 1st grade middle school English textbooks using Coh-Metrix, an automated web-based program designed to analyze and calculate the coherence of texts on a wide range of measures. The measured values for each text type were compared and classified into surface linguistic features (basic counts, word frequency, readability, connective information, pronoun information, word information) and deep linguistic features (co-referential cohesion and semantic cohesion, lexical diversity, syntactic complexity). The findings were as follows. First, basic counts and words before the main verb differed significantly between the two levels of textbooks; the differences were especially marked in the written language. Second, FKGL (Flesch-Kincaid Grade Level) and the pronoun ratio differed significantly only in the written language. In addition, type-token ratios showed greater differences in written language than in spoken language. Third, the other language features showed only mild and gradual differences. Finally, the results indicated no statistically significant differences in discourse aspects.
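The type-token ratio mentioned above is the simplest lexical-diversity measure: the number of distinct word forms (types) divided by the total number of running words (tokens). The sketch below is a minimal illustration; Coh-Metrix's actual lexical-diversity indices involve additional preprocessing (e.g., lemmatization, content-word filtering) and length-corrected variants, since raw TTR falls as texts grow longer.

```python
def type_token_ratio(tokens):
    """Ratio of distinct word forms (types) to total tokens.
    Higher values suggest greater lexical diversity."""
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

# "the" occurs twice, so there are 5 types among 6 tokens.
tokens = "the cat sat on the mat".lower().split()
print(type_token_ratio(tokens))
```

Because TTR is length-sensitive, comparisons across textbooks are only meaningful on samples of comparable size or with a length-corrected index.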