This study explores a discussion held among raters for a writing assessment at a Korean university. It investigates the rater behaviors that shaped the raters' decision-making processes, as revealed in their interactions during the discussion. Four raters independently assessed student writing samples using CEFR scales and then held a discussion session to agree on a single score for each sample. Observation and analysis of the rater discussion showed that individual raters’ initial judgments were reflected in the final decisions to differing degrees, and that each rater’s style of argumentation affected this degree. Raters’ personality dynamics, appreciation of student effort, and comprehension of students’ intended meaning emerged as prominent factors influencing score decisions. These findings have important implications for the use of discussion in performance assessment and for the rating process in general.