The shortage of competent native English-speaking raters and the inherent problems of intra-rater and inter-rater reliability in the oral proficiency interview (OPI) have precluded the full-fledged implementation of English performance testing, inevitably ushering in the computer-based oral proficiency interview (COPI), supported by automatic speech recognition (ASR), as a viable alternative. The plausibility and feasibility of implementing an ASR-based COPI have recently been investigated with favorable results, which warrants more sophisticated research focusing on the development of test methods that meet the rigorous criteria required by high-stakes language tests. In this respect, employing such statistical methods as correlational analysis, regression analysis, and ANOVA, the present study attempts to explore the strengths and limitations of test method facets and to identify valid test methods that maximize the validity and reliability of the ASR-based COPI. Within the theoretical framework of the communicative language components to be measured, the statistical findings reveal that some test methods prove more effective than others in producing COPI test results with better discriminability and reliability. A survey of students and teachers also suggests favorable attitudes toward utilizing the COPI for in-class evaluation. Both findings strongly corroborate the potential of the COPI in question as a valid performance testing tool for measuring overall communicative competence. The current research is expected not only to shed light on the advancement of performance testing but also to serve the purpose of enhancing communicative English teaching.
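As a rough illustration of the kind of analysis the abstract names, the sketch below compares score distributions across hypothetical test method facets with a one-way ANOVA and correlates each facet's scores with a criterion measure. The facet names, sample sizes, and scores are invented for illustration and are not the study's data or results.

```python
# Illustrative only: synthetic scores and hypothetical facet names, not the study's data.
import numpy as np
from scipy.stats import f_oneway, pearsonr

rng = np.random.default_rng(0)
n = 40  # examinees per hypothetical facet

# Hypothetical COPI test-method facets (task types) with synthetic score samples.
facets = {
    "read_aloud": rng.normal(70, 8, n),
    "picture_description": rng.normal(68, 12, n),
    "role_play": rng.normal(72, 10, n),
}

# One-way ANOVA: do mean scores differ across test-method facets?
f_stat, p_val = f_oneway(*facets.values())
print(f"ANOVA across facets: F = {f_stat:.2f}, p = {p_val:.4f}")

# Synthetic criterion scores (e.g., human-rated OPI) for the same examinees.
criterion = rng.normal(70, 10, n)

# Correlation of each facet's scores with the criterion as a crude validity check.
for name, scores in facets.items():
    r, p = pearsonr(scores, criterion)
    print(f"{name}: r = {r:.2f} (p = {p:.3f})")
```

In the study itself, such analyses would be run on actual examinee scores; with random data as here, the output is meaningful only as a demonstration of the procedure.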