This study investigated the feasibility of adopting an automatic scoring system (ASS) in a domestic English-speaking education context. The scope, test items, assessment criteria, scoring methods, and reporting strategies of six overseas English-speaking tests utilizing ASSs were examined, and a comparative analysis was conducted to identify differences between ASS-based and non-ASS-based speaking tests. The findings were as follows: 1) some ASS-based tests applied ASS technology throughout the assessment, while others adopted a hybrid scoring system involving human raters; 2) compared to non-ASS-based tests, ASS-based tests used more items targeting low-level skills such as sounds and forms and fewer items targeting conversation- and discourse-level skills; 3) most ASS-based tests employed pronunciation, fluency, and vocabulary as evaluation criteria, while organization, content, and task completion were used only sparingly; 4) differences between ASS-based and non-ASS-based tests in the application of assessment criteria and in score calculation were minimal; and 5) some ASS-based tests provided criterion-specific results and feedback along with total scores and proficiency levels.