Search Results


        Search results: 3

        2.
        2023.07 · Subscription: free for authenticated institutions and individual members
        While machine learning has gained popularity in choice behavior modeling, most machine learning models are complex, difficult to interpret, and often regarded as black boxes. This study investigates machine learning methods for choice behavior modeling that provide interpretability of the models’ output. We explore various approaches, including (1) explicitly descriptive models such as tree-based models, (2) interpretation of predictive models through feature importance measures, and (3) recent advancements in prediction explanation methods such as LIME and SHAP (SHapley Additive exPlanations). We demonstrate the methods on consumers’ airport choice behavior in the Seoul metropolitan area. Through a comparative analysis with traditional discrete choice models, we discuss the advantages as well as the limitations of machine learning models in consumer choice behavior modeling.
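        The three interpretability routes this abstract lists can be illustrated with widely used Python libraries. The sketch below is not the paper's code; it assumes scikit-learn, shap, and lime, and uses a synthetic stand-in for the airport-choice data with illustrative feature names: a tree-based classifier, its built-in feature importances, and SHAP/LIME explanations of individual predictions.

        import shap
        from lime.lime_tabular import LimeTabularExplainer
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for the airport-choice data (binary choice between two airports)
        X, y = make_classification(n_samples=1000, n_features=6, n_informative=4, random_state=0)
        feature_names = ["access_time", "fare", "flight_frequency",
                         "airline_count", "parking_cost", "transfer_time"]  # illustrative names
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # (1) a tree-based model whose learned splits can be inspected directly
        model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

        # (2) feature importance measures taken from the fitted predictive model
        importances = dict(zip(feature_names, model.feature_importances_))

        # (3) prediction explanation methods: SHAP values and a LIME local surrogate
        explainer = shap.TreeExplainer(model)
        shap_values = explainer.shap_values(X_test)  # per-feature contributions to each prediction
        lime_explainer = LimeTabularExplainer(X_train, feature_names=feature_names, mode="classification")
        lime_explanation = lime_explainer.explain_instance(X_test[0], model.predict_proba)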
        3.
        2022.06 · KCI-indexed · Subscription: free for authenticated institutions, paid for individual members
        PURPOSES : In this study, model-agnostic methods are applied to interpret machine learning models, covering a feature's global effect, the importance of a feature, the joint effects of features, and the explanation of individual predictions. METHODS : Model-agnostic global interpretation techniques, such as the partial dependence plot (PDP), accumulated local effects (ALE), feature interaction (H-statistic), and permutation feature importance, were applied to describe the average behavior of a machine learning model. Moreover, local model-agnostic interpretation methods, namely individual conditional expectation (ICE) curves, local surrogate models (LIME), and Shapley values, were used to explain individual predictions. RESULTS : As global interpretations, the PDP and ALE plot demonstrated the relationship between a feature and the prediction of a machine learning model, the feature interaction (H-statistic) estimated whether one feature depended on another feature, and the permutation feature importance measured the importance of a feature. For local interpretations, ICE exhibited how changing a feature changes the prediction for the instance of interest, LIME explained the relationship between a feature and the instance's prediction by replacing the machine learning model with a locally interpretable surrogate model, and Shapley values showed how to fairly distribute the instance's prediction among the features. CONCLUSIONS : Model-agnostic methods contribute to understanding the general relationship between features and a prediction, or to debugging a model from a global and/or local perspective, thereby securing the reliability of the machine learning model.
        4,500 KRW
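
        As a rough illustration of how two of the global/local tools named in this abstract are typically computed (not the paper's code), the sketch below uses scikit-learn on a synthetic regression stand-in: PartialDependenceDisplay with kind="both" draws the partial dependence curve together with ICE curves, and permutation_importance measures the score drop when a feature is shuffled. ALE, the H-statistic, LIME, and Shapley values are not in scikit-learn and would need separate packages such as alibi, lime, or shap.

        from sklearn.datasets import make_regression
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.inspection import PartialDependenceDisplay, permutation_importance
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for the study's data
        X, y = make_regression(n_samples=1000, n_features=5, noise=0.1, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

        # Global + local view: partial dependence (average effect) overlaid with
        # ICE curves (one per instance) for the first two features
        PartialDependenceDisplay.from_estimator(model, X_test, features=[0, 1], kind="both")

        # Global view: permutation feature importance = drop in test score when a feature is shuffled
        result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
        print(result.importances_mean)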