Explainable AI (XAI) is an approach that leverages artificial intelligence to support human decision-making while making the basis of its outputs understandable. Recently, governments of several countries, including Korea, have attempted objective, evidence-based analyses of R&D investments and their returns by analyzing quantitative data. Over the past decade, governments have invested in relevant research, allowing government officials to gain insights that help them evaluate past performance and discuss future policy directions. However, compared to the volume of text information accumulated in national databases, its utilization to date remains low. The current study applies a text mining strategy for monitoring innovations, with a case study of smart farms in the Honam region.
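A minimal sketch of the kind of keyword-level text mining such a monitoring strategy might involve, assuming project abstracts from a national R&D database are available as a list of strings; the sample texts, variable names, and the TF-IDF keyword extraction are illustrative assumptions, not the study's exact pipeline.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical project abstracts pulled from a national R&D database.
abstracts = [
    "Smart-farm sensor platform for greenhouse climate control in Honam",
    "ICT-based irrigation scheduling for smart-farm tomato production",
    "Deep learning pest detection for protected horticulture",
]

vectorizer = TfidfVectorizer(max_features=1000, ngram_range=(1, 2), stop_words="english")
tfidf = vectorizer.fit_transform(abstracts)

# Average TF-IDF weight per term across the corpus as a simple salience score.
scores = np.asarray(tfidf.mean(axis=0)).ravel()
terms = vectorizer.get_feature_names_out()
for term, score in sorted(zip(terms, scores), key=lambda t: t[1], reverse=True)[:10]:
    print(f"{term}: {score:.3f}")
```

Ranking terms this way gives a rough picture of which innovation topics dominate the funded projects over time.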
Recently, not only traditional statistical techniques but also machine learning algorithms have been used to make more accurate bankruptcy predictions. However, the insolvency rate of companies dealing with financial institutions is very low, resulting in a data imbalance problem. Since data imbalance negatively affects the performance of artificial intelligence models, the imbalance must be addressed first. In addition, as artificial intelligence algorithms advance toward precise decision-making, regulatory pressure to secure the transparency of artificial intelligence models is gradually increasing, for example by mandating explanation functions for such models. Therefore, this study aims to present guidelines for an eXplainable Artificial Intelligence-based corporate bankruptcy prediction methodology that applies the SMOTE technique and the LIME algorithm to solve the data imbalance and model transparency problems in predicting corporate bankruptcy. The implications of this study are as follows. First, it was confirmed that SMOTE can effectively solve the data imbalance issue, a problem easily overlooked in predicting corporate bankruptcy. Second, through the LIME algorithm, the basis for the machine learning model's bankruptcy predictions was visualized, and improvement priorities were derived for the financial variables that increase a company's possibility of bankruptcy. Third, the scope of application of the algorithms in future research was expanded by confirming, through case application, that SMOTE and LIME can be used.
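A minimal sketch of the SMOTE-plus-LIME workflow outlined above, assuming a tabular dataset of financial ratios with a binary bankruptcy label; the synthetic data, feature names, and the RandomForest classifier are illustrative assumptions, not the paper's exact data or model configuration.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["debt_ratio", "current_ratio", "roa", "operating_margin"]
X = rng.normal(size=(1000, len(feature_names)))
y = (rng.random(1000) < 0.05).astype(int)  # ~5% bankrupt firms: highly imbalanced

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# 1) Rebalance only the training set with SMOTE.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

# 2) Train a classifier on the resampled data.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_res, y_res)

# 3) Explain a single firm's prediction with LIME.
explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names, class_names=["solvent", "bankrupt"]
)
exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=4)
print(exp.as_list())  # per-feature contributions to this firm's prediction
```

The per-feature contributions returned by LIME are the kind of output that can be ranked to prioritize which financial variables a firm should improve first.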
Predictive maintenance has been one of the important applications of data science, creating a predictive model by collecting numerous data related to the equipment being managed. It does not predict equipment failure from just one or two signs; rather, it quantifies and models numerous symptoms together with historical data on actual failures. Statistical methods were widely used for predictive maintenance in the past, but recently many machine learning-based methods have been proposed. These machine learning-based methods are preferable in that they show more accurate prediction performance. However, with the exception of some learning models such as decision tree-based models, it is very difficult to explicitly know the structure of the learning model (a black-box model) and to explain to what extent particular attributes (features or variables) affected the prediction results. Explainable artificial intelligence (AI) has recently been proposed to overcome this problem: it is a methodology that makes it easier for users to understand and trust the results of machine learning-based models. In this paper, we propose an explainable AI method to further enhance the explanatory power of the existing learning model, targeting the previously proposed predictive model [5] that learned data from a core facility (Hyper Compressor) of a domestic chemical plant that produces polyethylene. The ensemble prediction model, which is a black-box model, was converted to a white-box model using explainable AI. The proposed methodology explains, through explainable AI, the direction of control for the major features in the failure prediction results. Through this methodology, it is possible to flexibly adjust the timing of machine maintenance and the supply of parts, and to improve the efficiency of facility operation through proper pre-control.
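A minimal sketch of one common way to approximate a black-box ensemble with a white-box surrogate, in the spirit of the conversion described above; the sensor feature names, the synthetic data, and the gradient-boosting failure model are illustrative assumptions, not the plant's actual predictive model [5] or the paper's exact method.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["vibration_rms", "discharge_pressure", "bearing_temp", "motor_current"]
X = rng.normal(size=(2000, len(feature_names)))
# Hypothetical failure label driven mainly by vibration and bearing temperature.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=2000) > 1.2).astype(int)

# Black-box ensemble failure predictor.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global surrogate: fit an interpretable tree to the ensemble's own predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The printed rules show which features drive the failure prediction and the
# direction in which they would need to be controlled.
print(export_text(surrogate, feature_names=feature_names))
```

The thresholds in the surrogate's rules indicate, for each major feature, whether it should be lowered or raised to move the equipment away from the predicted failure region, which is the kind of control direction the methodology aims to provide.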
Amid growing social awareness of the risks posed by malicious posts and comments, internet portal sites and social networking services are introducing AI-based functions that filter such posts and comments. In this process, the specific criteria by which the AI performs the filtering are not disclosed, and user backlash continues. The spread of AI filtering inevitably restricts users' freedom of expression and right to know. In particular, unlike filtering performed by humans, AI filtering is characterized by the difficulty of conveying to humans, in a way humans can understand, the grounds on which a filtering decision was reached. Explainable AI (XAI) is a technology that provides users with explanations of a system's individual decisions and helps users understand the overall strengths and weaknesses of an AI system; research on it is being led by the U.S. Defense Advanced Research Projects Agency (DARPA). XAI is expected to become a means of earning users' trust in various fields and of building consensus for social acceptance. The EU's General Data Protection Regulation (GDPR) contains norms that give data subjects grounds to demand an explanation of how AI algorithms derive their results. By stipulating a right to explanation and a right to restrict automated decision-making, it has established a regulatory mechanism to guarantee the fundamental rights of data subjects. As a result, efforts toward the development and design of XAI have become an urgent task. Critics argue that the GDPR's AI regulatory mechanism may have the effect of hindering technological development. However, because the possibility of errors in AI algorithms always exists, and because explaining the grounds for filtering decisions is essential to securing the reliability of AI filtering, legislation that allows users to demand the grounds for results derived by AI algorithms can be regarded as essential.