Search Results

        174 results found

        61.
        2022.10 | KCI-listed | Free for subscribing institutions, paid for individual members
        The SLA 3D printer was the first commercial 3D printer. The printed part hangs from a build bed that moves upward as printing proceeds. A sandblasted bed is used to prevent layer shift; if the sandblasting is done incorrectly, the printed part suffers layer shift. For this reason, 3D printer manufacturers inspect the bed surface. However, the sandblasted surface contains a variety of irregular shapes and craters, which makes it difficult to establish a quality control standard. To solve this problem, this paper presents a standardized sandblasting histogram and threshold, together with a filter that increases the classification rate.
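The abstract above does not specify the histogram statistic, threshold value, or filter used, so the following is only a minimal sketch of the general idea: smooth the gray-level histogram of a bed-surface image with a simple averaging filter and accept or reject the surface by thresholding a spread statistic. All values and the pass/fail criterion are assumptions for illustration.

```python
import numpy as np

def classify_bed_surface(gray_image: np.ndarray,
                         spread_threshold: float = 45.0,
                         smooth_window: int = 5) -> bool:
    """Hypothetical pass/fail check for a sandblasted bed surface.

    gray_image: 2-D array of gray levels (0-255). The statistic and the
    threshold are illustrative assumptions, not values from the paper.
    """
    # Build the gray-level histogram of the surface image.
    hist, _ = np.histogram(gray_image, bins=256, range=(0, 255))

    # Simple moving-average filter to suppress noisy histogram spikes.
    kernel = np.ones(smooth_window) / smooth_window
    hist_smooth = np.convolve(hist, kernel, mode="same")

    # Spread of the smoothed histogram as a stand-in quality statistic.
    levels = np.arange(256)
    p = hist_smooth / hist_smooth.sum()
    mean = (levels * p).sum()
    spread = np.sqrt(((levels - mean) ** 2 * p).sum())

    # A properly sandblasted (matte) surface is assumed to show a
    # narrower gray-level spread than a defective one.
    return spread < spread_threshold

# Example with synthetic data: a uniform matte surface vs. a patchy one.
rng = np.random.default_rng(0)
good = rng.normal(128, 10, size=(200, 200)).clip(0, 255)
bad = np.concatenate([rng.normal(80, 30, (100, 200)),
                      rng.normal(180, 30, (100, 200))]).clip(0, 255)
print(classify_bed_surface(good), classify_bed_surface(bad))
```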
        KRW 4,000
        62.
        2022.10 | KCI-listed | Free for subscribing institutions, paid for individual members
        For a plastic diffusion lens to diffuse light uniformly, it is important to minimize the deformation that may occur during injection molding, which requires precise control of the injection molding conditions. In addition, as the number of mesh elements increases, the time required for analysis increases, which is a limitation. Therefore, we applied machine learning algorithms for faster and more precise control of the molding conditions. This study attempts to predict the deformation of a plastic diffusion lens using a decision tree regression algorithm. Melt temperature, packing pressure, packing time, and ram speed were set as the injection molding variables, and the deformation value was set as the dependent variable. A total of 256 injection molding analyses were conducted. We trained the decision tree regression model on the results of these 256 analyses and evaluated the prediction model's performance. In addition, we confirmed the prediction model's reliability by comparing it against the injection molding analysis results.
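A minimal sketch of the decision-tree regression setup described above, assuming synthetic stand-in data for the 256 injection molding analyses; the variable ranges and the deformation relationship are hypothetical.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical stand-in for the 256 injection-molding analysis results:
# melt temperature, packing pressure, packing time, ram speed -> deformation.
rng = np.random.default_rng(42)
X = rng.uniform([240, 60, 2, 20], [300, 120, 10, 80], size=(256, 4))
y = (0.002 * X[:, 0] - 0.004 * X[:, 1] - 0.01 * X[:, 2]
     + 0.001 * X[:, 3] + rng.normal(0, 0.01, 256))   # synthetic deformation [mm]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Train a decision-tree regressor on the molding conditions.
model = DecisionTreeRegressor(max_depth=6, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("R2  :", r2_score(y_test, pred))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
```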
        KRW 4,000
        63.
        2022.10 | Free for subscribing institutions and individual members
        The sorption/adsorption behavior of radionuclides, usually occurring at the solid-water interface, is considered to be one of the primary reactions that can hinder the migration of radiotoxic elements contained in spent nuclear fuel. In general, various physicochemical properties such as surface area, cation exchange capacity, type of radionuclide, solid-to-liquid ratio, and aqueous concentration are known to have a significant influence on the sorption/adsorption characteristics of target radionuclides onto mineral surfaces. Therefore, the distribution coefficient, Kd, inherently shows condition-dependent behavior according to these highly complicated chemical reactions at the solid-water interface. Even though a comprehensive understanding of the sorption behavior of radionuclides is required for reliable safety assessment modeling, the number of chemical thermodynamic models that can precisely predict the sorption/adsorption behavior of radionuclides is very limited. Machine-learning-based approaches such as random forests and artificial neural networks provide an alternative way to understand and estimate complicated chemical reactions under arbitrarily given conditions. In this respect, the objective of this study is to predict the sorption characteristics of various radionuclides onto major bentonite minerals, used as backfill materials for the HLW repository, in terms of the distribution coefficient by using a machine-learning-based computational approach. As a background dataset, the sorption database previously established by the JAEA was employed for the random forest machine learning calculation. Moreover, hyperparameters such as the number of decision trees, the number of variables considered at each node, and the random seed were controlled to assess the coefficient of determination, R2, and the final calculation result. The results obtained in this study indicate that the distribution coefficients of various radionuclides onto bentonite minerals can be reliably predicted by using the machine learning model and the sorption database.
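A hedged sketch of how the hyperparameters named in this abstract (number of trees, number of variables per node, random seed) map onto scikit-learn's RandomForestRegressor; the JAEA sorption database is not reproduced here, so synthetic features stand in for the real records.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for sorption-database records: surface area,
# cation exchange capacity, solid-to-liquid ratio, aqueous concentration.
rng = np.random.default_rng(1)
X = rng.uniform(0.1, 10.0, size=(500, 4))
log_kd = (1.5 * np.log(X[:, 0]) + 0.8 * X[:, 1] - 0.3 * X[:, 2]
          - 0.5 * np.log(X[:, 3]) + rng.normal(0, 0.2, 500))

X_tr, X_te, y_tr, y_te = train_test_split(X, log_kd, random_state=0)

# The hyperparameters mentioned in the abstract map onto:
#   number of decision trees          -> n_estimators
#   variables considered at each node -> max_features
#   random seed                       -> random_state
rf = RandomForestRegressor(n_estimators=300, max_features=2, random_state=7)
rf.fit(X_tr, y_tr)

print("R2 on held-out data:", r2_score(y_te, rf.predict(X_te)))
```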
        64.
        2022.10 | Free for subscribing institutions and individual members
        Since radon was detected in mattresses of well-known bed furniture brands in 2018, the Nuclear Safety and Security Commission (NSSC) announced the Radiation Safety Management Act in April 2021 to protect public health and the environment. This act stipulates the safety management of radiation that can be encountered in the natural environment, such as the notification of the radioactivity concentration of source materials and process by-products and the installation and operation of radiation monitors. In this study, a model was established to predict the radiation exposure dose from radioactive materials such as radon and uranium detected in consumer products such as bed mattresses, pillows, showers, bracelets, and masks, in order to identify the major radioactive substances that most strongly affect the exposure dose. The source materials and exposure doses of consumer products containing naturally occurring radioactive materials (NORMs) were investigated over a seven-year period from 2014 to 2020. We analyzed these data using machine learning models such as classification and regression trees (CART), Random Forest, and TreeNet. Index development and verification were performed to evaluate the predictive performance of the models. Overall, predictive performance was highest when Random Forest or TreeNet was used for each consumer product. Thoron had a large influence on the internal exposure dose of bedding, clothing, and mats. Uranium had a large influence on the internal exposure dose of the other consumer products except whetstones. When the number of data points is very small or the missing-value rate is high, accurate predictive performance is difficult to achieve even with machine learning techniques. If the missing-value rate of the data is significantly reduced, or the limit of detection is used in place of missing values, a model with more accurate predictive performance can be built.
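The abstract notes that a high missing-value rate degrades predictive performance and suggests substituting the limit of detection (LOD) for missing values. The sketch below illustrates that preprocessing step before fitting a tree-ensemble model; the column names, LOD values, and data are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical measurement table for consumer products; NaN marks values
# below detection. Column names and LOD values are illustrative only.
df = pd.DataFrame({
    "radon_Bq_kg":   [12.0, np.nan, 30.5, np.nan, 8.2],
    "thoron_Bq_kg":  [np.nan, 44.0, 51.0, 39.5, np.nan],
    "uranium_Bq_kg": [1.1, 0.9, np.nan, 1.4, 1.0],
    "dose_mSv_y":    [0.21, 0.35, 0.48, 0.30, 0.15],
})
lod = {"radon_Bq_kg": 5.0, "thoron_Bq_kg": 10.0, "uranium_Bq_kg": 0.5}

# Replace missing values with the limit of detection instead of dropping rows,
# as the abstract suggests, to keep the training set usable.
for col, limit in lod.items():
    df[col] = df[col].fillna(limit)

X, y = df.drop(columns="dose_mSv_y"), df["dose_mSv_y"]
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(dict(zip(X.columns, model.feature_importances_.round(3))))
```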
        65.
        2022.10 | Free for subscribing institutions and individual members
        The spent fuel safety information delivered from the consignor to the disposal facility operator directly affects the operation and safety of the disposal facility. Therefore, the operator of a disposal facility must perform data quality management to increase data reliability, and anomaly detection is a representative quality control method. We propose a quality control method that detects anomalies using XGBoost, known for its excellent performance, resistance to overfitting, and fast training speed. First, we select significant variables such as release burnup, enrichment, and amount of U from the spent fuel safety information and train a model for each variable using only normal data. A model trained using only normal data generates a small error for a normal pattern and a large error for an abnormal pattern. Then, when the data error exceeds a set threshold, the data point is determined to be an anomaly. In this paper, we implement the XGBoost models using virtual spent fuel information and optimize the hyperparameters of XGBoost using a simulated annealing method for high accuracy. The optimized XGBoost models show high accuracy on normal inputs and provide stable prediction values even on abnormal inputs. In addition, we perform anomaly detection with defective inputs included in the data to validate the presented method. The proposed method effectively classifies normal values and anomalies.
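A minimal sketch of the error-threshold idea described above: train XGBoost on normal records only, then flag a record as an anomaly when its prediction error exceeds a threshold. The variable relationships, threshold percentile, and hyperparameters are assumptions; the paper tunes the hyperparameters with simulated annealing, which is omitted here.

```python
import numpy as np
from xgboost import XGBRegressor

# Synthetic "normal" spent-fuel records: enrichment and amount of U
# predict burnup (relationships are illustrative only).
rng = np.random.default_rng(3)
enrich = rng.uniform(3.0, 5.0, 1000)
amount_u = rng.uniform(400.0, 470.0, 1000)
burnup = 10.0 * enrich + 0.02 * amount_u + rng.normal(0, 0.5, 1000)

X_normal = np.column_stack([enrich, amount_u])

# Train on normal data only, so abnormal patterns produce large errors.
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_normal, burnup)

# Threshold set from the error distribution on normal data
# (assumption: 99.7th percentile).
errors = np.abs(burnup - model.predict(X_normal))
threshold = np.percentile(errors, 99.7)

def is_anomaly(enrichment, amount, reported_burnup):
    pred = model.predict(np.array([[enrichment, amount]]))[0]
    return abs(reported_burnup - pred) > threshold

print(is_anomaly(4.2, 440.0, 10.0 * 4.2 + 0.02 * 440.0))  # consistent record
print(is_anomaly(4.2, 440.0, 70.0))                        # inconsistent record
```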
        67.
        2022.10 | KCI-listed | Free for subscribing institutions, paid for individual members
        PURPOSES : In this study, the main factors affecting the severity of traffic accidents among elderly drivers were reviewed, and accident factors with a high accident risk were analyzed. This provides basic data for preparing a traffic safety system for elderly drivers and establishing policies. METHODS : Based on machine learning, the major factors influencing accident severity (from the analysis of traffic accident data for elderly drivers) were analyzed and compared with existing statistical analysis results. The machine learning algorithms used the Scikit-learn library and Python 3.8. A hyperparameter optimization process was performed to improve the stability and accuracy of the models. To establish the optimal state of the model, K-fold cross-validation (K = 5) was used to set the hyperparameters. The hyperparameter search applied the most widely used grid search method, and the performance evaluation derived the optimal hyperparameter values using a mean squared error indicator. RESULTS : Traffic-law violations, the road sections where accidents occurred, and the time periods of accidents were analyzed for accidents involving elderly drivers in Daejeon Metropolitan City, and the importance of the variables was examined. For the analysis, a linear regression model, a machine learning-based decision tree, and a random forest model were used, and based on the root mean square error, the random forest showed the best accuracy. Ultimately, 18 variables were analyzed, including traffic violations, accident time periods, and road types. The variables influencing accident severity were speed, signal violation, intersection section, late-night driving, and pedestrian protection violation, with the relative importance of the variables in the order of speed (0.3490966), signal violation (0.285967), and late-night driving (0.173108). These can be seen as variables related to increased casualty severity owing to physical aging and reduced judgment arising from declines in cognitive function. CONCLUSIONS : Restricting elderly drivers from driving on expressways and at night is reasonable, but specific standards for driving restrictions should be prepared based on individual driving capabilities.
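A hedged sketch of the tuning setup described in METHODS: a grid search over random forest hyperparameters with 5-fold cross-validation and an RMSE-based selection criterion, using scikit-learn as in the paper. The 18 synthetic variables and the parameter grid are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, KFold

# Synthetic stand-in for encoded elderly-driver accident records:
# 18 explanatory variables -> accident severity score.
rng = np.random.default_rng(5)
X = rng.normal(size=(600, 18))
y = X[:, 0] + 0.5 * X[:, 3] + 0.3 * X[:, 7] + rng.normal(0, 0.5, 600)

param_grid = {                      # candidate hyperparameters (illustrative)
    "n_estimators": [100, 300],
    "max_depth": [4, 8, None],
}
cv = KFold(n_splits=5, shuffle=True, random_state=0)   # K = 5, as in the paper

search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid,
    cv=cv,
    scoring="neg_root_mean_squared_error",   # RMSE-based model selection
)
search.fit(X, y)

print("best hyperparameters:", search.best_params_)
print("top-5 importances:",
      np.sort(search.best_estimator_.feature_importances_)[::-1][:5].round(3))
```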
        KRW 4,000
        69.
        2022.08 | KCI-listed | Free for subscribing institutions, paid for individual members
        In this paper, to improve the optical quality of aspherical plastic lenses for mobile use, the optimal molding conditions that minimize the phase difference are derived using injection molding simulation, design of experiments, and machine learning. First, factors affecting the phase difference were identified using the design-of-experiments method, and a data set was created using these factors, followed by the machine learning process. After the trained models were evaluated on test data using performance evaluation indices, the random forest model showed the best predictive performance. Therefore, to derive the optimal molding conditions, the random forest was used to predict 10,000 randomly generated condition sets. As a result of applying the derived optimal molding conditions to the injection molding simulation, the phase difference of the lens was reduced by 8.2%.
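A minimal sketch of the final optimization step described above: use the trained random forest to predict 10,000 randomly generated molding-condition sets and select the set with the smallest predicted phase difference. The process window and the surrogate data are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Assume a random forest already trained on DOE results:
# molding conditions -> phase difference (synthetic stand-in below).
rng = np.random.default_rng(8)
X_doe = rng.uniform([240, 60, 2, 20], [300, 120, 10, 80], size=(128, 4))
y_doe = (0.01 * X_doe[:, 0] - 0.02 * X_doe[:, 1]
         + 0.05 * X_doe[:, 2] - 0.01 * X_doe[:, 3]
         + rng.normal(0, 0.05, 128))
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_doe, y_doe)

# Generate 10,000 random candidate condition sets within the process window
# and pick the one with the smallest predicted phase difference.
candidates = rng.uniform([240, 60, 2, 20], [300, 120, 10, 80], size=(10_000, 4))
predicted = rf.predict(candidates)
best = candidates[np.argmin(predicted)]

print("optimal conditions (melt T, pack P, pack t, ram speed):", best.round(2))
print("predicted phase difference:", predicted.min().round(4))
```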
        KRW 4,000
        70.
        2022.08 | KCI-listed | Free for subscribing institutions, paid for individual members
        In order to reduce environmental pollution, it is necessary to increase the recycling rate of waste, and the separation of recyclables is of utmost importance for this. This paper presents a study on automatically discriminating beverage containers, a major recyclable, by material. We developed an algorithm that automatically recognizes containers made of four materials: metal, glass, plastic, and paper, by measuring the vibration signal generated when a beverage container strikes the bottom plate of the collection box. Features that capture the material-specific differences in the vibration signal were extracted from its amplitude distribution, time-series information, and frequency content, and a classifier was developed through machine learning using these features.
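A hedged sketch of the feature-extraction-plus-classifier pipeline described above: simple amplitude, time-domain, and frequency-domain features are computed from an impact vibration signal and fed to a classifier. The feature set, sampling rate, and synthetic impact signals are illustrative assumptions, not the paper's actual features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(signal: np.ndarray, fs: float = 8000.0) -> np.ndarray:
    """Amplitude, time-domain, and frequency-domain features (illustrative)."""
    rms = np.sqrt(np.mean(signal ** 2))                          # amplitude level
    peak = np.max(np.abs(signal))
    crest = peak / rms                                           # amplitude distribution
    zero_cross = np.mean(np.abs(np.diff(np.sign(signal)))) / 2   # time-series
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)       # frequency
    return np.array([rms, peak, crest, zero_cross, centroid])

# Synthetic impact signals standing in for metal/glass/plastic/paper hits.
rng = np.random.default_rng(11)
t = np.arange(0, 0.05, 1 / 8000.0)
def impact(f0, decay):  # decaying tone plus noise
    return np.exp(-decay * t) * np.sin(2 * np.pi * f0 * t) + 0.05 * rng.normal(size=t.size)

materials = {"metal": (2500, 40), "glass": (1800, 60),
             "plastic": (600, 120), "paper": (200, 300)}
X = np.array([extract_features(impact(f, d))
              for _label, (f, d) in materials.items() for _ in range(50)])
y = np.repeat(list(materials), 50)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.predict([extract_features(impact(2500, 40))]))   # expect 'metal'
```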
        KRW 4,000
        71.
        2022.08 | KCI-listed | Free for subscribing institutions, paid for individual members
        The prediction of algal bloom is an important field of study in algal bloom management, and chlorophyll-a concentration (Chl-a) is commonly used to represent the status of algal bloom. In recent years, advanced machine learning algorithms have been increasingly used for the prediction of algal bloom. In this study, XGBoost (XGB), an ensemble machine learning algorithm, was used to develop a model to predict Chl-a in a reservoir. Daily observations of water quality and climate data were used for the training and testing of the model. In the first step of the study, the input variables were clustered into two groups (low- and high-value groups) based on the observed values of water temperature (TEMP), total organic carbon concentration (TOC), total nitrogen concentration (TN), and total phosphorus concentration (TP). For each of the four water quality items, two XGB models were developed using only the data in each clustered group (Model 1). The results were compared with the prediction of an XGB model developed using the entire data set before clustering (Model 2). The model performance was evaluated using three indices, including the root mean squared error-observation standard deviation ratio (RSR). Model performance was improved by Model 1 for TEMP, TN, and TP, as the RSR of each model was 0.503, 0.477, and 0.493, respectively, while the RSR of Model 2 was 0.521. On the other hand, Model 2 showed better performance than Model 1 for TOC, where the RSR was 0.532. Explainable artificial intelligence (XAI) is an ongoing field of research in machine learning. Shapley value analysis, a recent XAI algorithm, was also used for the quantitative interpretation of the performance of the XGB models developed in this study.
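The RSR index used above is the ratio of the RMSE to the standard deviation of the observations; a small helper, shown below with toy data, makes the comparison between Model 1 and Model 2 concrete (lower RSR means better performance).

```python
import numpy as np

def rsr(observed: np.ndarray, predicted: np.ndarray) -> float:
    """RMSE-observations standard deviation ratio: RSR = RMSE / SD(observed)."""
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / np.std(observed, ddof=1)

# Toy check: predictions whose errors are about half the spread of the
# observations give an RSR near 0.5, comparable to the values in the abstract.
rng = np.random.default_rng(21)
obs = rng.normal(10.0, 2.0, 200)            # e.g. observed Chl-a values
pred = obs + rng.normal(0.0, 1.0, 200)      # model output with moderate error
print(round(rsr(obs, pred), 3))
```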
        KRW 4,000
        72.
        2022.08 | KCI-listed | Free for subscribing institutions, paid for individual members
        Concrete is generally composed of various materials such as aggregate, sand, cement, fresh water, and admixtures, and its strength increases with age. The proportion of each material required for concrete is determined through mix design, but the strength of the concrete cannot be known until it is measured experimentally. To overcome this limitation, many studies have attempted to predict the compressive strength of concrete using statistical and machine learning algorithms trained on experimental data. Previous studies suggested that neural network techniques are the most suitable for predicting concrete compressive strength. However, neural networks are computationally expensive to train compared with other machine learning methods, which makes them difficult to apply in practice. Since various regression models have been developed in recent years, this study presents a concrete strength prediction model using state-of-the-art regression models instead of neural networks. To this end, cross-validation was performed on recently developed regression models to select the optimal model. The hyperparameters of each selected model were optimized through grid search, and the machine learning models were trained and validated using domestic and international data. Among them, the CatBoost, LGBMR, RFR, and XGBoost regression models showed high performance. In particular, the XGBoost regression model showed the smallest error and the highest accuracy. Even the LGBMR model, which had the largest error among them, outperformed the neural network and ensemble models proposed in previous studies. Compressive strength prediction was carried out for field ready-mixed concrete to confirm the field applicability of the trained models.
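A hedged sketch of the model comparison described above: the four regressors are scored with 5-fold cross-validation on a synthetic stand-in for a concrete mix dataset. The mix variables and strength relationship are placeholders, and the grid search step is omitted for brevity.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, KFold
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor
from catboost import CatBoostRegressor

# Synthetic stand-in for a concrete mix dataset:
# cement, water, aggregate, sand, admixture, age -> compressive strength (MPa).
rng = np.random.default_rng(4)
X = rng.uniform([200, 140, 800, 600, 0, 3], [500, 220, 1100, 900, 30, 365],
                size=(400, 6))
y = (0.08 * X[:, 0] - 0.15 * X[:, 1] + 0.01 * X[:, 4]
     + 8.0 * np.log(X[:, 5]) + rng.normal(0, 2.0, 400))

models = {
    "RFR": RandomForestRegressor(n_estimators=300, random_state=0),
    "XGBoost": XGBRegressor(n_estimators=300, learning_rate=0.1),
    "LGBMR": LGBMRegressor(n_estimators=300, learning_rate=0.1),
    "CatBoost": CatBoostRegressor(iterations=300, verbose=0, random_seed=0),
}

cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    rmse = -cross_val_score(model, X, y, cv=cv,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name:8s} CV RMSE: {rmse:.2f} MPa")
```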
        KRW 4,300
        73.
        2022.06 | KCI-listed | Free for subscribing institutions, paid for individual members
        This study developed a simulation model for predicting greenhouse internal temperature using data-driven machine learning techniques. Various methods have been studied to predict the internal temperature of greenhouse systems, but existing simulation analysis methods suffer from low precision due to extraneous variables. To overcome this limitation, a greenhouse internal temperature prediction model was developed using recently developed data-driven machine learning. A machine learning model is developed through data collection, feature analysis, and training, and its accuracy varies greatly depending on its parameters and training method; therefore, a method of deriving the optimal model according to the data characteristics is required. As a result of model development, the model accuracy increased with the number of hidden layers, and the optimal model was obtained with the GRU algorithm and six hidden layers, with an R2 of 0.9848 and an RMSE of 0.5857°C. This study verified that a greenhouse internal temperature prediction model can be developed using external greenhouse data, and further application to and comparative analysis with various greenhouse datasets should be conducted. Furthermore, to advance the machine learning model from prediction to forecasting, additional studies should be carried out, such as analyzing characteristics according to the data sequence length and managing the developed models case by case, for example by seasonal climate change and by crop.
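A minimal structural sketch of a stacked-GRU temperature regressor in Keras, assuming sliding windows of external weather variables as input. The number of layers, units, and epochs here are placeholders; the study's best configuration (GRU with six hidden layers) was found through its own tuning process.

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import GRU, Dense
from tensorflow.keras.metrics import RootMeanSquaredError

# Synthetic external-weather windows (temperature, humidity, radiation) over
# 24 time steps, used to predict the greenhouse internal temperature.
rng = np.random.default_rng(6)
X = rng.normal(size=(500, 24, 3)).astype("float32")
y = (0.8 * X[:, -1, 0] + 0.3 * X[:, -1, 2]).astype("float32")

# A small stacked-GRU regressor; the layer count and units are placeholders.
model = Sequential([
    GRU(32, return_sequences=True, input_shape=(24, 3)),
    GRU(32),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=[RootMeanSquaredError()])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print(model.evaluate(X, y, verbose=0))   # [MSE, RMSE] on the training data
```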
        KRW 4,200
        74.
        2022.06 | KCI-listed | Free for subscribing institutions, paid for individual members
        This study was conducted to estimate the damage to Whole Crop Maize (WCM) under abnormal climate using machine learning and to present the damage through mapping. A total of 3,232 WCM data points were collected. The climate data were collected from the Korea Meteorological Administration's open meteorological data portal. Deep Crossing was used as the machine learning model. The damage was calculated by machine learning using climate data from the Automated Synoptic Observing System (95 sites) and defined as the difference between the normal dry matter yield (DMYnormal) and the abnormal dry matter yield (DMYabnormal). The normal climate was set as the 40 years of climate data corresponding to the years of the WCM data (1978~2017). The level of abnormal climate was set as a multiple of the standard deviation, applying the World Meteorological Organization (WMO) standard. DMYnormal ranged from 13,845 to 19,347 kg/ha. The damage to WCM differed according to region and level of abnormal climate and ranged from -305 to 310, -54 to 89, and -610 to 813 kg/ha for abnormal temperature, precipitation, and wind speed, respectively. The maximum damage was 310 kg/ha when the abnormal temperature was at the +2 level (+1.42 °C), 89 kg/ha when the abnormal precipitation was at the -2 level (-0.12 mm), and 813 kg/ha when the abnormal wind speed was at the -2 level (-1.60 m/s). The damage calculated through the WMO method was presented as a map using QGIS. When calculating the damage to WCM due to abnormal climate, there were some blank areas where no data were available. To calculate the damage for these blank areas, the Automatic Weather System (AWS), which provides data from more sites than the Automated Synoptic Observing System (ASOS), could be used.
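A hedged sketch of the damage definition described above: abnormal-climate levels are set as multiples of the standard deviation of a 40-year record, and damage is the difference between DMYnormal and DMYabnormal. The climate values and the yield-response function are invented for illustration only.

```python
import numpy as np

# 40-year record of a climate variable (e.g. mean temperature during the
# WCM growing season) at one site; values are synthetic placeholders.
rng = np.random.default_rng(9)
normal_years = rng.normal(21.0, 0.71, 40)

mean, sd = normal_years.mean(), normal_years.std(ddof=1)

# Abnormal-climate scenarios defined as multiples of the standard deviation,
# following the WMO-style convention described in the abstract.
levels = {level: mean + level * sd for level in (-2, -1, 0, 1, 2)}

def dmy(temperature):
    """Hypothetical yield response (kg/ha) to growing-season temperature."""
    return 17000.0 - 150.0 * (temperature - 21.0) ** 2

dmy_normal = dmy(levels[0])
damage = {level: dmy_normal - dmy(t) for level, t in levels.items()}
print({k: round(v, 1) for k, v in damage.items()})   # kg/ha lost per level
```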
        KRW 4,000
        78.
        2022.06 | KCI-listed | Free for subscribing institutions, paid for individual members
        PURPOSES : In this study, model-agnostic methods are applied to interpret machine learning models in terms of the global effect of a feature, the importance of a feature, the joint effects of features, and explanations of individual predictions. METHODS : Model-agnostic global interpretation techniques, such as the partial dependence plot (PDP), accumulated local effects (ALE), feature interaction (H-statistic), and permutation feature importance, were applied to describe the average behavior of a machine learning model. Moreover, local model-agnostic interpretation methods, namely individual conditional expectation (ICE) curves, local surrogate models (LIME), and Shapley values, were used to explain individual predictions. RESULTS : As global interpretations, the PDP and ALE plot demonstrated the relationship between a feature and the prediction of a machine learning model, the feature interaction (H-statistic) estimated whether one feature depends on another feature, and the permutation feature importance measured the importance of a feature. For local interpretations, ICE exhibited how changing a feature changes the prediction for the instance of interest, LIME explained the relationship between a feature and the instance's prediction by replacing the machine learning model with a locally interpretable surrogate model, and Shapley values showed how the features fairly contribute to the instance's prediction. CONCLUSIONS : Model-agnostic methods contribute to understanding the general relationship between features and a prediction, or to debugging a model, from a global and/or local perspective, securing the reliability of the learning model.
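A minimal sketch of two of the listed global methods, assuming a fitted model and synthetic data: permutation feature importance via scikit-learn, and a partial dependence curve computed directly from its definition (fix a feature at grid values and average the predictions).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Synthetic data standing in for the paper's features and target.
rng = np.random.default_rng(13)
X = rng.normal(size=(400, 4))
y = 2.0 * X[:, 0] + np.sin(3.0 * X[:, 1]) + rng.normal(0, 0.1, 400)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Permutation feature importance: score drop when one feature is shuffled.
pfi = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("permutation importances:", pfi.importances_mean.round(3))

# Partial dependence of feature 0, computed directly: fix the feature at each
# grid value and average the model's predictions over the dataset.
grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 20)
pdp = []
for value in grid:
    X_fixed = X.copy()
    X_fixed[:, 0] = value
    pdp.append(model.predict(X_fixed).mean())
print("PDP for feature 0 (first 5 grid points):", np.round(pdp[:5], 2))
```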
        KRW 4,500
        79.
        2022.05 | Free for subscribing institutions and individual members
        Molten salt reactors and pyroprocessing are widely considered for various nuclear applications. The main challenges for monitoring these systems are high temperature and strong radiation. These two harsh conditions require the monitoring system to measure nuclides at a long distance with sufficient resolution to discriminate many different elements simultaneously. Among the available methodologies, laser-induced breakdown spectroscopy (LIBS) has been the most studied. The LIBS method can provide the required stand-off distance and the desired multi-elemental measurement capability. However, changes in the molten salt level introduce uncertainty into the measurement of nuclide concentrations by LIBS analysis. The spectra can change with the focusing point due to differences in laser fluence and plasma shape. In this study, to prepare for such uncertainties, we evaluated a LIBS monitoring system with machine learning technology. Although machine learning cannot use academic knowledge of the atomic spectrum, it finds new variables as vectors from any data, including the noise, the target spectrum, the standard deviation, etc. Here, partial least squares (PLS) and an artificial neural network (ANN) were studied, because these methods represent linear and nonlinear machine learning methods, respectively. Sr (580–7200 ppm) and Mo (480–4700 ppm), as fission products, were investigated for constructing the prediction models. To acquire the data, the experiments were conducted at 550°C in LiCl-KCl using a glassy carbon crucible, and the LIBS technique was used to accumulate spectral data. In this work, we successfully obtained reasonable prediction models and compared them with each other. The prediction models showed high linearity, with R2 values over 0.98. In addition, the root mean square errors of calibration and cross-validation were used to evaluate the prediction models quantitatively.
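A hedged sketch of the linear (PLS) calibration branch described above: a PLSRegression model is cross-validated on synthetic LIBS-like spectra whose peak scales with Sr concentration, and evaluated with R2 and the root mean square error of cross-validation. The spectra, peak position, and component count are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic LIBS-like spectra: each spectrum is noise plus an emission
# peak whose height scales with the Sr concentration (purely illustrative).
rng = np.random.default_rng(17)
wavelengths = np.linspace(400.0, 700.0, 300)
conc = rng.uniform(580.0, 7200.0, 120)               # ppm, as in the abstract
peak = np.exp(-0.5 * ((wavelengths - 550.0) / 2.0) ** 2)
spectra = (conc[:, None] * peak[None, :] / 7200.0
           + 0.02 * rng.normal(size=(120, 300)))

# PLS calibration model (the linear method compared with an ANN in the paper).
pls = PLSRegression(n_components=3)
pred = cross_val_predict(pls, spectra, conc, cv=5).ravel()

print("R2 (CV):", round(r2_score(conc, pred), 3))
print("RMSECV :", round(mean_squared_error(conc, pred) ** 0.5, 1), "ppm")
```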
        80.
        2022.05 | Free for subscribing institutions and individual members
        The radioactivity of the radiostrontiums Sr-89 and Sr-90, which are both pure beta emitters, is generally measured via Cherenkov counting. However, the determination of the Cherenkov counting efficiencies of the radiostrontiums requires a complicated procedure due to the presence of Y-90 (also a pure beta emitter), which is the daughter nuclide of Sr-90. In this study, we have developed a machine learning approach using a linear regression model that allows an easier and simultaneous determination of the Cherenkov counting efficiencies of the radiostrontiums. The linear regression model was employed because the total net Cherenkov count (Ct) from the three beta emitters at time t after the separation of Y-90 can be expressed as a linear combination of their respective time-varying activities, with their respective coefficients (parameters) being their counting efficiencies: Ct = εSr-90[ASr-90·exp(–λSr-90·t)] + εSr-89[ASr-89·exp(–λSr-89·t)] + εY-90[ASr-90·(1 – exp(–λY-90·t))], where ε is a counting efficiency, A is an initial activity, λ is a decay constant, and t is the time after the separation of Y-90. Thus, if we train the model with multiple Cherenkov counts measured from the three beta emitters, we can obtain estimates of their counting efficiencies (the parameters) straightforwardly. For this, the model was trained by two methods, ordinary least squares (OLS) and Bayesian linear regression (BLR), for which two software packages, PyMC3 and Stan, were employed to compare their performance. The results showed that the accuracy of the OLS was worse than that of the BLR. In particular, the counting efficiency of Sr-90 was estimated to be smaller than 0, which is an unrealistic value. On the other hand, the BLR estimates gave realistic values close to the true values. Additionally, the BLR was able to provide a distribution for each counting efficiency (the posterior), from which various types of inference can be made, including the median and the credible interval in Bayesian statistics, which is analogous to, but different from, the confidence interval in frequentist statistics. In the BLR results, the Stan package gave more accurate estimates than the PyMC3 package. Therefore, it is expected that the counting efficiencies of the radiostrontiums, including radioyttrium, can be determined at the same time, more easily and accurately, by using the BLR with the Stan package, and that the activities of the radiostrontiums can also be determined more easily by using the BLR if we know their counting efficiencies in advance. It is worth noting that the usage of the linear regression model in this study differs from the usual one, in which the trained model is used to predict a response value (a count) from a set of unseen regressor values (activities).
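A minimal sketch of the linear model above fitted by ordinary least squares on simulated count data, with the Y-90 term written as in-growth after separation; the half-lives are standard values, while the efficiencies, activities, and noise level are invented. The abstract reports that the Bayesian fit (PyMC3/Stan) was more accurate than OLS; only the OLS baseline and the structure of the design matrix are shown here.

```python
import numpy as np

# Standard half-lives (days) and the corresponding decay constants.
t_half = {"Sr-90": 28.8 * 365.25, "Sr-89": 50.56, "Y-90": 64.05 / 24.0}
lam = {k: np.log(2) / v for k, v in t_half.items()}

# "True" counting efficiencies and initial activities used only to simulate
# training data (invented values).
eff_true = np.array([0.42, 0.40, 0.60])      # Sr-90, Sr-89, Y-90
a_sr90, a_sr89 = 50.0, 80.0                  # Bq at the moment of Y-90 separation

def design_row(t):
    """Time-varying activity terms of the linear model at time t (days)."""
    return np.array([
        a_sr90 * np.exp(-lam["Sr-90"] * t),            # Sr-90 decay
        a_sr89 * np.exp(-lam["Sr-89"] * t),            # Sr-89 decay
        a_sr90 * (1.0 - np.exp(-lam["Y-90"] * t)),     # Y-90 in-growth from Sr-90
    ])

rng = np.random.default_rng(30)
times = np.linspace(0.5, 30.0, 25)
X = np.vstack([design_row(t) for t in times])
counts = X @ eff_true + rng.normal(0.0, 0.5, times.size)   # simulated net counts

# Ordinary least squares: the fitted coefficients are the counting efficiencies.
eff_hat, *_ = np.linalg.lstsq(X, counts, rcond=None)
print("estimated efficiencies (Sr-90, Sr-89, Y-90):", eff_hat.round(3))
```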