Search Results

        Search results: 185

        81.
        2021.09 KCI-indexed, free for subscribing institutions, paid access for individual members
        The sensory stimulation of a cosmetic product was deemed an ancillary aspect until about a decade ago, but that point of view has changed drastically, on several levels, in just a decade. Nowadays cosmetic formulators must meet the needs of consumers who want sensory satisfaction, even though they do not have much time for new product development. The selection of new products from candidates largely depends on a panel of human sensory experts. As the new product development cycle shortens, formulators want systematic tools to filter candidate products into a short list. Traditional statistical analysis of most physical property tests for such products, including tribology and rheology tests, does not provide a sound foundation for filtering candidates. In this paper, we propose a deep learning-based analysis method that identifies hand cream products from the raw electric signals of a tribological sliding test. We compare the deep learning-based method, which uses raw data as input, with several machine learning-based analysis methods that use manually extracted features as input. Among them, ResNet, a deep learning model, proved to be the best at identifying the hand creams used in the test. To the best of our knowledge of the published literature, this is the first attempt to identify a test cosmetic product using only raw time-series friction data, without any manual feature extraction. Automatic product identification without manually extracted features can be used to narrow down the list of newly developed candidate products.
        KRW 4,000
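The abstract above classifies raw tribological time series directly with a deep model. The sketch below is a hypothetical minimal 1D ResNet-style classifier in Keras for such signals; the window length (N_SAMPLES), product count (N_PRODUCTS), and layer sizes are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch: a small 1D ResNet-style classifier over raw friction signals.
import tensorflow as tf
from tensorflow.keras import layers

N_SAMPLES, N_PRODUCTS = 1000, 10  # assumed window length / number of hand creams

def residual_block(x, filters):
    shortcut = layers.Conv1D(filters, 1, padding="same")(x)
    y = layers.Conv1D(filters, 8, padding="same", activation="relu")(x)
    y = layers.Conv1D(filters, 8, padding="same")(y)
    y = layers.Add()([shortcut, y])
    return layers.Activation("relu")(y)

inputs = layers.Input(shape=(N_SAMPLES, 1))      # raw friction signal, no manual features
x = residual_block(inputs, 64)
x = residual_block(x, 128)
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(N_PRODUCTS, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(raw_signals, product_labels, epochs=50, validation_split=0.2)
```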
        82.
        2021.09 KCI-indexed, free for subscribing institutions, paid access for individual members
        This research examines deep learning-based image recognition models for beef sirloin classification. Beef sirloin can be classified into the upper sirloin, the lower sirloin, and the ribeye, whereas during distribution these cuts are often simply unified into a single sirloin region. In this work, for detailed classification of beef sirloin regions, we develop a model that learns image information within a reasonable computation time using the MobileNet algorithm. In addition, to increase the model's accuracy we introduce data augmentation methods that amplify the image data collected during the distribution process. This augmentation allows the model to be trained on a larger data set, which significantly improves its accuracy. The data generated by augmentation were tested with the MobileNet algorithm, where the test set was obtained from real-world distribution processes. Through computational experiments we confirm that the accuracy of the proposed model reaches up to 83%. We expect that this classification model can contribute to more accurate and detailed information exchange between suppliers and consumers during the distribution of beef sirloin.
        KRW 4,000
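As a rough illustration of the approach above, the following hedged sketch fine-tunes a pretrained MobileNet on three sirloin classes with Keras data augmentation; the directory name "sirloin_images", the augmentation settings, and the hyperparameters are assumptions, not the authors' setup.

```python
# Illustrative sketch only: MobileNet transfer learning with data augmentation.
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(rescale=1.0 / 255, rotation_range=20,
                               horizontal_flip=True, zoom_range=0.2,
                               validation_split=0.2)
train = augmenter.flow_from_directory("sirloin_images", target_size=(224, 224),
                                      subset="training")
val = augmenter.flow_from_directory("sirloin_images", target_size=(224, 224),
                                    subset="validation")

base = tf.keras.applications.MobileNet(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
base.trainable = False  # train only the classification head in this sketch
model = tf.keras.Sequential([base, layers.GlobalAveragePooling2D(),
                             layers.Dense(3, activation="softmax")])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train, validation_data=val, epochs=20)
```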
        83.
        2021.08 KCI-indexed, free for subscribing institutions, paid access for individual members
        Forecasting shipping market conditions is an important problem, because such forecasts underpin decisions on investment strategy, fleet composition, and freight rates, which in turn strongly affect a company's profit and survival. To this end, this study proposes container freight rate prediction models based on the machine learning models Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), a simplified LSTM architecture. The prediction target is the China Containerized Freight Index (CCFI), and CCFI data from March 2003 to May 2020 were used for training. The CCFI after June 2020 was predicted with each model and then compared with the actual CCFI. A total of six experimental models were designed according to hyperparameter settings, and an ARIMA model was added to the experiment to compare performance with a traditional analysis method. The best model was selected in two ways: first, by choosing the model with the smallest mean RMSE over ten repeated runs, and second, by choosing the model that recorded the lowest RMSE across all experiments. The experimental results demonstrate the accuracy of the deep learning models compared with the traditional time-series forecasting model ARIMA, and an accurate prediction model contributes to better risk management of freight rate fluctuations. However, when freight rates changed sharply due to external effects such as COVID-19, the accuracy of the prediction models decreased. Among the proposed models, the GRU1 model recorded the lowest RMSE under both evaluation methods (69.55 and 49.35) and was selected as the best model.
        KRW 4,000
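A minimal sketch of the kind of GRU forecaster described above, assuming the monthly CCFI series is cut into sliding windows of 12 past values; the window length, layer size, and data file name are illustrative assumptions.

```python
# Sketch: GRU forecaster on sliding windows of a univariate CCFI series.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def make_windows(series, window=12):
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.asarray(series)[window:]
    return X[..., np.newaxis], y

# ccfi = np.loadtxt("ccfi_monthly.csv")      # hypothetical data file
# X, y = make_windows(ccfi)
model = tf.keras.Sequential([
    layers.GRU(32, input_shape=(12, 1)),     # simplified LSTM-style recurrent layer
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
# model.fit(X[:-12], y[:-12], epochs=200, verbose=0)
# rmse = np.sqrt(np.mean((model.predict(X[-12:]).ravel() - y[-12:]) ** 2))
```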
        84.
        2021.06 KCI-indexed, free for subscribing institutions, paid access for individual members
        In recent years, technology has progressed to the point of giving cyber society a capability at which only computers excel: the counterfeiting of media, which was previously unavailable. "Deepfake" is the term used to describe this kind of deception. Most well-documented deepfakes are produced with Generative Adversarial Network (GAN) models, which are essentially two distinct machine learning models playing the roles of attack and defence. These models create and identify deepfakes until the point where the detector can no longer distinguish the fakes. Using such a model, it is possible to generate new media with characteristics similar to the training set, yielding near-ideal deepfake media. Because the alterations exploit high-level features, they cannot be detected by the human eye. However, it is entirely feasible to develop an algorithm that automatically identifies this kind of tampering carried out over the internet, which allows the search to be broadened beyond a single media item to a large library of mixed media. The more such a detector learns, the better it becomes, as artificial intelligence and automation take over. At the same time, new models for creating better deepfakes are being developed all the time, making it ever more difficult to distinguish genuine from morphed material.
        KRW 4,500
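For readers unfamiliar with the attack-and-defence training the abstract refers to, here is a toy, hedged sketch of a single GAN training step using small dense networks on flattened images; the shapes, optimizers, and losses are generic illustrations, not any specific deepfake model.

```python
# Toy GAN training step: generator ("attack") vs. discriminator ("defence").
import tensorflow as tf
from tensorflow.keras import layers

IMG_DIM, NOISE_DIM = 64 * 64, 100  # assumed flattened image and noise sizes
generator = tf.keras.Sequential([
    layers.Dense(256, activation="relu", input_shape=(NOISE_DIM,)),
    layers.Dense(IMG_DIM, activation="tanh")])
discriminator = tf.keras.Sequential([
    layers.Dense(256, activation="relu", input_shape=(IMG_DIM,)),
    layers.Dense(1)])
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt, d_opt = tf.keras.optimizers.Adam(1e-4), tf.keras.optimizers.Adam(1e-4)

def train_step(real_images):
    noise = tf.random.normal([tf.shape(real_images)[0], NOISE_DIM])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fakes = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fakes, training=True)
        d_loss = bce(tf.ones_like(real_logits), real_logits) + \
                 bce(tf.zeros_like(fake_logits), fake_logits)   # detect fakes
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)    # fool the detector
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return g_loss, d_loss
```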
        85.
        2021.06 KCI-indexed, free for subscribing institutions, paid access for individual members
        This study presents a method to improve the performance of deep learning-based models that recognize objects in generated images such as game footage. In particular, we verify that object recognition performance improves when a model trained on real images is additionally trained with game images. We use YoloV2, one of the most widely used deep learning-based object recognition models. The model is retrained with 160 game images sampled from eight different games, and IoU and accuracy are measured to show that the proposed training with game images is effective.
        KRW 4,000
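The study above reports IoU between detections and ground truth; the following small sketch shows the standard IoU computation, assuming boxes in (x1, y1, x2, y2) format.

```python
# Intersection over Union between a predicted box and a ground-truth box.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# Example: a detection on a game frame vs. the labelled box
# print(iou((50, 40, 200, 180), (60, 50, 210, 190)))
```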
        86.
        2021.06 KCI-indexed, free for subscribing institutions, paid access for individual members
        MRI provides high-resolution images of soft tissue and is an imaging examination of very high diagnostic value; using its digital data, deep learning can serve as a computer-aided diagnosis tool. This study examines brain tumor classification performance using the deep learning-based YOLOv3. Deep learning training was performed on 253 open MRI images; average loss and the region 82 and region 94 metrics were used as training indicators, and detection performance was evaluated on images not used for training to validate the brain tumor classification model. The average loss was 0.1107 at 2,248 epochs, and after 24,079 training iterations the average IoU, class, .5R, and .75R of region 82 and region 94 were 0.89 and 0.81, 1.00 and 1.00, 1.00 and 1.00, and 1.00 and 1.00, respectively. In the validation of the classification model, normal brains and brain tumors were classified with accuracies of 95.00% and 75.36%, respectively. These results are expected to serve as basic data for deep learning research using MRI images and for clinical practice.
        KRW 4,000
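As a hedged illustration of the validation step described above, the snippet below counts an MRI slice as "tumor" when a trained detector returns any tumor box above a confidence threshold, then computes accuracy; the detection format and threshold are assumptions, not the YOLOv3 interface.

```python
# Hypothetical evaluation helpers for detection-based tumor/normal classification.
def classify_slice(detections, conf_threshold=0.5):
    """detections: list of (class_name, confidence, box) from a trained detector."""
    return any(cls == "tumor" and conf >= conf_threshold
               for cls, conf, _ in detections)

def accuracy(pred_labels, true_labels):
    correct = sum(p == t for p, t in zip(pred_labels, true_labels))
    return correct / len(true_labels)
```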
        87.
        2021.05 Free for subscribing institutions, paid access for individual members
        Deep learning has recently shown excellent performance, but it requires large amounts of computation and memory. Model compression is very useful because it saves memory and reduces storage size while maintaining model performance. Model compression methods reduce the number of edges by pruning weights that are deemed unnecessary for the computation. Existing ADMM-based weight pruning methods construct an optimization problem with pre-defined layer-by-layer removal-ratio constraints; the ADMM process decomposes it into two subproblems, which are solved by gradient descent and projection, respectively. However, the layer-by-layer removal ratios must be specified according to the model structure, which sharply increases training time because of the large number of parameters and makes the approach hardly feasible for the large models that actually need weight pruning. To address these shortcomings, our proposed method performs weight pruning with a single global removal ratio for the entire model, without prior knowledge of its structural characteristics, while producing similar performance. To effectively avoid performance degradation, the method prunes relatively little from the earlier layers in charge of feature extraction. Experiments show high-quality performance without layer-by-layer removal ratios, and experiments with additional layers provide insight into feature extraction in the pruned layers. Applying the proposed method to the LeNet-5 model on MNIST data yields a compression ratio of 99.3%, outperforming other existing algorithms. We also demonstrate its effectiveness on YOLOv4, an object detection model requiring substantial computation.
        KRW 4,000
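A simplified sketch of the single global removal ratio idea described above, implemented here as plain global magnitude pruning over a dictionary of weight arrays; this is an illustrative stand-in, not the paper's full ADMM procedure.

```python
# Global magnitude pruning: one removal ratio shared by all layers.
import numpy as np

def global_prune(weights, removal_ratio=0.993):
    """weights: dict of layer name -> ndarray. Returns pruned copies."""
    all_w = np.concatenate([np.abs(w).ravel() for w in weights.values()])
    threshold = np.quantile(all_w, removal_ratio)   # global magnitude cutoff
    return {name: np.where(np.abs(w) > threshold, w, 0.0)
            for name, w in weights.items()}

# pruned = global_prune({"conv1": w1, "fc1": w2})   # keep the largest 0.7% of weights
```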
        94.
        2021.03 KCI-indexed, free for subscribing institutions, paid access for individual members
        Background: Deep learning research on web-based medical images has been actively conducted in health care; however, work related to the musculoskeletal system remains scarce, and deep learning-based studies on classifying orthopedic manual therapy images are only beginning to appear. Objectives: To create a deep learning model that categorizes cervical mobilization images and to establish a web application to assess its clinical utility. Design: Research and development. Methods: Three types of cervical mobilization images (central posteroanterior (CPA) mobilization, unilateral posteroanterior (UPA) mobilization, and anteroposterior (AP) mobilization) were obtained using a 'Download All Images' function and a web crawler. Unnecessary images were filtered out with 'Auslogics Duplicate File Finder' to obtain the final 144 images (CPA=62, UPA=46, AP=36). Training with three classes was conducted in Teachable Machine. Next, the trained model source was uploaded to a cloud integrated development environment (https://ide.goorm.io/) and the web application frame was built. The trained model was tested in three environments: Teachable Machine File Upload (TMFU), Teachable Machine Webcam (TMW), and Web Service webcam (WSW). Results: Across the three environments (TMFU, TMW, WSW), the accuracy for CPA mobilization images was 81-96%. The accuracy for UPA mobilization images was 43-94%, with a larger deviation than CPA. The accuracy for AP mobilization images was 65-75%, with a smaller deviation than the other groups. Across the three environments, the average accuracy of CPA was 92%, while the accuracies of UPA and AP were similar, at up to 70%. Conclusion: This study suggests that training images of orthopedic manual therapy with machine learning open software is possible and that web applications built with the trained model can be used clinically.
        KRW 4,000
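As a hedged sketch of serving such a model, the snippet below loads a Teachable Machine Keras export and classifies an image; the exported file names ("keras_model.h5", "labels.txt") and the [-1, 1] preprocessing follow Teachable Machine's usual Keras export but should be verified against the study's actual export.

```python
# Assumed Teachable Machine Keras export: keras_model.h5 plus labels.txt.
import numpy as np
import tensorflow as tf
from PIL import Image

model = tf.keras.models.load_model("keras_model.h5")
labels = [line.strip() for line in open("labels.txt")]   # one class per line (assumed)

def predict(image_path):
    img = Image.open(image_path).convert("RGB").resize((224, 224))
    x = (np.asarray(img, dtype=np.float32) / 127.5) - 1.0  # scale to [-1, 1]
    probs = model.predict(x[np.newaxis, ...])[0]
    return labels[int(np.argmax(probs))], float(np.max(probs))

# print(predict("cervical_mobilization_sample.jpg"))
```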
        95.
        2021.03 KCI-indexed, free for subscribing institutions, paid access for individual members
        This paper proposes FillingGAN, a generative neural network model that automatically produces a colored image from an arbitrary line drawing and a desired coloring style, for use in coloring games. The proposed model consists of an autoencoder module that extracts features from the style image and a GAN that generates an image by applying the extracted style features to the line drawing. The GAN generates the colored image from the structure extracted from the line drawing and the color information extracted from the style image; to this end, we design loss functions that preserve the structure of the line drawing and the color information of the style image. Our model generates style-applied images while preserving the intrinsic characteristics of the line drawing.
        KRW 4,000
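A loose sketch of the kind of composite generator loss described above: an adversarial term plus terms that keep the line-art structure and the style image's colour statistics; the weights and the specific structure and colour terms are illustrative assumptions, not the paper's exact losses.

```python
# Illustrative composite generator loss: adversarial + structure + colour terms.
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(fake_logits, generated, sketch, style,
                   w_adv=1.0, w_line=10.0, w_color=5.0):
    adv = bce(tf.ones_like(fake_logits), fake_logits)
    # structure: greyscale of the output should stay close to the line drawing
    grey = tf.image.rgb_to_grayscale(generated)
    line = tf.reduce_mean(tf.abs(grey - sketch))
    # colour: match per-channel mean colour of the style image
    color = tf.reduce_mean(tf.abs(tf.reduce_mean(generated, axis=[1, 2]) -
                                  tf.reduce_mean(style, axis=[1, 2])))
    return w_adv * adv + w_line * line + w_color * color
```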
        96.
        2021.03 KCI-indexed, free for subscribing institutions, paid access for individual members
        This paper presents a real-time false-pick filter based on deep learning to reduce false alarms from an onsite Earthquake Early Warning (EEW) system. Most onsite EEW systems use the P-wave to predict the S-wave, so properly distinguishing P-waves from noise or other seismic phases is essential to avoid false alarms. To reduce the false picks that cause false alarms, this study built the EEWNet Part 1 'False-Pick Filter' model based on a Convolutional Neural Network (CNN). Specifically, it modified Pick_FP (Lomax et al.) to generate input data such as the amplitude, velocity, and displacement of the three components from 2 seconds before to 2 seconds after the P-wave arrival, in one-second time steps. The model extracts log-mel power spectrum features from this input and then classifies P-waves and other signals using these features. The dataset consisted of 3,189,583 samples: 81,394 samples from event data (727 events in the Korean Peninsula, 103 teleseismic events, and 1,734 events in Taiwan) and 3,108,189 samples from continuous data (recorded by seismic stations in South Korea over 27 months from 2018 to 2020). The model was trained on 1,826,357 samples after class balancing and then tested on continuous data from 2019, filtering more than 99% of the strong false picks that could trigger false alarms. The model was developed as a module for USGS Earthworm and is written in C to run with minimal computing resources.
        KRW 4,200
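The following hedged sketch shows the feature step described above: converting a short window around the pick into a log-mel power spectrogram and classifying it with a small CNN; the 100 Hz sampling rate, mel parameters, and network size are assumptions.

```python
# Sketch: log-mel features from a short seismic window, then a small CNN classifier.
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras import layers

SR = 100  # assumed seismic sampling rate (Hz)

def log_mel(window):
    """window: 1-D array covering roughly 2 s before to 2 s after the pick."""
    mel = librosa.feature.melspectrogram(y=window.astype(float), sr=SR,
                                         n_fft=64, hop_length=16, n_mels=16)
    return librosa.power_to_db(mel)

# Stack the three components (and amplitude/velocity/displacement variants)
# along the channel axis, then classify P-wave vs. false pick.
model = tf.keras.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(16, 26, 3)),  # (mels, frames, channels), assumed
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```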
        97.
        2021.02 KCI-indexed, free for subscribing institutions, paid access for individual members
        PURPOSES: This study uses deep learning image classification models and vehicle-mounted cameras to detect the types of pavement distress, such as potholes, spalling, punch-outs, and patching damage, that require urgent maintenance. METHODS: For automatic detection of pavement distress, the optimal mounting location on a vehicle for a regular action camera was first determined. Using orthogonal projection of the obliquely captured surface images, morphological operations, and multi-blob image processing, candidate distressed pavement images were extracted from road surface images covering an in-lane distance of 16,036 km. Next, distressed pavement images classified by experts were used to train and test three deep learning convolutional neural network (CNN) models: GoogLeNet, AlexNet, and VGGNet. The CNN models are image classification tools that identify and extract the combined features of the target images through their deep layers. A data augmentation technique was applied to produce a large distress data set for training. Third, the dimensions of the detected distressed pavement patches were computed to estimate the quantity of repair materials needed. RESULTS: It was found that installing the camera 1.8 m above the ground on the exterior rear of the vehicle provides clear pavement surface images with a resolution of 1 cm per pixel. The sensitivities of the trained GoogLeNet, AlexNet, and VGGNet models were 93%, 86%, and 72%, respectively, compared with 62.7% for the dimensional computation. After readjusting the image categories in the GoogLeNet model, distress detection sensitivity increased to 94.6%. CONCLUSIONS: These findings support urgent maintenance by sending the detected distressed pavement images, together with the dimensions of the distressed patches and GPS coordinates, to local maintenance offices in real time.
        KRW 4,000
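As an illustration of the candidate-extraction step described above, the sketch below applies morphological operations and connected-component (blob) analysis to a thresholded road-surface image with OpenCV; kernel sizes and area limits are illustrative assumptions.

```python
# Morphological cleaning + blob analysis to propose candidate distress patches.
import cv2

def candidate_patches(gray_image, min_area=500):
    # emphasise dark, irregular regions such as potholes or spalling
    _, binary = cv2.threshold(gray_image, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)
    n, _, stats, _ = cv2.connectedComponentsWithStats(cleaned)
    return [tuple(stats[i, :4]) for i in range(1, n)          # skip background
            if stats[i, cv2.CC_STAT_AREA] >= min_area]        # (x, y, w, h) patches

# gray = cv2.imread("road_surface.png", cv2.IMREAD_GRAYSCALE)
# for x, y, w, h in candidate_patches(gray):
#     patch = gray[y:y + h, x:x + w]   # crop and send to the CNN classifier
```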
        98.
        2021.02 KCI-indexed, free for subscribing institutions, paid access for individual members
        This study is basic research on developing a deep learning-based algorithm to support firefighting and evacuation; its aim is to analyze and use smoke data detected before a smoke detector is triggered during a ship fire, and to predict the smoke spread distance before the smoke reaches remote locations. The proposed algorithm was examined through the following procedure. In the first step, the deep learning-based object detection model YOLO (You Only Look Once) was trained on smoke images obtained from fire simulations. The trained YOLO model achieved a mean Average Precision (mAP) of 98.71% and detected smoke at a processing speed of 9 frames per second (FPS). In the second step, the smoke spread distance was estimated from the coordinates of the bounding boxes extracted by YOLO, and these estimates were used to train LSTM (Long Short-Term Memory), a time-series forecasting algorithm. As a result, for the smoke images of a fast-growth fire obtained from the simulation, the spread distances estimated from the bounding box coordinates for 0-30 seconds after ignition were fed into the trained LSTM model to predict the spread distances for 31-90 seconds. The root mean square error between the estimated and predicted smoke spread distances was 2.74.
        KRW 4,000
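A minimal sketch of the second stage described above: an LSTM that takes the spread distances estimated from YOLO bounding boxes during the first 30 seconds and rolls the prediction forward; the window size, layer size, and training details are assumptions.

```python
# Sketch: rolling LSTM forecast of smoke spread distance from early estimates.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

WINDOW = 10  # assumed number of past distance samples per input

model = tf.keras.Sequential([
    layers.LSTM(32, input_shape=(WINDOW, 1)),
    layers.Dense(1),   # next smoke spread distance
])
model.compile(optimizer="adam", loss="mse")
# model.fit(...) on windows built from the 0-30 s estimates is omitted here.

def forecast(history, steps):
    """history: observed distances (e.g. 0-30 s); returns the next `steps` values."""
    seq = list(history)
    for _ in range(steps):
        x = np.array(seq[-WINDOW:], dtype=np.float32).reshape(1, WINDOW, 1)
        seq.append(float(model.predict(x, verbose=0)[0, 0]))
    return seq[len(history):]
```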
        99.
        2021.02 KCI-indexed, free for subscribing institutions, paid access for individual members
        The forward osmosis (FO) process is a chemical-potential-driven process in which a highly concentrated draw solution (DS) is used to draw water through a semi-permeable membrane from a feed solution (FS) of lower concentration. Recently, commercial FO membrane modules have been developed so that full-scale FO processes can be applied to seawater desalination or water reuse. To design a real-scale FO plant, predicting the performance of the FO membrane modules installed in the plant is essential. In particular, flux prediction is the most important task because the amounts of diluted draw solution and concentrate flowing out of the FO modules can be estimated from the flux. In a previous study, a theory-based FO module model for flux prediction was developed; however, it requires intensive numerical calculation and a fitting process to reflect the complex module geometry. The idea of this work is to introduce deep learning to predict the flux of FO membrane modules using a set of 116 experimental data points, which include six input variables (flow rate, pressure, and ion concentration of the DS and FS) and one output variable (flux). A procedure for optimizing a deep learning model to minimize prediction error and overfitting was developed and tested. The optimized deep learning model (error of 3.87%) predicted flux better than the theory-based FO module model (error of 10.13%) on data that were not used for training.
        KRW 4,000
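As a hedged sketch of the deep learning model described above, the snippet below defines a small fully connected regressor mapping the six operating variables to flux, with early stopping to limit overfitting; layer sizes and training settings are assumptions.

```python
# Sketch: small MLP regressor for FO module flux from six operating variables.
import tensorflow as tf
from tensorflow.keras import layers

norm = layers.Normalization()                 # call norm.adapt(X_train) before fitting
model = tf.keras.Sequential([
    layers.Input(shape=(6,)),                 # DS/FS flow rate, pressure, ion concentration
    norm,
    layers.Dense(32, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),                          # predicted flux
])
model.compile(optimizer="adam", loss="mse", metrics=["mape"])
# norm.adapt(X_train)                         # X_train: array of shape (n_samples, 6)
# model.fit(X_train, y_train, validation_split=0.2, epochs=500,
#           callbacks=[tf.keras.callbacks.EarlyStopping(patience=50,
#                                                       restore_best_weights=True)])
```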
        100.
        2021.02 KCI-indexed, free for subscribing institutions, paid access for individual members
        Increased turbidity in rivers during flood events has various effects on water environmental management, including drinking water supply systems, so prediction of turbid water is essential for water environmental management. Recently, various advanced machine learning algorithms have been increasingly used in this field. Ensemble algorithms such as random forest (RF) and gradient boosting decision trees (GBDT) are among the most popular machine learning algorithms for water environmental management, along with deep learning algorithms such as recurrent neural networks. In this study, GBDT, an ensemble machine learning algorithm, and the gated recurrent unit (GRU), a recurrent neural network algorithm, are used to develop models that predict turbidity in a river. The observation frequencies of the input data were 2, 4, 8, 24, 48, 120, and 168 h. The root mean square error-observations standard deviation ratio (RSR) ranges between 0.182 and 0.766 for GRU and between 0.400 and 0.683 for GBDT. Both models show similar prediction accuracy, with an RSR of 0.682 for GRU and 0.683 for GBDT. GRU shows better prediction accuracy when the observation frequency is relatively short (2, 4, and 8 h), whereas GBDT shows better accuracy when the observation frequency is relatively long (48, 120, and 168 h). The results suggest that the characteristics of the input data should be considered when developing an appropriate model to predict turbidity.
        KRW 4,000
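The sketch below illustrates the RSR metric used above and a GBDT baseline fitted on lagged observations; the lag-based feature construction and the train/test split are illustrative assumptions.

```python
# RSR metric and a gradient boosting baseline on lagged turbidity observations.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def rsr(observed, predicted):
    """Root mean square error divided by the standard deviation of observations."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / np.std(observed)

def lagged_features(series, lags=(1, 2, 3)):
    series = np.asarray(series)
    X = np.column_stack([np.roll(series, lag) for lag in lags])[max(lags):]
    y = series[max(lags):]
    return X, y

# X, y = lagged_features(turbidity_series)
# model = GradientBoostingRegressor().fit(X[:-100], y[:-100])
# print(rsr(y[-100:], model.predict(X[-100:])))
```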