This study aimed to investigate how a convolutional neural network (CNN) differs in determining osteoporosis on panoramic radiographs by performing a paired test with two inputs: the original image and a limited image restricted to the cortical bone of the posterior border of the mandible, the region used by radiologists. On panoramic radiographs of a total of 661 subjects (mean age 66.3 ± 11.42 years), the area including the cortical bone of the posterior part of the mandible was divided into left and right sides, an ROI was set, and the remaining area was masked in black to form the limited image. For training of VGG-16, panoramic radiographs of 243 osteoporosis subjects (mean age 72.67 ± 7.97 years) and 222 normal subjects (mean age 53.21 ± 2.46 years) were used, and testing 1 and testing 2 were performed on the original and limited images, respectively, using panoramic radiographs of 51 osteoporosis subjects (mean age 72.78 ± 8.3 years) and 47 normal subjects (mean age 53.32 ± 2.81 years). The accuracy of VGG-16 for determining osteoporosis was 97% in testing 1 and 100% in testing 2. When determining osteoporosis on the original image, the CNN was sensitive over a wide range of areas, including not only the inferior cortical bone of the mandible but also the maxillary and mandibular cancellous bone, cervical spine, and zygomatic bone. When the same ROI including the inferior cortical border of the mandible of the osteoporosis group was applied and the sensitive region was compared between the original and limited images, the original image showed a wider sensitive region in cancellous and cortical bone than the limited image (p < .05). Since osteoporosis is a disease that affects the entire skeletal system, this finding appears valid.
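The "limited image" construction described above — keeping only rectangular ROIs over the posterior mandibular cortical bone and masking everything else in black — can be sketched as follows. The ROI coordinates and image size here are illustrative, not those used in the study.

```python
# Sketch of forming a "limited image": keep only rectangular ROIs
# (left and right posterior mandibular regions) and mask every
# other pixel in black (value 0).

def make_limited_image(image, rois):
    """image: 2-D list of grayscale pixel values.
    rois: list of (top, bottom, left, right) rectangles to keep."""
    h, w = len(image), len(image[0])
    limited = [[0] * w for _ in range(h)]  # start fully black
    for top, bottom, left, right in rois:
        for y in range(top, bottom):
            for x in range(left, right):
                limited[y][x] = image[y][x]
    return limited

# Toy 6x8 "radiograph" with a left and a right ROI kept
img = [[v + 10 * r for v in range(8)] for r in range(6)]
lim = make_limited_image(img, [(2, 5, 0, 2), (2, 5, 6, 8)])
```

The paired test then feeds the full `img` and the masked `lim` to the same trained network.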
This study proposes a method for crack-width measurement and three-dimensional crack reconstruction using only a UAV, without additional equipment. To verify the 3-D crack reconstruction and crack-width measurement based on UAV photogrammetry and a CNN, cracks on five walls were reconstructed in three dimensions, and crack widths were measured at four UAV-to-crack distances and compared with crack-microscope measurements to verify accuracy. The crack width was measured accurately for most cracks, but measurement validity dropped when the crack width was smaller than 0.5 mm and where the wall surface was heavily shaded and dark.
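The pixel-to-millimetre conversion underlying image-based crack-width measurement can be sketched with the ground sampling distance (GSD), which also shows why accuracy depends on the UAV-to-crack distance. The focal length and pixel pitch below are hypothetical camera parameters, not those of the study.

```python
# Illustrative GSD-based conversion of a crack width in pixels to
# millimetres. Camera parameters are assumptions for the sketch.

def gsd_mm(distance_mm, focal_length_mm, pixel_pitch_mm):
    """Size of one image pixel projected onto the target plane, in mm."""
    return pixel_pitch_mm * distance_mm / focal_length_mm

def crack_width_mm(width_px, distance_mm,
                   focal_length_mm=8.8, pixel_pitch_mm=0.0024):
    return width_px * gsd_mm(distance_mm, focal_length_mm, pixel_pitch_mm)

# A 3-pixel-wide crack photographed from 2 m away
w = crack_width_mm(3, 2000)
```

Because the GSD grows with distance, a sub-pixel crack (here anything much below ~0.5 mm at 2 m) cannot be resolved reliably, consistent with the reduced validity reported above.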
Uncertainty in microstructural characteristics strongly affects material properties. The pore distribution of cement-based materials has a large influence on their mechanical properties, and the many pores randomly distributed in the material make property prediction difficult. Previous studies on correlating pore characteristics with material response have been limited to statistical analysis, and the correlation has not yet been clearly identified. In this study, the mechanical response of cement-based materials was predicted through an image-based data approach using a convolutional neural network (CNN), and the correlation between pore distribution and material response was analyzed. The data for machine learning consisted of high-resolution micro-CT images and a material property (tensile strength) of the cement-based material. The mesh-structure characteristics of the material were analyzed, and the material response was evaluated using 2-D direct-tension finite element simulations based on a phase-field fracture model. By analyzing the contribution of each input image region, the regions of the specimen with the greatest influence on the predicted material response were identified through the CNN. The active regions in the CNN process were compared with the pore distribution, and the correlation between pore distribution characteristics and material response was analyzed and presented.
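One common way to analyze the contribution of input-image regions to a CNN prediction, as described above, is occlusion sensitivity: mask one patch at a time and record how much the prediction changes. The sketch below uses a stand-in scalar function in place of the study's trained CNN.

```python
# Hedged sketch of occlusion-sensitivity analysis. "model" is any
# function mapping a 2-D image to a scalar prediction; here a toy
# stand-in, not the study's CNN.

def occlusion_map(image, model, patch=2):
    """Change in model output when each patch x patch block is masked."""
    h, w = len(image), len(image[0])
    base = model(image)
    heat = []
    for top in range(0, h, patch):
        row = []
        for left in range(0, w, patch):
            masked = [r[:] for r in image]          # copy image
            for y in range(top, min(top + patch, h)):
                for x in range(left, min(left + patch, w)):
                    masked[y][x] = 0                # occlude block
            row.append(base - model(masked))
        heat.append(row)
    return heat

# Stand-in "model": mean pixel intensity of the image
mean_model = lambda img: sum(map(sum, img)) / (len(img) * len(img[0]))
hm = occlusion_map([[0, 0, 9, 9], [0, 0, 9, 9],
                    [0, 0, 0, 0], [0, 0, 0, 0]], mean_model, patch=2)
```

Blocks whose occlusion changes the prediction most are the "active regions" that can then be compared against the pore distribution.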
Determining the size or area of a plant's leaves is an important factor in predicting plant growth and improving the productivity of indoor farms. In this study, we developed a convolutional neural network (CNN)-based model to accurately predict the length and width of lettuce leaves from photographs of the leaves. A callback function was applied to overcome data limitations and overfitting problems, and K-fold cross-validation was used to improve the generalization ability of the model. In addition, the ImageDataGenerator function was used to increase the diversity of the training data through data augmentation. To compare model performance, we evaluated pre-trained models such as VGG16, ResNet152, and NASNetMobile. NASNetMobile showed the highest performance, especially in width prediction, with an R-squared of 0.9436 and an RMSE of 0.5659; in length prediction, the R-squared was 0.9537 and the RMSE was 0.8713. The optimized model adopted the NASNetMobile architecture, the RMSprop optimizer, the MSE loss function, and the ELU activation function. Training averaged 73 minutes per epoch, and the model took an average of 0.29 seconds to process a single lettuce-leaf photograph. The CNN-based model developed in this study to predict leaf length and width in indoor farms is expected to enable rapid and accurate assessment of plant growth status simply by taking images. It is also expected to contribute to the productivity and resource efficiency of farms by enabling appropriate agricultural measures, such as adjusting the nutrient solution in real time.
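The K-fold cross-validation used above for generalization can be sketched as plain index splitting — a minimal stand-in for library implementations such as sklearn's KFold; the sample and fold counts here are illustrative.

```python
# Minimal K-fold cross-validation index splitting (no shuffling):
# each sample serves as validation data in exactly one fold.

def k_fold_indices(n_samples, k):
    """Yield (train_idx, val_idx) pairs for k folds."""
    idx = list(range(n_samples))
    fold_size = n_samples // k
    for f in range(k):
        start = f * fold_size
        stop = n_samples if f == k - 1 else start + fold_size
        val = idx[start:stop]              # held-out fold
        train = idx[:start] + idx[stop:]   # remaining samples
        yield train, val

folds = list(k_fold_indices(10, 5))
```

A model is then trained k times, once per (train, val) pair, and the validation metrics are averaged.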
Synthetic Aperture Radar (SAR) images are affected by noise called speckle, which is very severe and may hinder image exploitation. Despeckling is an important task that aims to remove such noise so as to improve the accuracy of all downstream image processing tasks. Many different schemes have been proposed for the restoration of SAR images. Among the possible approaches, methods based on convolutional neural networks (CNNs) have recently been shown to reach state-of-the-art performance for SAR image restoration. DnCNN (DeNoising Convolutional Neural Network) is one of the most widely used neural network architectures embedded in baseline SAR image despeckling methods. In military applications of SAR satellite imagery, fast processing is the most critical factor aside from recognition accuracy. In this paper, we propose an improved DnCNN architecture for faster SAR image despeckling. Experimental results on real-world SAR images show that the proposed method requires less processing time than the original DnCNN architecture without degrading despeckling performance. Subjective visual inspection demonstrates that the proposed method has great potential for preserving image signal details while suppressing speckle noise.
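Speckle is commonly modeled as multiplicative noise, and many DnCNN-style despecklers combine two ideas that can be sketched briefly: a log transform that turns multiplicative speckle into an additive residual, and residual learning, where the network predicts the noise and the clean signal is recovered by subtraction. The "oracle" residual below stands in for a trained network's output.

```python
import math

# Homomorphic residual despeckling sketch: y = x * s (multiplicative)
# becomes log(y) = log(x) + log(s) (additive), so subtracting a
# predicted residual log(s) recovers log(x).

def to_log_domain(noisy):
    return [math.log(v) for v in noisy]

def despeckle_residual(log_noisy, predicted_residual):
    """Residual learning: clean = exp(log(noisy) - predicted residual)."""
    return [math.exp(ln - r) for ln, r in zip(log_noisy, predicted_residual)]

clean = [2.0, 4.0, 8.0]
speckle = [1.5, 0.5, 2.0]                      # multiplicative noise
noisy = [c * s for c, s in zip(clean, speckle)]
# Oracle residual = log(speckle); a trained CNN would estimate this.
residual = [math.log(s) for s in speckle]
restored = despeckle_residual(to_log_domain(noisy), residual)
```

With a perfect residual estimate the restoration is exact; in practice the quality depends on how well the network approximates `log(speckle)`.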
Source localization techniques using acoustic emission (AE) have been widely used to track the location of damage in structures. The principle of localization is based on the signal velocity and the time difference of arrival (TDOF) obtained from the different signals recorded for a specific source. However, the signal velocity changes depending on the frequency content of the signals, and the TDOF depends on the signal threshold, which affects the prediction accuracy. In this study, a convolutional neural network (CNN)-based approach is used to overcome these problems. A concrete block of 1.3 × 1.3 × 1.3 m was prepared according to the mixing ratio of the Wolseong low-to-intermediate-level radioactive waste disposal concrete materials. The source was excited using an impact hammer, and signals were acquired through eight AE sensors attached to the concrete block and a multi-channel AE measurement system. The different signals for a specific source were time-synchronized to obtain TDOF information and transformed into the time-frequency domain using the continuous wavelet transform (CWT) to account for the various frequency components. The developed CNN model was compared with the conventional TDOF-based method on the testing dataset. The results suggest that the CNN-based method can contribute to improved localization performance.
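The arrival-time difference between two sensors can also be estimated threshold-free by cross-correlation, which illustrates the quantity the conventional method depends on. This sketch uses toy synthetic signals, not the study's AE data.

```python
# Estimate the time difference of arrival between two sensor signals
# as the lag that maximizes their cross-correlation.

def best_lag(sig_a, sig_b, max_lag):
    """Lag (in samples) of sig_b relative to sig_a with maximum
    cross-correlation."""
    def xcorr(lag):
        total = 0.0
        for t in range(len(sig_a)):
            u = t + lag
            if 0 <= u < len(sig_b):
                total += sig_a[t] * sig_b[u]
        return total
    return max(range(-max_lag, max_lag + 1), key=xcorr)

pulse = [0, 1, 3, 1, 0]
a = pulse + [0] * 5            # pulse reaches sensor A first
b = [0] * 3 + pulse + [0] * 2  # same pulse 3 samples later at sensor B
lag = best_lag(a, b, max_lag=5)   # → 3
```

Unlike threshold crossing, the correlation peak uses the whole waveform, so it is less sensitive to an arbitrary amplitude threshold; the CNN approach above goes further by learning from the full time-frequency representation.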
Recently, transfer learning techniques built on a base convolutional neural network (CNN) model have gained wide acceptance for the early detection and classification of crop diseases, increasing agricultural productivity while reducing disease spread. Transfer-learning-based classifiers generally achieve over 90% classification accuracy for crop diseases on datasets of crop leaf images (e.g., the PlantVillage dataset), but they can classify only the diseases they were trained on. This paper provides an evaluation scheme for selecting an effective base CNN model for crop disease transfer learning, with regard to the accuracy on trained target crops as well as on untrained target crops. First, we present transfer learning models, called the CDC (crop disease classification) architecture, that incorporate widely used base (pre-trained) CNN models. We then evaluate the performance of seven base CNN models on four untrained crops. The results of the performance evaluation show that DenseNet201 is one of the best base CNN models.
To apply a CNN to a fluid problem, a method is needed to effectively convert the physical quantities of the fluid into an image. The performance of the CNN was evaluated using an image transformation method based on the minimum and maximum values of the pressure distribution data, and image transformation methods based on the normal distribution of the pressure distribution data. The performance evaluation of the trained CNN showed that the image transformation methods of Method 4 and Method 5, which applied the normal distribution of representative pressure distribution data, were very effective. In particular, Method 5 includes the initial and final pressure distribution data so as to cover the overall pressure distribution, thereby improving the resolution of the color map and hence the classification performance.
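The two families of transformation described above — min-max scaling and normal-distribution (z-score) scaling of pressure values into color-map indices — can be sketched as follows. The 8-bit output range and the ±1σ clipping are assumptions for the sketch, not the study's exact choices.

```python
# Map pressure values to 8-bit color-map indices (0-255) in two ways.

def minmax_to_uint8(pressures):
    """Linear scaling between the minimum and maximum pressure."""
    lo, hi = min(pressures), max(pressures)
    return [round(255 * (p - lo) / (hi - lo)) for p in pressures]

def zscore_to_uint8(pressures, n_sigma=1):
    """Scaling based on the normal distribution of the data,
    clipping values beyond n_sigma standard deviations."""
    n = len(pressures)
    mean = sum(pressures) / n
    std = (sum((p - mean) ** 2 for p in pressures) / n) ** 0.5
    out = []
    for p in pressures:
        z = max(-n_sigma, min(n_sigma, (p - mean) / std))
        out.append(round(255 * (z + n_sigma) / (2 * n_sigma)))
    return out

p = [1.0, 2.0, 3.0, 4.0, 100.0]   # one extreme value in the data
mm = minmax_to_uint8(p)
zs = zscore_to_uint8(p)
```

With min-max scaling, a single extreme value compresses the rest of the data into a few color levels; the normal-distribution variant clips extremes and so can spend more of the color map on the bulk of the distribution.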
This study was conducted as part of a series of studies introducing the convolutional neural network (CNN) into the diagnostic field of osteoporosis. The purpose of this study was to compare the results when a CNN trained on Digital Radiography (DR) panoramic radiographs was tested on DR and Computed Radiography (CR) panoramic radiographs. The digital panoramic radiographs of females who visited Chonnam National University Dental Hospital for diagnosis and treatment were collected. Two oral and maxillofacial radiologists were selected to classify the panoramic radiographs into normal and osteoporosis images. Of these, 1068 DR panoramic radiographs of females (mean ± standard deviation age: 49.19 ± 21.91 years) were used for training the CNN. For testing, 200 DR panoramic radiographs of females (mean ± standard deviation age: 63.95 ± 6.45 years) and 202 CR panoramic radiographs of females (mean ± standard deviation age: 62.00 ± 6.86 years) were used. When the DR panoramic radiographs were tested, the accuracy was 92.5%; when the CR panoramic radiographs were tested, the accuracy was 76.2%. This shows that a CNN trained on DR panoramic radiographs is best suited to being tested on DR panoramic radiographs.
The incidence of stomach cancer has been gradually decreasing; however, it remains one of the most frequently occurring malignant cancers in Korea. According to 2017 statistics, stomach cancer is the most common cancer in men and the fourth most common in women, necessitating methods for its early detection and treatment. Considerable research in the field of bioinformatics has been conducted in cancer studies, and bioinformatics approaches might help develop methods and models for early prediction. We aimed to develop a classification method based on deep learning and demonstrate its application to gene expression data obtained from patients with stomach cancer. Data of 60,483 genes from 334 patients with stomach cancer in The Cancer Genome Atlas were evaluated by principal component analysis, heatmaps, and the convolutional neural network (CNN) algorithm. We combined the RNA-seq gene expression data with clinical data, searched for candidate genes, and analyzed them using the CNN deep learning algorithm. We performed learning on the sample type and vital status of patients with stomach cancer and verified the results, obtaining an accuracy of 95.96% for sample type and 50.51% for vital status. Despite overfitting owing to the limited number of patients, relatively accurate results for sample type were obtained. This approach can be used to predict the prognosis of stomach cancer, which has many types and underlying causes.
Since the basic CNN structure was introduced by LeCun in 1989, there had been no major structural change until recently, apart from making networks deeper. A deep network enhances expressive power by improving the abstraction ability of the network and can learn complex problems through increased nonlinearity. However, training a deep network suffers from vanishing gradients and longer learning times. In this study, we propose a CNN structure with an MLP layer. The proposed CNNs are superior to the general CNN in classification performance. Experiments confirm that classification accuracy is high because the MLP layer improves nonlinearity. To increase performance without making the network deeper, we confirm that performance improves by increasing the nonlinearity of the network.
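The idea above — inserting a small MLP after a convolution, applied independently at each position (in the spirit of a 1×1 "network-in-network" layer), to add nonlinearity without deepening the convolutional stack — can be sketched in one dimension. The weights below are fixed toy values, not learned parameters.

```python
# Conv layer followed by a per-position two-layer MLP with ReLU.

def relu(v):
    return v if v > 0 else 0.0

def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def mlp_per_position(features, w1, w2):
    """Two-layer MLP with ReLU applied independently at each position."""
    out = []
    for f in features:
        hidden = [relu(f * w) for w in w1]
        out.append(sum(h * w for h, w in zip(hidden, w2)))
    return out

x = [1.0, -2.0, 3.0, -4.0, 5.0]
feat = conv1d(x, [1.0, -1.0])                    # difference filter
y = mlp_per_position(feat, w1=[1.0, -1.0], w2=[0.5, 0.5])
# The MLP computes |f| / 2, a nonlinear function no single linear
# layer on feat could produce.
```

This illustrates the claim: the appended MLP lets the network represent nonlinear functions of the convolutional features (here, a rectified magnitude) without adding further convolutional depth.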
CNN techniques for deep learning are being researched and developed across medicine, agriculture, aviation, and the automotive industry. They can also be applied to the construction field, for example to concrete cracks and steel weld defects. In this study, before applying the technique to the construction field, we extended previous work and analyzed the classification of handwritten images using a CNN. In deep learning, accuracy generally increases as the learning layers become deeper, but the analysis time also increases, and the results can vary considerably depending on the training options. Therefore, we performed extensive parameter studies and analyzed the effect of progressively deepening the learning layers.