Facial MRI provides high contrast and spatial resolution relative to surrounding tissues and is used to delineate anatomical structures and diagnose disease. Recently, deep learning that shortens scan time while improving image quality has attracted attention. To assess the usefulness of deep learning in facial MRI, we acquired axial and coronal deep-learning T2-weighted and conventional T2-weighted images from 34 patients, evaluated them quantitatively with the no-reference image quality metrics NIQE and NIMA, and evaluated them qualitatively on a four-point Likert scale. By NIQE, the deep-learning T2-weighted images showed higher image quality than the conventional T2-weighted images; by NIMA, the axial deep-learning T2-weighted images showed no statistically significant difference, while the coronal deep-learning T2-weighted images did. In the qualitative assessment, the oropharynx and laryngopharynx showed a qualitative benefit. These results suggest that applying deep learning to facial regions with frequent involuntary motion provides higher image quality than conventional T2-weighted imaging, maintains quality in structures with relatively little motion, and shortens scan time by more than two minutes, thereby reducing motion artifacts; it is therefore expected to be useful for diagnosing emergency and uncooperative patients.
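The statistical comparison of paired no-reference quality scores described above can be sketched as follows. The scores below are illustrative placeholders, not the study's data, and the Wilcoxon signed-rank test is one common choice for paired, non-normal quality scores (lower NIQE means better quality):

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired NIQE scores for the same patients (illustrative only).
niqe_conventional = np.array([4.8, 5.1, 4.6, 5.3, 4.9, 5.0, 4.7, 5.2])
niqe_deep         = np.array([4.2, 4.5, 4.1, 4.7, 4.3, 4.4, 4.0, 4.6])

# Paired non-parametric test on the score differences.
stat, p = wilcoxon(niqe_conventional, niqe_deep)
improved = niqe_deep.mean() < niqe_conventional.mean()
print(f"p = {p:.4f}, deep-learning images better on average: {improved}")
```

With every paired difference favoring the deep-learning images, the test reports a small p-value, mirroring the kind of significant quantitative difference the abstract describes.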
This research examines deep-learning-based image recognition models for beef sirloin classification. Beef sirloin can be divided into the upper sirloin, the lower sirloin, and the ribeye, yet during distribution these cuts are often lumped together into a single sirloin region. For detailed classification of beef sirloin cuts, we develop a model that can learn image information in reasonable computation time using the MobileNet algorithm. To increase the model's accuracy, we also introduce data augmentation methods that amplify the image data collected during the distribution process. This augmentation yields a larger training dataset, by which the accuracy of the model can be significantly improved. The augmented data were evaluated with the MobileNet model on a test set obtained from real-world distribution processes. Our computational experiments confirm that the accuracy of the suggested model reaches up to 83%. We expect the classification model of this study to contribute to more accurate and detailed information exchange between suppliers and consumers during the distribution of beef sirloin.
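The augmentation idea, where one collected photo is amplified into several training samples, can be sketched with simple geometric transforms. This is a minimal illustration using a dummy array in place of a real distribution photo; real pipelines typically add random crops, rotations, and color jitter as well:

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Amplify one image into several variants via simple geometric transforms."""
    return [
        image,                # original
        np.fliplr(image),     # horizontal flip
        np.flipud(image),     # vertical flip
        np.rot90(image),      # 90-degree rotation
    ]

# A dummy 8x8 grayscale image stands in for a real meat-cut photo.
img = np.arange(64).reshape(8, 8)
augmented = augment(img)
print(f"1 image -> {len(augmented)} training samples")
```

Each collected image then contributes several samples to the training set, which is what allows the model's accuracy to improve without collecting more raw photos.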
There has been considerable recent interest in deep learning techniques for structural analysis and design. However, although newer algorithms and more precise methods have been developed in computer science, these effective deep learning techniques have not yet been applied to damage detection. In this study, we explore structural damage detection in truss structures using state-of-the-art deep learning techniques. Deep neural networks are trained on the response patterns of undamaged and damaged structures. A 31-bar planar truss is considered to demonstrate the capability of deep learning techniques to identify single and multiple structural damage. The frequency responses and the elasticity moduli of individual elements are used as the input and output datasets, respectively. In all considered cases, the neural network assesses damage conditions with very good accuracy.
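The physical premise behind using frequency responses as the network input is that stiffness loss in an element shifts the structure's natural frequencies. A minimal sketch of this effect on a two-degree-of-freedom spring-mass chain (a stand-in for the 31-bar truss, with illustrative stiffness values) is:

```python
import numpy as np

def natural_frequencies(k1: float, k2: float, m: float = 1.0) -> np.ndarray:
    """Natural frequencies (rad/s) of a 2-DOF chain: wall -k1- m -k2- m."""
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])
    M = np.eye(2) * m
    # Eigenvalues of M^-1 K are the squared natural frequencies.
    eigvals = np.linalg.eigvals(np.linalg.solve(M, K))
    return np.sort(np.sqrt(eigvals.real))

undamaged = natural_frequencies(100.0, 100.0)
damaged   = natural_frequencies(100.0, 70.0)  # 30% stiffness loss in element 2
print("undamaged:", undamaged, "damaged:", damaged)
```

Every natural frequency drops when stiffness is lost, and a neural network trained on such frequency shifts can learn to map them back to per-element elasticity moduli, as the study describes.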
With the recent growth and aging of social overhead capital (SOC) infrastructure, conventional safety inspection based on labor-intensive visual examination has limits in economy, safety, and efficiency. To address these limits, this study developed a technique that automatically detects concrete cracks in tunnels using deep-learning-based object detection, and verified its performance by applying it to actual tunnel images.
This paper presents a new benchmark system for visual odometry (VO) and monocular depth estimation (MDE). As deep learning has become a key technology in computer vision, many researchers are trying to apply it to VO and MDE. Just a couple of years ago, the two tasks were studied independently in a supervised manner; now they are coupled and trained together in an unsupervised manner. However, before designing sophisticated models and losses, researchers must first customize datasets for training and testing, and after training, the model must be compared with existing models, which is also a heavy burden. The benchmark provides a ready-to-use input dataset for VO and MDE research in 'tfrecords' format and an output dataset that includes model checkpoints and inference results of the existing models. It also provides various tools for data formatting, training, and evaluation. In our experiments, the existing models were evaluated to verify the performance reported in the corresponding papers, and we found that the measured results fall short of the reported performance.
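One core evaluation tool such a VO benchmark needs is a trajectory-error metric. Below is a simplified sketch of absolute trajectory error (ATE) with centroid-only alignment; standard protocols (e.g., TUM/KITTI style) also solve for rotation and scale, which is omitted here, and the trajectories are illustrative:

```python
import numpy as np

def absolute_trajectory_error(gt: np.ndarray, est: np.ndarray) -> float:
    """RMSE of translational error after aligning the estimate to ground
    truth by centroid offset (simplified; full protocols align SE(3)+scale)."""
    est_aligned = est - est.mean(axis=0) + gt.mean(axis=0)
    return float(np.sqrt(np.mean(np.sum((gt - est_aligned) ** 2, axis=1))))

# Toy trajectories: the estimate is the ground truth shifted by 0.1 m in y.
gt  = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype=float)
est = np.array([[0, 0.1, 0], [1, 0.1, 0], [2, 0.1, 0]], dtype=float)
ate = absolute_trajectory_error(gt, est)
print(ate)  # the constant offset is removed by alignment -> 0.0
```

Running such a metric over the benchmark's stored inference results is how discrepancies between measured and reported performance, as found in the experiments, become visible.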
As road infrastructure grows, road maintenance has become important for road performance and longevity, and research on efficient damage detection is being actively carried out. In this study, we developed a technology that automatically detects cracks on highway road surfaces using UAV road-surface imagery and deep-learning-based object detection, and verified its performance by applying it to actual highway images.
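Verifying a crack detector against annotated imagery typically relies on intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal sketch with hypothetical box coordinates:

```python
def iou(box_a, box_b) -> float:
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2),
    the standard match criterion when scoring detections against labels."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

score = iou((0, 0, 10, 10), (5, 5, 15, 15))
print(score)  # 25 / 175 ~= 0.143
```

A detection is usually counted as correct when IoU exceeds a threshold (commonly 0.5), which is how performance on the actual highway images would be quantified.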
The damage detection of blade systems largely depends on the personal skill of an inspector using a camera. This paper therefore proposes a deep-learning-based detection method that can rapidly and reliably identify and evaluate damage on blades.
This paper proposes a deep-learning-based underground object classification technique that incorporates phase analysis of ground-penetrating radar (GPR) to enhance classification capability. Deep convolutional neural networks (CNNs) using the combination of B- and C-scan images have recently emerged for automated underground object classification. However, this approach often leads to misclassification because arbitrary underground objects may have similar signal features. To overcome this drawback, the B- and C-scan images and the phase information of the GPR signal are used together as CNN input in this study, yielding more distinguishable signal features among various underground objects. The proposed technique is validated using in-situ GPR data obtained from urban roads in Seoul, South Korea. The validation results show that false alarms are significantly reduced compared to CNN results using only B- and C-scan images.
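The idea of feeding the B-scan, C-scan, and phase maps jointly to the CNN amounts to stacking them as channels of one input tensor. A minimal sketch with random placeholder arrays (real GPR grid sizes depend on the survey settings):

```python
import numpy as np

H, W = 64, 64  # illustrative grid size
b_scan = np.random.rand(H, W)  # amplitude vs. depth along a survey line
c_scan = np.random.rand(H, W)  # plan-view amplitude slice
phase  = np.random.rand(H, W)  # phase map extracted from the raw signal

# Stack the three maps as channels of one (H, W, C) CNN input,
# so the network learns amplitude and phase features jointly.
cnn_input = np.stack([b_scan, c_scan, phase], axis=-1)
print(cnn_input.shape)  # (64, 64, 3)
```

Because the phase channel carries information the amplitude channels lack, objects with similar amplitude signatures become separable, which is the mechanism behind the reduced false alarms.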
For effective human-robot interaction, a robot must not only understand the current situational context well but also convey its understanding to the human participant efficiently. The most convenient way to deliver the robot's understanding is for the robot to express it in voice and natural language. Recently, artificial intelligence for video understanding and natural language processing has advanced rapidly, especially through deep learning. This paper therefore proposes a deep-learning-based method that turns robot vision into an audio description. The applied model is a pipeline of two deep learning models: one generates a natural-language sentence from robot vision, and the other generates voice from the generated sentence. We also conduct a real-robot experiment to show the effectiveness of our method in human-robot interaction.
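The two-stage pipeline structure can be sketched as below. Both stage functions are hypothetical stubs standing in for trained networks (an image-captioning model and a speech-synthesis model); only the composition of the two stages is the point:

```python
def describe_frame(frame) -> str:
    """Stub for stage 1: a real system runs an image-captioning network here."""
    return "a person is handing a cup to the robot"

def synthesize_speech(sentence: str) -> bytes:
    """Stub for stage 2: a real system runs a text-to-speech network here."""
    return sentence.encode("utf-8")

def vision_to_audio(frame) -> bytes:
    """Two-stage pipeline: robot vision -> natural-language sentence -> voice."""
    sentence = describe_frame(frame)
    return synthesize_speech(sentence)

audio = vision_to_audio(frame=None)  # frame would be a camera image in practice
print(audio.decode("utf-8"))
```

Keeping the two stages as separate models, as the paper does, lets each be trained and swapped independently while the pipeline interface stays fixed.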