With the rapid development of technology, object classification is being researched, developed, and applied to security systems, autonomous driving, and other applications. A common approach uses vision cameras to collect data about objects in the surrounding environment. Among other sensing modalities, LiDAR sensors are also used to collect spatial data for object detection and classification; unlike image sensors, they are largely unaffected by the weather and lighting conditions that degrade image quality. In this study, a volumetric 3D image descriptor is developed to handle 3D object data acquired from LiDAR sensors in urban environments and to convert it into image data before applying deep learning algorithms for object classification. The results demonstrate the feasibility of the proposed approach and its potential for further application.
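A minimal sketch of the general idea of a volumetric descriptor, assuming a simple occupancy-grid voxelization; the grid size and normalization are illustrative assumptions, not the descriptor defined in that study.

```python
# Minimal sketch: voxelize a LiDAR point cloud into a fixed-size occupancy volume
# that a deep-learning classifier can consume. Grid size is an illustrative choice.
import numpy as np

def voxelize(points, grid=(32, 32, 32)):
    """Map an (N, 3) point cloud to a binary occupancy volume of shape `grid`."""
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    span = np.where(maxs - mins > 0, maxs - mins, 1.0)      # avoid division by zero
    idx = ((points - mins) / span * (np.array(grid) - 1)).astype(int)
    volume = np.zeros(grid, dtype=np.float32)
    volume[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return volume

if __name__ == "__main__":
    cloud = np.random.rand(1000, 3) * 10.0                  # stand-in for LiDAR points
    print(int(voxelize(cloud).sum()), "occupied voxels")
```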
Background: Only 2% of falls in older adults result in serious injuries (e.g., hip fracture). Differentiating injurious from non-injurious falls is therefore critical for developing effective injury-prevention interventions.
Objectives: The purpose of this study was to (a) extract the best features of surface electromyography (sEMG) for the classification of injurious falls, and (b) identify the best model, built with data mining techniques, using the extracted features.
Methods: Twenty young adults self-initiated falls and landed sideways. Falling trials consisted of three initial fall directions (forward, sideways, or backward) and three knee positions at the time of hip impact: the impacting-side knee contacted the other knee (“knee together”), the impacting-side knee contacted the mat (“knee on mat”), or the impacting-side knee contacted neither the other knee nor the mat (“free knee”). Falls involving a “backward initial fall direction” or a “free knee” were defined as “injurious falls,” as suggested by previous studies. Nine features were extracted from the sEMG signals of four hip muscles during a fall, including integral of absolute value (IAV), Wilson amplitude (WAMP), zero crossing (ZC), number of turns (NT), mean of amplitude (MA), root mean square (RMS), average amplitude change (AAC), and difference absolute standard deviation value (DASDV). Decision trees and support vector machines (SVM) were used to classify the injurious falls.
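A minimal sketch of how a few of the listed time-domain sEMG features could be computed from one signal window; the thresholds and the synthetic test signal are illustrative assumptions, not the study's values.

```python
# Minimal sketch of selected time-domain sEMG features (IAV, RMS, ZC, WAMP, AAC, DASDV).
import numpy as np

def semg_features(x, wamp_threshold=0.01, zc_threshold=0.0):
    x = np.asarray(x, dtype=float)
    diff = np.diff(x)
    return {
        "IAV": np.sum(np.abs(x)),                              # integral of absolute value
        "RMS": np.sqrt(np.mean(x ** 2)),                       # root mean square
        "ZC": int(np.sum((x[:-1] * x[1:] < 0) &
                         (np.abs(diff) >= zc_threshold))),     # zero crossings
        "WAMP": int(np.sum(np.abs(diff) >= wamp_threshold)),   # Wilson amplitude
        "AAC": np.mean(np.abs(diff)),                          # average amplitude change
        "DASDV": np.sqrt(np.mean(diff ** 2)),                  # difference absolute standard deviation value
    }

if __name__ == "__main__":
    signal = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.05 * np.random.randn(2000)
    print(semg_features(signal))
```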
Results: For the initial fall direction, the accuracy of the best model (SVM with DASDV) was 48%. For the knee position, the accuracy of the best model (SVM with AAC) was 49%. Furthermore, no model achieved both sensitivity and specificity of 80% or greater.
Conclusion: Our results suggest that classification models built upon the sEMG features of the four hip muscles are not effective for classifying injurious falls. Future studies should consider other data mining techniques with different muscles.
With the development of machine learning techniques, research on diagnosing machine conditions, detecting anomalies, and classifying abnormal states using the various kinds of data generated by machines (vibration, temperature, flow rate, etc.) has been actively conducted. In particular, condition diagnosis of rotating machinery using vibration data is a traditional field of machine condition monitoring that has been studied for a long time with a wide variety of methods. In this study, an experiment was conducted in which accelerometers were mounted directly on a rotary compressor used in household air conditioners to collect vibration data. Data segmentation was performed to address the data shortage problem, statistical and physical features were extracted from the time-domain vibration data, and the key features for the fault classification model were selected through a Chi-square test. A support vector machine (SVM) model was developed to classify the compressor as normal or abnormal, and the classification accuracy was improved through parameter optimization.
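A minimal sketch of the described pipeline (feature matrix → Chi-square feature selection → SVM with parameter search), assuming scikit-learn; the data, the number of selected features, and the parameter grid are illustrative assumptions, not the study's settings.

```python
# Minimal sketch: Chi-square feature selection followed by an SVM with grid search.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X = np.random.rand(200, 20)            # stand-in for per-segment vibration features
y = np.random.randint(0, 2, 200)       # 0 = normal, 1 = faulty (synthetic labels)

pipe = Pipeline([
    ("scale", MinMaxScaler()),          # chi2 requires non-negative inputs
    ("select", SelectKBest(chi2, k=8)), # keep the 8 most relevant features
    ("svm", SVC()),
])
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10], "svm__gamma": ["scale", 0.1]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```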
The factor with the greatest influence on the berthing energy generated when a ship berths is the berthing velocity, and excessive berthing velocity can lead to accidents. Although various factors influence the determination of berthing velocity, previous studies have generally limited their analysis to ship size. This study therefore analyzes berthing velocity while reflecting a variety of influencing factors and derives their relative importance. The data used in the analysis were measured berthing velocities of ships at a domestic tanker terminal. Using the collected data, the machine learning classification algorithms decision tree, random forest, logistic regression, and perceptron were compared. The models were evaluated with performance metrics derived from the confusion matrix. As a result of the analysis, the perceptron was selected as the best-performing algorithm, and the importance of the berthing velocity factors was, in order, ship size (DWT), berth location (Jetty No.), and loading condition (State). Accordingly, when a ship berths, the berthing velocity should be designed in consideration of various factors such as ship size, berth location, and loading condition.
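A minimal sketch of the classifier comparison described above, assuming scikit-learn and confusion-matrix-based evaluation; the synthetic data standing in for the measured berthing-velocity factors (DWT, Jetty No., State) is an illustrative assumption.

```python
# Minimal sketch: compare decision tree, random forest, logistic regression,
# and perceptron on a binary label using accuracy and the confusion matrix.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression, Perceptron
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((300, 3))                       # stand-in for [DWT, Jetty No., State]
y = (X[:, 0] + 0.3 * rng.random(300) > 0.6).astype(int)   # synthetic high/low velocity label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "DecisionTree": DecisionTreeClassifier(),
    "RandomForest": RandomForestClassifier(),
    "LogisticRegression": LogisticRegression(),
    "Perceptron": Perceptron(),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, accuracy_score(y_te, pred), confusion_matrix(y_te, pred).ravel())
```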
This paper proposes a spherical signature description of 3D point clouds acquired from a laser range scanner mounted on a ground vehicle. Based on the spherical signature description of each point, an extractor of significant environmental features is learned with Deep Belief Nets for urban structure classification. An arbitrary point in the 3D point cloud represents its signature on its sky-facing surface using several neighboring points: the unit spherical surface centered on that point accumulates evidence in each cell of an angular tessellation. Depending on the kind of area a point belongs to, such as wall, ground, tree, or car, the resulting spherical signature descriptions differ markedly from one another. These descriptions are fed into Deep Belief Nets, a type of deep neural network, to learn the environmental feature extractor. With this learned feature extractor, 3D points can be classified reliably according to their urban structures. Experimental results show that the proposed method based on the spherical signature description and Deep Belief Nets is suitable for mobile robots in terms of classification accuracy.
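A minimal sketch of a spherical-signature-style descriptor for one query point, assuming the signature is a histogram of neighbor directions over an azimuth/elevation tessellation of the unit sphere; the bin counts and normalization are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: bin the directions from a query point to its neighbors over an
# azimuth/elevation tessellation of the unit sphere centered on that point.
import numpy as np

def spherical_signature(center, neighbors, az_bins=8, el_bins=4):
    vecs = neighbors - center
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    azimuth = np.arctan2(vecs[:, 1], vecs[:, 0])             # [-pi, pi]
    elevation = np.arcsin(np.clip(vecs[:, 2], -1.0, 1.0))    # [-pi/2, pi/2]
    hist, _, _ = np.histogram2d(
        azimuth, elevation, bins=[az_bins, el_bins],
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]])
    return hist / max(len(neighbors), 1)                     # normalized evidence per cell

if __name__ == "__main__":
    cloud = np.random.randn(500, 3)
    print(spherical_signature(cloud[0], cloud[1:]).shape)    # (8, 4) signature
```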
Outdoor mobile robots face various terrain types with different characteristics. To run safely and carry out their missions, a mobile robot should recognize the terrain type and its physical and geometric characteristics, and it is essential to control its motion appropriately for each terrain. One way to determine the terrain type is to use non-contact sensor data such as vision and laser sensors. Another way is to use contact sensor data, such as body slope, vibration, and motor current, which are reaction data transmitted from the ground to the tires. In this paper, we present experimental results on terrain classification using contact sensor data. We built a mobile robot for collecting contact sensor data and collected data from four terrains chosen as experimental terrains. Through analysis of the collected data, we propose a new terrain feature extraction method that considers physical characteristics and confirm that it can classify the four chosen terrains. Using a back-propagation learning algorithm, we also confirm that the feature extraction method based on the Fast Fourier Transform (FFT), typically used in previous studies, and the proposed method have similar classification performance; however, the two methods differ in the amount of data containing terrain feature information. We therefore define an index determined by the amount of terrain feature information and the classification error rate, which can be used to evaluate classification efficiency, and compare the results of each method using this index. The comparison shows that the proposed method is more efficient than the existing one.
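A minimal sketch of the FFT-based baseline feature extraction mentioned above and of an efficiency-style index; the window length, feature dimension, and the toy index (feature count multiplied by error rate) are illustrative assumptions, not the paper's definitions.

```python
# Minimal sketch: FFT magnitude features from one contact-sensor window, plus a toy
# efficiency index that decreases with fewer features and lower classification error.
import numpy as np

def fft_features(window, n_components=16):
    """Return the first n_components FFT magnitudes of one vibration window."""
    spectrum = np.abs(np.fft.rfft(window - np.mean(window)))
    return spectrum[:n_components]

def efficiency_index(n_features, error_rate):
    """Toy index: smaller is better (less data carrying terrain information, fewer errors)."""
    return n_features * error_rate

if __name__ == "__main__":
    window = np.random.randn(256)            # stand-in for one contact-sensor window
    feats = fft_features(window)
    print(len(feats), efficiency_index(len(feats), error_rate=0.1))
```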