This paper presents a novel observation model, the Modified Spherical Signature Descriptor (MSSD), which represents 3D point cloud data as a 2D image. The MSSD uses a uniform mesh grid to accumulate occupancy evidence from neighboring points. Depending on the type of area, such as wall, road, tree, or car, the resulting 2D evidence patterns differ markedly from one another. These 2D images were used as the input layer for parameter learning of a Convolutional Neural Network (CNN), a deep learning method well suited to image analysis, which was applied to urban structure classification. A case study of the CNN training procedure is described in detail. Simulation results show that the CNN trained on the proposed MSSD images achieves higher classification accuracy than traditional methods.
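As a rough illustration of the idea, the following Python sketch accumulates the neighbors of a query point into a uniform azimuth-elevation grid to form a 2D occupancy image; the grid size, support radius, and normalization are illustrative assumptions rather than the paper's exact MSSD formulation.

```python
import numpy as np

def mssd_image(points, center, radius=1.0, n_az=32, n_el=16):
    """Accumulate occupancy evidence from neighbors of `center` into a
    uniform (elevation x azimuth) mesh grid, yielding a 2D image.
    Grid size and neighborhood radius are illustrative assumptions."""
    rel = points - center                          # neighbors relative to the query point
    dist = np.linalg.norm(rel, axis=1)
    rel = rel[(dist > 1e-9) & (dist <= radius)]    # keep points inside the support radius

    az = np.arctan2(rel[:, 1], rel[:, 0])          # azimuth in (-pi, pi]
    el = np.arcsin(np.clip(rel[:, 2] / np.linalg.norm(rel, axis=1), -1.0, 1.0))  # elevation

    ai = np.clip(((az + np.pi) / (2 * np.pi) * n_az).astype(int), 0, n_az - 1)
    ei = np.clip(((el + np.pi / 2) / np.pi * n_el).astype(int), 0, n_el - 1)

    image = np.zeros((n_el, n_az), dtype=np.float32)
    np.add.at(image, (ei, ai), 1.0)                # accumulate occupancy evidence per cell
    if image.max() > 0:
        image /= image.max()                       # normalize so patterns are comparable
    return image

# Usage: one MSSD-style image per labeled point, later stacked as CNN input.
cloud = np.random.rand(5000, 3) * 10.0             # placeholder for a LiDAR scan
img = mssd_image(cloud, center=cloud[0])
print(img.shape)                                   # (16, 32)
```

Such images, one per labeled point, would then be stacked into the CNN input tensor in the usual way.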
We observed an area of 10 deg² of the Large Magellanic Cloud using the Infrared Camera (IRC) onboard AKARI. The observations were carried out using five imaging filters (3, 7, 11, 15, and 24 μm) and the prism disperser (2-5 μm, λ/Δλ ∼ 20) of the IRC. This paper presents an outline of the survey project and briefly describes the newly compiled near- to mid-infrared point source catalog. The 10σ limiting magnitudes are 17.9, 13.8, 12.4, 9.9, and 8.6 mag at 3.2, 7, 11, 15, and 24 μm, respectively. The photometric accuracy is estimated to be about 0.1 mag at 3.2 μm and 0.06-0.07 mag in the other bands. The position accuracy is 0.3" at 3.2, 7, and 11 μm and 1.0" at 15 and 24 μm. The sensitivities at 3.2, 7, and 24 μm are roughly comparable to those of the Spitzer SAGE LMC point source catalog, while the AKARI catalog additionally provides data at 11 and 15 μm, covering the near- to mid-infrared spectral range continuously.
This paper proposes a spherical signature description of 3D point clouds acquired from a laser range scanner mounted on a ground vehicle. Based on the spherical signature description of each point, an extractor of significant environmental features is learned with Deep Belief Nets for urban structure classification. Any point in the 3D point cloud can represent its signature on its sky-facing surface using several neighboring points: the unit spherical surface centered on that point accumulates the evidence falling in each angular tessellation cell. Depending on the type of area a point belongs to, such as wall, ground, tree, or car, the resulting spherical signature descriptions differ markedly from one another. These descriptions are fed to Deep Belief Nets, one of the deep neural network models, to learn the environmental feature extractor. With this learned feature extractor, 3D points can be classified well according to their urban structure. Experimental results show that the proposed method based on the spherical signature description and Deep Belief Nets is suitable for mobile robots in terms of classification accuracy.
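To make the learning pipeline concrete, the sketch below stacks two Bernoulli RBMs from scikit-learn as a stand-in for greedy layer-wise DBN pretraining over per-point signature vectors, with a logistic regression on top as the classifier; the signature length, layer sizes, hyperparameters, and placeholder data are assumptions, and the supervised fine-tuning stage of a full DBN is omitted.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# X: one sky-surface signature vector per point (e.g. a flattened hemisphere
# tessellation histogram scaled to [0, 1]); y: urban-structure label per point.
# The data below are placeholders; real signatures come from the LiDAR scan.
rng = np.random.default_rng(0)
X = rng.random((1000, 128))         # 128 tessellation cells (assumed size)
y = rng.integers(0, 4, size=1000)   # e.g. wall / ground / tree / car

# Two stacked RBMs approximate greedy layer-wise pretraining of a DBN;
# the logistic regression plays the role of the final classification layer.
model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print(model.score(X, y))
```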
Corrugated steel culverts are widely used as underpass structures because of their advantages, but many deformed culverts have been observed recently. In this paper, we analyze the structural safety of such a culvert using 3D laser scanning geometry data, known as point cloud data. The entire surface geometry can be reconstructed from this data and transformed into an analysis model that reflects the actual geometrical conditions.
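As a minimal, hypothetical illustration of how scanned geometry could be compared against the design shape, the sketch below measures the radial deviation of one cross-section from a nominal circular radius; it is not the structural analysis model used in the paper, and the circle-fit and design radius are assumptions.

```python
import numpy as np

def radial_deviation(section_xy, design_radius):
    """Compare a scanned culvert cross-section (2D points in the section
    plane) against its nominal circular design radius."""
    center = section_xy.mean(axis=0)                # rough center estimate
    r = np.linalg.norm(section_xy - center, axis=1)
    dev = r - design_radius                         # + outward, - inward deformation
    return dev.min(), dev.max(), np.abs(dev).mean()

# Usage with a placeholder cross-section extracted from the point cloud:
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
scan = np.c_[3.0 * np.cos(theta), 2.9 * np.sin(theta)]  # slightly flattened shape
print(radial_deviation(scan, design_radius=3.0))
```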