In this study, the multi-lane detection problem is formulated as a CNN-based regression problem, with the lane boundary coordinates as outputs. In addition, we describe lanes as fifth-order polynomials and distinguish the ego lane from the side lanes so that the lanes can be predicted accurately. By eliminating the network branch arrangement and the out-of-image lane boundary coordinate vector proposed in Chougule's method, we removed meaningless data from CNN training and improved both training and inference speed. The performance evaluation confirmed that the average prediction error remained small, even though the proposed method was compared with Chougule's method under harsher conditions. Moreover, even in specific images with many errors, the predicted lanes did not deviate significantly and still yielded meaningful results, confirming the robustness of the method.
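The abstract does not give the exact regression targets, but the fifth-order polynomial lane representation can be sketched as a simple least-squares fit of boundary coordinates. In the snippet below, the point data and the choice of modeling the column coordinate x as a function of the row coordinate y (lane boundaries are nearly vertical in road images) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fit_lane_polynomial(ys, xs, degree=5):
    """Fit a fifth-order polynomial x = f(y) to lane boundary points.

    x is modeled as a function of the row coordinate y because lane
    boundaries run roughly vertically through a road image.
    """
    coeffs = np.polyfit(ys, xs, degree)
    return np.poly1d(coeffs)

# Hypothetical boundary points (row y, column x) for one lane.
ys = np.linspace(0.0, 100.0, 20)
xs = 0.5 * ys + 3.0  # a straight lane, purely for illustration
lane = fit_lane_polynomial(ys, xs)
predicted_x = lane(50.0)
```

The fitted `poly1d` object can then be evaluated at any row to reconstruct a smooth lane curve from a sparse set of predicted boundary points.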
This paper presents 6-DOF relocalization using a 3D laser scanner and a monocular camera. The relocalization problem in robotics is to estimate the pose of a sensor when a robot revisits an area. A deep convolutional neural network (CNN) is designed to regress the 6-DOF sensor pose and is trained end to end using both RGB images and 3D point cloud information. We generate a new input that consists of RGB and range information. After the training step, the relocalization system outputs the pose of the sensor corresponding to each new input it receives. In most cases, however, a mobile robot navigation system provides successive sensor measurements. To improve localization performance, the output of the CNN is used as the measurement of a particle filter that smooths the trajectory. We evaluate our relocalization method on real-world datasets using a mobile robot platform.
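The particle filter smoothing step can be sketched as follows. This is a minimal predict-update-resample loop that treats each CNN pose regression as a noisy measurement; for brevity it filters only a 2D position rather than the full 6-DOF pose, and the noise parameters, Gaussian likelihood, and multinomial resampling are assumptions for illustration, not the paper's exact filter.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement,
                         motion_noise=0.05, meas_noise=0.2):
    """One predict-update-resample step smoothing noisy CNN pose outputs."""
    # Predict: diffuse particles with motion noise (no odometry here).
    particles = particles + rng.normal(0.0, motion_noise, particles.shape)
    # Update: weight by Gaussian likelihood of the CNN measurement.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-d2 / (2.0 * meas_noise ** 2))
    weights = weights / weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Simulated noisy CNN regressions around a true position (1.0, 2.0).
true_pos = np.array([1.0, 2.0])
particles = rng.normal(true_pos, 1.0, size=(500, 2))
weights = np.full(500, 1.0 / 500)
for _ in range(20):
    z = true_pos + rng.normal(0.0, 0.2, 2)  # one CNN measurement
    particles, weights = particle_filter_step(particles, weights, z)
estimate = particles.mean(axis=0)
```

Averaging the resampled particles yields a pose estimate that fluctuates far less than the raw per-frame CNN regressions, which is the smoothing effect the abstract describes.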
As drones gain popularity, drone detection becomes an increasingly important part of drone systems for safety, privacy, and crime prevention. However, existing drone detection systems are expensive and heavy, making them suitable only for industrial or military purposes. This paper proposes a novel approach for training convolutional neural networks to detect drones in images, suitable for use in embedded systems. Unlike previous works that consider only the class probability of the image areas where the object exists, the proposed approach takes all areas of the image into account for robust classification and object detection. Moreover, a novel loss function is proposed so that the CNN can learn more effectively from a limited amount of training data. Experimental results with various drone images show that the proposed approach performs efficiently in real drone detection scenarios.
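The abstract does not specify the loss function, but the idea of accounting for all image areas can be sketched with a confidence loss in which both object cells and empty cells contribute, with empty cells down-weighted. The grid-cell formulation, the squared-error form, and the `lambda_noobj` weight below are all assumptions for illustration, not the paper's actual loss.

```python
import numpy as np

def detection_loss(pred_conf, target_conf, lambda_noobj=0.5):
    """Illustrative confidence loss over ALL grid cells.

    Cells containing an object and empty cells both contribute, so the
    network is also penalized for false confidence in background areas;
    lambda_noobj down-weights the (usually more numerous) empty cells.
    """
    obj_mask = target_conf > 0
    sq_err = (pred_conf - target_conf) ** 2
    loss_obj = sq_err[obj_mask].sum()
    loss_noobj = lambda_noobj * sq_err[~obj_mask].sum()
    return loss_obj + loss_noobj

# A toy 2x2 confidence grid: one cell contains a drone.
pred = np.array([[0.9, 0.1],
                 [0.2, 0.0]])
target = np.array([[1.0, 0.0],
                   [0.0, 0.0]])
loss = detection_loss(pred, target)  # 0.01 + 0.5 * 0.05 = 0.035
```

Summing over background cells as well as object cells is what lets every area of the image contribute to the gradient, in the spirit of the robustness claim above.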