Various underwater studies using sonar sensors are actively in progress. However, unlike on the ground, the underwater environment contains a great deal of noise, which makes it difficult to recognize accurately. The ultimate goal of this study is to improve the efficiency of underwater environment recognition by developing a filtering algorithm that removes noise from images captured by an underwater sonar sensor and highlights the objects they contain. The filtering algorithm is built from convolution operations with three types of filters. This paper presents a case study conducted during the design of the filtering algorithm to set the parameters of the Gabor filter suitable for underwater sonar images. As a result, the most suitable Gabor filter parameters for underwater sonar images were found, and the filtered output showed high agreement with a binary obstacle map created manually by visual inspection. The proposed method can be used not only to build binary maps of obstacles in real time, but also as an algorithm for generating object masks in underwater sonar images for deep learning.
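The filtering step described above can be sketched as a bank of Gabor filters applied by convolution, with the per-pixel maximum response thresholded into a binary obstacle map. The kernel parameters, the synthetic speckle image, and the threshold rule below are illustrative assumptions for the sketch, not the tuned values from the paper.

```python
# Sketch: Gabor filter bank -> max response -> binary obstacle map.
# All parameter values here are illustrative, not the paper's tuned ones.
import numpy as np

def gabor_kernel(ksize, sigma, theta, lambd, gamma, psi=0.0):
    """Real-valued Gabor kernel (same parameterization as cv2.getGaborKernel)."""
    half = ksize // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t = xs * np.cos(theta) + ys * np.sin(theta)
    y_t = -xs * np.sin(theta) + ys * np.cos(theta)
    gauss = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2.0 * sigma**2))
    return gauss * np.cos(2.0 * np.pi * x_t / lambd + psi)

def convolve2d(img, kernel):
    """Naive 'same' convolution with zero padding -- fine for a sketch."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    flipped = kernel[::-1, ::-1]
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

# Synthetic stand-in for a sonar frame: a bright object on speckle noise.
rng = np.random.default_rng(0)
img = rng.rayleigh(scale=0.2, size=(64, 64))  # speckle-like background
img[24:40, 24:40] += 1.0                      # "object" region

# Filter bank over several orientations; keep the per-pixel max response.
responses = [convolve2d(img, gabor_kernel(9, 2.0, t, 6.0, 0.5))
             for t in np.linspace(0, np.pi, 4, endpoint=False)]
response = np.max(responses, axis=0)

# Threshold the max response to obtain the binary obstacle map.
binary_map = (response > response.mean() + response.std()).astype(np.uint8)
```

In practice the orientation count, wavelength, and threshold would be the quantities tuned in the parameter study; the naive convolution here would be replaced by `cv2.filter2D` for real-time use.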
Collision avoidance is a fundamental task for an autonomous mobile robot navigating safely in real environments with high uncertainty. Obstacles are classified as static or dynamic; dynamic obstacles are difficult to avoid because their positions may change at any time. This paper proposes a scheme for vision-based avoidance of dynamic obstacles. The approach extracts object candidates that may be moving objects using a labeling algorithm based on depth information, then detects moving objects among the candidates using motion vectors. When motion vectors cannot be extracted, moving objects can still be detected reliably through their color information. The robot avoids a dynamic obstacle using the dynamic window approach (DWA), a well-known technique for reactive collision avoidance, together with the object path estimated from the information of the detected obstacles. This paper also proposes an algorithm that autonomously registers the obstacle color. With the proposed scheme, a robot can navigate more safely and efficiently.
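The DWA step named above can be sketched for a differential-drive robot: sample (v, w) commands inside the dynamic window, forward-simulate each, discard commands whose trajectory passes too close to an obstacle, and score the rest. The gains, limits, collision radius, and obstacle position below are illustrative assumptions, not values from the paper.

```python
# Minimal dynamic window approach (DWA) sketch for a differential-drive
# robot. Gains, velocity limits, and the obstacle are illustrative only.
import math

def simulate(x, y, th, v, w, dt=0.1, steps=10):
    """Forward-simulate a constant (v, w) command; return the trajectory."""
    traj = []
    for _ in range(steps):
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
        traj.append((x, y))
    return traj

def dwa_step(state, v0, w0, goal, obstacles,
             v_max=1.0, w_max=2.0, a_v=0.5, a_w=1.0, period=0.5):
    """Pick the best admissible (v, w) inside the dynamic window."""
    x, y, th = state
    best, best_score = (0.0, 0.0), -float("inf")
    # Dynamic window: velocities reachable within one control period.
    v_lo, v_hi = max(0.0, v0 - a_v * period), min(v_max, v0 + a_v * period)
    w_lo, w_hi = max(-w_max, w0 - a_w * period), min(w_max, w0 + a_w * period)
    for i in range(5):
        v = v_lo + (v_hi - v_lo) * i / 4
        for j in range(9):
            w = w_lo + (w_hi - w_lo) * j / 8
            traj = simulate(x, y, th, v, w)
            clearance = min(math.dist(p, ob) for p in traj for ob in obstacles)
            if clearance < 0.2:            # would collide -> inadmissible
                continue
            heading = -math.dist(traj[-1], goal)  # closer to goal is better
            score = 1.0 * heading + 0.3 * min(clearance, 1.0) + 0.1 * v
            if score > best_score:
                best_score, best = score, (v, w)
    return best

# One control step: an obstacle sits directly on the straight path to the goal.
v, w = dwa_step((0.0, 0.0, 0.0), 0.5, 0.0,
                goal=(3.0, 0.0), obstacles=[(0.6, 0.0)])
```

In the paper's scheme, the static obstacle positions here would be replaced by the predicted path of the detected moving object, so the clearance check is evaluated against where the obstacle is expected to be.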
In this paper, we describe an image processing algorithm that recognizes the road lane and the relationship between an AGV and other vehicles. We conducted AGV driving tests with a color CCD camera mounted on top of the vehicle to acquire the digital image signal. The algorithm consists of two parts. The first is an image preprocessing part that measures the state of the lane and the vehicle; it extracts line information using an RGB-ratio cutting algorithm, edge detection, and the Hough transform. The second part obtains the positions of other vehicles using image processing and a viewport. First, the two-dimensional image information from the vision sensor is converted into three-dimensional information using the angle and position of the CCD camera. Once the vehicle knows the driving conditions, namely the lane angle, the distance error, and the real positions of other vehicles, the reference steering angle can be calculated.
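The line-extraction step above can be sketched with a standard Hough transform: each edge pixel votes for all (rho, theta) pairs it could lie on, and the accumulator peak gives the dominant lane line. This is a pure-NumPy illustration; the RGB-ratio cutting, edge detection, and viewport steps of the paper are omitted, and the synthetic diagonal edge image is an assumption for the sketch.

```python
# Sketch: Hough transform line detection on a synthetic edge image.
# The diagonal test line and resolution choices are illustrative only.
import numpy as np

def hough_lines(edges, n_theta=180):
    """Accumulate votes in (rho, theta) space for every edge pixel."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for theta_idx, theta in enumerate(thetas):
        # rho = x*cos(theta) + y*sin(theta), binned to integer cells
        rhos = (xs * np.cos(theta) + ys * np.sin(theta)).round().astype(int)
        np.add.at(acc, (rhos + diag, theta_idx), 1)
    return acc, thetas, diag

# Synthetic edge image containing one straight "lane" line: y = x.
edges = np.zeros((100, 100), dtype=np.uint8)
for i in range(100):
    edges[i, i] = 1

acc, thetas, diag = hough_lines(edges)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
rho, theta = rho_idx - diag, thetas[theta_idx]
# For the line y = x, the peak is at rho = 0, theta = 3*pi/4.
```

In a real pipeline one would use `cv2.HoughLines` or `cv2.HoughLinesP` on a Canny edge map; the recovered (rho, theta) of the lane lines then feeds the lane-angle and distance-error terms used to compute the reference steering angle.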