For autonomous ships to enter commercial service and navigate coastal waters, they must be able to detect obstacles at sea. One of the most common obstacles in coastal waters is the aquaculture-farm buoy. In this study, we therefore developed a maritime object detection system that detects buoys at sea using the YOLO algorithm and visualizes these obstacles by computing the range and bearing of each buoy from the ship through a geometric interpretation of the camera image. Training the maritime object detection model on 1,224 photographs of aquaculture buoys yielded a precision of 89.0 %, a recall of 95.0 %, and an F1-score of 92.0 %. Camera calibration was performed so that the range and bearing of an object could be computed from the obtained image coordinates, and Experiments A and B were designed to verify the performance of the maritime object detection system. The verification showed that the system has better short-range detection capability than radar, so it is judged that it can serve as a navigational aid alongside radar.
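The range-and-bearing computation described above can be sketched with a standard pinhole camera model. This is a minimal illustration, not the paper's actual implementation: the function name, the assumed intrinsics (`fx`, `fy`, `cx`, `cy` from calibration), and the camera mounting parameters (height above the water, downward tilt) are all hypothetical, and the buoy is assumed to sit on the sea surface so its waterline pixel fixes the geometry.

```python
import math

def buoy_range_bearing(u, v, fx, fy, cx, cy, cam_height_m, cam_tilt_deg):
    """Estimate range and bearing of a sea-surface point from its pixel
    coordinates, assuming a calibrated pinhole camera.

    (u, v)        : pixel coordinates of the buoy's waterline point
    fx, fy, cx, cy: camera intrinsics obtained from calibration
    cam_height_m  : camera height above the water surface (m)
    cam_tilt_deg  : downward tilt of the optical axis (deg)
    """
    # Angle of depression below the horizontal to the target point:
    # the camera tilt plus the angle subtended by the pixel offset.
    depression = math.radians(cam_tilt_deg) + math.atan2(v - cy, fy)
    if depression <= 0:
        raise ValueError("point lies at or above the horizon")
    # Flat-sea assumption: range is height over tangent of depression.
    range_m = cam_height_m / math.tan(depression)
    # Horizontal pixel offset gives the bearing relative to the bow
    # (positive to starboard), assuming the camera looks dead ahead.
    bearing_deg = math.degrees(math.atan2(u - cx, fx))
    return range_m, bearing_deg
```

For example, with a camera 10 m above the water, zero tilt, and focal length 1000 px, a waterline point 100 px below the principal point corresponds to a range of 100 m on the bow.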
Future autonomous vehicles need to recognize the ego lane required for lane changes differently from the adjacent left and right lanes; multi-lane recognition is therefore needed. In this study, using the YOLO network, which is mainly used for object recognition, the proposed method recognizes the ego lane and the left and right side lanes as distinct objects and identifies the correct lanes. In a performance evaluation on the TuSimple test data, the proposed method distinguished the ego lane from the left and right side lanes and showed very stable lane recognition results. By also detecting lanes that are absent from the TuSimple ground truth, the proposed method proved very robust in lane detection. Nevertheless, further work is needed on reinforcing training data in which lanes lie at the center or at the left and right edges of the image, and on more accurate network training for lanes.
For the autonomous vehicle of the future, we need data such as the number of lanes available for a lane change, in addition to recognition of the road environment and objects on the road. This study proposed an algorithm that distinguishes the left, right, and center lanes while driving, using black-box video taken from a car. In general, deep-learning approaches do not recognize lanes individually but treat all lanes as a single class. Therefore, using YOLO's object recognition capability, the left, right, and center lanes were detected as different objects, and a heuristic method was applied to refine the multi-lane detections into the correct lanes. The performance evaluation confirmed that the proposed method detects lanes more accurately than Fast R-CNN and YOLOv2 alone.
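The abstracts do not specify the heuristic rules, but the idea of assigning per-lane YOLO detections to ego / left / right roles can be sketched as below. Everything here is an assumption for illustration: the function name, the bounding-box format `(x_min, y_min, x_max, y_max)`, and the 15 %-of-width threshold that defines the ego region are all hypothetical choices, not the papers' actual parameters.

```python
def classify_lanes(lane_boxes, image_width):
    """Heuristic assignment of detected lane boxes to 'left', 'ego',
    or 'right' by the horizontal position of each box centre relative
    to the image centre (a sketch, not the papers' exact rules).

    lane_boxes : list of (x_min, y_min, x_max, y_max) detections
    Returns a dict mapping each role to the box whose centre is
    nearest the image centre within that region, or None if empty.
    """
    centre = image_width / 2.0
    ego_half_width = 0.15 * image_width  # assumed ego-region threshold
    roles = {"left": None, "ego": None, "right": None}
    for box in lane_boxes:
        x_mid = (box[0] + box[2]) / 2.0
        offset = x_mid - centre
        if abs(offset) <= ego_half_width:
            role = "left" if offset < 0 else "ego"
            role = "ego"
        elif offset < 0:
            role = "left"
        else:
            role = "right"
        current = roles[role]
        # Within each region, keep the detection closest to the centre.
        if current is None or abs(x_mid - centre) < abs(
            (current[0] + current[2]) / 2.0 - centre
        ):
            roles[role] = box
    return roles
```

A real system would also use lane geometry (slope, vanishing point) rather than horizontal position alone, but the position-based rule shows how distinct YOLO classes per lane make such a heuristic possible.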