The Journal of Korea Robotics Society (로봇학회논문지), KCI-indexed

Issue

Volume 11, Number 1 (Serial No. 39), March 2016 — 5 articles

1.
2016.03
In this paper, we propose a new vision-based guidance line extraction algorithm for an autonomous agricultural robot operating in a paddy field. Finding the central point or central area of a rice row is a key step in guidance line extraction. To improve the accuracy of the guidance line, we exploit the fact that rice leaves converge toward the central area of the rice row and use this central crop region as the primary cue. The guidance line is extracted from the intersection points of extended virtual lines using a modified robust regression. The extended virtual lines are obtained by extending the straight line segments fitted to the edges of the rice plants in the image with the Hough transform. We also verified the accuracy of the proposed algorithm by experiments in a real wet paddy field.
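A minimal sketch of this kind of pipeline is given below, assuming OpenCV edge detection and probabilistic Hough segments, a naive pairwise intersection of the extended segments, and a simple iteratively reweighted least-squares fit standing in for the paper's modified robust regression. All thresholds, the near-vertical line parameterization, and the function names are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite lines through (p1, p2) and (p3, p4), or None."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        return None
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
    return px, py

def guidance_line(image_bgr):
    # 1. Edges of the rice plants, then straight segments via the Hough transform.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                           minLineLength=40, maxLineGap=10)
    if segs is None:
        return None
    segs = [s[0] for s in segs]
    # 2. Intersect every pair of extended segments; the points should cluster
    #    near the row centre where the rice leaves converge.
    points = []
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            x1, y1, x2, y2 = segs[i]
            x3, y3, x4, y4 = segs[j]
            p = line_intersection((x1, y1), (x2, y2), (x3, y3), (x4, y4))
            if p is not None:
                points.append(p)
    if len(points) < 2:
        return None
    pts = np.array(points)
    # 3. Robust fit of x = a*y + b (near-vertical guidance line) with a simple
    #    iteratively reweighted least squares, standing in for the paper's
    #    modified robust regression.
    A = np.c_[pts[:, 1], np.ones(len(pts))]
    w = np.ones(len(pts))
    for _ in range(10):
        sw = np.sqrt(w)[:, None]
        sol, *_ = np.linalg.lstsq(A * sw, pts[:, 0] * sw.ravel(), rcond=None)
        resid = np.abs(pts[:, 0] - A @ sol)
        w = 1.0 / np.maximum(resid, 1.0)
    return sol  # (a, b) of the guidance line x = a*y + b in pixel coordinates
```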
2.
2016.03
A rectangle-based relative localization method is proposed for a mobile robot based on a novel geometric formulation. In the artificial environments where mobile robots navigate, rectangular shapes are ubiquitous. When a scene rectangle is captured by a camera attached to a mobile robot, localization can be performed and described in the relative coordinates of the scene rectangle. In particular, our method works with a single image of a scene rectangle whose aspect ratio is not known. Moreover, camera calibration is unnecessary under the assumption of a pinhole camera model. The proposed method is largely based on the theory of coupled line cameras (CLC), which provides a basis for efficient computation with analytic solutions and intuitive geometric interpretation. We introduce the fundamentals of CLC and describe the proposed method with some experimental results in a simulation environment.
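The CLC derivation itself is not reproduced here. As a simpler stand-in that only illustrates the idea of a pose expressed in the scene rectangle's coordinate frame, the sketch below recovers the relative pose with a standard PnP solve; unlike CLC, it assumes a calibrated camera and a rectangle of known size, and the corner ordering and function name are assumptions.

```python
import cv2
import numpy as np

def pose_from_rectangle(img_corners, width, height, K):
    """img_corners: 4x2 pixel corners (TL, TR, BR, BL); width/height in metres."""
    obj = np.array([[0.0, 0.0, 0.0],
                    [width, 0.0, 0.0],
                    [width, height, 0.0],
                    [0.0, height, 0.0]])
    ok, rvec, tvec = cv2.solvePnP(obj, img_corners.astype(np.float64), K, None)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    # Camera position expressed in the scene rectangle's coordinate frame,
    # i.e. localization relative to the rectangle.
    cam_pos = (-R.T @ tvec).ravel()
    return R, tvec, cam_pos
```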
3.
2016.03
In this paper, we propose a new method for improving the accuracy of indoor robot localization. The proposed method uses visible light for indoor localization, with a reference receiver that estimates the optical power of each individual LED in order to reduce localization errors caused by aging of LED components, differences in optical power between individual LEDs, and so on. We evaluate the performance of the proposed method by comparing it with that of the traditional model. In several simulations, probability density functions and cumulative distribution functions of the localization errors are also obtained. The results indicate that the proposed method reduces the localization error from 7.3 cm to 1.6 cm at a precision of 95%.
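A hypothetical sketch of the underlying idea, not the paper's model, follows: a Lambertian line-of-sight channel, a reference receiver at a known position used to back out each LED's actual transmit power, and a brute-force position fix from the corrected power measurements. The Lambertian order, photodiode area, room size, and grid resolution are invented placeholders.

```python
import numpy as np

M_LAMBERT = 1.0   # Lambertian order (assumed)
A_PD = 1e-4       # photodiode area in m^2 (assumed)

def channel_gain(led_xy, h, rx_xyz):
    """DC gain of the LOS channel for a ceiling LED facing straight down."""
    led = np.array([led_xy[0], led_xy[1], h])
    d = np.linalg.norm(led - np.asarray(rx_xyz))
    cos_phi = (h - rx_xyz[2]) / d   # emission and incidence angles coincide here
    return (M_LAMBERT + 1) * A_PD * cos_phi ** (M_LAMBERT + 1) / (2 * np.pi * d ** 2)

def estimate_tx_power(led_xy, h, ref_xyz, ref_rx_power):
    """Reference receiver at a known position: back out the LED's actual power,
    compensating for aging and per-LED power differences."""
    return ref_rx_power / channel_gain(led_xy, h, ref_xyz)

def locate(led_positions, h, rx_powers, tx_powers, z_rx=0.0):
    """Brute-force position fix minimizing the power prediction error
    (a simple stand-in for the paper's estimator) over an assumed 5 m x 5 m room."""
    best, best_err = None, np.inf
    for x in np.linspace(0.0, 5.0, 101):
        for y in np.linspace(0.0, 5.0, 101):
            p = np.array([x, y, z_rx])
            pred = np.array([tx * channel_gain(led, h, p)
                             for led, tx in zip(led_positions, tx_powers)])
            err = float(np.sum((pred - np.asarray(rx_powers)) ** 2))
            if err < best_err:
                best, best_err = (x, y), err
    return best
```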
4.
2016.03
A variable release torque-based compliance spring-clutch (VCSC) is presented. The VCSC is a safe joint that reduces the impact of collisions between humans and robots. It is composed of four functional plates, balls, and springs that realize compliant motion, a release mechanism, and gravity compensation during operation. It can also estimate the torque applied to the joint by using a distance sensor and the parameters of the cam profile. The measured variable torque of the prototype is 4.3~7.6 Nm and the release torque is 4.3 Nm. In future studies, a calibration for torque estimation will be conducted.
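The torque estimation described in outline above could look roughly like the sketch below, which maps the distance-sensor reading through a cam-profile lookup and a spring model. The stiffness, preload, and cam-profile values are invented placeholders, not the prototype's parameters.

```python
import numpy as np

SPRING_K = 25000.0   # N/m, invented spring stiffness
PRELOAD = 0.012      # m, invented spring preload

# Invented cam profile: lookup from ball displacement (m) to effective
# moment arm (m); a real device would use its measured cam geometry.
_DISP = np.array([0.000, 0.002, 0.004, 0.006, 0.008])
_ARM = np.array([0.014, 0.015, 0.017, 0.019, 0.021])

def estimated_torque(displacement):
    """Joint torque (Nm) inferred from the distance-sensor reading."""
    force = SPRING_K * (PRELOAD + displacement)      # compressed-spring force
    arm = np.interp(displacement, _DISP, _ARM)       # cam-profile lookup
    return force * arm
```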
5.
2016.03
In this paper, we propose a method for estimating the pose of a camera using a rectangle feature, intended for use in visual SLAM. A rectangle feature, warped into a quadrilateral in the image by the perspective transformation, is reconstructed with the Coupled Line Camera (CLC) algorithm. To fully reconstruct the rectangle in real-world coordinates, the distance between the feature and the camera is needed; this distance can be measured with a stereo camera. Using the properties of the line camera, the physical size of the rectangle feature can be derived from that distance. The correspondence between the quadrilateral in the image and the rectangle in real-world coordinates then yields the relative pose between the camera and the feature by computing the homography. To evaluate the performance, we compared the results of the proposed method with the reference pose in the Gazebo robot simulator.
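A hedged sketch of the final step, assuming the rectangle's physical size has already been recovered: the planar homography between the metric rectangle and the image quadrilateral is estimated with OpenCV and decomposed with the intrinsics K into a rotation and translation. The corner ordering and this particular decomposition route are assumptions, not necessarily the paper's exact formulation.

```python
import cv2
import numpy as np

def pose_from_planar_homography(quad_px, rect_w, rect_h, K):
    """quad_px: 4x2 image corners (TL, TR, BR, BL); rect_w/rect_h in metres."""
    plane = np.array([[0.0, 0.0], [rect_w, 0.0],
                      [rect_w, rect_h], [0.0, rect_h]])
    H, _ = cv2.findHomography(plane, quad_px.astype(np.float64))
    # H ~ K [r1 r2 t]: remove the intrinsics, then fix the scale so r1 is unit length.
    B = np.linalg.inv(K) @ H
    scale = 1.0 / np.linalg.norm(B[:, 0])
    r1, r2, t = scale * B[:, 0], scale * B[:, 1], scale * B[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    # Re-orthonormalize: numerical noise leaves R only approximately a rotation.
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t
```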