Search Results

        9 results

        1.
        2024.10 | Free for subscribing institutions and individual members
        This study focuses on modifying the D* algorithm to optimize the path of an autonomous robot driving on pedestrian walkways. The conventional D* algorithm is designed so that the autonomous robot detects and avoids obstacles, but in real pedestrian environments, people walking on the path were observed to recognize the robot and avoid it on their own. By analyzing pedestrian trajectory data collected with a LiDAR sensor, we identified the average distance and avoidance angle at which people begin to avoid the autonomous robot. Based on this, we propose a modified D* algorithm in which the robot keeps its existing optimal path when pedestrians show an intention to avoid it, and adopts an avoidance path only when they do not. Experimental results show that an autonomous robot using the modified D* algorithm outperforms the conventional approach in terms of operating efficiency and travel time. This work is expected to contribute to optimizing the use of pedestrian walkways by autonomous robots by enabling efficient driving even under limited battery capacity.
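A minimal sketch of the decision rule described in this abstract. The thresholds AVOID_DISTANCE and AVOID_ANGLE are placeholders for the values the paper derives from LiDAR trajectory analysis, and the D* replanning itself is abstracted behind a caller-supplied plan_avoidance_path() stub; none of these names come from the paper.

```python
import math

# Hypothetical thresholds standing in for the averages derived from the
# LiDAR pedestrian-trajectory analysis described in the abstract.
AVOID_DISTANCE = 3.0   # metres
AVOID_ANGLE = 15.0     # degrees of deviation away from the robot

def pedestrian_shows_avoidance(track, robot_pos):
    """Return True if a tracked pedestrian appears to be yielding to the robot.

    `track` is a list of (x, y) positions, newest last; `robot_pos` is (x, y).
    """
    if len(track) < 2:
        return False
    (x0, y0), (x1, y1) = track[-2], track[-1]
    dist = math.hypot(x1 - robot_pos[0], y1 - robot_pos[1])
    if dist > AVOID_DISTANCE:
        return False                       # too far away to be reacting yet
    # Angle between the pedestrian's current heading and the bearing to the robot.
    heading = math.atan2(y1 - y0, x1 - x0)
    bearing = math.atan2(robot_pos[1] - y1, robot_pos[0] - x1)
    deviation = abs(math.degrees((heading - bearing + math.pi) % (2 * math.pi) - math.pi))
    return deviation > AVOID_ANGLE         # heading is steering away from the robot

def next_path(current_path, tracks, robot_pos, plan_avoidance_path):
    """Keep the original optimal path if pedestrians are yielding; otherwise replan."""
    if tracks and all(pedestrian_shows_avoidance(t, robot_pos) for t in tracks):
        return current_path                # maintain the original D* optimal path
    return plan_avoidance_path()           # fall back to conventional avoidance
```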
        2.
        2024.09 | KCI listed | Free for subscribing institutions, paid access for individual members
        The diversity of smart EV (electric vehicle)-related industries is increasing with the growth of battery-based, eco-friendly electric vehicle component and material technology, and labor-intensive industries such as logistics, manufacturing, food, agriculture, and services have long invested in and studied automation. Accordingly, various types of robots, such as autonomous mobile robots and collaborative robots, are being used in each process to improve industrial engineering tasks such as optimization, productivity management, and work management. A technology that must accompany this unmanned automation is unmanned automatic charging: if autonomous mobile robots have to be charged manually, their utility cannot be maximized. In this paper, we study unmanned charging of autonomous mobile robots through charging-terminal docking and undocking, using an unmanned charging system composed of hardware such as a monocular camera, a multi-joint robot arm, a gripper, and a server. In an experiment evaluating the performance of the system, the average charging-terminal recognition rate was 98% and the average recognition time was 0.0099 seconds. A further experiment evaluating the docking and undocking success rate of the charging terminal showed an average success rate of 99%.
        4,000 KRW
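A schematic sketch of the docking/undocking flow described in the abstract above. The camera, arm, and gripper interfaces and the detect_terminal() detector are hypothetical stubs; the paper's actual server and robot APIs are not specified here.

```python
import time

def detect_terminal(frame):
    """Hypothetical detector: returns the charging-terminal pose in arm
    coordinates, or None if the terminal is not recognised in the frame."""
    return None

def dock_charging_terminal(camera, arm, gripper, timeout_s=10.0):
    """Locate the charging terminal with the monocular camera and dock it."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        frame = camera.read()                # hypothetical camera interface
        pose = detect_terminal(frame)
        if pose is not None:
            gripper.open()
            arm.move_to(pose)                # approach the recognised terminal pose
            gripper.close()                  # grasp and insert the terminal
            return True
    return False                             # recognition failed within the timeout

def undock_charging_terminal(arm, gripper, home_pose):
    """Release the terminal and retract the arm once charging is complete."""
    gripper.open()
    arm.move_to(home_pose)
    gripper.close()
```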
        3.
        2019.03 | KCI listed | Service discontinued (access restricted)
        For effective human-robot interaction, a robot not only needs to understand the current situational context well, it also needs to convey that understanding to the human participant efficiently. The most convenient way for the robot to deliver its understanding is to express it using voice and natural language. Recently, artificial intelligence for video understanding and natural language processing has developed very rapidly, especially based on deep learning. This paper therefore proposes a robot-vision-to-audio-description method using deep learning. The applied model is a pipeline of two deep learning models: one that generates a natural-language sentence from robot vision and one that generates voice from the generated sentence. We also conduct a real-robot experiment to show the effectiveness of our method in human-robot interaction.
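A minimal sketch of the two-stage pipeline described above: a captioning model turns a camera frame into a sentence, which is then spoken aloud. generate_caption() is a hypothetical stand-in for the paper's captioning network, and pyttsx3 is used only as an illustrative offline text-to-speech backend, not the paper's actual voice model.

```python
import pyttsx3

def generate_caption(frame):
    """Hypothetical deep captioning model: robot camera frame -> sentence.
    Returns a placeholder so the sketch runs end to end."""
    return "a person is standing in front of the robot"

def describe_scene(frame):
    sentence = generate_caption(frame)        # stage 1: vision -> natural language
    engine = pyttsx3.init()
    engine.say(sentence)                      # stage 2: natural language -> voice
    engine.runAndWait()
    return sentence
```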
        4.
        2016.03 | KCI listed | Service discontinued (access restricted)
        In this paper, we propose a new guidance-line extraction algorithm for an autonomous agricultural robot based on a vision camera in a paddy field. Finding the central point or area of a rice row is an important step in guidance-line extraction. To improve the accuracy of the guidance line, we exploit the central-region data of the crop, using the fact that the directions of the rice leaves converge toward the central area of the rice row. The guidance line is extracted from the intersection points of extended virtual lines using a modified robust regression. The extended virtual lines are obtained by extending each straight segment detected on the edges of the rice plants in the image using the Hough transform. We also verified the accuracy of the proposed algorithm through experiments in a real wet paddy.
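A rough sketch of the pipeline described above using OpenCV's probabilistic Hough transform. The paper's modified robust regression is replaced here by a simple median of the intersection points, and all thresholds are illustrative only.

```python
import cv2
import numpy as np
from itertools import combinations

def line_intersection(s1, s2):
    """Intersection of the infinite extensions of two segments (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = map(float, s1)
    x3, y3, x4, y4 = map(float, s2)
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-6:
        return None                                   # nearly parallel
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
    return px, py

def guidance_point(image):
    """Estimate a convergence point of the rice row from leaf-edge line intersections."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                           minLineLength=30, maxLineGap=10)
    if segs is None:
        return None
    pts = [p for a, b in combinations(segs[:, 0, :], 2)
           if (p := line_intersection(a, b)) is not None]
    if not pts:
        return None
    # Robust central estimate of the convergence region (median in place of
    # the paper's modified robust regression).
    return np.median(np.asarray(pts), axis=0)
```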
        5.
        2011.08 | KCI listed | Service discontinued (access restricted)
        This paper studies how to combine devices such as monocular/stereo cameras, pan/tilt motors, fisheye lenses, and convex mirrors to solve vision-based robotic problems. To overcome the well-known trade-offs between optical properties, we present two mixed versions of the new systems. The first is a robot photographer with a conventional pan/tilt perspective camera and a fisheye lens. The second is an omnidirectional detector for a complete 360-degree field-of-view surveillance system. We build an original device that combines a stereo-catadioptric camera with a pan/tilt stereo-perspective camera and apply it in a real environment. Compared to previous systems, the two proposed systems maintain both high speed and high resolution through collaborative moving cameras and provide an enormous search space through the hybrid configuration. Experimental results show the effectiveness of the mixed collaborative and hybrid systems.
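An illustrative sketch of the collaborative behaviour implied above: the wide-field omnidirectional detector hands a target bearing to the narrow-field pan/tilt perspective camera, which points at it within its mechanical limits. The PanTiltUnit interface and the angle limits are assumptions, not the paper's actual hardware API.

```python
def point_perspective_camera(ptu, target_azimuth_deg, target_elevation_deg,
                             pan_limits=(-170.0, 170.0), tilt_limits=(-30.0, 90.0)):
    """Clamp a detected bearing into the pan/tilt range and move the camera."""
    pan = max(pan_limits[0], min(pan_limits[1], target_azimuth_deg))
    tilt = max(tilt_limits[0], min(tilt_limits[1], target_elevation_deg))
    ptu.move_to(pan, tilt)      # hypothetical pan/tilt unit command
    return pan, tilt
```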
        6.
        2011.08 | KCI listed | Service discontinued (access restricted)
        This paper presents a localization system that uses ceiling images in a large indoor environment. For a low-cost, low-complexity system, we propose a single-camera approach that uses ceiling images acquired from a camera installed to point upward. For reliable operation, we propose a method using hybrid features, which include natural landmarks from the natural scene and artificial landmarks observable in the infrared domain. Compared with previous works that use only infrared-based features, our method reduces the required number of artificial features because it exploits both natural and artificial ones. In addition, compared with previous works that use only the natural scene, our method has an advantage in convergence speed and robustness, since an observation of an artificial feature provides a crucial clue for robot pose estimation. In an experiment with challenging situations in a real environment, our method performed impressively in terms of robustness and accuracy. To our knowledge, this is the first ceiling-vision-based localization method using features from both the visible and infrared domains. The system can easily be used with a variety of service-robot applications in large indoor environments.
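A conceptual sketch of the hybrid-feature idea above, written in a particle-filter style. match_natural_features(), project_to_image(), the observation structure, and the 10-pixel noise level are all hypothetical; the point is only to illustrate why a single artificial landmark observed in the infrared domain constrains the pose so strongly.

```python
import numpy as np

def reweight_particles(particles, weights, ceiling_obs, ir_landmarks,
                       match_natural_features, project_to_image):
    """particles: (N, 3) array of (x, y, theta) hypotheses; weights: (N,) array."""
    for i, pose in enumerate(particles):
        # Weak evidence from natural ceiling features (hypothetical matcher in [0, 1]).
        w = match_natural_features(pose, ceiling_obs)
        if ceiling_obs.ir_landmark_id is not None:
            # An artificial landmark seen in the IR domain has a known map position,
            # so it sharply penalises pose hypotheses inconsistent with it.
            lx, ly = ir_landmarks[ceiling_obs.ir_landmark_id]
            expected = project_to_image(pose, (lx, ly))        # hypothetical projection
            err = np.linalg.norm(np.asarray(expected) - ceiling_obs.ir_landmark_pixel)
            w *= np.exp(-0.5 * (err / 10.0) ** 2)              # assumed 10 px std-dev
        weights[i] *= w
    weights /= weights.sum()
    return weights
```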
        7.
        2008.08 | KCI listed | Service discontinued (access restricted)
        Collision avoidance is a fundamental and important task for an autonomous mobile robot navigating safely in real environments with high uncertainty. Obstacles are classified into static and dynamic obstacles. Dynamic obstacles are difficult to avoid because their positions are likely to change at any time. This paper proposes a scheme for vision-based avoidance of dynamic obstacles. The approach extracts object candidates that can be considered moving objects with a labeling algorithm based on depth information, and then detects moving objects among the candidates using motion vectors. When motion vectors cannot be extracted, it can still detect the moving objects stably through their color information. The robot avoids a dynamic obstacle using the dynamic window approach (DWA), a well-known technique for reactive collision avoidance, with the object path estimated from the information of the detected obstacles. This paper also proposes an algorithm that autonomously registers the obstacle color. With the proposed scheme, a robot can navigate more safely and efficiently.
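A compact sketch of the dynamic window approach mentioned above. The cost weights, velocity limits, and the obstacles' position_at(t) prediction interface are illustrative assumptions; the paper supplies the obstacle paths from its vision-based detector.

```python
import numpy as np

def dwa_select(v, w, obstacles, goal, dt=0.1, horizon=2.0,
               v_max=0.6, w_max=1.5, a_v=0.5, a_w=2.0,
               k_heading=1.0, k_clearance=1.0, k_velocity=0.3):
    """Pick (v, w) inside the dynamic window that best trades off heading to the
    goal, clearance from (possibly moving) obstacles, and forward speed."""
    best, best_cost = (0.0, 0.0), float("inf")
    for v_c in np.linspace(max(0.0, v - a_v * dt), min(v_max, v + a_v * dt), 7):
        for w_c in np.linspace(max(-w_max, w - a_w * dt), min(w_max, w + a_w * dt), 9):
            x = y = th = 0.0
            clearance = float("inf")
            # Forward-simulate the candidate command over the horizon.
            for step in range(int(horizon / dt)):
                th += w_c * dt
                x += v_c * np.cos(th) * dt
                y += v_c * np.sin(th) * dt
                t = (step + 1) * dt
                for obs in obstacles:
                    ox, oy = obs.position_at(t)    # predicted obstacle path (hypothetical)
                    clearance = min(clearance, float(np.hypot(ox - x, oy - y)))
            if clearance < 0.3:                    # assumed robot radius plus margin
                continue                           # candidate leads toward a collision
            heading_err = abs(np.arctan2(goal[1] - y, goal[0] - x) - th)
            cost = (k_heading * heading_err
                    + k_clearance / clearance
                    + k_velocity * (v_max - v_c))
            if cost < best_cost:
                best, best_cost = (float(v_c), float(w_c)), cost
    return best
```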
        8.
        2008.08 | KCI listed | Service discontinued (access restricted)
        This paper describes a new method for indoor environment mapping and localization with a stereo camera. For environmental modeling, we directly use the depth and color information of image pixels as visual features. Furthermore, only the depth and color information on the horizontal centerline of the image, through which the optical axis passes, is used. The benefit of this approach is that a matching measure between the model and the sensing data can easily be built on the horizontal centerline alone, since the vertical working volume between model and sensing data changes with robot motion. This yields a compact and efficient map representation of the indoor environment. Based on such nodes and the sensing data, we also suggest a method for estimating the mobile robot's position with a random-sampling stochastic algorithm. Basic real-world experiments show that the proposed method can be an effective visual navigation algorithm.
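A small sketch of the centerline idea above: only the depth and color values on the image row crossed by the optical axis are kept as the environment feature, and a simple per-pixel likelihood compares a stored node against the current sensing. The Gaussian noise parameters and the matching score are illustrative, not the paper's actual measure.

```python
import numpy as np

def centerline_features(color_image, depth_image):
    """Return (colors, depths) sampled along the horizontal centerline."""
    row = color_image.shape[0] // 2            # the row the optical axis passes through
    return color_image[row, :, :].copy(), depth_image[row, :].copy()

def match_score(model_colors, model_depths, obs_colors, obs_depths,
                sigma_d=0.2, sigma_c=30.0):
    """Illustrative per-pixel likelihood between a stored node and current sensing."""
    d_err = np.abs(model_depths.astype(float) - obs_depths.astype(float))
    c_err = np.linalg.norm(model_colors.astype(float) - obs_colors.astype(float), axis=1)
    return float(np.mean(np.exp(-0.5 * (d_err / sigma_d) ** 2)
                         * np.exp(-0.5 * (c_err / sigma_c) ** 2)))
```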
        9.
        2007.06 | KCI listed | Service discontinued (access restricted)
        This paper deals with a tangible interface system that introduces a robot as a remote avatar. It focuses on a new method that makes a robot imitate human arm motions captured in a remote space. The method is functionally divided into two parts: capturing the human motion and adapting it to the robot. In the capturing part, we propose a modified metaball potential function for real-time performance and high accuracy. In the adapting part, we suggest a geometric scaling method that resolves the structural difference between a human and a robot. With this method, we implemented a tangible interface and demonstrated its speed and accuracy.
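A minimal sketch of one plausible reading of the geometric scaling step above: captured human joint positions are rescaled so that each arm segment takes the robot's link length while keeping the captured segment directions. The joint ordering and link lengths are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def scale_arm_to_robot(human_joints, robot_link_lengths):
    """human_joints: ordered 3-D points, e.g. [shoulder, elbow, wrist, hand];
    robot_link_lengths: lengths of the corresponding robot links."""
    scaled = [np.asarray(human_joints[0], dtype=float)]     # anchor at the shoulder
    for p0, p1, link in zip(human_joints[:-1], human_joints[1:], robot_link_lengths):
        seg = np.asarray(p1, dtype=float) - np.asarray(p0, dtype=float)
        norm = np.linalg.norm(seg)
        direction = seg / norm if norm > 1e-9 else np.zeros(3)
        scaled.append(scaled[-1] + direction * link)         # keep direction, swap length
    return scaled
```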