        Search results: 25 (items 21–25 shown)

        21.
        2009.11 · KCI-indexed · Service discontinued (viewing restricted)
        This paper proposes a novel face detection method that finds tiny faces at long range, even in low-resolution images captured by a mobile robot. The proposed approach can locate extremely small face regions of 12x12 pixels. We solve the tiny-face detection problem with a system of multiple detectors, including a mean-shift color tracker, short- and long-range face detectors, and an omega-shape detector. The method employs a long-range face detector trained specifically to detect tiny faces at a distance and limits its operation to a search region that is automatically determined by the mean-shift color tracker and the omega-shape detector. By constraining the face search region as much as possible, the proposed method accurately detects tiny faces at a long distance even in low-resolution images and sharply reduces false positives. Experimental results on realistic databases show that the performance of the proposed approach is practical enough for various robot applications such as face recognition of non-cooperative users, human following, and gesture recognition for long-range interaction.
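        The core idea above, running a face detector only inside a search region supplied by a tracker, can be sketched briefly. This is not the authors' pipeline (their system combines a mean-shift color tracker, an omega-shape detector, and a specially trained long-range detector); the sketch below only illustrates the region-restriction step, with a stock OpenCV Haar cascade standing in for the long-range detector and the tracker window assumed to be given.

```python
import cv2

# Minimal sketch of region-restricted detection; the cascade below is a
# stand-in for the paper's long-range detector, and `search_window` is
# assumed to come from an upstream tracker (e.g., mean-shift).
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_in_search_region(frame_gray, search_window):
    """Detect faces only inside the (x, y, w, h) window proposed by a
    tracker, then map detections back to full-image coordinates."""
    x, y, w, h = search_window
    roi = frame_gray[y:y + h, x:x + w]
    faces = face_detector.detectMultiScale(
        roi, scaleFactor=1.05, minNeighbors=3, minSize=(12, 12))
    return [(x + fx, y + fy, fw, fh) for (fx, fy, fw, fh) in faces]
```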
        22.
        2009.02 · KCI-indexed · Service discontinued (viewing restricted)
        One of the main problems of topological localization in a real indoor environment is variation in the environment caused by dynamic objects and changes in illumination. Another problem arises from the nature of topological localization itself: a robot must be able to recognize observations taken at slightly different positions and angles within a topological location as identical for the purpose of localization. In this paper, a solution to these problems is presented in the domain of global topological localization for mobile robots, in which environments are represented by their visual appearance. Our approach is formulated on the basis of a probabilistic model called the Bayes filter. Marginalization of dynamics in the environment, marginalization of viewpoint changes within a topological location, and fusion of multiple visual features are employed to measure observations reliably, and an action-based view transition model and an action-associated topological map are used to predict the next state. Experiments comparing the proposed approach with several standard approaches in topological localization clearly demonstrate its value.
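        The Bayes-filter formulation described above admits a compact discrete sketch. The transition and observation models here are illustrative placeholders, not the paper's learned action-based view transition model or fused visual features; the sketch only shows the predict/update cycle over topological nodes.

```python
import numpy as np

def bayes_filter_step(belief, action, observation,
                      transition_model, observation_likelihood):
    """One predict/update cycle of a discrete Bayes filter over N nodes.

    belief: (N,) prior probability over topological locations
    transition_model: dict mapping action -> (N, N) matrix with
        T[i, j] = P(next = j | current = i, action)
    observation_likelihood: callable (observation, node) -> P(obs | node)
    """
    # Predict: propagate the belief through the action-conditioned
    # transition model (the abstract's action-based view transition model).
    predicted = belief @ transition_model[action]
    # Update: reweight by how well the visual observation matches each node.
    likelihood = np.array([observation_likelihood(observation, j)
                           for j in range(len(belief))])
    posterior = predicted * likelihood
    return posterior / posterior.sum()
```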
        23.
        2007.09 · KCI-indexed · Service discontinued (viewing restricted)
        We propose an optimal fusion method for the localization of multiple robots that exploits the correlation between the GPS errors of robots operating in a common workspace. Each mobile robot in the group collects position data from its odometer and GPS receiver and shares them with the other robots. Each robot then uses the other robots' position data to obtain a more precise estimate of its own position. Because GPS errors in a common workspace are closely correlated, sharing them improves the localization accuracy of all robots in the group. In this paper, we evaluate the proposed fusion of odometry and GPS in simulation, using virtual robots and position data.
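        The correlation argument above can be illustrated with a small sketch. The bias estimate and the blending weight below are assumptions, not the paper's estimator; the sketch only shows how the other robots' GPS-minus-odometry residuals can approximate the shared GPS error and be subtracted from a robot's own fix.

```python
import numpy as np

def fuse_with_shared_gps_bias(own_gps, own_odom, others_gps, others_odom,
                              w_gps=0.7):
    """Correct one robot's GPS fix using the common-mode error observed
    by the other robots, then blend it with the robot's odometry estimate.

    own_gps, own_odom: (2,) position estimates for this robot
    others_gps, others_odom: (M, 2) estimates shared by the other robots
    w_gps: placeholder blending weight, not a tuned value
    """
    # Estimate the shared GPS bias as the mean GPS-minus-odometry residual
    # of the other robots in the same workspace.
    shared_bias = np.mean(np.asarray(others_gps) - np.asarray(others_odom),
                          axis=0)
    corrected_gps = np.asarray(own_gps) - shared_bias
    # Simple weighted blend of the corrected fix and dead reckoning.
    return w_gps * corrected_gps + (1.0 - w_gps) * np.asarray(own_odom)
```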
        24.
        2006.12 · KCI-indexed · Service discontinued (viewing restricted)
        We propose a planning algorithm that automatically generates a robust behavior plan (RBP) with which mobile robots can achieve their task goal from any initial state under dynamically changing environments. For this, a task description space (TDS) is formulated, employing a redundant task configuration space and a simulation model of the physical space. Successful task episodes are collected using the A* algorithm, interesting TDS state vectors are extracted based on occurrence frequency, and clusters of TDS state vectors are found using state transition tuples and their features. Through these operations, the characteristics of tasks performed successfully in the simulator are abstracted and generalized. A robust behavior plan is then constructed as an ordered tree whose nodes are the attentive TDS state vectors of each cluster. The validity of the method is tested through real-robot experiments on a box-pushing-into-a-goal task.
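        The frequency-based extraction and transition-tuple clustering steps described above can be sketched as follows. The state representation, threshold, and grouping rule are hypothetical simplifications, not the paper's TDS machinery.

```python
from collections import Counter, defaultdict

def extract_attentive_states(episodes, min_freq=3):
    """Keep TDS state vectors (here: hashable tuples) that occur at least
    `min_freq` times across successful episodes; a stand-in for the
    occurrence-frequency extraction step."""
    counts = Counter(state for episode in episodes for state in episode)
    return {state for state, c in counts.items() if c >= min_freq}

def cluster_by_transition(episodes, attentive):
    """Group attentive states by the (previous, next) transition tuple in
    which they occur, as a rough analogue of clustering with state
    transition tuples."""
    clusters = defaultdict(set)
    for episode in episodes:
        for prev, cur, nxt in zip(episode, episode[1:], episode[2:]):
            if cur in attentive:
                clusters[(prev, nxt)].add(cur)
    return clusters
```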
        25.
        2006.09 · KCI-indexed · Service discontinued (viewing restricted)
        In this paper, we introduce visual contexts, in terms of both their types and how they are utilized, for robust object recognition by intelligent mobile robots. Visual object recognition is one of the core technologies for intelligent robots, and robust techniques are strongly required because there are many sources of visual variation, such as geometric and photometric changes and noise. To meet these requirements, we define spatial context, hierarchical context, and temporal context, which can be selected according to the object recognition domain. We also propose a unified framework that can utilize all of these contexts, and we validate it in a real working environment. Finally, we discuss future research directions for object recognition technologies for intelligent robots.
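        As a loose illustration of how the three context types named above might feed into a recognition decision, the sketch below blends an appearance score with spatial, hierarchical, and temporal context priors; the weights and the linear combination are assumptions, not the paper's unified framework.

```python
def contextual_score(appearance_score, spatial_prior, hierarchical_prior,
                     temporal_prior, weights=(0.5, 0.2, 0.15, 0.15)):
    """Blend a detector's appearance score with context priors (all values
    assumed to lie in [0, 1]); purely illustrative weighting."""
    w_a, w_s, w_h, w_t = weights
    return (w_a * appearance_score + w_s * spatial_prior
            + w_h * hierarchical_prior + w_t * temporal_prior)
```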