With the increasing demand for services for the disabled and the elderly, assistive technologies have developed rapidly. Natural human signals such as voice and gesture have been applied to systems that assist the disabled and the elderly. As an example of such a human-robot interface, the Soft Remote Control System has been developed by the HWRS-ERC at KAIST [1]. This system is a vision-based hand gesture recognition system for controlling home appliances such as televisions, lamps, and curtains. One of the most important technologies in the system is the hand gesture recognition algorithm. The problems that most frequently lower the recognition rate of hand gestures are inter-person variation and intra-person variation. Intra-person variation can be handled by introducing fuzzy concepts. In this paper, we propose a multivariate fuzzy decision tree (MFDT) learning and classification algorithm for hand motion recognition. To recognize the hand gestures of a new user, the most suitable recognition model among several well-trained models is selected using a model selection algorithm and is incrementally adapted to the user's hand gestures. To assess the general performance of the MFDT as a classifier, we report classification rates on benchmark data from the UCI repository. To assess hand gesture recognition performance, we test on hand gesture data collected from 10 people over 15 days. The experimental results show that the classification and user adaptation performance of the proposed algorithm is better than that of a general fuzzy decision tree.
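As a minimal sketch of the idea behind fuzzy decision tree classification: each internal node splits on a linear combination of features through a sigmoid membership rather than a hard threshold, and leaf class scores are blended by the product of memberships along each root-to-leaf path. The tree structure, parameters, and membership function below are illustrative assumptions, not the MFDT formulation of the paper.

```python
# Toy fuzzy decision-tree classification: soft (sigmoid) multivariate splits,
# class scores accumulated over all root-to-leaf paths weighted by membership.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Node:
    def __init__(self, w=None, b=None, left=None, right=None, class_scores=None):
        self.w, self.b = w, b              # multivariate split on w.x + b
        self.left, self.right = left, right
        self.class_scores = class_scores   # set only at leaves

def fuzzy_classify(node, x, membership=1.0):
    """Return accumulated class scores for sample x."""
    if node.class_scores is not None:                   # leaf node
        return membership * node.class_scores
    mu = sigmoid(np.dot(node.w, x) + node.b)            # degree of taking the right branch
    return (fuzzy_classify(node.left, x, membership * (1.0 - mu)) +
            fuzzy_classify(node.right, x, membership * mu))

# Toy two-class tree over 2-D features (hypothetical parameters).
leaf_a = Node(class_scores=np.array([0.9, 0.1]))
leaf_b = Node(class_scores=np.array([0.2, 0.8]))
root = Node(w=np.array([1.0, -0.5]), b=0.1, left=leaf_a, right=leaf_b)
print(fuzzy_classify(root, np.array([0.3, 0.7])))       # soft class scores
```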
This paper describes a method for recognizing moving objects from a mobile robot equipped with an omnidirectional camera. Moving objects are detected using the characteristic pattern of optical flow in the omnidirectional image. The paper consists of two parts. In the first part, the pattern of optical flow in the omnidirectional image is investigated. The optical flow in an omnidirectional image is influenced by the geometric characteristics of the omnidirectional camera, and its pattern is investigated both theoretically and experimentally. In the second part, the detection of moving objects from the estimated optical flow is presented. Moving objects are extracted through a relative evaluation of optical flow vectors, which is derived from the flow pattern. In particular, Focus-of-Expansion (FOE) and Focus-of-Contraction (FOC) vectors are defined from the estimated optical flow and used as reference vectors for the relative evaluation. The proposed algorithm is evaluated for four motions of the mobile robot: straight forward motion, left turn, right turn, and rotation. Experimental results on real video sequences show the effectiveness of the proposed method.
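As a rough illustration of flagging independently moving regions from optical flow, the sketch below computes dense flow and compares each vector with the radial direction expected from pure ego-motion about an assumed focus of expansion (a supplied point, e.g. the image center). The Farnebäck flow and the thresholds are assumptions for illustration; the paper's relative evaluation against estimated FOE/FOC vectors in the omnidirectional geometry is not reproduced here.

```python
# Flag pixels whose flow is strong but inconsistent with the radial ego-motion
# pattern about an assumed FOE; consecutive grayscale frames are expected.
import cv2
import numpy as np

def moving_object_mask(prev_gray, curr_gray, foe, angle_thresh_deg=30.0, mag_thresh=1.0):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    radial = np.dstack((xs - foe[0], ys - foe[1])).astype(np.float32)
    # Angle between the observed flow and the expected radial direction.
    dot = (flow * radial).sum(axis=2)
    norm = np.linalg.norm(flow, axis=2) * np.linalg.norm(radial, axis=2) + 1e-6
    angle = np.degrees(np.arccos(np.clip(dot / norm, -1.0, 1.0)))
    mag = np.linalg.norm(flow, axis=2)
    # A pixel is "moving" if its flow is strong but deviates from ego-motion flow.
    return (mag > mag_thresh) & (angle > angle_thresh_deg)
```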
For autonomous mobile robot navigation, accurate localization is the fundamental issue to be addressed. In augmented reality, the user's position is likewise required for location-based services. This paper presents indoor localization using infrared-reflective artificial landmarks. Passive landmarks are used to minimize the disturbance to the user and to ease installation. The landmarks are made of a coated film that reflects infrared light efficiently; the infrared light is invisible to people, but the camera can capture the reflected light. Once an artificial landmark is identified, the camera's relative position and orientation with respect to the landmark are estimated. To reduce the number of artificial landmarks required for a given environment, a pan/tilt mechanism is developed together with a distortion correction algorithm.
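Once a landmark has been identified in the image, estimating the camera's relative pose is essentially a perspective-n-point problem. The sketch below, which assumes a square planar landmark of known size and known camera intrinsics (both hypothetical values), shows one standard way to do this with OpenCV; it is not the paper's implementation.

```python
# Recover the camera pose relative to a square planar landmark from its four
# detected corners, using a PnP solve with known landmark size and intrinsics.
import cv2
import numpy as np

LANDMARK_SIZE = 0.20  # assumed side length in metres (hypothetical)
object_pts = np.array([[0, 0, 0],
                       [LANDMARK_SIZE, 0, 0],
                       [LANDMARK_SIZE, LANDMARK_SIZE, 0],
                       [0, LANDMARK_SIZE, 0]], dtype=np.float32)

def camera_pose_from_landmark(image_pts, K, dist_coeffs):
    """image_pts: 4x2 detected corner pixels; K: 3x3 camera intrinsic matrix."""
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts.astype(np.float32),
                                  K, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)
    cam_pos = (-R.T @ tvec).ravel()   # camera position in the landmark frame
    return ok, cam_pos, R
```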
This paper presents decentralized coordination for small-scale mobile robot teams performing a task through cooperation. In many cooperative applications, robot teams are required to generate and maintain various geometric patterns adapted to the environment and/or the task. In particular, all robots must continue to strive toward the team's mission even if some members fail to perform their roles. Toward this end, given the number of robots in a team, an effective coordination scheme is investigated for decentralized formation control. Specifically, all members first reach agreement on a common coordinate system and obtain an identifier (ID) for role assignment in a self-organizing way. Then, using the IDs of the individual robots within the common coordinate system, a decentralized neighbor-referenced formation control is realized to generate, keep, and switch between different geometric shapes. This approach is verified using an in-house simulator and physical mobile robots. We detail and evaluate the formation control approach, whose key features include self-organization, robustness, and flexibility.
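The sketch below gives a toy version of neighbor-referenced formation keeping: each robot holds a desired offset from the neighbor it references, expressed in the previously agreed common coordinate frame, and moves proportionally toward that offset. The IDs, offsets, and gain are illustrative assumptions rather than the paper's controller.

```python
# One synchronous step of a toy neighbor-referenced formation controller.
import numpy as np

def formation_step(positions, neighbor_of, desired_offset, gain=0.5):
    """positions: dict id -> 2-D position; neighbor_of: dict id -> referenced id
    (None for the leader); desired_offset: dict id -> offset from the neighbor."""
    new_positions = dict(positions)
    for rid, pos in positions.items():
        ref = neighbor_of.get(rid)
        if ref is None:                      # leader holds its place
            continue
        target = positions[ref] + desired_offset[rid]
        new_positions[rid] = pos + gain * (target - pos)
    return new_positions

# Example: three robots converging to a line behind robot 0.
pos = {0: np.array([0.0, 0.0]), 1: np.array([1.0, 1.0]), 2: np.array([-1.0, 2.0])}
nbr = {0: None, 1: 0, 2: 1}
off = {1: np.array([-0.5, 0.0]), 2: np.array([-0.5, 0.0])}
for _ in range(20):
    pos = formation_step(pos, nbr, off)
```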
This paper presents URS (Ubiquitous Robotic Space) modeling and service techniques for a robotic security service that bridges virtual space and physical space. First, the paper introduces the concepts of the virtual URS and the responsive virtual URS. Second, it addresses the modeling of the URS, covering the modeling of indoor geometry and environment sensors. Third, it describes virtual URS services, including an interactive virtual-physical bridging service.
This paper presents multi-robot localization based on multidimensional scaling (MDS) in the presence of incomplete and noisy data. Traditional MDS algorithms operate on a full-rank distance matrix, whereas in the real world many entries may be missing due to occlusions; moreover, classical MDS does not account for the uncertainty caused by noisy observations. We propose a robust MDS that handles both incomplete and noisy data and apply it to the multi-robot localization problem. To deal with incomplete data, we use the Nyström approximation, which approximates the full distance matrix. To deal with uncertainty, we formulate a Bayesian framework for MDS that finds the posterior over the coordinates of the objects by statistical inference. We verify the performance of MDS-based multi-robot localization both in computer simulations and in a real-world implementation with a multi-robot team. Extensive empirical results show that the accuracy of the proposed method is comparable to that of Monte Carlo Localization (MCL).
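For reference, the sketch below implements standard classical MDS on a complete, noise-free distance matrix, the baseline on which the paper builds; the paper's contributions, Nyström completion of missing entries and a Bayesian treatment of measurement noise, are not shown.

```python
# Classical MDS: double-center the squared distance matrix and embed via the
# top eigenvectors; coordinates are recovered up to rotation and translation.
import numpy as np

def classical_mds(D, dim=2):
    """D: n x n matrix of pairwise Euclidean distances; returns n x dim coordinates."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centred Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:dim]        # keep the largest eigenvalues
    scale = np.sqrt(np.maximum(eigvals[idx], 0.0))
    return eigvecs[:, idx] * scale

# Example: recover a triangle of robot positions from their pairwise ranges.
X = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.5]])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
print(classical_mds(D))
```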
In this paper, we present a grasp planning method using a grasp taxonomy and object primitives. Our grasp taxonomy includes newly defined grasp types, such as the thumb-supported pinch and the palm-supported pinch, to enhance grasp robustness. On the surface of the target object, the locations that will be contacted by the robot fingertips are sampled. The sampling is made consistent with the grasp type from the taxonomy, called the preformed grasp, matched to the target object. We perform simulations to examine the validity and efficacy of the proposed grasp planning method.
This paper proposes a method for extracting personal information using a microphone array. Useful personal information about users, particularly customers, includes age and gender. On the basis of this information, robot service applications can satisfy users by offering services adapted to the specific needs of particular user groups, including adults and children as well as females and males. We apply a Gaussian Mixture Model (GMM) as the classifier and Mel-Frequency Cepstral Coefficients (MFCCs) as the voice features. The main aim of this paper is to identify the voice source characteristics related to age and gender and to classify these two attributes simultaneously. For the ubiquitous environment, voices obtained from selected channels of the microphone array are used to reduce background noise.
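The following sketch outlines a GMM/MFCC pipeline of the kind described above using librosa and scikit-learn: one GMM is trained per age/gender class on MFCC frames, and a test utterance is assigned to the class whose model gives the highest average log-likelihood. The model orders, sampling rate, and class names are illustrative assumptions, not the paper's configuration.

```python
# Per-class GMMs over MFCC frames; classification by maximum average log-likelihood.
import librosa
import numpy as np
from sklearn.mixture import GaussianMixture

def mfcc_features(wav_path, n_mfcc=13):
    y, sr = librosa.load(wav_path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # frames x n_mfcc

def train_models(training_sets, n_components=16):
    """training_sets: dict class_name -> list of wav paths."""
    models = {}
    for label, paths in training_sets.items():
        feats = np.vstack([mfcc_features(p) for p in paths])
        models[label] = GaussianMixture(n_components=n_components,
                                        covariance_type='diag').fit(feats)
    return models

def classify(models, wav_path):
    feats = mfcc_features(wav_path)
    # Pick the class whose GMM gives the highest average log-likelihood per frame.
    return max(models, key=lambda label: models[label].score(feats))

# e.g. classes = {"male_adult": [...], "female_adult": [...],
#                 "male_child": [...], "female_child": [...]}
```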
We present the initial results of ongoing research on building a novel Mobile Haptic Interface (MHI) that can provide an unlimited haptic workspace in large immersive virtual environments. When a user explores a large virtual environment, the MHI can sense the position and orientation of the user, move itself into an appropriate configuration, and deliver force feedback, thereby enabling a virtually limitless workspace. Our MHI (PoMHI v0.5) features omnidirectional mobility, a collision-free motion planning algorithm, and force feedback for general environment models. We also provide experimental results that show the fidelity of our mobile haptic interface.
In this paper, we present a new motor learning strategy for robot contact task execution based on human motor control theory and machine learning. The strategy for successful contact task execution is to find, through reinforcement learning, the impedance parameters that yield optimal task performance. In this study, a Recursive Least-Squares (RLS) filter-based episodic Natural Actor-Critic algorithm is applied to determine the optimal impedance parameters.
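As a greatly simplified illustration of learning impedance parameters by episodic reinforcement learning, the sketch below samples (stiffness, damping) gains from a Gaussian policy and updates the policy mean with an episodic natural-gradient step (for a fixed-covariance Gaussian, the natural gradient of the mean reduces to a reward-weighted parameter deviation). The reward function, gains, and learning rate are made up for illustration; the paper's RLS filter-based episodic Natural Actor-Critic is considerably more involved.

```python
# Episodic natural-gradient search over impedance gains with a Gaussian policy.
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([100.0, 10.0])   # policy mean: stiffness K and damping B (assumed units)
sigma = np.array([10.0, 1.0])     # fixed exploration noise per parameter
alpha = 0.2                       # step size
baseline = 0.0                    # running-average reward baseline

def rollout_reward(K, B):
    """Placeholder for executing one contact-task episode with gains (K, B) and
    returning its return; a made-up quadratic preference for illustration."""
    return -((K - 150.0) ** 2 / 1e4 + (B - 15.0) ** 2 / 1e2)

for episode in range(500):
    params = theta + sigma * rng.standard_normal(2)    # sample impedance gains
    r = rollout_reward(*params)
    baseline += 0.05 * (r - baseline)
    # Natural gradient for the mean of a fixed-covariance Gaussian policy:
    # Fisher^{-1} (r - b) grad log pi  reduces to  (r - b) (params - theta).
    theta += alpha * (r - baseline) * (params - theta)

print("learned impedance parameters:", theta)
```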
This paper presents the concept for the development of a pet-type robot with an emotion engine. The pet-type robot, named KOBIE (KOala roBot with Intelligent Emotion), is able to interact with a person through touch. KOBIE is equipped with tactile sensors on its body so that it can recognize touching behaviors such as "Stroke", "Tickle", and "Hit". We have also covered KOBIE with a synthetic fur fabric so that people can feel affection for it. In addition, KOBIE is able to express an emotional status that varies according to the circumstances in which it is placed. The emotion engine of KOBIE's emotion expression system generates an emotional status in an emotion vector space associated with predefined needs and mood models. To examine the feasibility of our emotion expression system, we verified how the emotional status changes in the emotion vector space in response to touching behaviors. We also examined the reactions of children who interacted with three kinds of pet-type robots, KOBIE, PARO, and AIBO, for roughly 10 minutes each, to investigate children's preferences for pet-type robots.
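The sketch below is an illustrative toy, not KOBIE's actual emotion engine: the emotional status is kept as a point in a two-dimensional vector space, each recognized touch behavior pushes it along a predefined direction, and the state relaxes toward a mood-dependent baseline between stimuli. The behavior-to-effect mapping and the decay constant are hypothetical.

```python
# Toy emotion-state update in a 2-D (valence, arousal) vector space.
import numpy as np

# Hypothetical effect of each recognized touch behaviour on (valence, arousal).
TOUCH_EFFECTS = {
    "Stroke": np.array([0.3, 0.1]),
    "Tickle": np.array([0.2, 0.4]),
    "Hit":    np.array([-0.5, 0.5]),
}

class EmotionState:
    def __init__(self, mood_baseline=(0.1, 0.0), decay=0.9):
        self.state = np.array(mood_baseline, dtype=float)
        self.baseline = np.array(mood_baseline, dtype=float)
        self.decay = decay

    def update(self, touch=None):
        # Relax toward the mood baseline, then apply the touch stimulus, if any.
        self.state = self.baseline + self.decay * (self.state - self.baseline)
        if touch in TOUCH_EFFECTS:
            self.state = np.clip(self.state + TOUCH_EFFECTS[touch], -1.0, 1.0)
        return self.state

emotion = EmotionState()
for behaviour in ["Stroke", "Stroke", "Hit", None, None]:
    print(behaviour, emotion.update(behaviour))
```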