Locating a person's head in a scene is essential for a robot photographer, since the composition of a picture depends on the position of the head. In this paper, we propose a robust head tracking algorithm that hybridizes an omega-shape tracker with a local binary pattern (LBP) AdaBoost face detector, enabling the robot photographer to take well-composed pictures automatically. Face detection algorithms perform well on frontal faces, but not on rotated faces, and they struggle when the face is occluded by a hat or hands. To overcome these limitations, we present an omega-shape tracker based on the active shape model (ASM), which is robust to occlusion and illumination changes. Its performance, however, is unsatisfactory in dynamic conditions, such as when people move quickly or the background is complex. Therefore, this paper proposes a probabilistic method, based on a histogram of oriented gradients (HOG) descriptor, that combines the face detector and the omega-shape tracker to find the human head robustly. A robot photographer was also implemented that abides by the 'rule of thirds' and takes photos when people smile.
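The 'rule of thirds' composition step can be sketched as follows. This is an illustrative placement check, not the paper's implementation: the frame size, head coordinates, tolerance, and function names are all hypothetical.

```python
# Illustrative sketch (not the paper's implementation): measure how far a
# detected head lies from the nearest 'rule of thirds' power point, so the
# robot photographer could re-aim before shooting.

def thirds_points(width, height):
    """The four rule-of-thirds intersection points of a frame."""
    xs = (width / 3, 2 * width / 3)
    ys = (height / 3, 2 * height / 3)
    return [(x, y) for x in xs for y in ys]

def composition_offset(head, width, height):
    """Vector from the detected head position to the nearest thirds point."""
    nearest = min(thirds_points(width, height),
                  key=lambda p: (p[0] - head[0]) ** 2 + (p[1] - head[1]) ** 2)
    return (nearest[0] - head[0], nearest[1] - head[1])

# Example: a head detected slightly left of the upper-left power point
dx, dy = composition_offset(head=(200, 160), width=640, height=480)
print(dx, dy)  # ≈ 13.33 0.0 — shift the aim slightly to the right
```

A real robot photographer would translate this pixel offset into pan/tilt commands before triggering the shutter.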
This paper is concerned with face recognition for human-robot interaction (HRI) in robot environments. For this purpose, we use Tensor Subspace Analysis (TSA) to recognize the user's face through the robot's camera while the robot performs various services in home environments. TSA naturally characterizes the spatial correlation between the pixels in an image. We utilize a face database collected in the u-robot test bed environment at ETRI. The presented method can serve as a core technique, in conjunction with HRI, for natural interaction between humans and robots in home robot applications. Experimental results on the face database show that the presented method performs well in distance-varying environments compared with well-known methods such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA).
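The contrast between the PCA baseline and tensor-style analysis can be sketched as follows. This is not the paper's TSA implementation: classical PCA vectorizes each image, discarding its 2-D layout, whereas tensor methods such as TSA project the image matrix directly. The 2DPCA-style projection below is a simplified relative of TSA, and the random "faces" are synthetic.

```python
# Illustrative contrast (not the paper's TSA): vectorized PCA versus a
# 2DPCA-style projection that preserves each image's 2-D structure.
# The 16x16 "face" images are random, purely for shape bookkeeping.
import numpy as np

rng = np.random.default_rng(0)
faces = rng.normal(size=(30, 16, 16))        # 30 synthetic 16x16 face images
mean = faces.mean(axis=0)
centered = faces - mean

# Baseline PCA: flatten to 256-d vectors, keep the top-5 components
flat = centered.reshape(30, -1)
_, _, vt = np.linalg.svd(flat, full_matrices=False)
pca_codes = flat @ vt[:5].T                  # shape (30, 5)

# 2DPCA-style: eigenvectors of the column covariance, no vectorization
G = sum(x.T @ x for x in centered) / len(centered)   # 16x16 covariance
eigvals, eigvecs = np.linalg.eigh(G)
V = eigvecs[:, -5:]                          # top-5 column directions
tensor_codes = centered @ V                  # shape (30, 16, 5): still 2-D per image

print(pca_codes.shape, tensor_codes.shape)
```

The tensor-style codes keep one axis of the image intact, which is the sense in which such methods retain spatial correlation between pixels.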
The development of a face robot generally targets very natural human-robot interaction (HRI), especially emotional interaction. So does the face robot introduced in this paper, named Buddy. Since Buddy was developed for a mobile service robot, it does not have a lifelike face such as a human's or an animal's, but a typically robot-like face with hard skin, which may be suitable for mass production. Moreover, its structure and mechanism should be simple and its production cost low. This paper introduces the mechanisms and functions of the mobile face robot Buddy, which can take on natural and precise facial expressions and make dynamic gestures, all driven by a single laptop PC. Buddy can also perform lip-sync, eye contact, and face tracking for lifelike interaction. In addition, by adopting a customized emotional reaction decision model, Buddy can form its own personality, emotions, and motives from various sensor inputs. Based on this model, Buddy can interact properly with users and perform real-time learning using personality factors. The interaction performance of Buddy is successfully demonstrated by experiments and simulations.
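A model of this general kind — sensor stimuli modulated by personality factors, with emotion decaying toward neutral — can be sketched as below. This is purely hypothetical: the paper's actual model, sensors, emotion set, and parameters are not reproduced here.

```python
# Purely hypothetical sketch of an emotional reaction decision model:
# sensor-derived stimuli are scaled by personality gains, and the emotion
# state decays toward the neutral state at every step. All values assumed.

DECAY = 0.9  # per-step decay toward neutral (assumed value)

def update_emotion(state, stimuli, personality):
    """state/personality: dicts keyed by emotion; stimuli: sensor-derived deltas."""
    return {e: DECAY * state[e] + personality[e] * stimuli.get(e, 0.0)
            for e in state}

state = {"joy": 0.0, "anger": 0.0}
personality = {"joy": 1.2, "anger": 0.5}     # cheerful, slow to anger
state = update_emotion(state, {"joy": 1.0}, personality)  # e.g. a smile detected
state = update_emotion(state, {}, personality)            # no stimulus: decay
print(state)  # joy decays from 1.2 toward 0; anger remains 0
```

The personality gains give each robot instance a distinct response to the same stimulus, which is the flavor of behavior the abstract attributes to Buddy.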
This study developed a surveillance robot for a ship. The developed robot consists of ultrasonic sensors, an actuator, a lighting fixture, and a camera. The ultrasonic sensors are used to avoid collisions with obstacles in the environment. The actuator is a servo motor system, and the robot drives on four wheels. The lighting fixture guides the robot in dark environments. To transmit images, a pan-tilt camera is mounted on the upper part of the robot. The AdaBoost algorithm, trained with 15 features, is used for face recognition. Experiments were performed to evaluate the face recognition capability of the developed robot.
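The boosting scheme behind such a detector can be sketched as a toy AdaBoost over decision stumps. The paper's 15 features and training data are not reproduced; the two-feature synthetic data below is purely illustrative.

```python
# Toy AdaBoost with decision stumps, illustrating the kind of boosted
# classifier the robot uses. Data and feature count are synthetic.
import math

def train_adaboost(samples, labels, n_rounds):
    """samples: list of feature tuples; labels: +1/-1 per sample."""
    n = len(samples)
    w = [1.0 / n] * n
    stumps = []  # each entry: (feature index, threshold, polarity, alpha)
    for _ in range(n_rounds):
        best = None  # (weighted error, feature, threshold, polarity, predictions)
        for f in range(len(samples[0])):
            for t in sorted({s[f] for s in samples}):
                for pol in (1, -1):
                    preds = [pol if s[f] >= t else -pol for s in samples]
                    err = sum(wi for wi, p, y in zip(w, preds, labels) if p != y)
                    if best is None or err < best[0]:
                        best = (err, f, t, pol, preds)
        err, f, t, pol, preds = best
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-10))
        stumps.append((f, t, pol, alpha))
        # Re-weight: boost the weight of misclassified samples
        w = [wi * math.exp(-alpha * p * y) for wi, p, y in zip(w, preds, labels)]
        total = sum(w)
        w = [wi / total for wi in w]
    return stumps

def predict(stumps, x):
    score = sum(a * (pol if x[f] >= t else -pol) for f, t, pol, a in stumps)
    return 1 if score >= 0 else -1

# Synthetic 2-feature data: positive only when both features are large
X = [(1, 5), (2, 6), (6, 1), (7, 2), (6, 6), (7, 7), (1, 1), (2, 2)]
y = [-1, -1, -1, -1, 1, 1, -1, -1]
model = train_adaboost(X, y, n_rounds=3)
print([predict(model, x) for x in X])  # [-1, -1, -1, -1, 1, 1, -1, -1]
```

No single axis-aligned stump separates this data, but three boosted stumps do, which is exactly the effect boosting is meant to achieve.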