Recently, information technology has developed more rapidly than anyone expected. With advances in hardware and software for artificial intelligence and machine learning, interest in interactive computer technology continues to grow. In this paper, we propose a new paradigm for implementing a system that recognizes human emotion and reacts to it; such a human-interactive system can be applied in every field of information technology. The system is an intelligent one that analyzes human faces, recognizes the emotion they express, and reacts in accordance with that emotion. The proposed real-time intelligent system can be applied to enterprise marketing, intelligent games, and intelligent robots.
Recently, the field of emotional ICT, which recognizes human emotion, has attracted rapidly growing interest. Various products applying emotion recognition are being released; SoftBank's robot Pepper is one of them. This robot can recognize human emotion through facial expressions and hold conversations accordingly. By reading emotion from facial expressions, communication with humans becomes more natural. In addition, emotional interfaces between machines and humans are being applied in various areas to provide a more intimate experience, such as emotional application games that respond differently depending on the emotion. In this paper, a system that applies special effects to images based on the recognition of six emotions from facial expressions is proposed. More engaging content can be produced by applying an appropriate emotional effect, derived from the user's facial expression, to an image loaded by the user. The results of this paper are well suited to game scenarios and to developing game stages that incorporate recognition of human emotion.
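The effect-selection step described above can be sketched as follows. This is a minimal illustration only: the specific emotion labels and effect names are assumptions and are not taken from the paper.

```python
# Hypothetical sketch: pick an image effect from classifier emotion scores.
# The six emotion labels and the effect names below are illustrative
# assumptions, not the paper's actual mapping.

EFFECT_BY_EMOTION = {
    "joy": "sparkle_overlay",
    "sadness": "blue_tint",
    "anger": "red_vignette",
    "surprise": "zoom_burst",
    "fear": "dark_blur",
    "disgust": "green_tint",
}

def select_effect(emotion_scores):
    """Return (top emotion, effect) for the most likely emotion.

    emotion_scores: dict mapping emotion label -> classifier score.
    """
    top_emotion = max(emotion_scores, key=emotion_scores.get)
    return top_emotion, EFFECT_BY_EMOTION[top_emotion]
```

In an actual system the scores would come from a facial-expression classifier and the chosen effect would then be rendered onto the user's image.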
Although technology for recognizing human emotion has many applications, it remains an unsolved problem because emotion recognition is difficult. Human emotion can largely be recognized from video and from speech, and much research is in progress along three lines: image-based methods, speech-based methods, and methods that combine the two. Among these, research on emotion recognition from facial images, the most common way humans express emotion, is especially active. However, existing methods still exhibit large variation and many errors depending on the user's environment and individual characteristics. In this paper, to improve the emotion recognition rate, we propose a mechanism that understands and analyzes the user's inner disposition and uses this analysis to improve the accuracy of emotion recognition. By analyzing the user's inner disposition and applying it to the emotion recognition system, errors in recognizing emotion from facial expressions can be reduced and the recognition rate improved. In particular, the proposed method can provide a higher emotion recognition rate for users whose facial expressions are weak and for users who rarely express their emotions.
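One simple way to realize the calibration idea above is to reweight the classifier's raw emotion scores by a per-user disposition profile and renormalize. This is a hedged sketch under assumptions of my own; the function name, the profile representation, and the specific weighting scheme are hypothetical, not the paper's actual mechanism.

```python
# Hypothetical sketch: adjust raw emotion scores using a per-user
# disposition profile (e.g. boost emotions the user expresses weakly).
# The weighting scheme here is an assumption, not the paper's method.

def calibrate(raw_scores, user_tendency):
    """Reweight raw emotion scores by per-user weights, then renormalize.

    raw_scores: dict mapping emotion label -> raw classifier score.
    user_tendency: dict mapping emotion label -> multiplicative weight;
                   emotions absent from the profile keep weight 1.0.
    """
    adjusted = {e: s * user_tendency.get(e, 1.0)
                for e, s in raw_scores.items()}
    total = sum(adjusted.values())
    return {e: s / total for e, s in adjusted.items()}
```

For a user known to show joy only faintly, a profile such as `{"joy": 2.0}` would lift a weak joy signal above competing emotions while leaving other users' scores untouched.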
In this paper, we propose and examine the feasibility of a robot-assisted behavioral intervention system designed to strengthen the positive responses of children with autism spectrum disorder (ASD) as they learn social skills. Based on well-known behavioral treatment protocols, the robot offers therapeutic training elements of eye contact and emotion reading in child-robot interaction, and, as a coping strategy, it performs pre-allocated meaningful acts by estimating the level of the children's reactivity with reliable recognition modules. Furthermore, to save labor and attract the children's interest, we implemented a robotic stimulation configuration with semi-autonomous actions capable of inducing intimacy and tension in children during instructional trials. By evaluating the system's ability to recognize human activity and by showing improved reactivity in social training, we verified that the proposed system has positive effects on social development, targeted at high-functioning preschoolers.
Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) tasks such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, ASM alone is not adequate for modeling a face in practical applications, because the positions of facial features are extracted unstably owing to the limited number of iterations in the ASM fitting algorithm. These inaccurate feature positions degrade emotion recognition performance. In this paper, we propose a real-time facial feature extraction and tracking framework that combines ASM with Lucas-Kanade (LK) optical flow for emotion recognition. LK optical flow is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failures caused by partial occlusions, which are a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
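The combination of flow-based tracking with an ASM fallback on tracking failure can be sketched as follows. This is an illustrative sketch under my own assumptions: the failure test (landmark displacement exceeding a threshold), the threshold value, and the `refit_asm` callable are hypothetical stand-ins, not the paper's actual occlusion-handling method.

```python
# Hypothetical sketch: keep LK-tracked landmarks unless any point moved
# implausibly far between frames (a crude sign of occlusion or tracking
# failure), in which case fall back to a fresh ASM fit.
import math

def track_or_refit(prev_pts, flow_pts, refit_asm, max_move=20.0):
    """Return flow-tracked landmarks, or re-fit via ASM on failure.

    prev_pts:  landmark (x, y) tuples from the previous frame.
    flow_pts:  the same landmarks as propagated by LK optical flow.
    refit_asm: hypothetical callable standing in for the ASM fitting
               step; invoked when tracking looks unreliable.
    max_move:  assumed per-frame displacement limit in pixels.
    """
    for p, q in zip(prev_pts, flow_pts):
        if math.dist(p, q) > max_move:
            return refit_asm()  # tracking failed: recover with ASM
    return flow_pts
```

In a real pipeline `flow_pts` would come from a pyramidal LK routine (e.g. OpenCV's `calcOpticalFlowPyrLK`), and the recovered landmark geometry would then feed the k-NN or SVM emotion classifier.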