According to studies of Korean faces carried out by Dr. Jo, Yong Jin, Koreans can be divided into two constitutions according to their facial features and heritage. One population is the Northern lineage, whose ancestors migrated from Siberia during the ice age. To survive in the cold climate, they developed a high level of metabolic heat production, and cold adaptation for preventing heat loss resulted in a reduced facial surface area, with small eyes, noses, and lips. The other population is the Southern lineage, descended from the natives of the Korean peninsula; they have big eyes with double eyelids, broad noses, and thick lips. It is generally believed that both genetic and environmental factors influence eating behavior. Although we cannot directly identify the genetic heritage that may contribute to metabolism and eating behavior, we commonly infer physiological heritage from facial features. To investigate the relationships among the size and shape of facial features, eating behavior, and anthropometric measurements in female college students, eating behavior was measured during an instant-noodle lunch eaten in a laboratory setting at an ambient temperature of 23℃. The anterior surface area of the left eye and the length of the right eye were positively correlated with the difference between the peak postprandial core temperature and the meal-start core temperature. The surface area of the lower lip was also negatively correlated with the meal-start core temperature and the meal duration. In addition, the total lip area was positively correlated with the difference between the peak postprandial and meal-start core temperatures and negatively correlated with the meal duration. However, the anthropometric measurements were not related to the size of the facial features.
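The analysis described above reduces to computing pairwise correlations between facial measurements and thermal or eating measures. Below is a minimal sketch of such an analysis, assuming each measurement is available as one numeric value per participant; the variable names and the synthetic data are illustrative only and do not come from the study.

```python
import numpy as np
from scipy.stats import pearsonr

def correlate(feature_size, outcome):
    """Pearson correlation between a facial-feature measurement and an
    eating/thermal outcome, one value per participant."""
    r, p = pearsonr(feature_size, outcome)
    return r, p

# Synthetic example data for illustration only (not study measurements).
rng = np.random.default_rng(0)
lower_lip_area = rng.normal(2.5, 0.4, size=30)                      # cm^2, hypothetical
meal_duration = 15.0 - 1.2 * lower_lip_area + rng.normal(0, 1, 30)  # minutes, hypothetical

print(correlate(lower_lip_area, meal_duration))  # r < 0: larger lower lip, shorter meal
```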
This paper presents a method for extracting the face and its principal components, the eyes and mouth, which is an essential step in face recognition. Face region extraction uses a template-matching method based on a statistical model, without relying on motion or color information, against complex backgrounds. The statistical model consists of eigenfaces generated during the Hotelling transform of the input face images, which allows a complex face image to be represented by a small number of principal-component values. To extract the face regardless of face size, image brightness, and face position, the image is searched with search windows of successively varying scales, an image-enhancement technique is applied, and the face is then extracted by projecting the image onto the eigenface space and reconstructing it. For facial component extraction, the boundary regions of the eyes and mouth are extracted by edge detection adapted to the characteristics of each component, followed by projection-histogram analysis of the binarized image. In previous work on contour extraction from face images, deformable templates have mainly been used to extract features such as the eyes and mouth, which have regular geometric shapes, whereas snakes (active contour models) have been applied to eyebrows and face outlines, which have relatively varied shapes. Unlike these previous studies, this paper defines an energy function and selects appropriate parameters so that snakes can be used to extract the contours of the eyes and mouth. Face region extraction against complex backgrounds, as well as eye and mouth region extraction and contour extraction from the extracted face region, show relatively good results.
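The eigenface-based detection step described above can be made concrete with a short sketch, assuming NumPy and vectorized grayscale face images of a fixed size; the function names, the number of eigenfaces, and the use of a reconstruction-error score are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def build_eigenfaces(face_images, k=8):
    """Hotelling (PCA) transform of vectorized training faces.
    Returns the mean face and the top-k eigenfaces."""
    X = np.stack([img.ravel().astype(np.float64) for img in face_images])
    mean_face = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean_face, full_matrices=False)
    return mean_face, Vt[:k]

def reconstruction_error(window, mean_face, eigenfaces):
    """Project a candidate window onto the eigenface space and reconstruct it.
    A small error means the window looks face-like."""
    x = window.ravel().astype(np.float64) - mean_face
    coeffs = eigenfaces @ x          # principal-component values
    x_hat = eigenfaces.T @ coeffs    # reconstruction within the face space
    return np.linalg.norm(x - x_hat)
```

In use, the input image would be scanned with search windows of several scales, each window resized to the training-face resolution and contrast-enhanced before scoring; the window with the smallest reconstruction error below a chosen threshold is taken as the face region.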
Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) applications such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, applying ASM alone is not adequate for modeling a face in actual applications, because the positions of the facial features are extracted unstably owing to the limited number of iterations in the ASM fitting algorithm. The inaccurate feature positions degrade the performance of emotion recognition. In this paper, we propose a real-time facial feature extraction and tracking framework using ASM and Lucas-Kanade (LK) optical flow for emotion recognition. LK optical flow is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid the tracking failures caused by partial occlusions, which can be a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
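A minimal sketch of the LK tracking step is shown below, assuming OpenCV's pyramidal Lucas-Kanade implementation, grayscale frames, and ASM landmarks stored as an (N, 1, 2) float32 array; the window size, pyramid depth, and the occlusion check are illustrative assumptions rather than the authors' exact settings.

```python
import cv2
import numpy as np

# Lucas-Kanade parameters (window size and pyramid depth are assumptions).
lk_params = dict(winSize=(15, 15), maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

def track_landmarks(prev_gray, cur_gray, prev_pts):
    """Propagate ASM landmark positions to the next frame with LK optical flow.

    prev_pts: (N, 1, 2) float32 array of landmark coordinates from the ASM fit.
    Returns the tracked points and a boolean mask of successfully tracked ones.
    """
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                  prev_pts, None, **lk_params)
    ok = status.ravel() == 1
    return cur_pts, ok
```

Landmarks whose status flag is zero can be treated as occluded or lost, and when too many landmarks are lost the ASM fit can simply be rerun on the current frame.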
For natural human-robot interaction, we need to know the location and shape of facial features in real environments. To track facial features robustly, one can use a method that combines a particle filter with an active appearance model (AAM). However, the processing speed of this method is too slow. In this paper, we propose two ideas to improve its efficiency: the first is to change the number of particles according to the situation, and the second is to switch the prediction model according to the situation. Experimental results show that the proposed method is about three times faster than the method combining a particle filter and an AAM, while its performance is maintained.
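The two ideas can be sketched as follows, with Python used for illustration; the particle counts, the error and motion thresholds, and the choice between random-walk and constant-velocity prediction are assumptions made for the example, not values from the paper.

```python
import numpy as np

# Thresholds and particle counts below are illustrative assumptions.
N_MIN, N_MAX = 50, 300
ERROR_HIGH = 0.4    # normalized AAM fitting error that signals uncertainty
MOTION_HIGH = 5.0   # mean landmark motion in pixels/frame

def adapt_particle_count(fit_error):
    """Use few particles while the AAM fit is confident, many when it is not."""
    return N_MAX if fit_error > ERROR_HIGH else N_MIN

def choose_prediction_model(mean_motion):
    """Switch between a random-walk and a constant-velocity prediction model."""
    return "constant_velocity" if mean_motion > MOTION_HIGH else "random_walk"

def predict(particles, velocities, model, noise_std=2.0):
    """Propagate particle states (x, y) with the selected prediction model."""
    noise = np.random.normal(0.0, noise_std, particles.shape)
    if model == "constant_velocity":
        return particles + velocities + noise
    return particles + noise
```

The intent is that the filter runs with few particles and a cheap prediction model while the AAM fit is confident and the face moves slowly, and spends extra computation only when the fitting error or the observed motion grows.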