This paper presents a new behavior planning scheme for autonomous robots, allowing them to handle various objects used in our daily lives. The key idea underlying the proposed scheme is to use affordance concepts that provide a robot with action possibilities triggered by the relation between the robot and the objects around it. Specifically, the robot attempts to find the affordances and to determine the most adequate action among them. Through a series of perception processes, robot motions can be planned and performed to complete assigned tasks. Of particular importance from a practical point of view is the decision-making capability to determine the best choice by comparing human body characteristics and behavioral patterns, taken as criteria, with the action possibilities, taken as alternatives. For this, the analytic hierarchy process (AHP) technique is employed to systematically evaluate the correlation between the criteria and the alternatives. Moreover, the alternatives arranged in order of priority through the decision-making process provide the robot with redundant solutions for the assigned task, resulting in flexible motion generation. The effectiveness and validity of the proposed scheme are verified through extensive simulations using objects of daily use.
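As a rough illustration of how an AHP step of this kind could rank affordance-based alternatives, the sketch below derives priority vectors from pairwise-comparison matrices and combines them into an overall score per alternative. This is not the paper's implementation; the two criteria, the three alternatives, and all comparison values are hypothetical placeholders.

```python
# Minimal AHP sketch (hypothetical criteria, alternatives, and comparison values).
import numpy as np

def ahp_weights(pairwise):
    """Derive a priority vector from a pairwise-comparison matrix
    via the principal eigenvector, as in standard AHP practice."""
    pairwise = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    return principal / principal.sum()

# Hypothetical criteria: fit to human body characteristics, fit to behavioral patterns.
criteria_w = ahp_weights([[1.0, 3.0],
                          [1/3.0, 1.0]])

# Hypothetical affordance alternatives (grasp handle, push lid, pull drawer)
# compared pairwise under each criterion.
alt_vs_body    = ahp_weights([[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]])
alt_vs_pattern = ahp_weights([[1, 1/2, 1], [2, 1, 2], [1, 1/2, 1]])

# Overall priority of each alternative = criteria-weighted sum of local priorities.
scores = criteria_w[0] * alt_vs_body + criteria_w[1] * alt_vs_pattern
ranking = sorted(zip(["grasp handle", "push lid", "pull drawer"], scores),
                 key=lambda x: -x[1])
for name, s in ranking:
    print(f"{name}: {s:.3f}")
```

Keeping the full ranking, rather than only the top alternative, is what gives the robot the redundant solutions mentioned above: if the best-ranked action becomes infeasible, the next one in the priority order can be tried.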
This paper proposes a motivation-based hierarchical behavior planning system. In a virtual simulation game environment, an agent selects suitable behaviors through the behavior planning system. The behavior selection system extracts motivations, selects goals, generates behaviors, and performs optimization. Motivations are evaluated by computing over events that occur suddenly or accumulate over time, and a motivation is selected at random according to a probability distribution. After traversing a hierarchical goal tree, the corresponding goal can be executed. Candidate behaviors are then compared and the most suitable one is selected; in this selection, an optimized behavior is chosen by comparing safety and satisfaction values. The proposed system is applied to a restaurant management game.
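The selection loop described above could be sketched roughly as follows, assuming a very simple agent model; all event names, candidate behaviors, and weighting factors are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of the motivation -> goal -> behavior selection pipeline.
import random

def evaluate_motivations(sudden_events, accumulated_events):
    """Score each motivation from sudden and accumulated event values."""
    scores = {}
    for m, value in sudden_events.items():
        scores[m] = scores.get(m, 0.0) + value
    for m, value in accumulated_events.items():
        scores[m] = scores.get(m, 0.0) + 0.5 * value  # assumed discount for accumulated events
    return scores

def select_goal(motivation_scores):
    """Draw a goal at random, with probability proportional to its motivation score."""
    goals, weights = zip(*motivation_scores.items())
    return random.choices(goals, weights=weights, k=1)[0]

def select_behavior(candidates):
    """Pick the behavior maximizing a combined safety/satisfaction value (assumed weights)."""
    return max(candidates, key=lambda b: 0.6 * b["safety"] + 0.4 * b["satisfaction"])

# Hypothetical restaurant-game events and candidate behaviors.
motivations = evaluate_motivations({"serve_customer": 0.8}, {"clean_tables": 1.2})
goal = select_goal(motivations)
behavior = select_behavior([
    {"name": "take_order",   "safety": 0.9, "satisfaction": 0.7},
    {"name": "rush_kitchen", "safety": 0.4, "satisfaction": 0.9},
])
print(goal, behavior["name"])
```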
We propose a planning algorithm that automatically generates a robust behavior plan (RBP) with which mobile robots can achieve their task goal from any initial state under dynamically changing environments. For this, a task description space (TDS) is formulated, in which a redundant task configuration space and a simulation model of the physical space are employed. Successful task episodes are collected using the A* algorithm, and interesting TDS state vectors are extracted based on their occurrence frequency. Clusters of TDS state vectors are then found using state transition tuples and their features. Through these operations, the characteristics of tasks successfully performed in the simulator are abstracted and generalized. A robust behavior plan is then constructed as an ordered tree structure whose nodes are represented by the attentive TDS state vector of each cluster. The validity of our method is verified through experiments with a real robot on a box-pushing-into-a-goal task.
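One way the frequency-based extraction of interesting TDS state vectors might look in code is sketched below; the episode data and the state-vector encoding are hypothetical, and the A* episode collection and the clustering steps are omitted.

```python
# Hypothetical sketch: mine "interesting" TDS state vectors from successful episodes
# by counting in how many episodes each state vector appears.
from collections import Counter

def interesting_states(episodes, min_frequency):
    """Return TDS state vectors that recur in at least min_frequency episodes."""
    counts = Counter()
    for episode in episodes:
        for state in set(episode):  # count each state at most once per episode
            counts[state] += 1
    return [s for s, c in counts.items() if c >= min_frequency]

# Hypothetical episodes for a box-pushing task; each state vector is a tuple
# such as (robot_zone, box_zone, contact_flag).
episodes = [
    [(0, 0, 0), (1, 0, 1), (2, 1, 1), (3, 2, 1)],
    [(0, 1, 0), (1, 0, 1), (2, 1, 1), (3, 2, 1)],
    [(0, 0, 0), (1, 1, 1), (2, 1, 1), (3, 2, 1)],
]
print(interesting_states(episodes, min_frequency=2))
```

States that recur across many successful episodes are the natural candidates for the attentive state vectors that become nodes of the ordered plan tree.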