As artificial intelligence and robots become commonplace in our daily lives and in industry, the replacement of human labor by AI and robots is increasing, and its pace is accelerating. This paper aims to analyze the deepening of income inequality that may result from the displacement of human labor and jobs caused by the commercialization of AI and robots, and to examine the issues and trends surrounding the introduction of a robot tax, which has been proposed as a solution. Recently, as labor and job displacement has accelerated with the commercialization of AI and robots and the adoption of automation systems, the controversy over introducing a robot tax has intensified. Opponents of a robot tax argue that its introduction would violate the principle of tax equity and worsen corporate management, thereby undermining the foundations of the capitalist economic order. Proponents, by contrast, argue that because job displacement, unemployment, and poverty are becoming serious problems as AI and robots are commercialized, a robot tax is needed as a means of securing tax revenue to compensate for lost jobs, guarantee the right to subsistence, and fund retraining. The stronger the tendency of AI and robots to replace human labor and jobs becomes, the more the controversy over introducing a robot tax is expected to intensify.
Recently, the use of robot systems in automated production and process industries has been increasing. However, human errors arising from insufficient communication among robot operators, programmers, and maintenance personnel cause not only human safety accidents but also adverse effects on the availability of the robot system. This study therefore proposes a Markov model that accounts for human error, based on a redundant-component Markov model.
This work presents the design and control of a flexible, wire-driven robot arm that follows human gestures. To move the arm to a desired position, the required wire travel is calculated and the motors are rotated accordingly. The arm consists of two modules whose mechanism resembles real human motion. Two wires form a closed loop in each module, and universal joints attached to each disk create up, down, left, and right movements. To control the motors, an anti-windup PID was applied to limit the sudden changes usually caused by accumulated error in the integral control term. In addition, a master/slave communication protocol and an operation program linking the six motors to the MYO sensor and IMU sensor outputs were developed at the same time. This makes it possible to receive image information from the camera attached to the robot arm while simultaneously sending control commands to the robot at high speed.
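The anti-windup idea described above can be sketched as a conditional-integration PID: the integral term is frozen while the output saturates, so accumulated error cannot cause a sudden jump when the error sign flips. This is a minimal illustration under assumed gains and output limits, not the paper's implementation.

```python
class AntiWindupPID:
    """PID controller with conditional-integration anti-windup.

    While the raw output saturates, the integral term is not
    accumulated further, limiting the sudden changes caused by a
    wound-up integral. Gains and limits are illustrative only.
    """

    def __init__(self, kp, ki, kd, out_min, out_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt):
        error = setpoint - measured
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # Tentatively accumulate the integral, keep it only if the
        # resulting output would stay inside the actuator limits.
        candidate = self.integral + error * dt
        trial = self.kp * error + self.ki * candidate + self.kd * derivative
        if self.out_min <= trial <= self.out_max:
            self.integral = candidate
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.out_min, min(self.out_max, out))
```

With a large step error the output clamps at `out_max` and the integral stays frozen; once the error is small enough for the output to leave saturation, normal integral action resumes.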
For effective human-robot interaction, a robot must not only understand the current situational context well but also convey its understanding to the human participant efficiently. The most convenient way to deliver the robot's understanding is for the robot to express it using voice and natural language. Recently, artificial intelligence for video understanding and natural language processing has advanced very rapidly, especially through deep learning. This paper therefore proposes a robot-vision-to-audio-description method using deep learning. The applied model is a pipeline of two deep learning models: one generates a natural language sentence from the robot's vision, and the other generates voice from the generated sentence. We also conduct a real-robot experiment to show the effectiveness of our method in human-robot interaction.
This paper presents the design of a human-sized quadruped robot leg for high energy efficiency, together with its verification. A representative index of energy efficiency is the Cost of Transport (COT), but COT does not capture how much of the input energy is converted into useful work; for this reason, the input-to-output energy efficiency should also be considered an important metric. High energy efficiency was achieved by designing the robot with a customized motor housing, small rotational inertia, and a low gear ratio to reduce friction. A squatting motion of one leg was performed, and simulation results were compared with experimental results for validation. The developed 50 kg robot can lift weights of up to 200 kg and showed high energy efficiency during squatting: 71% input-to-output energy efficiency in positive work, with a peak current of only 0.3 A.
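The two efficiency metrics contrasted above can be written out explicitly. COT normalizes locomotion power by weight and speed, while input-to-output efficiency is the ratio of positive mechanical work to electrical input energy. The numeric values in the usage note are illustrative assumptions, not measurements from the paper.

```python
def cost_of_transport(power_w, mass_kg, speed_mps, g=9.81):
    """Dimensionless Cost of Transport: COT = P / (m * g * v)."""
    return power_w / (mass_kg * g * speed_mps)


def io_efficiency(positive_mech_work_j, electrical_input_j):
    """Input-to-output energy efficiency during positive work."""
    return positive_mech_work_j / electrical_input_j
```

For example, a hypothetical 50 kg robot drawing 300 W at 1 m/s has COT = 300 / (50 * 9.81 * 1) ≈ 0.61, while the paper's reported 71% figure corresponds to 71 J of positive work per 100 J of electrical input.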
Robot arms are being used increasingly in various fields, with special attention given to unmanned systems. In this research, we developed a high-payload dual-arm robot whose forearm module is replaceable to suit the assigned task, such as object handling or lifting humans in a rescue operation. With each forearm module specialized for its task (e.g., safety features for rescue and redundant joints for object handling), the robot can perform various tasks more effectively than was previously possible. In this paper, the design of the high-payload dual-arm robot with the replaceable-forearm function is described in detail. Two forearms were developed, each with a distinct goal: one is specialized for human rescue, with human-friendly flat surfaces and compliant parts; the other is for handling general heavy objects of more than 30 kg, with a high degree of freedom (more than seven joints).
This paper presents a novel approach to controlling the lower-body power-assistive exoskeleton HEXAR-CR35, aimed at augmenting muscular strength. Control based on human intention is of crucial importance for ensuring intuitive and dexterous motion with the human. In this contribution, we propose an algorithm for detecting human intention using MCRS sensors, which were developed to measure muscle contraction from the variation of limb circumference. The proposed algorithm produces exoskeleton joint motion corresponding to the related muscles. Its main advantages are its simplicity and computational efficiency in controlling the knee joint of HEXAR-CR35, a knee-active exoskeleton (the other joints are passive or quasi-passive, arranged by analyzing human joint functions). The exoskeleton's motion is generated according to the gait phase, swing or stance, as determined by foot insole sensors. The proposed algorithm is evaluated experimentally in walking with the exoskeleton while carrying an external mass on the back.
This paper proposes the combination of a cognitive agent architecture named Soar (State, operator, and result) and ROS (Robot Operating System), which can serve as a basic framework for a robot agent to interact with and cope with its environment more intelligently and appropriately. The proposed Soar-ROS human-robot interaction (HRI) agent, implemented on a humanoid robot, understands a set of human commands by voice recognition and chooses how to react to each command according to the symbol detected by image recognition. The robotic agent is allowed to refuse to follow an inappropriate command like "go" after it has seen the symbol 'X', which indicates that an abnormal or immoral situation has occurred. This simple but meaningful HRI task is successfully demonstrated on the proposed Soar-ROS platform with a small humanoid robot, which implies that extending the present hybrid platform to an artificial moral agent is possible.
In this study, we developed humanoid joint modules intended to provide a variety of services while living with people in the future home. The most important requirement is ensuring human safety while the robot system collaborates with people and provides physical services in dynamically changing environments. To fulfill this, we constructed a mechanism and control system in which each joint of the robot responds sensitively and rapidly. We analyzed the characteristics of the joints constituting the target humanoid motions, developed an optimal actuator system that can be controlled according to each joint's characteristics, and developed a control system that can control a multi-joint system at high speed. In particular, in the joint design, we defined back-drivability from a safety perspective and developed an actuator unit that maximizes it. This establishes foundational element technology for the future commercialization of intelligent service robots.
In this paper, a robot vision technique is presented to detect obstacles, particularly approaching humans, in images acquired by a mobile robot that autonomously navigates a narrow building corridor. A single low-cost color camera is attached to the robot, and a trapezoidal area in front of the robot is set as a region of interest (ROI) in the camera image. The lower parts of a human, such as the feet and legs, are first detected in the ROI from their appearance in real time as the distance between the robot and the human decreases. The human detection is then confirmed by detecting his or her face within a small search region above the part detected in the trapezoidal ROI. To increase the credibility of detection, a final decision about human detection is made only when a face is detected in two consecutive image frames. We tested the proposed method using images of various people in corridor scenes and obtained promising results. This method can be used by a vision-guided mobile robot to make a detour and avoid collision with a human during indoor navigation.
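The two-consecutive-frame decision rule above is a simple temporal filter over per-frame face detections. A minimal sketch, with the per-frame detector results abstracted as booleans:

```python
def confirm_detections(frame_flags, k=2):
    """Confirm a human detection only after a face has been found
    in k consecutive frames (k=2 in the decision rule above).

    frame_flags: per-frame booleans from the face detector.
    Returns a per-frame list of confirmed-detection decisions.
    """
    confirmed = []
    streak = 0
    for detected in frame_flags:
        streak = streak + 1 if detected else 0  # reset on a miss
        confirmed.append(streak >= k)
    return confirmed
```

A single spurious face detection is suppressed, while a genuine approaching human, visible in consecutive frames, is confirmed one frame after first detection.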
This paper presents interaction force control between a balancing robot and a human operator. The balancing robot has two wheels that generate movement in the plane. Because the robot is position-controlled, it tries to keep its desired angle at zero when an external force is applied, which destabilizes the system. A hybrid force control method is therefore employed so that the robot reacts to the operator's external force and can be guided to a desired position by the operator. When an operator applies a force to the robot, the desired balancing angle must be modified to maintain stable balance. Suitable desired balancing angles for given applied force magnitudes were determined through experimental studies, which confirm the functionality of the proposed method.
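The core adjustment above, modifying the desired balancing angle as a function of the operator's applied force, can be sketched as a saturated linear map. The paper determines this mapping experimentally; the linear gain and angle limit below are purely illustrative assumptions.

```python
def desired_balance_angle(force_n, gain_rad_per_n=0.002, max_angle_rad=0.1):
    """Map the operator's applied force (N) to a modified desired
    balancing angle (rad). Illustrative only: the paper obtains
    suitable angles for given force magnitudes experimentally,
    not from a fixed linear gain.
    """
    angle = gain_rad_per_n * force_n
    # Saturate so the balance controller is never asked for an
    # angle it cannot stabilize.
    return max(-max_angle_rad, min(max_angle_rad, angle))
```

Pushing harder tilts the desired angle further in the push direction, letting the position controller roll the robot along with the operator instead of fighting the force.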
As robots are no longer just laborers in industrial fields but are stepping into humans' daily lives, interaction and communication between humans and robots are becoming essential. For such social interaction, a robot needs to generate emotion, which is the result of a very complicated process. In psychology, mood has been considered a factor that affects emotion generation; it is similar to emotion but not the same. In this paper, mood factors for a robot, reflecting not only the robot's own condition but also its circumstances, are listed, selected, and finally used as elements defining a two-dimensional mood space. An architecture that combines the proposed mood model with an emotion generation module is given at the end.
This paper introduces the design of a multi-dimensional complex emotional model for expressing various complex emotions. It is a novel approach compared with conventional emotional models, which use a three-dimensional emotional space and suffer from several problems: discontinuity of emotions, overly simple emotional expression, and the need to redesign the emotional model for each robot. To solve these problems, we designed an emotional model that uses a multi-dimensional emotional space for continuity of emotion and a linear model design for reusability. The model incorporates personality, so it can produce different emotional results from the same inputs. To demonstrate the effectiveness of our model, we tested it with a human-friendly robot.
For advanced intelligent services, the need for HRI technology has recently been increasing, and the technology has also improved. However, HRI components have so far been evaluated under stable, controlled laboratory conditions, and there are no performance evaluations in real environments. Robot service providers and users therefore lack sufficient information on the level of current HRI technology. In this paper, we provide performance evaluations of HRI components on robot platforms providing actual services at pilot service sites. For the evaluation, we selected the face detection, speaker gender classification, and sound localization components as representative HRI components closest to commercialization. The goal of this paper is to provide valuable information and reference performance for applying HRI components in real robot environments.
This paper proposes human-following behavior for a mobile robot, using an intelligent space (ISpace) to achieve this goal. An ISpace is a 3-D environment in which many sensors and intelligent devices are distributed; mobile robots exist in this space as physical agents providing services to humans. A mobile robot is controlled to follow a walking human as stably and precisely as possible using the distributed intelligent sensors. The moving object is assumed to be a point object and is projected onto an image plane to form a geometric constraint equation that provides the object's position based on the kinematics of the intelligent space. Uncertainties in the position estimate caused by the point-object assumption are compensated using a Kalman filter. To generate the shortest-time trajectory for following the walking human, the human's linear and angular velocities are estimated and utilized. Computer simulation and experimental results of estimating and following a walking human with the mobile robot are presented.
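The Kalman-filter compensation step above can be illustrated in its simplest scalar form: noisy per-frame position measurements of the walking human are fused into a smoothed estimate. This is a generic textbook sketch, not the paper's filter; the process-noise and measurement-noise values are assumed.

```python
def kalman_1d(measurements, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter smoothing noisy position measurements.

    q: process noise variance (assumed), r: measurement noise
    variance (assumed), x0/p0: initial state and covariance.
    Returns the filtered estimate after each measurement.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                 # predict: uncertainty grows
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update toward the measurement
        p *= (1.0 - k)         # uncertainty shrinks after update
        estimates.append(x)
    return estimates
```

Each update pulls the estimate a fraction `k` of the way toward the new measurement, so the estimate converges toward a steady reading while damping per-frame noise from the point-object approximation.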
This paper deals with a tangible interface system that introduces a robot as a remote avatar. It focuses on a new method that makes a robot imitate human arm motions captured from a remote space. Our method is functionally divided into two parts: capturing the human motion and adapting it to the robot. In the capturing part, we propose a modified potential function of metaballs for real-time performance and high accuracy. In the adapting part, we suggest a geometric scaling method to resolve the structural difference between a human and a robot. Using this method, we implemented a tangible interface and evaluated its speed and accuracy.
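The geometric scaling step in the adapting part can be sketched as follows: a captured human hand position is mapped into the robot's workspace by scaling the shoulder-to-hand vector by the ratio of robot arm length to human arm length. This is a minimal illustration of the idea; the paper's actual retargeting may differ.

```python
def scale_arm_point(human_point, human_arm_len, robot_arm_len, shoulder):
    """Geometric scaling of a captured arm point (illustrative).

    human_point: captured 3-D hand position, shoulder: shared
    shoulder origin, arm lengths in the same units. The direction
    of the shoulder-to-hand vector is preserved; only its length
    is rescaled to fit the robot's reach.
    """
    s = robot_arm_len / human_arm_len
    return tuple(sh + s * (h - sh) for h, sh in zip(human_point, shoulder))
```

For a robot arm half the human's length, a fully extended human reach maps to a fully extended robot reach in the same direction, keeping the imitated pose proportionally correct.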