Robots are used at various industrial sites, but traditional methods of operating a robot are limited for certain kinds of tasks. For a robot to accomplish a task, an accurate model of the interaction between the robot and its environment must be derived and solved, which is complicated work. Accordingly, reinforcement learning for robots is being actively studied to overcome these difficulties. This study describes the process and results of solving a task with reinforcement learning. The task the robot learns is bottle flipping, an activity in which a plastic bottle is thrown so that it lands upright on its bottom. The complex motion of the liquid inside the bottle while it is in the air makes this task difficult to solve with traditional methods, whereas reinforcement learning makes it tractable. After the 3-DOF robotic arm is given an initial throwing motion, the robot finds an improved motion that accomplishes the task. Two reward functions are designed and their learning results are compared. The finite-difference method is used to obtain the policy gradient. This paper focuses on the process of designing an efficient reward function to improve the bottle-flipping motion.
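The paper itself gives no code; as an illustration only, the following minimal Python sketch shows how a finite-difference policy-gradient update of a parameterized throwing motion could look. The function names, the forward-difference scheme, and the plain gradient-ascent step are assumptions, not the authors' implementation.

    import numpy as np

    def finite_difference_gradient(rollout_return, theta, eps=1e-2):
        # Estimate the policy gradient by perturbing each motion parameter
        # (e.g., joint-trajectory knots) and re-running the throw.
        # `rollout_return` is assumed to execute the motion and return the reward.
        grad = np.zeros_like(theta)
        base = rollout_return(theta)
        for i in range(len(theta)):
            perturbed = theta.copy()
            perturbed[i] += eps
            grad[i] = (rollout_return(perturbed) - base) / eps
        return grad

    def improve_motion(rollout_return, theta, lr=0.05, iters=100):
        # Plain gradient ascent on the estimated return.
        for _ in range(iters):
            theta = theta + lr * finite_difference_gradient(rollout_return, theta)
        return theta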
This paper presents 6-DOF relocalization using a 3D laser scanner and a monocular camera. The relocalization problem in robotics is to estimate the pose of a sensor when a robot revisits an area. A deep convolutional neural network (CNN) is designed to regress the 6-DOF sensor pose and is trained end-to-end using both RGB images and 3D point cloud information. We generate a new input representation that consists of RGB and range information. After training, the relocalization system outputs the sensor pose corresponding to each newly received input. In most cases, however, a mobile robot navigation system has successive sensor measurements. To improve the localization performance, the output of the CNN is used as the measurement of a particle filter that smooths the trajectory. We evaluate our relocalization method on real-world datasets using a mobile robot platform.
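Although the paper uses a full 6-DOF pose, the following Python sketch (a simplification to a planar pose for brevity) illustrates how the CNN's regressed pose could serve as the measurement in a particle-filter update that smooths the trajectory; the noise parameters and resampling scheme are illustrative assumptions only.

    import numpy as np

    def particle_filter_step(particles, weights, cnn_pose, meas_std=0.5, motion_std=0.1):
        # particles: (N, 3) hypotheses over (x, y, yaw); cnn_pose: (3,) CNN output.
        # Predict: diffuse particles with assumed motion noise.
        particles = particles + np.random.normal(0.0, motion_std, particles.shape)
        # Update: weight particles by the likelihood of the CNN measurement.
        err = np.linalg.norm(particles - cnn_pose, axis=1)
        weights = weights * np.exp(-0.5 * (err / meas_std) ** 2)
        weights = weights / np.sum(weights)
        # Resample (multinomial for brevity) and return the smoothed estimate.
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
        return particles, weights, particles.mean(axis=0)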
The optimal grasping point of an object varies depending on the properties of the object, such as its shape, weight, and material, as well as the grasping contact with the robot hand and the grasping force. To derive the optimal grasping points for each object with a three-fingered robot hand, the optimal point and posture have been derived from the geometry of the object and the hand using an artificial neural network. The optimal grasping cost function has been constructed based on the probability density function of the normal distribution. Considering the characteristics of the object and the robot hand, the optimal height and width for grasping the object with the robot hand have been set. The resultant force between the contact area of the robot finger and the object has been estimated from the grasping force of the robot finger and the gravitational force on the object. In addition, the geometric center and the center of mass of the object have been considered in obtaining the optimal grasping positions of the robot fingers on the object using the artificial neural network. To show the effectiveness of the proposed algorithm, the friction cone for stable grasping has been modeled through grasping experiments.
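The abstract does not give the exact form of the cost function; as a hedged illustration only, the sketch below scores a candidate grasping point by the negative log of a normal-distribution PDF centered on a preferred region (for example, between the geometric center and the center of mass). All names and parameters are hypothetical.

    import numpy as np

    def grasp_cost(candidate, mu, sigma):
        # Negative log-likelihood of the candidate grasp coordinates under a
        # normal distribution; lower cost = closer to the preferred region.
        pdf = np.exp(-0.5 * ((candidate - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
        return -np.log(pdf + 1e-12)

    def best_grasp(candidates, mu, sigma):
        # Pick the candidate point (e.g., a height/width pair) with the lowest total cost.
        costs = [np.sum(grasp_cost(np.asarray(c, dtype=float), mu, sigma)) for c in candidates]
        return candidates[int(np.argmin(costs))]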
Object manipulation in cluttered environments remains a hard, open problem. In cluttered environments, grasping objects often fails for various reasons. This paper proposes a novel task and motion planning scheme for grasping objects obstructed by other objects in cluttered environments. Task and motion planning (TAMP) aims to generate a sequence of task-level actions whose feasibility is verified in the motion space. The proposed scheme is an open loop consisting of three distinct phases: 1) generation of a task-level skeleton plan with pose references, 2) instantiation of the pose references by motion-level search, and 3) re-planning of the task based on the updated state description. By conducting experiments with simulated robots, we show the high efficiency of our scheme.
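As a sketch only, the following Python outline mirrors the three phases; `task_planner`, `motion_solver`, and the `state`/`action` methods are illustrative placeholders rather than the authors' API.

    def plan_and_grasp(state, target, task_planner, motion_solver):
        while True:
            # Phase 1: task-level skeleton plan with symbolic pose references.
            skeleton = task_planner(state, target)
            if skeleton is None:
                return None  # no task-level plan exists
            # Phase 2: instantiate pose references by motion-level search.
            trajectory, feasible = [], True
            for action in skeleton:
                motion = motion_solver(action, state)
                if motion is None:
                    feasible = False
                    break
                trajectory.append(motion)
                state = action.apply(state)  # predicted effect of the action
            if feasible:
                return trajectory
            # Phase 3: re-plan the task based on the updated state description.
            state = state.mark_infeasible(action)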
This paper presents a new benchmark system for visual odometry (VO) and monocular depth estimation (MDE). As deep learning has become a key technology in computer vision, many researchers are trying to apply deep learning to VO and MDE. Just a couple of years ago, the two problems were studied independently in a supervised manner, but now they are coupled and trained together in an unsupervised manner. However, before designing sophisticated models and losses, datasets have to be customized for training and testing. After training, the model has to be compared with existing models, which is also a huge burden. The benchmark provides a ready-to-use input dataset for VO and MDE research in ‘tfrecords’ format and an output dataset that includes model checkpoints and inference results of the existing models. It also provides various tools for data formatting, training, and evaluation. In the experiments, the existing models were evaluated to verify the performance reported in the corresponding papers, and we found that the evaluation results fall short of the reported performance.
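The abstract does not specify the record layout; assuming a hypothetical schema with an encoded RGB frame and a ground-truth pose, a typical TensorFlow input pipeline for such ‘tfrecords’ files could look like the sketch below.

    import tensorflow as tf

    FEATURES = {
        "image": tf.io.FixedLenFeature([], tf.string),    # encoded RGB frame (assumed key)
        "pose": tf.io.FixedLenFeature([6], tf.float32),   # ground-truth pose (assumed key)
    }

    def parse_example(serialized):
        example = tf.io.parse_single_example(serialized, FEATURES)
        image = tf.io.decode_png(example["image"], channels=3)
        image = tf.image.convert_image_dtype(image, tf.float32)
        return image, example["pose"]

    def load_dataset(pattern, batch_size=4):
        files = tf.data.Dataset.list_files(pattern)
        ds = tf.data.TFRecordDataset(files)
        return ds.map(parse_example).batch(batch_size).prefetch(tf.data.AUTOTUNE)

    # Usage: for images, poses in load_dataset("train-*.tfrecords"): ...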
Collecting rich but meaningful training data plays a key role in machine learning and deep learning research for self-driving vehicles. This paper gives a detailed overview of existing open-source simulators that can be used for training self-driving vehicles. After reviewing the simulators, we propose a new, effective approach to building a synthetic autonomous-vehicle simulation platform suitable for learning and training artificial intelligence algorithms. Specifically, we develop a synthetic simulator with various realistic situations and weather conditions, which allows the autonomous shuttle to learn from more realistic situations and handle unexpected events. The virtual environment mimics the operation of a real shuttle vehicle in the physical world. Instead of performing all training experiments in the real physical world, scenarios in 3D virtual worlds are created to compute the parameters and train the model. From the simulator, the user can obtain data for various situations and utilize it for training. Flexible options are available to choose sensors, monitor the output, and implement any autonomous driving algorithm. Finally, we verify the effectiveness of the developed simulator by implementing an end-to-end CNN algorithm for training a self-driving shuttle.
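The abstract does not describe the network itself; purely as an illustration, a compact end-to-end CNN in the spirit of NVIDIA's PilotNet, mapping a simulator camera frame to a steering command, could be built as follows (layer sizes and training setup are assumptions).

    import tensorflow as tf

    def build_driving_model(input_shape=(66, 200, 3)):
        # Camera frame in, single steering command out.
        return tf.keras.Sequential([
            tf.keras.layers.Input(shape=input_shape),
            tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu"),
            tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
            tf.keras.layers.Conv2D(48, 5, strides=2, activation="relu"),
            tf.keras.layers.Conv2D(64, 3, activation="relu"),
            tf.keras.layers.Conv2D(64, 3, activation="relu"),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(100, activation="relu"),
            tf.keras.layers.Dense(50, activation="relu"),
            tf.keras.layers.Dense(1),  # steering angle
        ])

    # model = build_driving_model()
    # model.compile(optimizer="adam", loss="mse")
    # model.fit(simulator_frames, steering_labels, epochs=10)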
In this study, an FSEA (force-sensing series elastic actuator) composed of a spring and an actuator has been developed to compensate for external disturbance forces. The FSEA has a simple structure in which the spring and the actuator are connected in series, and the external force can be easily measured from the displacement of the spring. In addition, the spring absorbs shocks from small disturbances and increases the sense of stability. The actuator is designed and constructed so that the stiffness of the spring can be controlled more flexibly according to the situation. A conventional FSEA uses a fixed-stiffness spring, so the actuator is not compensated properly when it receives a large or small external force. Through experiments, it is confirmed that the FSEA with the proposed variable-stiffness algorithm compensates well for both large and small external forces.
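As a minimal sketch (gains, thresholds, and function names are assumptions, not the paper's controller), force sensing via the series spring and a simple variable-stiffness compensation rule could be expressed as:

    def external_force(stiffness, deflection):
        # Hooke's law: the series-spring deflection gives the external force.
        return stiffness * deflection

    def select_stiffness(force_estimate, k_soft=200.0, k_stiff=2000.0, threshold=10.0):
        # Toy rule: stay compliant for small disturbances, stiffen for large ones.
        return k_stiff if abs(force_estimate) > threshold else k_soft

    def compensation_command(deflection, stiffness, gain=1.0):
        # Actuator command that counteracts the estimated disturbance force.
        return -gain * external_force(stiffness, deflection)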
In this paper, we present an auto-annotation tool and a synthetic dataset generated from 3D CAD models for deep-learning-based object detection. To be used as training data for deep learning methods, class, segmentation, bounding-box, contour, and pose annotations of each object are needed. We propose an automated annotation tool and synthetic image generation. The resulting synthetic dataset reflects occlusion between objects and is applicable to both underwater and in-air environments. To verify our synthetic dataset, we use Mask R-CNN, a state-of-the-art deep learning object detection model. For the experiments, we build an experimental environment that reflects an actual underwater environment. We show that an object detection model trained on our dataset produces highly accurate results and is robust in the underwater environment. Lastly, we verify that our synthetic dataset is suitable for training deep learning models for underwater environments.
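The annotation format is not spelled out in the abstract; assuming a COCO-like layout, the sketch below shows how bounding-box and segmentation annotations could be derived automatically from a rendered instance mask of a 3D CAD model (field names are illustrative).

    import numpy as np

    def annotate_from_mask(mask, class_id):
        # mask: boolean HxW array rendered for one object instance.
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            return None  # object fully occluded in this render
        x0, y0, x1, y1 = xs.min(), ys.min(), xs.max(), ys.max()
        return {
            "category_id": class_id,
            "bbox": [int(x0), int(y0), int(x1 - x0 + 1), int(y1 - y0 + 1)],
            "segmentation": mask.astype(np.uint8),
            "area": int(mask.sum()),
        }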
Haptic systems have been widely used for both virtual reality and augmented reality applications, including the game, entertainment, education, and medical sectors. Clothing designers and retailers have begun using AR and VR technologies to help consumers find styles with the perfect fit. Most existing augmented reality shopping systems overlay an image of the clothes on the customer so that he or she can judge the fit. However, this provides only visual information; the customer cannot experience the real size and stiffness of the clothes. In this paper, we present a haptic upper garment that provides haptic feedback to the user through cables. By controlling the length of the cables, the size of the clothes is rendered, and by controlling the stiffness, the compliance of the fabric is rendered. The haptic garment is modeled for precise control, and the distributed controller architecture is described. With the haptic upper garment, the user’s experience of the virtual clothes is greatly enhanced.
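As a hedged sketch of the idea (the garment's actual control laws are not given in the abstract, and all names and gains here are hypothetical), size could be rendered by a cable-length command and compliance by a spring-damper tension:

    def cable_length_command(rest_length, body_circumference, garment_circumference):
        # Shorten the cable by the amount the virtual garment is tighter than the body.
        return rest_length - max(0.0, body_circumference - garment_circumference)

    def cable_tension(stiffness, stretch, damping=0.0, stretch_rate=0.0):
        # Spring-damper tension: a stiffer virtual fabric yields more tension per stretch.
        return stiffness * max(0.0, stretch) + damping * stretch_rate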