This paper deals with the stability of an industrial robot arm with six axes and six degrees of freedom. The robot arm studied is the IRB120, an ABB product widely used in industry, and it was analyzed with the commercial simulation program "DAFUL", which can analyze dynamic behavior. In DAFUL, the motion of the robot arm was controlled while a load was applied, and structural analysis of the arm was performed over the analysis time. From this analysis, the stress and displacement acting on the model, as well as the acting torque and force, were obtained. Based on these results, the stability of the arm was checked against the IRB120 product catalog.
This paper proposes a low-cost robotic surgery system composed of a general-purpose robotic arm, an interface for da Vinci surgical robot tools, and a modular haptic controller built with smart actuators. The seven-degree-of-freedom (DOF) haptic controller is suspended in the air using gravity compensation, and the 3D position and orientation of the controller endpoint are calculated from the joint readings via the forward kinematics of the haptic controller. The joint angles of the general-purpose robotic arm are then computed using analytic inverse kinematics so that the tooltip reaches the target position through a small incision. Finally, the wrist joint angles of the surgical tool are calculated so that the tooltip correctly faces the desired orientation. The suggested system is implemented and validated on a physical UR5e robotic arm.
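The forward-kinematics step described in this abstract, computing the endpoint pose from joint readings, can be sketched as a chain of Denavit-Hartenberg transforms. This is a minimal illustration: the `DH_PARAMS` table below is a placeholder, not the actual kinematic parameters of the haptic controller or the UR5e.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one Denavit-Hartenberg link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_params):
    """Chain the per-joint transforms; returns the 4x4 endpoint pose."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Illustrative 7-DOF parameter table (d, a, alpha) -- placeholders only.
DH_PARAMS = [(0.1, 0.0, np.pi / 2)] * 7

pose = forward_kinematics([0.0] * 7, DH_PARAMS)
position = pose[:3, 3]      # 3D endpoint position
orientation = pose[:3, :3]  # 3x3 rotation matrix
```

The position and rotation extracted from the final transform are exactly the "3D position and orientation of the controller endpoint" that the inverse-kinematics stage would then consume.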
Robots are used at various industrial sites, but traditional methods of operating a robot are limited for certain kinds of tasks. For a robot to accomplish a task, an accurate model relating the robot to its environment must be derived and solved, which is complicated work. Accordingly, reinforcement learning for robots is actively studied to overcome this difficulty. This study describes the process and results of solving a task with reinforcement learning. The task the robot learns is bottle flipping: throwing a plastic bottle in an attempt to land it upright on its bottom. The complexity of the liquid's movement inside the bottle while it is in the air makes this task difficult to solve in traditional ways; reinforcement learning makes it easier. After a 3-DOF robotic arm is instructed how to throw the bottle, the robot finds a better motion that succeeds at the task. Two reward functions are designed and their learning results are compared. The finite difference method is used to obtain the policy gradient. This paper focuses on the process of designing an efficient reward function to improve the bottle-flipping motion.
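The finite difference method named in this abstract estimates the policy gradient by perturbing each policy parameter and measuring the change in return. A minimal sketch follows; the reward function and parameter names are toy stand-ins, since the actual reward comes from bottle-flip trials.

```python
import numpy as np

def finite_difference_gradient(policy_params, rollout_return, eps=0.01):
    """Estimate the policy gradient by perturbing each parameter.

    rollout_return(params) would run one (or an averaged) bottle-flip
    trial and return its reward -- here it is a black-box scalar function.
    """
    grad = np.zeros_like(policy_params)
    base = rollout_return(policy_params)
    for i in range(len(policy_params)):
        perturbed = policy_params.copy()
        perturbed[i] += eps
        grad[i] = (rollout_return(perturbed) - base) / eps
    return grad

# Toy stand-in for the reward of a throwing motion (illustrative only):
# a quadratic bowl whose peak plays the role of the "successful flip".
def toy_reward(params):
    return -np.sum((params - np.array([0.5, -0.2, 1.0])) ** 2)

params = np.zeros(3)   # e.g. throw parameters of a 3-DOF arm
for _ in range(200):   # gradient ascent on the estimated gradient
    params += 0.1 * finite_difference_gradient(params, toy_reward)
```

The method needs no analytic model of the bottle or liquid, only the ability to run trials and score them, which is why it suits a task the abstract calls hard to solve "in traditional ways".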
Reinforcement learning has been applied to various problems in robotics. However, it is still hard to train complex robotic manipulation tasks, since there are few models applicable to general tasks, and such general models require many training episodes. For these reasons, deep neural networks, which have been shown to be good function approximators, have not been actively used for robot manipulation tasks. Recently, some of these challenges have been addressed by methods such as Guided Policy Search, which guide or limit search directions while training a deep-neural-network-based policy. These frameworks have already been applied to the PR2 robot. However, in robotics, it is not trivial to adapt an algorithm designed for one robot to another. In this paper, we present our implementation of Guided Policy Search on the robotic arms of the Baxter Research Robot. To meet the goals and needs of the project, we build a Baxter Agent class on top of the existing Guided Policy Search implementation, using Baxter's built-in Python interface. This work is expected to help popularize reinforcement learning methods for robot manipulation on cost-effective robot platforms.
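The adaptation pattern this abstract describes, wrapping a new robot behind the agent interface a policy-search trainer expects, can be sketched schematically. The class and method names below are illustrative, not the actual API of the Guided Policy Search codebase or of Baxter's SDK; a trivial simulated robot stands in for the hardware so the sketch runs.

```python
import numpy as np

class Agent:
    """Minimal stand-in for the agent interface a trainer calls."""
    def sample(self, policy, condition):
        raise NotImplementedError

class BaxterAgent(Agent):
    """Adapts a specific robot to the trainer's agent interface.

    On the real robot, reset/step would go through Baxter's
    built-in Python interface (joint commands and joint readings).
    """
    def __init__(self, robot, horizon):
        self.robot = robot
        self.horizon = horizon

    def sample(self, policy, condition):
        states, actions = [], []
        x = self.robot.reset(condition)
        for _ in range(self.horizon):
            u = policy(x)            # query the (neural-network) policy
            states.append(x)
            actions.append(u)
            x = self.robot.step(u)   # send commands, read next state
        return np.array(states), np.array(actions)

# A trivial simulated robot so the sketch runs end-to-end.
class PointRobot:
    def reset(self, condition):
        self.x = np.zeros(2) + condition
        return self.x.copy()
    def step(self, u):
        self.x = self.x + u
        return self.x.copy()

agent = BaxterAgent(PointRobot(), horizon=5)
states, actions = agent.sample(lambda x: -0.1 * x, condition=1.0)
```

Porting the algorithm to a new robot then means rewriting only this agent layer, while the policy-training code above it stays untouched, which is the point the abstract makes about moving from one robot to another.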
Peg-in-hole assembly is the most representative task a robot performs under contact conditions. Various strategies for accomplishing the peg-in-hole task with a robot exist, but the existing strategies are not practical enough for various assembly tasks in a human environment because they require additional sensors or dedicated tools. In this paper, a peg-in-hole assembly experiment is performed with an anthropomorphic hand-arm robot, without extra sensors or devices, using an "intuitive peg-in-hole strategy". From this work, the feasibility of applying the peg-in-hole strategy to common assembly tasks is verified.