Robots for a wide range of purposes have been developed alongside rapid industrialization, and their convenience has been creating new industrial environments. Robots are generally classified into service robots and industrial robots, and robots of various shapes have been built on the basis of autonomous mobile platforms. Autonomous mobile robots may collide with any object within their range of motion. This paper proposes a collision avoidance method that analyzes road context data and moves a robot to a safe area. The proposed method converts the road context data into an information value, analyzes the present risk on the basis of that value, and moves the robot to a safe area when a collision is predicted by the analysis.
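A minimal sketch of this kind of risk-based avoidance is given below; the sensor format (a list of range readings), the braking assumption, the thresholds, and the "widest gap" rule for picking the safe area are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: convert range readings ("road context data") into a
# scalar risk value and trigger an avoidance move when a collision is predicted.

def risk_from_context(ranges_m, robot_speed_mps, safety_margin_m=0.3):
    """Map the closest obstacle distance and current speed to a risk in [0, 1]."""
    closest = min(ranges_m)
    # Assume roughly 1 m/s^2 of braking capability (illustrative value).
    stopping_dist = robot_speed_mps ** 2 / (2 * 1.0) + safety_margin_m
    return min(1.0, stopping_dist / max(closest, 1e-3))

def avoidance_step(ranges_m, robot_speed_mps, risk_threshold=0.8):
    """Return a command: keep course if safe, otherwise head for the widest gap."""
    risk = risk_from_context(ranges_m, robot_speed_mps)
    if risk < risk_threshold:
        return {"action": "keep_course", "risk": risk}
    # Safe area taken as the direction of the largest free range reading.
    safe_index = max(range(len(ranges_m)), key=lambda i: ranges_m[i])
    return {"action": "move_to_safe_area", "sector": safe_index, "risk": risk}
```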
Web-based autonomous mobile robots have already been used in public places such as museums, but many problems remain to be solved because of the limitations of the Web and the dynamically changing environment. We present a methodology for an intelligent mobile robot that demonstrates a degree of autonomy in navigation applications. In this paper, we focus on a mobile robot navigator equipped with a neuro-fuzzy controller that perceives the environment, makes decisions, and takes actions. The neuro-fuzzy controller, which combines a collision avoidance behavior and a target tracing behavior, enables the mobile robot to navigate in a dynamic environment from the start location to the goal location. Most telerobotic systems operating on the Web have used standard Internet techniques such as HTTP, CGI, and scripting languages; for mobile robot navigation, however, these tools have significant limitations. In our study, C# and ASP.NET are used for both the client-side and server-side programs because of their interactivity and quick response. Two kinds of simulations are performed to verify the proposed method: our approach is validated through computer simulations of collision avoidance and target tracing.
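The sketch below illustrates the general idea of blending a collision-avoidance behavior with a target-tracing behavior using a fuzzy-style membership weight; the membership shape, gains, and helper functions are illustrative assumptions and do not reproduce the paper's trained neuro-fuzzy controller.

```python
# Hypothetical sketch of blending a collision-avoidance behavior and a
# target-tracing behavior with a fuzzy-style weight on obstacle proximity.
import math

def target_trace_heading(robot_xy, goal_xy):
    """Heading that points straight at the goal."""
    return math.atan2(goal_xy[1] - robot_xy[1], goal_xy[0] - robot_xy[0])

def avoid_heading(obstacle_bearing_rad):
    """Steer perpendicular to the obstacle bearing."""
    return obstacle_bearing_rad + math.pi / 2

def blended_heading(robot_xy, goal_xy, obstacle_bearing_rad, obstacle_dist_m):
    # Fuzzy-style membership: "obstacle is near" ramps from 1 at 0.2 m to 0 at 1.5 m.
    near = max(0.0, min(1.0, (1.5 - obstacle_dist_m) / (1.5 - 0.2)))
    trace = target_trace_heading(robot_xy, goal_xy)
    avoid = avoid_heading(obstacle_bearing_rad)
    # Weighted blend of the two behaviors (a crude stand-in for fuzzy inference).
    return near * avoid + (1.0 - near) * trace
```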
The aim of this paper is to develop a modular agricultural robot and an autonomous driving algorithm that can be used in field farming. It is difficult to develop a controller for an autonomous agricultural robot whose dynamic characteristics change with the installation of machine modules, so we develop a model-based control algorithm for a rotary machine connected to the agricultural robot. The autonomous control algorithm of the agricultural robot consists of path control, velocity control, and orientation control. To verify the developed algorithm, we use analytical techniques, which have the advantage of reducing development time and risk. The model is formulated with multibody dynamics methods for high accuracy, and its parameters are obtained from the design parameters and measurements of the constructed machine. We then develop a co-simulation that combines the multibody dynamics model and the control model using ADAMS and Matlab Simulink. Using the developed model, we carry out various dynamic simulations at several blade rotation speeds.
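As a rough illustration of the controller side of such a co-simulation loop, the sketch below steps path, orientation, and velocity control against a simple kinematic placeholder; the placeholder plant, waypoint logic, and gains are assumptions, since the actual plant in the paper is the ADAMS multibody model.

```python
# Hypothetical sketch: path / orientation / velocity control loop with a
# kinematic placeholder standing in for the ADAMS multibody plant.
import math

def simulate(waypoints, dt=0.05, steps=2000, v_ref=0.8):
    x, y, yaw, v = 0.0, 0.0, 0.0, 0.0
    wp = 0
    for _ in range(steps):
        if wp >= len(waypoints):
            break
        tx, ty = waypoints[wp]
        # Path control: switch to the next waypoint when close enough.
        if math.hypot(tx - x, ty - y) < 0.2:
            wp += 1
            continue
        # Orientation control: proportional heading correction (wrapped error).
        yaw_err = math.atan2(ty - y, tx - x) - yaw
        yaw_rate = 1.5 * math.atan2(math.sin(yaw_err), math.cos(yaw_err))
        # Velocity control: first-order tracking of the reference speed.
        v += 0.5 * (v_ref - v) * dt
        # Placeholder plant update (the multibody model would replace this step).
        x += v * math.cos(yaw) * dt
        y += v * math.sin(yaw) * dt
        yaw += yaw_rate * dt
    return x, y, yaw

print(simulate([(2.0, 0.0), (2.0, 2.0)]))
```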
This paper presents a control and operation system for a remotely operated vehicle (ROV). The ROV used in the study is equipped with a manipulator and is being developed for underwater exploration and autonomous underwater work. Precise position and attitude control is essential for underwater operation with a manipulator. For propulsion, the ROV is equipped with eight thrusters, more than its six degrees of freedom. Four of them are in charge of surge, sway, and yaw motion, and the other four are responsible for heave, roll, and pitch motion. It is therefore more efficient to manage the thrusters in an integrated way than to control them individually. In this paper, a thrust allocation method for thruster management is presented, and the design of a feedback controller using sensor data is described. The ROV operation software is built on a robot operating system that can efficiently process data between multiple hardware platforms. The validity of the control system's performance was verified through experimental analysis.
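A common way to realize this kind of integrated thruster management is pseudo-inverse thrust allocation; the sketch below distributes a desired six-axis wrench over eight thrusters using an allocation matrix whose geometry is made up for illustration (the real matrix and the paper's specific allocation method would come from the ROV's thruster layout).

```python
# Hypothetical thrust-allocation sketch: map a desired 6-DOF wrench
# [surge, sway, heave, roll, pitch, yaw] to eight thruster commands with the
# Moore-Penrose pseudo-inverse of an illustrative allocation matrix B.
import numpy as np

# Each column describes how one thruster contributes to the six axes.
# Four thrusters act in the horizontal plane (surge/sway/yaw), four vertically
# (heave/roll/pitch), matching the configuration described in the abstract.
B = np.array([
    #  T1   T2   T3   T4   T5   T6   T7   T8
    [ 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0],  # surge
    [ 0.5,-0.5, 0.5,-0.5, 0.0, 0.0, 0.0, 0.0],  # sway
    [ 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0],  # heave
    [ 0.0, 0.0, 0.0, 0.0, 0.3,-0.3, 0.3,-0.3],  # roll
    [ 0.0, 0.0, 0.0, 0.0, 0.4, 0.4,-0.4,-0.4],  # pitch
    [ 0.3,-0.3,-0.3, 0.3, 0.0, 0.0, 0.0, 0.0],  # yaw
])

def allocate(wrench, max_thrust=40.0):
    """Distribute the desired wrench over the thrusters and saturate each one."""
    u = np.linalg.pinv(B) @ np.asarray(wrench, dtype=float)
    return np.clip(u, -max_thrust, max_thrust)

print(allocate([10.0, 0.0, -5.0, 0.0, 0.0, 1.0]))
```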
In this paper, we propose a new guidance line extraction algorithm for an autonomous agricultural robot based on a vision camera in a paddy field. Finding the central point or area of a rice row is the key step in guidance line extraction. To improve the accuracy of the guidance line, we exploit the fact that the directions of the rice leaves converge toward the central area of the rice row. The guidance line is extracted from the intersection points of extended virtual lines using a modified robust regression. Each extended virtual line is the extension of a segmented straight line detected on the edges of the rice plants in the image using the Hough transform. We verify the accuracy of the proposed algorithm through experiments in a real wet paddy.
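The sketch below illustrates the intersection-and-fit stage of this kind of pipeline: the input is a list of line segments (for example from cv2.HoughLinesP on the edge image), their pairwise intersections are computed, and a line is fitted robustly to those points. The iteratively reweighted least squares used here is a generic stand-in for the paper's modified robust regression, and all parameter values are assumptions.

```python
# Hypothetical sketch: intersect extended Hough segments and fit a guidance
# line robustly to the intersection points.
import itertools
import numpy as np

def intersections(segments):
    """Intersect every pair of segments (x1, y1, x2, y2) treated as infinite lines."""
    pts = []
    for (x1, y1, x2, y2), (x3, y3, x4, y4) in itertools.combinations(segments, 2):
        d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        if abs(d) < 1e-9:
            continue  # parallel lines have no intersection
        px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
        py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
        pts.append((px, py))
    return np.array(pts)

def robust_line(points, iters=10):
    """Fit x = a*y + b with Huber-style reweighting (robust to outlier intersections)."""
    y, x = points[:, 1], points[:, 0]
    w = np.ones_like(x)
    a = b = 0.0
    for _ in range(iters):
        A = np.stack([y, np.ones_like(y)], axis=1)
        a, b = np.linalg.lstsq(A * w[:, None], x * w, rcond=None)[0]
        r = np.abs(x - (a * y + b))
        w = 1.0 / np.maximum(r, 1.0)  # downweight points with large residuals
    return a, b
```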
In this paper, we propose a new guidance line extraction algorithm for an autonomous weeding robot based on an infrared vision sensor in a wet paddy. Finding the central point or area of a rice row is the critical step in guidance line extraction. To improve the accuracy of the guidance line, we use the morphological characteristic of rice that the directions of the leaves converge toward the central area of the rice row. Using the Hough transform, we represent the curved leaves as a combination of segmented straight lines on a binary image that has been skeletonized and segmented into objects. The slope of the guidance line is obtained by averaging the slopes of all segmented lines. The initial point of the guidance line is taken as the column with the maximum accumulated white-pixel count after the binary image is rotated by the guidance line slope in the opposite direction. We verify the accuracy of the proposed algorithm through experiments in a real wet paddy.
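A minimal sketch of the slope-averaging and column-accumulation steps is given below; it assumes the skeletonized binary image is a NumPy array and the segments come from a Hough transform (e.g. cv2.HoughLinesP), and it uses scipy.ndimage.rotate for the inverse rotation. The specific pre-processing and thresholds of the paper are not reproduced.

```python
# Hypothetical sketch: average the Hough segment slopes, then rotate the binary
# image back by that slope and pick the column with the most white pixels.
import numpy as np
from scipy import ndimage

def guidance_slope(segments):
    """Average the angle of all segmented lines (x1, y1, x2, y2), in radians."""
    angles = [np.arctan2(y2 - y1, x2 - x1) for x1, y1, x2, y2 in segments]
    return float(np.mean(angles))

def guidance_initial_point(binary_img, slope_rad):
    """Rotate the image by the opposite of the slope and return the column
    with the largest accumulated white-pixel count."""
    rotated = ndimage.rotate(binary_img, -np.degrees(slope_rad), reshape=False, order=0)
    column_sums = (rotated > 0).sum(axis=0)
    return int(np.argmax(column_sums))
```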
This paper describes efficient flight control algorithms for building reconfigurable ad-hoc wireless sensor networks between ground nodes and airborne nodes mounted on autonomous vehicles, in order to increase the operational range of an aerial robot or its communication connectivity. Two autonomous flight control algorithms based on an adaptive gradient climbing approach are developed to steer the aerial vehicles to optimal locations for maximum communication throughput in the airborne sensor network. The first control algorithm seeks the source of a scalar signal directly with an extremum-seeking forward surge control approach that requires no position information of the aerial vehicle. The second flight control algorithm produces an angular rate command by integrating an adaptive gradient climbing technique that uses an on-line gradient estimator to identify the derivative of a performance cost function. Both incorporate the network performance into the feedback path to mitigate interference and noise. A communication propagation model is used to predict the link quality of the communication connectivity between distributed nodes. A simulation study is conducted to evaluate the effectiveness of the proposed reconfigurable airborne wireless networking control algorithms.
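The sketch below conveys the gradient-climbing idea behind the second controller: a link-quality cost is sampled along the flight path, its gradient is estimated on-line (here by a least-squares plane fit over recent samples), and the estimate is turned into surge and angular-rate commands. The estimator, gains, and window length are illustrative assumptions, not the paper's adaptive scheme.

```python
# Hypothetical sketch: on-line gradient estimation of a performance cost
# J(x, y) driving heading-rate and surge commands toward better throughput.
import numpy as np

class GradientClimber:
    def __init__(self, k_yaw=1.0, surge=5.0):
        self.k_yaw, self.surge = k_yaw, surge
        self.history = []  # recent (x, y, J) samples for the gradient estimator

    def update(self, x, y, J, heading):
        self.history.append((x, y, J))
        self.history = self.history[-10:]
        if len(self.history) < 3:
            return self.surge, 0.0  # not enough data yet: fly straight
        pts = np.array(self.history)
        # Least-squares plane fit J ~ gx*x + gy*y + c gives the gradient estimate.
        A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
        gx, gy, _ = np.linalg.lstsq(A, pts[:, 2], rcond=None)[0]
        desired = np.arctan2(gy, gx)  # direction of steepest improvement
        err = np.arctan2(np.sin(desired - heading), np.cos(desired - heading))
        return self.surge, self.k_yaw * err  # (surge command, angular-rate command)
```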
Recently, many vision-based navigation methods have been introduced as intelligent robot applications. However, many of these methods mainly focus on finding the database image that corresponds to a query image. Thus, if the environment changes, for example when objects move within it, a robot is unlikely to find consistent corresponding points with one of the database images. To solve these problems, we propose a novel navigation strategy that uses fast motion estimation and a practical scene recognition scheme prepared for the kidnapping problem, which is defined as the problem of re-localizing a mobile robot after it has undergone an unknown motion or visual occlusion. The algorithm is based on camera-based motion estimation to plan the next movement of the robot and an efficient outlier rejection algorithm for scene recognition. Experimental results demonstrate the capability of the vision-based autonomous navigation in dynamic environments.
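To make the outlier-rejection idea concrete, the sketch below scores matched keypoint pairs between a query image and each database image with a simple RANSAC under a translation-only model and picks the image with the most geometric inliers; the model, threshold, and selection rule are assumptions for illustration rather than the paper's actual rejection scheme.

```python
# Hypothetical sketch: RANSAC-style inlier counting on keypoint matches as a
# simple outlier-rejection step for scene recognition.
import random

def inlier_count(matches, threshold_px=5.0, trials=100):
    """matches: list of ((xq, yq), (xd, yd)) keypoint pairs between two images."""
    if not matches:
        return 0
    best = 0
    for _ in range(trials):
        (xq, yq), (xd, yd) = random.choice(matches)
        dx, dy = xd - xq, yd - yq  # hypothesized image-to-image translation
        count = sum(
            1 for (qx, qy), (px, py) in matches
            if abs((px - qx) - dx) < threshold_px and abs((py - qy) - dy) < threshold_px
        )
        best = max(best, count)
    return best

def recognize_scene(query_matches_per_db_image):
    """Pick the database image whose matches have the most geometric inliers."""
    scores = {name: inlier_count(m) for name, m in query_matches_per_db_image.items()}
    return max(scores, key=scores.get)
```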
A path planning method for an autonomous mobile robot is considered. For practical applications, simplified local potential field methods are applied under the constraints of the driving conditions. To improve performance, a fuzzy-approximated linear function method is also used.
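A minimal sketch of a simplified local potential field step is given below: an attractive pull toward the goal plus repulsive pushes from obstacles within an influence radius, evaluated only in the robot's neighborhood. The gains and radius are illustrative, and the fuzzy approximation stage of the paper is not included.

```python
# Hypothetical sketch: one local potential field step combining an attractive
# goal force and repulsive obstacle forces into a commanded heading.
import math

def potential_step(robot, goal, obstacles, k_att=1.0, k_rep=0.5, influence=1.5):
    # Attractive force toward the goal.
    fx = k_att * (goal[0] - robot[0])
    fy = k_att * (goal[1] - robot[1])
    # Repulsive forces from obstacles inside the influence radius.
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy)
        if 1e-6 < d < influence:
            gain = k_rep * (1.0 / d - 1.0 / influence) / d**2
            fx += gain * dx / d
            fy += gain * dy / d
    return math.atan2(fy, fx)  # commanded heading for this step

print(potential_step((0.0, 0.0), (5.0, 0.0), [(2.0, 0.3)]))
```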