In this paper, a design for the vehicle body of an armored robot for complex disasters is described. The proposed design considers various requirements of complex disaster situations, where fire, explosion, and poisonous gas may occur simultaneously. Therefore, the armored robot needs a vehicle body that can protect people from falling objects, high temperatures, and poisonous gas. In addition, it should provide intuitive control devices and realistic surrounding views to help the operator respond to emergency situations. To fulfill these requirements, firstly, the frame was designed to withstand the impact of falling objects. Secondly, a positive-pressure device and a cooling device were applied. Thirdly, a panoramic view was implemented that enables real-time observation of the surroundings through a number of image sensors. Finally, the cockpit in the vehicle body was designed with a focus on the operability of the armored robot at disaster sites.
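The abstract does not describe how the panoramic view is implemented, so the following Python/OpenCV sketch only illustrates one simple approach under assumptions: frames from several image sensors are grabbed, resized to a common height, and tiled into a single surround view for the operator. The camera indices, resolution, and the use of plain concatenation (rather than true image stitching) are all hypothetical.

```python
import cv2

# Hypothetical camera indices for image sensors mounted around the vehicle body;
# the actual number and layout of sensors are not specified in the paper.
CAMERA_IDS = [0, 1, 2, 3]

def open_cameras(ids):
    caps = [cv2.VideoCapture(i) for i in ids]
    return [c for c in caps if c.isOpened()]

def panoramic_frame(caps, height=360):
    """Grab one frame per camera, resize to a common height, and tile side by side."""
    frames = []
    for cap in caps:
        ok, frame = cap.read()
        if not ok:
            continue
        scale = height / frame.shape[0]
        frames.append(cv2.resize(frame, None, fx=scale, fy=scale))
    return cv2.hconcat(frames) if frames else None

caps = open_cameras(CAMERA_IDS)
while True:
    view = panoramic_frame(caps)
    if view is not None:
        cv2.imshow("surround view", view)   # operator's panoramic display
    if cv2.waitKey(1) & 0xFF == ord("q"):   # press 'q' to exit
        break
for cap in caps:
    cap.release()
cv2.destroyAllWindows()
```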
In this paper, we introduce a target-position reasoning system based on a Bayesian network that selects destinations for robots on a map when exploring compound disaster environments. Compound disaster sites are hazardous because of low visibility and high temperatures. Before firefighters enter such an environment, the robots report information in advance, such as the positions and number of victims and the status of building debris. The problem with the previous system is that it requires a target position to operate the robots and that firefighters need to learn how to use them. However, selecting the target position is not easy because of the information gap between eyewitness accounts and map coordinates. In addition, learning how to operate the robots requires considerable time and money. The proposed system infers the target area using a Bayesian network and selects appropriate x, y coordinates on the map using image-processing methods applied to the map. To verify the proposed system, we designed three example scenarios based on eyewitness testimonies and compared the time consumption of a human operator with that of the system. In addition, we evaluated the system's usability with 40 subjects.
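The abstract does not specify the network structure or the map-processing pipeline, so the following Python sketch only shows the general idea under assumed values: a posterior over candidate map areas is computed from discrete eyewitness evidence (treated as conditionally independent, naive-Bayes style), and the x, y target is taken from the most probable area's centroid on an occupancy grid. The area names, probabilities, and the `infer_target` and `select_coordinates` helpers are hypothetical, not the authors' implementation.

```python
import numpy as np

# Hypothetical candidate areas on the floor map, each with a grid-cell centroid (x, y).
AREAS = {"lobby": (12, 5), "stairwell": (3, 18), "office_wing": (25, 9)}

# Assumed priors and per-area likelihoods for two pieces of eyewitness evidence;
# all numbers are illustrative, not taken from the paper.
PRIOR = {"lobby": 0.4, "stairwell": 0.3, "office_wing": 0.3}
LIKELIHOOD = {
    "smoke_seen":   {"lobby": 0.7, "stairwell": 0.2, "office_wing": 0.4},
    "heard_voices": {"lobby": 0.3, "stairwell": 0.8, "office_wing": 0.5},
}

def infer_target(evidence):
    """Posterior over areas via Bayes' rule with conditionally independent evidence."""
    posterior = dict(PRIOR)
    for obs, observed in evidence.items():
        for area in posterior:
            p = LIKELIHOOD[obs][area]
            posterior[area] *= p if observed else (1.0 - p)
    total = sum(posterior.values())
    return {area: value / total for area, value in posterior.items()}

def select_coordinates(posterior, occupancy_grid):
    """Pick the most probable area, then return its centroid if that cell is free."""
    best = max(posterior, key=posterior.get)
    x, y = AREAS[best]
    if occupancy_grid[y, x] == 0:          # 0 = traversable cell
        return best, (x, y)
    # Fall back to the nearest free cell (Manhattan distance) if the centroid is blocked.
    free = np.argwhere(occupancy_grid == 0)
    yx = free[np.argmin(np.abs(free - [y, x]).sum(axis=1))]
    return best, (int(yx[1]), int(yx[0]))

grid = np.zeros((30, 30), dtype=int)       # toy occupancy grid: all cells traversable
post = infer_target({"smoke_seen": True, "heard_voices": True})
print(select_coordinates(post, grid))      # e.g. ('lobby', (12, 5))
```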