In this study, we evaluate artificial neural network (ANN) models that estimate the positions of gamma-ray sources from a plastic scintillating fiber (PSF)-based radiation detection system using different filtering ratios. The PSF-based radiation detection system consists of a single-stranded PSF, two photomultiplier tubes (PMTs) that convert the scintillation signals into electric signals, amplifiers, and a data acquisition (DAQ) system. The source used to evaluate the system is Cs-137, with a photopeak of 662 keV and a dose rate of about 5 μSv/h. We construct ANN models with the same structure but different training data. For the training data, we selected a measurement time of 1 minute to secure a sufficient number of data points, whereas we chose a measurement time of 10 seconds for extracting time-difference data from the primary data, followed by filtering. During the filtering process, we identified the peak heights of the Gaussian-fitted curves obtained from the histograms of the time-difference data and extracted the data lying above a threshold equal to the peak height multiplied by a predetermined percentage. We used percentage values of 0, 20, 40, and 60 for the filtering. The results indicate that the filtering affects the position estimation error, which we define as the absolute value of the difference between the estimated and actual source positions. The ANN model trained on raw data shows a total average error of 1.391 m, while the model trained on 20%-filtered data shows a total average error of 0.263 m. Similarly, the 40%-filtered data yield a total average error of 0.119 m, and the 60%-filtered data yield a total average error of 0.0452 m. From the perspective of the total average error, it is clear that the more strongly the data are filtered, the more accurate the result. Further study will be conducted to optimize the filtering ratio and measurement time for the system by evaluating the stabilization time for position estimation of the source.
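A minimal sketch of the peak-ratio filtering step described above, assuming NumPy and SciPy are available; the function names, bin count, and initial-guess strategy are illustrative choices, not details taken from the paper:

```python
# Illustrative sketch of peak-ratio filtering of time-difference data.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, mu, sigma):
    # a = peak height, mu = center, sigma = width of the fitted curve.
    return a * np.exp(-(t - mu) ** 2 / (2 * sigma ** 2))

def filter_time_differences(t_diff, ratio, bins=100):
    """Keep samples where the Gaussian fit to the histogram exceeds
    (peak height * ratio); ratio = 0.0, 0.2, 0.4, 0.6 in the study."""
    t_diff = np.asarray(t_diff, dtype=float)
    counts, edges = np.histogram(t_diff, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Initial guesses: peak count, sample mean, sample standard deviation.
    p0 = [counts.max(), t_diff.mean(), t_diff.std()]
    (a, mu, sigma), _ = curve_fit(gaussian, centers, counts, p0=p0)
    if ratio <= 0:
        return t_diff  # 0% filtering keeps the raw data.
    # The fitted curve exceeds a*ratio only inside a symmetric window
    # around mu: |t - mu| < sigma * sqrt(-2 * ln(ratio)).
    half_width = abs(sigma) * np.sqrt(-2.0 * np.log(ratio))
    return t_diff[np.abs(t_diff - mu) < half_width]
```

Because the fitted curve is Gaussian, thresholding at ratio × peak reduces to keeping samples within a symmetric window around the fitted mean, which is what the mask implements.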
This study introduces an accurate method for correcting the bearing (heading) error of mobile robots using the Stargazer sensor. Mobile robots require several vital functions for autonomous navigation, including map building, localization, path planning, and obstacle avoidance. In most cases, estimating the angular pose of the robot is essential because the result strongly affects the performance of the other functions. We demonstrated the validity of the proposed method through real experiments and applied it to a photographer robot to correct the bearing error at the moment a picture is taken.
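As a generic illustration of such a correction (a textbook angle-correction sketch, not the paper's specific method), the code below blends an odometry heading toward an absolute heading reading such as one from the Stargazer; the gain parameter and function names are assumptions:

```python
# Generic heading correction against an absolute reference sensor.
import math

def wrap_angle(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def corrected_heading(odometry_heading, sensor_heading, gain=1.0):
    """Blend the odometry heading toward the absolute sensor heading.
    gain=1.0 replaces odometry with the sensor value; gain<1.0 applies
    a partial, complementary-filter-style correction."""
    error = wrap_angle(sensor_heading - odometry_heading)
    return wrap_angle(odometry_heading + gain * error)
```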
All-surface, all-tooth machining and roll forming of cast iron have been used to manufacture the crankshaft position sensor wheel (CPSW). However, these methods pose many problems, such as difficult processing, high material cost, and low tooth precision. Thus, we developed a sintered CPSW with improved detection ability to resolve the problems of the previous manufacturing methods by simplifying the process flow and improving tooth precision. The sintering process is introduced in this study. We conducted an experiment to compare the sintered and roll-formed products and analyzed the results to evaluate the reliability of the sintering process. Furthermore, we compared and analyzed the stress and displacement in the sintered and roll-formed products using the finite element method (FEM). According to the experimental and FEM results, the sintered product showed satisfactory mechanical properties; it was less expensive to process, lighter, and of better quality than the roll-formed product. The results of this study could be applied to designing an optimal CPSW using the sintering process.
Multi-floor navigation of a mobile robot requires technology that allows the robot to safely get on and off an elevator. Therefore, in this study, we propose a method of recognizing the elevator from the current position of the robot and locally estimating its location so that the robot can board safely regardless of the position error accumulated during autonomous navigation. The proposed method uses a deep-learning-based image classifier to identify the elevator in the image information obtained from an RGB-D sensor and extracts the boundary points between the elevator and the surrounding wall from the point cloud. This enables the robot to estimate a reliable position and boarding direction in real time for general elevators. Various experiments demonstrate the effectiveness and accuracy of the proposed method.
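As a rough illustration of how two door/wall boundary points could yield a boarding pose (the paper does not give this computation; the geometry and point-ordering convention below are assumptions):

```python
# Illustrative boarding-pose estimate from two elevator/wall boundary points.
import numpy as np

def boarding_pose(left_pt, right_pt):
    """Given the 3D boundary points between the elevator and the wall
    (in the robot/sensor frame), return the door-center position and a
    horizontal unit vector normal to the door line (the boarding
    direction, up to the sign implied by the point ordering)."""
    left = np.asarray(left_pt, dtype=float)
    right = np.asarray(right_pt, dtype=float)
    center = 0.5 * (left + right)        # 3D midpoint of the doorway.
    door = right[:2] - left[:2]          # Door direction in the ground plane.
    normal = np.array([-door[1], door[0]])  # Rotate +90 deg for the normal.
    normal /= np.linalg.norm(normal)
    return center, normal
```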