QR Code robots play an important role in smart logistics. Logistics robots have traditionally adjusted their routes by relying on peripheral sensors, cameras, and recognition markers attached to walls. Recently, however, the ease of generating QR Codes and the convenience of encoding large amounts of information in them have made QR Codes attractive as vision-recognizable landmarks. In addition, there are cases in Korea and other developed countries where several such robots are controlled simultaneously to operate logistics warehouses smartly; the representative example is Amazon's KIVA robot. KIVA robots operate only inside Amazon facilities, and information about them is not disclosed, so a variety of similar robots have been developed and deployed around the world. Such robots are also applied in fields as diverse as education, medicine, elder care, the military, parking, construction, the marine industry, and agriculture. In this work, we develop a robot that recognizes its current position and moves under control in a commanded direction using square two-dimensional QR Codes, with a target positioning accuracy within 3 mm. This paper focuses on indoor position recognition and driving experiments using QR Codes during the development of a QR Code-aware indoor mobile robot.
QR Code robots play an important role in smart logistics. Logistics robots have traditionally adjusted their routes by relying on peripheral sensors, cameras, and recognition markers attached to walls. Recently, however, the ease of generating QR Codes and the convenience of encoding large amounts of information in them have made QR Codes attractive as vision-recognizable landmarks. In addition, there are cases in Korea and other developed countries where several such robots are controlled simultaneously to operate logistics warehouses smartly; the representative example is Amazon's KIVA robot. KIVA robots operate only inside Amazon facilities, and information about them is not disclosed, so a variety of similar robots have been developed and deployed around the world. Such robots are also applied in fields as diverse as education, medicine, elder care, the military, parking, construction, the marine industry, and agriculture. In this work, we develop a robot that recognizes its current position and moves under control in a commanded direction using square two-dimensional QR Codes, with a target positioning accuracy within 3 mm. This paper presents a study on the driving directions of QR Code-aware mobile robots during the development of a QR Code-aware indoor mobile robot.
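The position-recognition idea above can be illustrated with a minimal geometric sketch. None of the names or parameters below come from the paper; it merely shows how, once the four corner pixels of a square floor-mounted QR Code are detected, the robot's heading error and lateral offset from the code centre follow from elementary geometry.

```python
import math

def pose_from_corners(corners, mm_per_px, image_center):
    """Hypothetical sketch: recover heading and offset from a detected code.

    corners: corner pixels in order [top-left, top-right, bottom-right,
    bottom-left]; mm_per_px: assumed known camera scale on the floor plane.
    """
    # Code centre in the image is the mean of the four corners.
    cx = sum(x for x, _ in corners) / 4.0
    cy = sum(y for _, y in corners) / 4.0
    # Heading error: angle of the top edge (top-left -> top-right).
    (x0, y0), (x1, y1) = corners[0], corners[1]
    heading = math.atan2(y1 - y0, x1 - x0)
    # Lateral offset of the robot (camera axis) from the code centre, in mm.
    dx = (cx - image_center[0]) * mm_per_px
    dy = (cy - image_center[1]) * mm_per_px
    return heading, dx, dy
```

For example, an axis-aligned code centred in the image yields zero heading error and zero offset; any rotation or translation of the code in the image maps directly to a correction command.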
Multi-floor navigation of a mobile robot requires technology that allows the robot to board and exit an elevator safely. In this study, we therefore propose a method for recognizing the elevator from the robot's current position and estimating its location locally, so that the robot can board the elevator safely regardless of the position error accumulated during autonomous navigation. The proposed method uses a deep-learning-based image classifier to identify the elevator in the image information obtained from an RGB-D sensor, and extracts the boundary points between the elevator and the surrounding wall from the point cloud. This enables the robot to estimate a reliable position and boarding direction for general elevators in real time. Various experiments demonstrate the effectiveness and accuracy of the proposed method.
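The boundary-point extraction step can be illustrated with a toy sketch (an assumption for illustration, not the authors' implementation): along a horizontal slice of the depth data, the door frame shows up where the measured range jumps sharply between the open elevator interior and the surrounding wall.

```python
def boundary_indices(ranges, jump=0.5):
    """Return the scan indices where consecutive ranges differ by more
    than `jump` metres -- toy stand-in for door-frame boundary detection."""
    return [i for i in range(1, len(ranges))
            if abs(ranges[i] - ranges[i - 1]) > jump]
```

On a scan such as `[2.0, 2.0, 2.0, 5.0, 5.0, 2.0, 2.0]` (wall at 2 m, elevator interior at 5 m), the two returned indices bracket the door opening, and their midpoint gives a candidate boarding direction.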
In this paper, we propose a new method for improving the accuracy of indoor robot localization. The proposed method uses visible light for indoor localization, with a reference receiver that estimates the optical power of each individual LED in order to reduce localization errors caused by, among other factors, the aging of LED components and the differing optical power of individual LEDs. We evaluate the performance of the proposed method by comparing it with that of a traditional model. In several simulations, probability density functions and cumulative distribution functions of the localization errors are also obtained. The results indicate that the proposed method reduces the localization error from 7.3 cm to 1.6 cm at a precision of 95%.
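The underlying visible-light positioning pipeline can be sketched as follows. The Lambertian channel model and the trilateration step are standard in this literature; the specific parameter values (Lambertian order, detector area, LED height) are placeholders, not values from the paper. A mis-estimated transmit power in `power_to_range` is exactly the error source the paper's reference receiver is meant to calibrate away.

```python
import math

# Hypothetical parameters: Lambertian order M, detector area A (m^2),
# LED height H (m) above the receiver plane, LEDs pointing straight down.
M, A, H = 1.0, 1e-4, 2.5

def received_power(p_t, led_xy, rx_xy):
    """Lambertian line-of-sight channel model."""
    dx, dy = rx_xy[0] - led_xy[0], rx_xy[1] - led_xy[1]
    d = math.sqrt(dx * dx + dy * dy + H * H)
    cos_ang = H / d  # irradiance angle equals incidence angle here
    return p_t * (M + 1) * A / (2 * math.pi * d * d) * cos_ang ** (M + 1)

def power_to_range(p_r, p_t):
    """Invert the channel model to get the horizontal LED-receiver range."""
    d = (p_t * (M + 1) * A * H ** (M + 1) / (2 * math.pi * p_r)) ** (1 / (M + 3))
    return math.sqrt(max(d * d - H * H, 0.0))

def trilaterate(leds, ranges):
    """2-D position from three LED ranges via the linearized circle
    equations (subtract the first circle, solve the 2x2 system)."""
    (x1, y1), r1 = leds[0], ranges[0]
    a, b = [], []
    for (xi, yi), ri in zip(leds[1:], ranges[1:]):
        a.append((2 * (xi - x1), 2 * (yi - y1)))
        b.append(r1 * r1 - ri * ri + xi * xi - x1 * x1 + yi * yi - y1 * y1)
    (a11, a12), (a21, a22) = a
    det = a11 * a22 - a12 * a21
    x = (b[0] * a22 - a12 * b[1]) / det
    y = (a11 * b[1] - b[0] * a21) / det
    return x, y
```

With an exact power model the receiver position is recovered exactly; biasing `p_t` in `power_to_range` reproduces the aging-induced errors the paper targets.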
This paper proposes an underwater localization algorithm using probabilistic object recognition. It is organized as follows: 1) recognizing artificial objects using imaging sonar, and 2) localizing the recognized objects and the vehicle using EKF (Extended Kalman Filter)-based SLAM. For this purpose, we develop artificial landmarks that can be recognized even in the unstable, noise-corrupted sonar images, and we propose a probabilistic recognition framework. In this way, the distance and bearing of the recognized artificial landmarks are acquired to localize the underwater vehicle. Using the recognized objects, EKF-based SLAM is carried out, yielding the path of the underwater vehicle and the locations of the landmarks. The proposed localization algorithm is verified by experiments in a water basin.
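The core of the EKF-SLAM correction step with a range-bearing observation, as used here, can be sketched for a single landmark. The state layout `[x, y, theta, lx, ly]`, noise values, and test geometry are illustrative assumptions; the measurement model and Jacobian are the standard ones for range-bearing SLAM.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def h(state):
    """Range-bearing measurement of landmark (lx, ly) from pose (x, y, th)."""
    x, y, th, lx, ly = state
    dx, dy = lx - x, ly - y
    return np.array([np.hypot(dx, dy), wrap(np.arctan2(dy, dx) - th)])

def jacobian_h(state):
    """Jacobian of h w.r.t. the full state [x, y, th, lx, ly]."""
    x, y, th, lx, ly = state
    dx, dy = lx - x, ly - y
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    return np.array([
        [-dx / r, -dy / r,  0.0,  dx / r,  dy / r],
        [ dy / q, -dx / q, -1.0, -dy / q,  dx / q],
    ])

def ekf_update(mu, P, z, R):
    """One EKF correction: fold the sonar observation z into (mu, P)."""
    H = jacobian_h(mu)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    innov = z - h(mu)
    innov[1] = wrap(innov[1])          # keep the bearing residual wrapped
    mu_new = mu + K @ innov
    P_new = (np.eye(len(mu)) - K @ H) @ P
    return mu_new, P_new
```

In the full algorithm this update runs once per recognized landmark, with the recognition framework supplying the data association.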
In this paper, we present a global localization and position-error compensation method for a known indoor environment using magnetic Hall sensors. In our previous research, the pose errors x_e, y_e, and θ_e could be compensated correctly on an indoor floor equipped with magnet sets by regularly arranging magnet sets of an identical pattern. To improve on that method, this paper presents a new strategy that realizes global localization by changing the arrangement of the magnet poles. Six patterns of magnet sets in total form unique landmarks, so a virtual map can be built by placing the six landmarks at random. The robot searches for the pattern of a magnet set by rotating, and obtains its current global pose by comparing the measured neighboring patterns with the map information saved in advance. We provide experimental results that show the effectiveness of the proposed method on a differential-drive wheeled mobile robot.
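The map-lookup idea can be sketched as follows. The grid contents and window size are invented for illustration: each cell holds one of the six magnet-set pattern IDs, and the robot localizes globally by finding the unique place in the stored map whose neighbourhood matches the patterns it has just measured.

```python
# Hypothetical stored map: each cell is one of six pattern IDs (0-5).
WORLD = [
    [0, 1, 2, 3],
    [4, 5, 0, 2],
    [1, 3, 5, 4],
]

def locate(window):
    """Find the unique (row, col) whose 2x2 neighbourhood of pattern IDs
    matches the measured `window`; None if ambiguous or absent."""
    hits = []
    for r in range(len(WORLD) - 1):
        for c in range(len(WORLD[0]) - 1):
            patch = [[WORLD[r][c],     WORLD[r][c + 1]],
                     [WORLD[r + 1][c], WORLD[r + 1][c + 1]]]
            if patch == window:
                hits.append((r, c))
    return hits[0] if len(hits) == 1 else None
```

The random placement of the six landmark types is what makes small neighbourhoods globally unique with high probability; a real system would also match the window under the four possible robot orientations.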
This paper proposes a low-complexity indoor localization method for a mobile robot in a dynamic environment that fuses landmark image information from an ordinary camera with distance information from sensor nodes deployed in the environment. The sensor network provides an effective way for the mobile robot to adapt to environmental changes and guides it across a geographical network area. To enhance localization performance, we use an ordinary CCD camera together with artificial landmarks devised for self-localization. Experimental results show that real-time localization of the mobile robot can be achieved robustly and accurately using the proposed method.
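The paper does not spell out its fusion rule, but the generic way to combine a camera-based estimate with a sensor-node range estimate, each with its own uncertainty, is minimum-variance (inverse-variance) weighting, sketched here per coordinate:

```python
def fuse(est_a, var_a, est_b, var_b):
    """Minimum-variance fusion of two independent scalar estimates:
    weight each estimate by the inverse of its variance."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    return (w_a * est_a + w_b * est_b) / (w_a + w_b)
```

An estimate with a smaller variance dominates: fusing a camera fix at 0.0 m (variance 1) with a node range fix at 4.0 m (variance 3) yields 1.0 m, pulled toward the more confident sensor.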
Recently, simultaneous localization and mapping (SLAM) approaches employing the Rao-Blackwellized particle filter (RBPF) have shown good results. However, no research has analyzed how to represent the result of RBPF-based SLAM (RBPF-SLAM) when particle diversity is preserved: after the particle filtering is finished, the results, such as the map and the path, are stored in separate particles. We therefore propose several result representations and provide an analysis of them, considering the estimation errors, their variances, and the consistency of RBPF-SLAM. According to the simulation results, combining the data of all particles yields, with high probability, a better result than using the data of a single particle such as the highest-weighted one.
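The two representations being compared can be sketched on a toy particle set. The particle data and likelihood below are invented for illustration; the point is only the contrast between taking the single highest-weighted particle and combining all particles by their weights.

```python
import math
import random

random.seed(0)
TRUE_POSE = (1.0, 2.0)

# Toy particle set: pose samples scattered around the true pose, with a
# weight given by an assumed Gaussian observation likelihood.
particles = []
for _ in range(200):
    x = TRUE_POSE[0] + random.gauss(0, 0.3)
    y = TRUE_POSE[1] + random.gauss(0, 0.3)
    w = math.exp(-((x - TRUE_POSE[0]) ** 2 + (y - TRUE_POSE[1]) ** 2) / 0.2)
    particles.append((x, y, w))

total = sum(w for _, _, w in particles)

def best_particle():
    """Highest-weighted-particle representation."""
    return max(particles, key=lambda p: p[2])[:2]

def weighted_mean():
    """Combined representation: weight-averaged pose over all particles."""
    return (sum(x * w for x, _, w in particles) / total,
            sum(y * w for _, y, w in particles) / total)
```

For maps the same contrast applies cell-by-cell (occupancy averaged over particles versus one particle's map), which is where the preserved particle diversity pays off.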
This paper presents a multi-robot localization method based on multidimensional scaling (MDS) that tolerates incomplete and noisy data. While traditional MDS algorithms operate on a full-rank distance matrix, in the real world much of the data may be missing due to occlusions; moreover, these algorithms do not account for the uncertainty caused by noisy observations. We propose a robust MDS that handles both incomplete and noisy data and apply it to the multi-robot localization problem. To deal with incomplete data, we use the Nyström approximation, which approximates the full distance matrix. To deal with uncertainty, we formulate a Bayesian framework for MDS that finds the posterior over the object coordinates by statistical inference. We verify the performance of MDS-based multi-robot localization both in computer simulations and in a real-world implementation on a multi-robot team. Extensive empirical results show that the accuracy of the proposed method is comparable to that of Monte Carlo Localization (MCL).
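The classical-MDS baseline that the robust variant builds on is compact enough to show in full: double-centre the squared inter-robot distance matrix and read the coordinates off the top eigenvectors. This recovers the configuration only up to rotation, reflection, and translation, which is all relative multi-robot localization needs. (The Nyström and Bayesian extensions from the paper are not reproduced here.)

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: coordinates (up to rigid motion and reflection)
    from a complete matrix D of pairwise Euclidean distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]            # top `dim` eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

For noise-free Euclidean distances the recovered configuration reproduces every pairwise distance exactly; occlusions (missing entries of D) are precisely what break this baseline and motivate the Nyström approximation.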
Based on object recognition technology, we present a new global localization method for robot navigation. To this end, we model an indoor environment with a stereo camera using the following visual cues: view-based image features for object recognition, and their 3D positions for object pose estimation. We also use the depth information along the horizontal centerline of the image, through which the optical axis passes; this is similar to the data of a 2D laser range finder. We can therefore build a hybrid local node for a topological map that combines a metric map of the indoor environment with an object location map. Based on this modeling, we propose a coarse-to-fine strategy for estimating the global pose of a mobile robot: a coarse pose is obtained by object recognition and SVD-based least-squares fitting, and the refined pose is then estimated with a particle filtering algorithm. Real-world experiments show that the proposed method is an effective vision-based global localization algorithm.
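The SVD-based least-squares fitting used for the coarse pose is the standard Kabsch/Umeyama rigid alignment: given the 3D positions of recognized objects in the robot frame and their known map positions, find the rotation and translation that best align them. The test geometry below is invented; the algorithm itself is the textbook one.

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) with q_i ~= R @ p_i + t,
    for matched point sets P, Q of shape (n, dim) (Kabsch/Umeyama)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.eye(H.shape[0])
    S[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid reflections
    R = Vt.T @ S @ U.T
    t = cq - R @ cp
    return R, t
```

With three or more non-collinear matched object positions the coarse robot pose follows directly; the particle filter then refines it.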
We present an implementation of a particle filter algorithm for the global localization and kidnap recovery of a mobile robot. First, we propose an algorithm for efficient particle initialization using sonar line features. Then, the average likelihood and the entropy of the normalized weights are used as quality measures of the pose estimate. Finally, we propose an active kidnap recovery that adds a new particle set: a new, independent particle set is initialized by monitoring the two quality measures, and the added particles re-estimate the pose of the kidnapped robot. Experimental results demonstrate the capability of our global localization and kidnap recovery algorithm.