Line observations of young stellar objects (YSOs) at (sub)millimeter wavelengths provide essential information on gas kinematics in star- and planet-forming environments. For Class 0 and I YSOs, identification of Keplerian rotation is of particular interest, because it reveals the presence of rotationally supported disks that are still embedded in infalling envelopes and enables us to measure the protostellar mass dynamically. We have developed a Python library, SLAM (Spectral Line Analysis/Modeling), with a primary focus on the analysis of emission-line data at (sub)millimeter wavelengths. Here, we present an overview of the pvanalysis tool from SLAM, which is designed to identify the Keplerian rotation of a disk and measure the dynamical mass of the central object using a position-velocity (PV) diagram of emission-line data. The advantage of this tool is that it analyzes observational features of the given data and thus requires little computational time and few parameter assumptions, in contrast to detailed radiative transfer modeling. In this article, we introduce the basic concept and usage of this tool, present an application to observational data, and discuss remaining caveats.
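As a minimal illustration of the mass estimate behind such an analysis (a sketch, not the pvanalysis API itself), the following Python snippet converts a single radius-velocity point read off a PV diagram into a dynamical mass under the Keplerian assumption v = sqrt(GM/r); the numerical values are placeholders.

    # Sketch only: one (radius, velocity) point from a PV diagram -> dynamical mass,
    # assuming Keplerian rotation. Values below are placeholders, not real data.
    import astropy.units as u
    from astropy.constants import G, M_sun

    r = 100.0 * u.au          # deprojected radius of an emission-edge point
    v = 2.0 * u.km / u.s      # rotation velocity corrected for inclination

    M_dyn = (v**2 * r / G).to(u.kg) / M_sun
    print(f"dynamical mass ~ {M_dyn:.2f} Msun")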
A quiet environment is essential for improving the quality of living and rest. For example, noisy neighbors degrade the sleep quality of residents. Because of the emphasis on energy saving in buildings, more insulation and tighter sealing are applied to the windows, doors, and walls of new buildings. As a result, not only is the energy dissipation of new buildings reduced, but their interiors have also become significantly quieter. However, the sound and heat insulation performance of existing buildings is relatively poor compared with that of recent buildings. As time elapses, the gaps between the bricks or concrete of a building and its sealing grow larger, so degradation of the sound and heat insulation performance is unavoidable. To provide a quiet room for product quality control or other purposes in a noisy environment, an indoor noise chamber is required in practice. In the present work, sound transmission through an indoor noise chamber is measured using different noise sources. For the worst case, the noise source and the transmitted sound of door-slam noise are shown.
This paper shows how effectively sonar data can be used with approaches proposed for indoor SLAM (Simultaneous Localization And Mapping). A sonar sensor occasionally reports an incorrect range because of its wide beam width and the specular reflection phenomenon. To overcome these weaknesses sufficiently for SLAM, several approaches are proposed. First, ranges acquired from the same object are stored using the FPA (Footprint-Association) model, which associates two sonar footprints into a hypothesized circle frame. Using the least-squares method, a line feature is extracted from the data stored through the FPA model. By using raw sonar data together with the extracted features as observations, the visibility of landmarks is improved and the SLAM performance is stabilized. Additionally, the SP (Symmetries and Perturbations) model, a representation of uncertain geometric information that combines probability theory and the theory of symmetries, is applied in this paper. The proposed methods have been tested in a real home environment with a mobile robot.
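The line-extraction step can be sketched as follows (a minimal Python example assuming the FPA association has already grouped the sonar hits belonging to one object; a total least-squares fit via SVD is used as one concrete choice):

    # Hedged sketch of the line-extraction step only: fit a 2D line to sonar
    # footprint points that the FPA model has already associated together.
    import numpy as np

    def fit_line(points):
        """points: (N, 2) array of x, y hits from one hypothesized object.
        Returns (normal, d) such that normal . p = d for points p on the line."""
        centroid = points.mean(axis=0)
        # The line direction is the principal axis; its normal is the minor axis.
        _, _, vt = np.linalg.svd(points - centroid)
        normal = vt[-1]
        return normal, float(normal @ centroid)

    pts = np.array([[1.0, 2.0], [2.0, 2.1], [3.0, 1.9], [4.0, 2.05]])
    n, d = fit_line(pts)
    print("line normal:", n, "offset:", d)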
In this paper, we propose a modified ORB-SLAM (Oriented FAST and Rotated BRIEF Simultaneous Localization And Mapping) for precise indoor navigation of a mobile robot. Exact posture and position estimation by ORB-SLAM is not always possible for the indoor navigation of a mobile robot when there are not enough features in the environment. To overcome this shortcoming, additional IMU (Inertial Measurement Unit) and encoder sensors were installed and used to calibrate the ORB-SLAM estimate. By fusing the global information acquired by SLAM with the local motion information from the IMU and encoder sensors, the mobile robot can obtain precise navigation information in indoor environments with few feature points. The superiority of the modified ORB-SLAM over the conventional algorithm was verified by real navigation experiments with a mobile robot in a corridor environment.
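A toy two-dimensional sketch of this kind of fusion (not the paper's exact filter) is given below: encoder and IMU measurements drive dead reckoning between updates, and an ORB-SLAM pose, when available, pulls the estimate back with a fixed gain.

    # Toy 2D fusion sketch: dead reckoning from encoder speed and IMU yaw rate,
    # corrected toward an ORB-SLAM pose with a simple complementary update.
    import math

    class FusedPose:
        def __init__(self, x=0.0, y=0.0, yaw=0.0, slam_gain=0.7):
            self.x, self.y, self.yaw = x, y, yaw
            self.k = slam_gain  # trust placed in SLAM when features are plentiful

        def predict(self, v, omega, dt):
            # Encoder speed v and IMU yaw rate omega drive dead reckoning.
            self.x += v * math.cos(self.yaw) * dt
            self.y += v * math.sin(self.yaw) * dt
            self.yaw += omega * dt

        def correct(self, slam_x, slam_y, slam_yaw):
            # Blend toward the SLAM estimate; the yaw error is wrapped to [-pi, pi].
            self.x += self.k * (slam_x - self.x)
            self.y += self.k * (slam_y - self.y)
            self.yaw += self.k * math.atan2(math.sin(slam_yaw - self.yaw),
                                            math.cos(slam_yaw - self.yaw))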
Side scanning sonar (SSS) provides valuable information for robot navigation. However, the use of side scanning sonar images for navigation has not been fully studied. In this paper, we use range data and side scanning sonar images from the UnderWater Simulator (UWSim) and propose measurement models in a feature-based simultaneous localization and mapping (SLAM) framework. The range data are obtained from an echo sounder, and the side scanning sonar images from the side scan sonar module of UWSim. For features, we use the A-KAZE feature for SSS image matching and adjust the relative robot pose by SSS bundle adjustment (BA) with the Ceres solver. The BA result is used as a loop closure constraint in pose-graph SLAM, and the graph is optimized with Incremental Smoothing and Mapping (iSAM). The optimized trajectory is compared against dead reckoning (DR).
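The A-KAZE matching step between two SSS image tiles can be sketched with OpenCV as follows (the bundle adjustment and iSAM back end are omitted, and the function name is illustrative):

    # Sketch of the A-KAZE matching step between two side-scan sonar image tiles.
    import cv2

    def match_sss_tiles(img1, img2):
        """img1, img2: grayscale side-scan sonar tiles as numpy arrays."""
        akaze = cv2.AKAZE_create()
        kp1, des1 = akaze.detectAndCompute(img1, None)
        kp2, des2 = akaze.detectAndCompute(img2, None)
        # A-KAZE descriptors are binary, so Hamming distance is the right metric.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        return sorted(matcher.match(des1, des2), key=lambda m: m.distance)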
This paper describes a general production process for a mobile augmented reality game that targets real-world objects without using markers. For performance optimization in the mobile environment, point cloud data generated with SLAM technology were edited with a separate editing tool. In addition, a heavy computational load arises at the game execution stage from feature extraction and descriptor matching; to reduce it, optical flow tracking was used to track the positions of the feature points matched in the previous input frame.
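A minimal sketch of such optical flow tracking with OpenCV's pyramidal Lucas-Kanade implementation is shown below (the previous frame's matched keypoints are assumed to be available as prev_pts, a placeholder name):

    # Sketch of optical-flow tracking used to avoid re-running feature extraction
    # and descriptor matching on every frame.
    import cv2
    import numpy as np

    def track(prev_gray, cur_gray, prev_pts):
        """prev_pts: (N, 1, 2) float32 pixel coordinates from the previous frame."""
        cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                      prev_pts, None,
                                                      winSize=(21, 21), maxLevel=3)
        ok = status.ravel() == 1
        return prev_pts[ok], cur_pts[ok]   # keep only successfully tracked points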
Localization of underwater vehicle is essential to use underwater robotic systems for various applications effectively. For this purpose, this paper presents a method of two-dimensional SLAM for underwater vehicles equipped with two hydrophones. The proposed method uses directional angles for underwater acoustic sources. A target signal transmitted from acoustic source is extracted using band-pass filters. Then, directional angles are estimated based on Bayesian process with generalized cross-correlation. The acquired angles are used as measurements for EKF-SLAM to estimate both vehicle location and locations of acoustic sources. Through these processes, the proposed method provides reliable estimation for two dimensional locations of underwater vehicles. Experimental results demonstrate the performance of the proposed method in a real sea environment.
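One building block, estimating a bearing from the time difference of arrival between the two hydrophone channels with generalized cross-correlation, can be sketched as follows (PHAT weighting is assumed here as one common choice; the band-pass filtering and the Bayesian/EKF-SLAM stages are not shown):

    # Sketch: generalized cross-correlation (PHAT) -> TDOA -> directional angle.
    import numpy as np

    def gcc_phat_angle(sig_a, sig_b, fs, spacing, c=1500.0):
        """sig_a, sig_b: numpy sample arrays from the two hydrophones;
        fs: sample rate [Hz]; spacing: hydrophone baseline [m]; c: sound speed."""
        n = sig_a.size + sig_b.size
        SA, SB = np.fft.rfft(sig_a, n), np.fft.rfft(sig_b, n)
        R = SA * np.conj(SB)
        R /= np.abs(R) + 1e-12                       # PHAT weighting
        cc = np.fft.irfft(R, n)
        max_shift = int(fs * spacing / c)            # physically feasible delays
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        tau = (np.argmax(np.abs(cc)) - max_shift) / fs
        return np.arcsin(np.clip(c * tau / spacing, -1.0, 1.0))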
In this paper, we propose a method for estimating the pose of the camera using a rectangle feature for visual SLAM. A rectangle feature, warped into a quadrilateral in the image by the perspective transformation, is reconstructed by the Coupled Line Camera algorithm. In order to fully reconstruct the rectangle in real-world coordinates, the distance between the feature and the camera is needed. This distance can be measured using a stereo camera. Using properties of the line camera, the physical size of the rectangle feature can be deduced from the distance. The correspondence between the quadrilateral in the image and the rectangle in real-world coordinates recovers the relative pose between the camera and the feature by obtaining the homography. In order to evaluate the performance, we analyzed the results of the proposed method against reference poses in the Gazebo robot simulator.
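Once the physical size of the rectangle has been recovered, the final pose computation reduces to a standard planar pose estimation from the four corner correspondences; the sketch below uses cv2.solvePnP for brevity instead of an explicit homography decomposition, with placeholder corner coordinates, rectangle size, and camera intrinsics.

    # Sketch of the pose-recovery step from four rectangle corners (placeholders).
    import cv2
    import numpy as np

    W, H = 0.60, 0.40                      # recovered rectangle size in metres
    obj = np.array([[0, 0, 0], [W, 0, 0], [W, H, 0], [0, H, 0]], dtype=np.float32)
    img = np.array([[320, 200], [480, 210], [470, 330], [330, 320]], dtype=np.float32)
    K = np.array([[525.0, 0, 319.5], [0, 525.0, 239.5], [0, 0, 1]])

    ok, rvec, tvec = cv2.solvePnP(obj, img, K, None)
    R, _ = cv2.Rodrigues(rvec)             # rotation of the rectangle in camera frame
    print("camera-to-rectangle translation:", tvec.ravel())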
This paper proposes a pose-graph based SLAM method using an upward-looking camera and artificial landmarks for AGVs in factory environments. The proposed method provides a way to acquire the camera extrinsic matrix and improves the accuracy of feature observation using a low-cost camera. SLAM is conducted by optimizing the AGV's explored path using artificial landmarks installed on the ceiling at various locations. As the AGV explores, pose nodes are added at fixed distance intervals based on odometry, and landmark nodes are registered when the AGV recognizes the fiducial marks. As a result, a graph network is created and optimized with the G2O optimization tool so that the error accumulated due to slip is minimized. Experiments show that the proposed method is robust for SLAM in real factory environments.
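The optimization step can be illustrated with a toy stand-in for G2O (a tiny two-dimensional pose graph solved with SciPy; headings are ignored and all numbers are made up): odometry edges accumulate slip error, and repeated observations of one ceiling landmark pull the path back into shape.

    # Toy pose-graph sketch: 5 pose nodes, 1 landmark node, least-squares solve.
    import numpy as np
    from scipy.optimize import least_squares

    odom = [(1.0, 0.0)] * 4                         # measured pose-to-pose steps
    landmark_obs = {0: (0.0, 2.0), 4: (-3.8, 2.0)}  # marker seen from poses 0 and 4

    def residuals(flat):
        poses = flat[:10].reshape(5, 2)
        lm = flat[10:]
        res = [poses[0]]                             # anchor the first pose at origin
        for i, step in enumerate(odom):
            res.append(poses[i + 1] - poses[i] - step)
        for i, rel in landmark_obs.items():
            res.append(lm - poses[i] - rel)
        return np.concatenate(res)

    sol = least_squares(residuals, np.zeros(12))
    print(sol.x[:10].reshape(5, 2))                  # optimized pose nodes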
A SLAM (simultaneous localization and mapping) method produces a map of an environment for the autonomous navigation of a robot. We then want to know how accurate that map is, or which map is more accurate when different maps are obtained by different SLAM methods. Several methods for map comparison have been studied, but each has its own drawbacks. In this paper, we propose a new method that compares the accuracy, or error, of maps relatively and quantitatively. This method sets many corresponding points on both the reference map and the SLAM map, and computes the translational and rotational values of all corresponding points using a least-squares solution. By analyzing the standard deviations of all translational and rotational values, we can quantify the error between the two maps. This method can consider both local and global errors, while other methods deal with only one of them; this is verified by a series of simulations and real-world experiments.
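A hedged sketch of the comparison idea is shown below: the corresponding points on the two maps are aligned with a least-squares rigid transform and the spread of the residuals serves as a relative error measure; the paper's exact per-point translational and rotational statistics may differ in detail.

    # Sketch: align corresponding points with a least-squares (SVD) rigid transform
    # and report the spread of the residuals as a relative map error.
    import numpy as np

    def map_error(ref, slam):
        """ref, slam: (N, 2) corresponding points picked on the two maps."""
        cr, cs = ref.mean(axis=0), slam.mean(axis=0)
        H = (slam - cs).T @ (ref - cr)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # proper rotation
        R = Vt.T @ D @ U.T
        residuals = (slam - cs) @ R.T - (ref - cr)
        return np.linalg.norm(residuals, axis=1).std()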
As computer vision algorithms continue to develop, visual information from vision sensors has been widely used for simultaneous localization and mapping (SLAM), called visual SLAM, which utilizes relative motion information between images. This research addresses a visual SLAM framework for online localization and mapping in an unstructured seabed environment that can be applied to a low-cost unmanned underwater vehicle equipped with a single monocular camera as its major measurement sensor. Typically, an image motion model with a predefined dimensionality can be corrupted by errors due to violation of the model assumptions, which may degrade the performance of the visual SLAM estimation. To deal with the erroneous image motion model, this study employs a local bundle optimization (LBO) scheme when a closed loop is detected. Comparison results between visual SLAM estimation with and without LBO are presented to validate the effectiveness of the proposed methodology.
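The quantity that a local bundle optimization typically minimizes at loop closure is the reprojection error over a window of keyframes; a minimal sketch of the per-keyframe residual (simplified parameterization, not necessarily the paper's exact formulation) is:

    # Sketch of the reprojection residual that a local bundle optimization stacks
    # over a window of keyframes and feeds to a nonlinear least-squares solver.
    import cv2
    import numpy as np

    def reprojection_residuals(rvec, tvec, points3d, observations, K):
        """points3d: (N, 3) triangulated map points; observations: (N, 2) pixels;
        rvec, tvec: keyframe pose (Rodrigues rotation and translation); K: intrinsics."""
        projected, _ = cv2.projectPoints(points3d, rvec, tvec, K, None)
        return (projected.reshape(-1, 2) - observations).ravel()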
In this paper, a method for stable camera tracking and re-localization of an Extended Kalman Filter (EKF) based SLAM (Simultaneous Localization And Mapping) system over an image sequence is proposed. A reference plane is established by applying Delaunay triangulation to the 3D feature points obtained by SLAM, and BRISK (Binary Robust Invariant Scalable Keypoints) descriptors are generated for the feature points lying on the plane. When the error of the conventional extended Kalman filter is judged to have accumulated, the camera information is recovered from the homography of the reference plane. In addition, when the camera moves abruptly and feature tracking fails, the camera pose is re-estimated by matching the stored robust descriptor information.
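The relocalization building block can be sketched as follows: BRISK descriptors stored for the map points on the reference plane are matched against the current frame when tracking is lost, and the resulting matches would then feed the homography-based pose recovery (not shown; names are placeholders).

    # Sketch of descriptor matching for relocalization after tracking failure.
    import cv2

    brisk = cv2.BRISK_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def relocalize_candidates(cur_gray, stored_descriptors):
        """stored_descriptors: BRISK descriptors saved for plane map points."""
        kp, des = brisk.detectAndCompute(cur_gray, None)
        matches = matcher.match(stored_descriptors, des)
        return kp, sorted(matches, key=lambda m: m.distance)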
The loop closure problem is one of the most challenging issues in the vision-based simultaneous localization and mapping community. It requires the robot to recognize a previously visited place from current camera measurements. While loop closure in previous works often relies on a visual bag-of-words built from point features, in this paper we propose a line-based method to solve loop closure in corridor environments. We use both the floor line and the anchored vanishing point as loop-closing features, and a two-step loop closure algorithm is devised to detect a known place and perform the global pose correction. We propose the anchored vanishing point as a novel loop closure feature, as it includes position information and represents the vanishing points in both directions. In our system, the accumulated heading error is first reduced using an observation of a previously registered anchored vanishing point, and the observation of known floor lines then allows for further pose correction. Experimental results show that our method is an efficient and suitable loop closure solution in structured indoor environments.
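One ingredient, estimating a vanishing point as the least-squares intersection of a bundle of image line segments, can be sketched as below (the anchoring of the vanishing point with position information and the two-step loop closure logic are beyond this snippet):

    # Sketch: vanishing point as the least-squares intersection of line segments.
    import numpy as np

    def vanishing_point(segments):
        """segments: iterable of ((x1, y1), (x2, y2)) image line segments."""
        L = []
        for (x1, y1), (x2, y2) in segments:
            l = np.cross([x1, y1, 1.0], [x2, y2, 1.0])   # homogeneous line
            L.append(l / np.linalg.norm(l))
        _, _, vt = np.linalg.svd(np.array(L))            # x minimizing sum (l.x)^2
        vp = vt[-1]
        return vp[:2] / vp[2]                            # inhomogeneous pixel coords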
This paper suggests a new observation model for Extended Kalman Filter based Simultaneous Localization and Mapping (EKF-SLAM). Since the EKF framework linearizes non-linear functions around the current estimate, the conventional line model has large linearization errors when a mobile robot is located far away from its initial position. In contrast, the model we propose yields less linearization error with respect to the landmark position and is thus suitable for large-scale environments. To achieve this, we build a three-dimensional space by adding a virtual axis to the robot's two-dimensional coordinate system and extract a plane using a detected line in the two-dimensional space and the virtual axis. Since the Jacobian matrix with respect to the landmark position has small values, the landmark positions can be estimated better than with the conventional line model. The simulation results verify that the new model yields smaller linearization errors than the conventional line model.
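The motivation can be illustrated numerically with the conventional (rho, alpha) line model, in which the predicted range is rho_r = rho - (x cos(alpha) + y sin(alpha)): its Jacobian entry with respect to the line angle grows with the robot's distance from the origin, so the EKF linearization degrades far from the start. The proposed plane-based model is not reproduced here.

    # Worked illustration of the conventional line model's linearization problem:
    # d(rho_r)/d(alpha) = x*sin(alpha) - y*cos(alpha) grows with robot distance.
    import numpy as np

    alpha, rho = np.deg2rad(30.0), 5.0           # a world-frame line landmark
    for dist in (1.0, 10.0, 100.0):
        x, y = dist, 0.5 * dist                  # robot positions farther from origin
        d_rho_d_alpha = x * np.sin(alpha) - y * np.cos(alpha)
        print(f"robot at {dist:6.1f} m: d(rho_r)/d(alpha) = {d_rho_d_alpha:8.1f}")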
FastSLAM is a factored solution to the SLAM problem using a Rao-Blackwellized particle filter. In this paper, we propose a practical FastSLAM implementation method using an infrared camera for indoor environments. The infrared camera is mounted on a Pioneer3 robot and looks upward to the ceiling, on which infrared tags are installed at the same height. The infrared tags detected by the infrared camera serve as measurements, and the nearest-neighbor method is used to solve the unknown data association problem. The global map is successfully built and the robot pose is estimated in real time by the FastSLAM 2.0 algorithm. The experimental results show the accuracy and robustness of the proposed method in a practical indoor environment.
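The nearest-neighbor data association step alone can be sketched as follows (Euclidean gating on the estimated tag positions; the gate value is a placeholder):

    # Sketch of nearest-neighbour data association for detected infrared tags.
    import numpy as np

    def associate(detection, landmarks, gate=0.5):
        """detection: (2,) measured tag position; landmarks: (M, 2) mapped tags."""
        if len(landmarks) == 0:
            return None                                   # first landmark ever seen
        d = np.linalg.norm(landmarks - detection, axis=1)
        j = int(np.argmin(d))
        return j if d[j] < gate else None                 # None => register a new tag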
Recently, simultaneous localization and mapping (SLAM) approaches employing the Rao-Blackwellized particle filter (RBPF) have shown good results. However, owing to the use of accurate sensors, distinct particles that compensate for one another are attenuated as RBPF-SLAM proceeds. To avoid this particle depletion, we propose strategic games that assign each particle a payoff, which replaces the importance weight in the current RBPF-SLAM framework. In simulations, we show that RBPF-SLAM with the strategic games is inconsistent in a pessimistic way, which differs from the existing optimistic RBPF-SLAM. In addition, the estimation errors when applying the strategic games are much smaller than those of the standard RBPF-SLAM, and the particle depletion is alleviated.
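As background for the depletion problem discussed above (not the strategic-game payoff itself), the effective sample size is the usual diagnostic: once most of the weight mass concentrates on a few particles, it collapses and resampling wipes out particle diversity.

    # Background sketch: effective sample size as a particle-depletion diagnostic.
    import numpy as np

    def effective_sample_size(weights):
        w = np.asarray(weights, dtype=float)
        w /= w.sum()
        return 1.0 / np.sum(w**2)

    print(effective_sample_size([0.97, 0.01, 0.01, 0.01]))   # about 1.06: depleted
    print(effective_sample_size([0.25, 0.25, 0.25, 0.25]))   # 4.0: healthy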
Recently, simultaneous localization and mapping (SLAM) approaches employing the Rao-Blackwellized particle filter (RBPF) have shown good results. However, no research has analyzed the result representation of SLAM using an RBPF (RBPF-SLAM) when particle diversity is preserved. After particle filtering finishes, results such as the map and the path are stored in separate particles. Thus, we propose several result representations and analyze them. For the analysis, the estimation errors and their variances, and the consistency of RBPF-SLAM, are examined in this study. According to the simulation results, combining the data of all particles provides a better result with high probability than using the data of a single particle, such as the highest-weighted-particle representation.
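The two representations compared above can be sketched as follows (shapes are placeholders: P particles, each holding a path of T poses with x, y, and heading; the heading is averaged naively here for brevity):

    # Sketch of two result representations: single best particle vs. weighted combination.
    import numpy as np

    def best_particle_path(paths, weights):
        """paths: (P, T, 3) per-particle paths; weights: (P,) importance weights."""
        return paths[int(np.argmax(weights))]

    def combined_path(paths, weights):
        w = np.asarray(weights, dtype=float)
        w /= w.sum()
        # Weighted mean over particles; heading averaged naively for simplicity.
        return np.tensordot(w, paths, axes=1)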
Improving the practicality of SLAM requires various sensors to be fused effectively in order to cope with the uncertainty induced by both the environment and the sensors. In this regard, combining sonar and vision sensors offers the advantages of economic efficiency and complementary cooperation. In particular, it can remedy the false data association and divergence problems of sonar sensors, and overcome the low-frequency SLAM updates caused by the computational burden and the sensitivity to illumination changes of vision sensors. In this paper, we propose a SLAM method that joins sonar sensors and a stereo camera. It consists of two schemes: extracting robust point and line features from sonar data, and recognizing planar visual objects using a multi-scale Harris corner detector with SIFT descriptors against a pre-constructed object database. Fusing the sonar features and visual objects through EKF-SLAM provides correct data association via object recognition and high-frequency updates via the sonar features. As a result, it increases the robustness and accuracy of SLAM in indoor environments. The performance of the proposed algorithm was verified by experiments in a home-like environment.
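The visual object recognition half can be sketched with OpenCV as below; standard SIFT keypoints are used here in place of the paper's multi-scale Harris corners, the ratio-test threshold is a common default rather than the paper's value, and the sonar feature extraction and EKF-SLAM fusion are not shown.

    # Sketch: match SIFT descriptors of the current frame against a pre-built
    # object database and return the best-scoring object name, if any.
    import cv2

    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)

    def recognize(frame_gray, object_db):
        """object_db: dict of object name -> pre-computed SIFT descriptor array."""
        _, des = sift.detectAndCompute(frame_gray, None)
        scores = {}
        for name, db_des in object_db.items():
            knn = matcher.knnMatch(db_des, des, k=2)
            good = [p[0] for p in knn
                    if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
            scores[name] = len(good)
        return max(scores, key=scores.get) if scores else None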