This research proposes a vehicle attitude estimation method based on sensor fusion of a speedometer and a six-axis inertial sensor. External acceleration generated in a dynamic environment, such as the high-speed movement of a vehicle, causes considerable error in the attitude angle measured by the accelerometer. The external acceleration is estimated using velocity data obtained from the speedometer mounted on most vehicles. The vehicle velocity is decomposed into three vector components using the angular velocity and the attitude angle measured by the inertial sensor in the previous time step. The attitude angle estimated from the speedometer and the accelerometer is then used in a Kalman filter to correct the error of the gyro sensor. To verify the performance of the proposed algorithm, experiments are conducted on a rapid acceleration/deceleration scenario with a truck on a straight section and on a high-speed, long-distance highway driving scenario.
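A minimal sketch of the core computation may help: the speedometer velocity, resolved into the body frame with the previous attitude, yields an external acceleration estimate that is subtracted from the accelerometer reading before the roll/pitch measurement is formed for the Kalman correction. The function names, decomposition, and noise parameters below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def external_accel(v_body, v_body_prev, omega, dt):
    """External acceleration in the body frame: the rate of change of the
    speedometer-derived velocity plus the rotational term omega x v.
    (Illustrative; the paper's exact decomposition is not reproduced.)"""
    return (v_body - v_body_prev) / dt + np.cross(omega, v_body)

def accel_roll_pitch(a_meas, a_ext):
    """Roll/pitch from the gravity component left after removing the
    estimated external acceleration from the accelerometer reading."""
    f = a_meas - a_ext
    roll = np.arctan2(f[1], f[2])
    pitch = np.arctan2(-f[0], np.hypot(f[1], f[2]))
    return roll, pitch

def kalman_correct(angle_gyro, P, angle_meas, Q=1e-5, R=1e-2):
    """Scalar Kalman update: the gyro-integrated angle is corrected by
    the accelerometer/speedometer angle measurement."""
    P = P + Q                          # covariance after gyro prediction
    K = P / (P + R)                    # Kalman gain
    angle = angle_gyro + K * (angle_meas - angle_gyro)
    return angle, (1.0 - K) * P
```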
To respond to the aging of social overhead capital (SOC) infrastructure, detect anomalies, and make optimal maintenance decisions, the development of a digital SOC facility maintenance system is essential. Such a digital SOC system requires an IoT sensor system for long-term structural measurement and cloud computing technology to process the accumulated data. In this study, an IoT sensor capable of long-term multi-metric measurement of structures and a server system for cloud computing were developed. The developed IoT sensor measures 3-axis acceleration and 3 channels of strain, acquiring data precisely at a high 24-bit resolution. In addition, it transmits data to the server in real time over low-power LTE CAT-M1 communication, so no separate relay is required. The developed cloud server receives the multi-metric data from the sensors and embeds an acceleration- and strain-based displacement fusion algorithm, performing the high-performance computation without any on-sensor processing. The proposed method was validated on two in-service bridges by comparing its results with reference displacement transducers and through a long-term operation test.
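As a sketch of the data path the abstract describes (the sensor pushes multi-channel samples over LTE CAT-M1 and the server stores and post-processes them), a minimal ingestion endpoint might look as follows; the route name, payload fields, and use of Flask are assumptions standing in for the developed system's actual API.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
buffer = []  # in a real deployment this would be a database or queue

@app.route("/ingest", methods=["POST"])
def ingest():
    # Assumed JSON payload: sensor id, timestamp, 3-axis acceleration
    # samples, and 3 strain channels (24-bit values from the sensor).
    msg = request.get_json(force=True)
    buffer.append((msg["sensor_id"], msg["t"],
                   msg["accel_xyz"], msg["strain_ch"]))
    # The server-side displacement fusion would be triggered here,
    # keeping all heavy computation off the sensor node.
    return jsonify(status="ok", queued=len(buffer))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```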
The convergence of artificial intelligence with smart factories or smart mechanical systems has been actively studied to maximize efficiency and safety. Despite rapid advances in artificial neural networks, their application in the manufacturing industry has been difficult due to limitations in obtaining meaningful data from factories or mechanical systems. Accordingly, there have been active studies on manufacturing components with integrated sensors, allowing the components themselves to generate important data. Additive manufacturing enables the fabrication of a net-shaped product from various materials, including plastic, metal, and ceramic. Because material is joined layer by layer, there has been active research on exploiting this multi-step manufacturing process, for example by changing the material at a certain layer or by adding sensor components in the middle of the additive manufacturing process. Particularly for smart parts manufacturing, researchers have attempted to embed sensors or integrated circuit boards within a three-dimensional component during the additive manufacturing process. While most sensor-embedding additive manufacturing has been based on polymer materials, there have also been studies on sensor integration within metal or ceramic materials. This study reviews additive manufacturing technologies for sensor integration into plastic, ceramic, and metal materials.
Walking assist robots are essential equipment for elderly welfare in an aging society and for helping patients with severe nervous system injuries to move. In particular, for a walking assist robot for paraplegic patients to be convenient, it must detect the patient's gait intention implicitly so that the robot can be controlled according to the patient's will. As a preliminary study toward a gait intention detection model, this paper analyzes the user's gait intention based on a three-point gait model by fusing resistive sensors and a tilt sensor. The resistive sensors were attached to both palms and both soles to sense the pressure at each location: the palm signals were used to identify gait intention, and the sole signals were used to identify the gait phase. The tilt sensor, attached to the lumbar region of the user's back to measure body motion, sensed upper-body movement (roll and pitch). The results show that basic gait intention could be identified from only the two resistance signals measured at the palms when the user planted the crutches on the ground, but adding the tilt sensor information made it possible to identify gait intention in more detail, such as the state of initiating a right-foot or left-foot step. In conclusion, this study demonstrated that the user's gait intention can be detected based on the fusion of resistive and tilt sensors.
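A rule-based reading of the described signals can be sketched as follows: palm (crutch) pressure indicates intent to step, and trunk roll/pitch disambiguates which foot the user is about to advance. The thresholds and decision rules here are hypothetical, not the study's trained model.

```python
def gait_intention(p_palm_l, p_palm_r, roll, pitch,
                   p_on=0.5, pitch_fwd=5.0, roll_side=3.0):
    """Classify gait intention from palm pressures (normalized 0-1)
    and trunk tilt in degrees. All thresholds are illustrative."""
    crutches_planted = p_palm_l > p_on and p_palm_r > p_on
    if not crutches_planted:
        return "no_intent"
    if pitch < pitch_fwd:
        return "weight_shift"        # crutches planted, trunk upright
    # Leaning forward with weight on the crutches: a step is starting;
    # the roll direction suggests which leg is being unloaded.
    if roll > roll_side:
        return "left_step_start"     # weight shifted to the right
    if roll < -roll_side:
        return "right_step_start"    # weight shifted to the left
    return "step_start"
```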
This paper proposes a parking space detection method for autonomous parking using Around View Monitor (AVM) image and Light Detection and Ranging (LIDAR) sensor fusion. The method consists of removing obstacles other than the parking lines, detecting the parking lines, and applying template matching to detect parking space location information in the parking lot. To remove the obstacles, LIDAR information is corrected for the distortion in the AVM image and fused with it. Once the obstacles are removed, a line filter that reflects the thickness of the parking line and an improved Radon transform are applied to detect the parking lines clearly. The parking space location is detected by applying template matching with a modified parking space template, and the detected parking lines are used to return the location information of the parking space. Finally, we propose a novel parking space detection system that returns the relative distance and relative angle from the current vehicle to the parking space.
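The template matching stage can be illustrated with OpenCV; the file names, score threshold, pixel scale, and use of cv2.matchTemplate with a normalized correlation score are assumptions standing in for the paper's modified template scheme.

```python
import cv2
import numpy as np

# Assumed inputs: the line-filtered AVM image and a parking-space
# template, both as grayscale images.
avm = cv2.imread("avm_line_filtered.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("space_template.png", cv2.IMREAD_GRAYSCALE)

scores = cv2.matchTemplate(avm, template, cv2.TM_CCOEFF_NORMED)
_, best, _, top_left = cv2.minMaxLoc(scores)

if best > 0.7:                        # assumed acceptance threshold
    h, w = template.shape
    cx = top_left[0] + w / 2.0        # space center in AVM pixels
    cy = top_left[1] + h / 2.0
    # With a known AVM scale (pixels per meter) and the vehicle at the
    # image center, relative distance and angle follow directly.
    px_per_m = 50.0                   # assumed calibration value
    ih, iw = avm.shape
    dx = (cx - iw / 2.0) / px_per_m
    dy = (cy - ih / 2.0) / px_per_m
    dist = np.hypot(dx, dy)
    angle = np.degrees(np.arctan2(dy, dx))
```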
This paper presents 6-DOF relocalization using a 3D laser scanner and a monocular camera. The relocalization problem in robotics is to estimate the pose of a sensor when a robot revisits an area. A deep convolutional neural network (CNN) is designed to regress the 6-DOF sensor pose and is trained end-to-end using both RGB images and 3D point cloud information. We generate a new input that consists of RGB and range information. After training, the relocalization system outputs the sensor pose corresponding to each new input it receives. In most cases, however, a mobile robot navigation system has successive sensor measurements. To improve localization performance, the CNN output is used as the measurement of a particle filter that smooths the trajectory. We evaluate our relocalization method on real-world datasets using a mobile robot platform.
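A PoseNet-style regression head over the RGB+range input can be sketched as below; the backbone choice (ResNet-18), the 4-channel input encoding, and the loss weight beta are our assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn
import torchvision

class PoseRegressor(nn.Module):
    """Regress 6-DOF pose (translation + unit quaternion) from a
    4-channel RGB+range image. A sketch under assumed design choices."""
    def __init__(self):
        super().__init__()
        net = torchvision.models.resnet18()
        # Accept 4 channels: RGB plus the range channel.
        net.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2,
                              padding=3, bias=False)
        net.fc = nn.Linear(net.fc.in_features, 7)  # xyz + quaternion
        self.net = net

    def forward(self, x):
        out = self.net(x)
        t, q = out[:, :3], out[:, 3:]
        return t, q / q.norm(dim=1, keepdim=True)  # normalize quaternion

def pose_loss(t, q, t_gt, q_gt, beta=250.0):
    """Weighted translation + rotation regression loss (beta assumed)."""
    return (t - t_gt).norm(dim=1).mean() + beta * (q - q_gt).norm(dim=1).mean()
```

Each per-frame CNN estimate would then enter the particle filter as a pose measurement, which smooths the trajectory across successive inputs.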
3D depth perception has played an important role in robotics, and many sensing methods have been proposed for it. As a photodetector for 3D sensing, the single-photon avalanche diode (SPAD) is attractive due to its sensitivity and accuracy. We have investigated applying a SPAD chip in our fusion system of a time-of-flight (ToF) sensor and a stereo camera. Our goal is to upsample the SPAD resolution using an RGB stereo camera. Currently, our SPAD ToF sensor has a resolution of 64 x 32, whereas higher-resolution depth sensors such as the Kinect V2 and Cube-Eye are available. This could be a weak point of our system, but we exploit the gap with a shift of perspective: a convolutional neural network (CNN) is designed to upsample our low-resolution depth map, using data from the higher-resolution depth sensors as label data. The CNN-upsampled depth data and the stereo camera depth data are then fused using the semi-global matching (SGM) algorithm. We propose a simplified fusion method designed for embedded systems.
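A minimal upsampling network of the kind described might look as follows: bilinear upsampling of the 64 x 32 SPAD depth map followed by a residual refinement trained against the higher-resolution depth labels. The layer sizes and the residual design are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthUpsampler(nn.Module):
    """Upsample a low-resolution depth map by `scale`; supervision is
    assumed to come from a higher-resolution depth sensor (e.g. Kinect V2)."""
    def __init__(self, scale=4):
        super().__init__()
        self.scale = scale
        self.refine = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, depth):              # depth: (B, 1, 32, 64)
        up = F.interpolate(depth, scale_factor=self.scale,
                           mode="bilinear", align_corners=False)
        return up + self.refine(up)        # residual refinement
```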
As the Internet of Things (IoT), in which the device-to-device interaction triggered by growing Internet use occurs actively, spreads throughout industry, the technology that transmits data on interactions between users and IoT devices to web servers and applications in real time over the network is no longer merely necessary: it is now expected to be provided as an IT convergence framework. This paper presents the issues to consider when designing a framework for content production using a user's mobile device and various sensors in an IoT environment, together with proposed techniques addressing them; it then presents a prototype framework applying the proposed techniques and, as an application example, develops a game in the IoT environment to demonstrate the potential of games as a service in IoT environments.
A PSC girder with an I-shaped section is one of the most common types applied to road and railway bridges in South Korea. As the service life of PSC bridges increases, much effort has been devoted to verifying their safety, for which the effective prestress force is one of the most important factors. At present, however, indirect measurements are mostly employed to estimate the effective prestress force during construction. In this study, the performances of FBG and EM sensors, which can directly measure the effective prestress force, are tested and investigated. The test results can be useful for developing block-out PSC girders with FBG and EM sensors that monitor the effective prestress force throughout the service life.
Global localization is one of the essential issues in mobile robot navigation. In this study, an indoor global localization method is proposed that uses a Kinect sensor and a monocular upward-looking camera. The proposed method generates an environment map consisting of a grid map, a ceiling feature map from the upward-looking camera, and a spatial feature map obtained from the Kinect sensor. The method selects robot pose candidates using the spatial feature map and updates sample poses with a particle filter based on the grid map. Localization success is determined by calculating the matching error against the ceiling feature map. In various experiments, the proposed method achieved a position accuracy of 0.12 m and a position update time of 10.4 s, which is robust enough for real-world applications.
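The particle filter update over pose candidates can be sketched generically as below; the likelihood function scoring a pose against the grid and feature maps is left abstract, and systematic resampling is our assumption about the resampling scheme.

```python
import numpy as np

def systematic_resample(particles, weights):
    """Systematic resampling of pose particles (x, y, theta)."""
    n = len(particles)
    positions = (np.arange(n) + np.random.rand()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx]

def pf_update(particles, weights, likelihood):
    """One measurement update: weight each pose candidate by the
    map-matching likelihood, then resample. `likelihood(pose)` is
    assumed to score a pose against the grid/spatial feature maps."""
    weights = weights * np.array([likelihood(p) for p in particles])
    weights /= weights.sum()
    particles = systematic_resample(particles, weights)
    return particles, np.full(len(particles), 1.0 / len(particles))
```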
A common feature of displacement-based sensing is that its high-frequency resolution is limited, and relatively low sampling rates are often used. Another problem is that integration of accelerometer data amplifies low-frequency noise, while the potentially more problematic differentiation of displacement measurements amplifies high-frequency noise. A multi-rate Kalman filtering approach is proposed to solve these problems. This method yields highly accurate motion data.
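A minimal sketch of such a multi-rate Kalman filter follows: acceleration drives the state prediction at the fast rate, and the low-rate displacement samples correct the state whenever they arrive, so neither numerical integration of acceleration alone nor differentiation of displacement is needed. The noise levels and the integer rate ratio are illustrative assumptions.

```python
import numpy as np

def fuse_multirate(acc, disp, dt, ratio, q=1e-2, r=1e-4):
    """Fuse fast-rate acceleration `acc` with slow-rate displacement
    `disp` sampled every `ratio` acceleration steps. State is
    [displacement, velocity]; returns the fused displacement history."""
    A = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
    B = np.array([0.5 * dt**2, dt])         # acceleration input
    H = np.array([[1.0, 0.0]])              # displacement is measured
    Q = q * np.outer(B, B)                  # process noise from accel
    x, P, out = np.zeros(2), np.eye(2), []
    for k, a in enumerate(acc):
        x = A @ x + B * a                   # predict with acceleration
        P = A @ P @ A.T + Q
        if k % ratio == 0:                  # low-rate displacement arrives
            z = disp[k // ratio]
            S = H @ P @ H.T + r
            K = (P @ H.T) / S               # Kalman gain (2x1)
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```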
It is very important for a mobile robot to recognize and model its environment for navigation. However, a grid map constructed from sonar sensors cannot accurately represent the environment, especially narrow environments, due to the angular uncertainty of sonar data. Therefore, we propose a map building scheme that combines sonar sensors and IR sensors. The maps built by the sonar and IR sensors are combined with different weights, which are determined by the degree of translational and rotational motion of the robot. To increase the effectiveness of the sensor fusion, we also propose an optimal sensor arrangement derived from various experiments. The experimental results show that the proposed method can represent environments such as narrow corridors and open doors more accurately than conventional sonar-based map building methods.
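The motion-dependent blending can be sketched as below; the specific weighting function and gains are assumptions, chosen so that the narrow-beam IR map is trusted more during rotation, when sonar angular uncertainty hurts most, and the sonar map more during translation.

```python
import numpy as np

def fuse_grid_maps(sonar_map, ir_map, v_trans, omega, k_t=1.0, k_r=1.0):
    """Blend sonar and IR occupancy grids (same shape, values in [0, 1])
    with weights derived from the robot's current motion.
    k_t, k_r: assumed gains on translational/rotational speed."""
    rot = k_r * abs(omega)
    trans = k_t * abs(v_trans)
    w_ir = rot / (rot + trans + 1e-9)     # more rotation -> trust IR
    return (1.0 - w_ir) * sonar_map + w_ir * ir_map
```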
This paper proposes a low-complexity indoor localization method for a mobile robot in a dynamic environment that fuses landmark image information from an ordinary camera with distance information from sensor nodes deployed in the indoor environment, based on a sensor network. The sensor network provides an effective means for the mobile robot to adapt to environmental changes and guides it across a geographical network area. To enhance localization performance, we used an ordinary CCD camera and artificial landmarks devised for self-localization. Experimental results show that real-time localization of the mobile robot can be achieved robustly and accurately using the proposed method.
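As a generic stand-in for the range-based part of such a scheme (the paper's exact estimator is not specified here), a least-squares position fix from distances to known sensor nodes can be computed as follows.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares 2-D position from distances to sensor nodes,
    linearized against the last anchor. anchors: (n, 2) known node
    positions; ranges: (n,) measured distances."""
    a = np.asarray(anchors, dtype=float)
    r = np.asarray(ranges, dtype=float)
    A = 2.0 * (a[:-1] - a[-1])            # linearized system matrix
    b = (r[-1]**2 - r[:-1]**2
         + np.sum(a[:-1]**2, axis=1) - np.sum(a[-1]**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: three nodes at known positions, noisy ranges to (2, 1).
print(trilaterate([(0, 0), (5, 0), (0, 5)], [2.24, 3.16, 4.47]))
```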
This paper describes an algorithm that improves 3D reconstruction results using a multi-sensor fusion disparity map. LRF (laser range finder) 3D points are projected onto image pixel coordinates using the camera-LRF extrinsic calibration matrix and the camera intrinsic calibration matrix. The LRF disparity map is generated by interpolating the projected LRF points. In the stereo reconstruction, invalid points caused by repeated patterns and textureless regions are compensated using the LRF disparity map; the disparity map resulting from this compensation process is the multi-sensor fusion disparity map. Using it, the multi-sensor 3D reconstruction based on stereo vision and the LRF is refined. The refinement algorithm is specified in four subsections dealing with virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity map generation, and the 3D reconstruction process. It has been tested on synchronized stereo image pairs and LRF 3D scan data.
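The projection step can be sketched with conventional notation (rotation R and translation t for the camera-LRF extrinsics, intrinsic matrix K); the paper's own symbols were lost in extraction, so these names are assumptions.

```python
import numpy as np

def project_lrf_points(points, R, t, K):
    """Project Nx3 LRF points into pixel coordinates.
    R, t: camera-LRF extrinsic calibration; K: camera intrinsics
    (conventional symbols; the paper's notation is not preserved)."""
    pc = R @ points.T + t.reshape(3, 1)   # LRF frame -> camera frame
    uv = K @ pc
    uv = uv[:2] / uv[2]                   # perspective divide
    return uv.T, pc[2]                    # pixel coords and depths

def depth_to_disparity(depth, focal_px, baseline_m):
    """Depths become disparities via d = f * B / Z, so interpolated
    LRF points can populate an LRF disparity map on the stereo grid."""
    return focal_px * baseline_m / depth
```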