Search Results

15 results

        1.
2021.12 KCI-listed; free for subscribing institutions, paid for individual members
This research proposes a vehicle attitude estimation method using sensor fusion of a speedometer and a six-axis inertial sensor. External acceleration generated in a dynamic environment, such as high-speed movement of a vehicle, causes a considerable error in the attitude angle measured by the accelerometer. The external acceleration is estimated using velocity data obtained from the speedometer mounted on most vehicles. The vehicle velocity is decomposed into three vector components using the angular velocity and attitude angle measured by the inertial sensor in the previous time step. The attitude angle estimated from the speedometer and the accelerometer is used to correct the error of the gyro sensor in a Kalman filter. To verify the performance of the proposed algorithm, experiments are conducted on a scenario of rapid acceleration/deceleration of a truck on a straight section and a scenario of high-speed driving on a long-distance highway.
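The abstract describes the method only at a high level; the sketch below is a minimal illustration of the core idea under simplifying assumptions (flat road, one scalar Kalman-style correction per angle). All function and variable names are chosen here, not taken from the paper.

```python
import numpy as np

def attitude_update(roll, pitch, gyro, acc, speed, prev_speed, dt,
                    P, Q=1e-4, R=1e-2):
    """One fusion step: gyro propagation plus accelerometer correction, with
    the external acceleration estimated from the speedometer (illustrative)."""
    # 1) Propagate roll/pitch with the measured body rates.
    p, q, r = gyro
    roll_pred = roll + dt * (p + np.sin(roll) * np.tan(pitch) * q
                             + np.cos(roll) * np.tan(pitch) * r)
    pitch_pred = pitch + dt * (np.cos(roll) * q - np.sin(roll) * r)
    P = P + Q  # grow state uncertainty

    # 2) Estimate external acceleration from the speedometer: longitudinal
    #    dv/dt along x, centripetal v*r along y (flat-road assumption).
    a_ext = np.array([(speed - prev_speed) / dt, speed * r, 0.0])

    # 3) Remove the external part so the accelerometer reflects gravity only.
    g_body = acc - a_ext
    roll_meas = np.arctan2(g_body[1], g_body[2])
    pitch_meas = np.arctan2(-g_body[0], np.hypot(g_body[1], g_body[2]))

    # 4) Scalar Kalman-style correction applied to each angle.
    K = P / (P + R)
    roll_new = roll_pred + K * (roll_meas - roll_pred)
    pitch_new = pitch_pred + K * (pitch_meas - pitch_pred)
    P = (1 - K) * P
    return roll_new, pitch_new, P
```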
        2.
2021.10 KCI-listed; free for subscribing institutions, paid for individual members
To respond to the aging of social infrastructure, detect anomalies, and make optimal maintenance decisions, the development of a digital SOC facility maintenance system is essential. Such a digital SOC system requires an IoT sensor system for long-term structural monitoring and cloud computing technology for processing the accumulated data. In this study, an IoT sensor capable of long-term measurement of multiple physical quantities of a structure and a server system for cloud computing were developed. The developed IoT sensor measures 3-axis acceleration and 3 channels of strain and performs precise data acquisition at a high resolution of 24 bits. It also transmits data to the server in real time over low-power LTE-CAT M1 communication, so no separate gateway is required. The developed cloud server receives the multi-physics data from the sensors and embeds an acceleration- and strain-based displacement fusion algorithm, performing high-performance computation without any computation on the sensor. The proposed method was verified on two in-service bridges by comparing measurements against displacement gauges and through long-term operation tests.
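As a rough illustration of the sensor-to-cloud path described above, the sketch below packages one batch of 3-axis acceleration and 3-channel strain samples and posts it to a cloud endpoint. The URL, field names, and JSON layout are hypothetical; the paper's actual server API is not given in the abstract.

```python
import json
import time
import urllib.request

# Hypothetical ingestion endpoint; the paper's real server API is not stated.
INGEST_URL = "https://example-soc-cloud/api/v1/measurements"

def post_measurement_batch(sensor_id, accel_xyz, strain_ch, fs_hz):
    """Send one batch of 3-axis acceleration and 3-channel strain samples.
    accel_xyz: list of (ax, ay, az) tuples; strain_ch: list of (s1, s2, s3)."""
    payload = {
        "sensor_id": sensor_id,
        "timestamp": time.time(),
        "sampling_rate_hz": fs_hz,
        "acceleration": accel_xyz,   # samples scaled to g
        "strain": strain_ch,         # microstrain
    }
    req = urllib.request.Request(
        INGEST_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status  # server-side fusion runs after ingestion
```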
        3.
2010.09 KCI-listed; free for subscribing institutions, paid for individual members
Walking assistance robots are indispensable equipment in an aging society, supporting elderly welfare and helping patients with severe nervous system injuries to move. In particular, for a walking assistance robot for paraplegic patients to be convenient, the robot must be controllable according to the patient's will by implicitly detecting the patient's walking intention. As a preliminary study toward a walking-intention detection model, this paper first analyzes the user's walking intention based on a three-legged gait model by fusing resistive pressure sensors and a tilt sensor. The resistive sensors were attached to both palms and both soles of the user and sensed the pressure at the attached locations. The palm signals were used to detect walking intention, and the sole signals were used to identify the gait phase. The tilt sensor, which measures body motion, was attached to the user's back at the lumbar region and sensed the motion of the upper body (roll, pitch). The results show that basic walking intention could be detected from only the two resistance signals measured at both palms when the user plants the cane on the ground, but by additionally using the tilt sensor information, walking intentions such as the state of initiating a right-foot or left-foot step could be identified more specifically. In conclusion, this study shows that the user's walking intention can be detected based on the fusion of resistive sensors and a tilt sensor.
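A minimal rule-based sketch of this kind of intention logic is shown below. The thresholds and the mapping from trunk lean to the intended stepping foot are illustrative assumptions, not values or rules from the study.

```python
import numpy as np

def detect_walking_intention(left_palm, right_palm, roll, pitch,
                             press_thresh=0.5, lean_thresh=np.deg2rad(5)):
    """Toy sketch: palm pressure on the cane signals intent to step; trunk
    roll direction hints which foot the user intends to swing (illustrative)."""
    cane_loaded = (left_palm > press_thresh) and (right_palm > press_thresh)
    if not cane_loaded:
        return "no_intention"
    if pitch <= lean_thresh:           # cane loaded but trunk not leaning forward
        return "intention_detected"    # palms alone give the basic intention
    if roll > lean_thresh:             # weight shifted to the left side
        return "intend_right_step"
    if roll < -lean_thresh:            # weight shifted to the right side
        return "intend_left_step"
    return "intention_detected"
```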
        6.
2019.09 KCI-listed; service discontinued (viewing restricted)
This paper proposes a parking space detection method for autonomous parking that fuses the Around View Monitor (AVM) image with a Light Detection and Ranging (LIDAR) sensor. The method consists of removing obstacles other than the parking lines, detecting the parking lines, and applying template matching to obtain the parking space location in the parking lot. To remove the obstacles, the LIDAR information is corrected and registered to the AVM image, taking the distortion of the AVM image into account. On the assumption that the obstacles have been removed, a line filter that reflects the thickness of the parking line and an improved Radon transform are applied to detect the parking lines clearly. The parking space location is then detected by template matching with a modified parking space template, and the detected parking lines are used to return the location information of the parking space. Finally, we propose a novel parking space detection system that returns the relative distance and relative angle from the current vehicle to the parking space.
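The sketch below illustrates only the parking-line extraction stage, substituting a morphological top-hat filter and a probabilistic Hough transform for the paper's line filter and improved Radon transform; all parameter values are illustrative.

```python
import cv2
import numpy as np

def detect_parking_lines(avm_bgr, line_width_px=9):
    """Emphasize bright, paint-width strokes in the AVM image, then extract
    line segments (stand-in for the paper's line filter + Radon transform)."""
    gray = cv2.cvtColor(avm_bgr, cv2.COLOR_BGR2GRAY)
    # Top-hat with a kernel slightly wider than the parking-line thickness
    # keeps thin bright strokes (lane paint) and suppresses larger blobs.
    kernel = cv2.getStructuringElement(
        cv2.MORPH_RECT, (line_width_px * 2 + 1, line_width_px * 2 + 1))
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)
    _, binary = cv2.threshold(tophat, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    segments = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180,
                               threshold=60, minLineLength=80, maxLineGap=10)
    return [] if segments is None else segments.reshape(-1, 4)
```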
        7.
2019.06 KCI-listed; service discontinued (viewing restricted)
This paper presents a 6-DOF relocalization method using a 3D laser scanner and a monocular camera. The relocalization problem in robotics is to estimate the pose of a sensor when a robot revisits an area. A deep convolutional neural network (CNN) is designed to regress the 6-DOF sensor pose and is trained end-to-end using both RGB images and 3D point cloud information. We generate a new input that consists of RGB and range information. After training, the relocalization system outputs the sensor pose corresponding to each newly received input. In most cases, however, a mobile robot navigation system has successive sensor measurements. To improve the localization performance, the output of the CNN is used as the measurement of a particle filter that smooths the trajectory. We evaluate our relocalization method on real-world datasets using a mobile robot platform.
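A minimal pose-regression CNN in the spirit of the abstract is sketched below in PyTorch. The layer sizes, the 4-channel RGB-plus-range input, and the translation-plus-quaternion output head are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PoseRegressionNet(nn.Module):
    """Minimal sketch of a CNN that regresses a 6-DOF pose (translation and
    unit quaternion) from a 4-channel RGB + range input (illustrative sizes)."""
    def __init__(self, in_channels=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc_trans = nn.Linear(128, 3)   # x, y, z
        self.fc_rot = nn.Linear(128, 4)     # quaternion

    def forward(self, x):
        h = self.features(x).flatten(1)
        t = self.fc_trans(h)
        q = nn.functional.normalize(self.fc_rot(h), dim=1)  # unit quaternion
        return t, q

# Example: t, q = PoseRegressionNet()(torch.randn(1, 4, 240, 320))
```

In a full pipeline, each predicted (t, q) would then serve as the measurement of a particle filter over successive frames, as the abstract describes.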
        8.
2018.12 KCI-listed; service discontinued (viewing restricted)
3D depth perception has played an important role in robotics, and many sensing methods have been proposed for it. As a photodetector for 3D sensing, the single photon avalanche diode (SPAD) is attractive due to its sensitivity and accuracy. We have investigated applying a SPAD chip in our fusion system of a time-of-flight (ToF) sensor and a stereo camera. Our goal is to upsample the SPAD resolution using an RGB stereo camera. Currently we have a 64 x 32 resolution SPAD ToF sensor, whereas higher-resolution depth sensors such as the Kinect V2 and Cube-Eye exist. This may be a weak point of our system, but we exploit this gap with a change of approach. A convolutional neural network (CNN) is designed to upsample our low-resolution depth map, using the higher-resolution depth data as labels. Then, the CNN-upsampled depth data and the stereo camera depth data are fused using the semi-global matching (SGM) algorithm. We propose a simplified fusion method designed for the embedded system.
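The sketch below illustrates the fusion stage under simplifying assumptions: plain bilinear interpolation stands in for the CNN upsampler, OpenCV's SGBM provides the stereo depth, and the agreement threshold is illustrative.

```python
import cv2
import numpy as np

def fuse_tof_with_stereo(tof_depth_64x32, left_gray, right_gray,
                         focal_px, baseline_m, max_rel_diff=0.15):
    """Upsample the low-resolution ToF depth to camera resolution (bilinear,
    standing in for the CNN) and combine it with an SGM stereo depth map."""
    h, w = left_gray.shape
    tof_up = cv2.resize(tof_depth_64x32, (w, h), interpolation=cv2.INTER_LINEAR)

    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=5)
    disp = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    stereo_depth = np.where(disp > 0,
                            focal_px * baseline_m / np.maximum(disp, 1e-6), 0.0)

    # Prefer ToF where the two sources agree; fall back to stereo elsewhere.
    agree = np.abs(stereo_depth - tof_up) < max_rel_diff * np.maximum(tof_up, 1e-6)
    fused = np.where(agree & (tof_up > 0), tof_up, stereo_depth)
    return fused
```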
        9.
2014.05 KCI-listed; service discontinued (viewing restricted)
Global localization is one of the essential issues for mobile robot navigation. In this study, an indoor global localization method is proposed that uses a Kinect sensor and a monocular upward-looking camera. The proposed method generates an environment map that consists of a grid map, a ceiling feature map from the upward-looking camera, and a spatial feature map obtained from the Kinect sensor. The method selects robot pose candidates using the spatial feature map and updates the sample poses with a particle filter based on the grid map. Localization success is determined by calculating the matching error against the ceiling feature map. In various experiments, the proposed method achieved a position accuracy of 0.12 m and a position update time of 10.4 s, which is robust enough for real-world applications.
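A generic particle-filter step of the kind used for the pose update is sketched below. The motion-noise values and the map-scoring callback are placeholders, not details from the paper.

```python
import numpy as np

def particle_filter_step(particles, weights, odom_delta, grid_score_fn,
                         motion_noise=(0.05, 0.05, 0.02)):
    """One predict/update/resample step over (N, 3) particles of (x, y, theta).
    grid_score_fn scores a pose against the grid / feature maps (user-supplied)."""
    n = len(particles)
    # Predict: apply the odometry increment with additive Gaussian noise.
    noise = np.random.normal(0.0, motion_noise, size=(n, 3))
    particles = particles + odom_delta + noise

    # Update: weight each particle by its map-matching likelihood.
    weights = weights * np.array([grid_score_fn(p) for p in particles])
    weights = weights / (weights.sum() + 1e-12)

    # Systematic resampling when the effective sample size drops too low.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        positions = (np.arange(n) + np.random.rand()) / n
        idx = np.searchsorted(np.cumsum(weights), positions)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```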
        10.
2012.11 Service discontinued (viewing restricted)
A common feature of displacement-based sensing is that its high-frequency resolution is limited and relatively low sampling rates are often used. Another problem is that integration of accelerometer data amplifies low-frequency noise, while the potentially more problematic differentiation of displacement measurements amplifies high-frequency noise. A multi-rate Kalman filtering approach is proposed to solve these problems and yields highly accurate motion data.
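A minimal multi-rate Kalman filter sketch consistent with this idea is given below: acceleration drives the prediction at every high-rate sample, and displacement corrects every m-th step. The noise parameters and interface are illustrative, not taken from the paper.

```python
import numpy as np

def multirate_kf(accel, disp, dt, m, q=1e-3, r=1e-4):
    """Fuse high-rate acceleration with low-rate displacement.
    accel: high-rate samples; disp: low-rate samples, one per m accel samples.
    Returns the fused displacement at the high rate."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state [displacement, velocity]
    B = np.array([0.5 * dt ** 2, dt])          # acceleration input
    H = np.array([[1.0, 0.0]])                 # displacement measurement
    Q = q * np.outer(B, B)                     # process noise from acceleration
    R = np.array([[r]])
    x, P = np.zeros(2), np.eye(2)
    out = np.empty(len(accel))
    for k, a in enumerate(accel):
        x = F @ x + B * a                      # predict with acceleration
        P = F @ P @ F.T + Q
        if k % m == 0 and k // m < len(disp):  # low-rate displacement update
            y = disp[k // m] - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
        out[k] = x[0]
    return out
```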
        12.
2009.11 KCI-listed; service discontinued (viewing restricted)
This paper proposes a low-complexity indoor localization method for a mobile robot in a dynamic environment that fuses landmark image information from an ordinary camera with distance information from the sensor nodes of an indoor sensor network. Basically, the sensor network provides an effective means for the mobile robot to adapt to environmental changes and guides it across the geographical network area. To enhance the localization performance, we used an ordinary CCD camera and artificial landmarks devised for self-localization. Experimental results show that real-time localization of the mobile robot can be achieved robustly and accurately using the proposed localization method.
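As a sketch of the range-based part of such a method, the least-squares trilateration below estimates the robot position from sensor-node distances; the landmark-image correction step is omitted, and all names are chosen here rather than taken from the paper.

```python
import numpy as np

def trilaterate(node_xy, ranges):
    """Least-squares position estimate from >= 3 sensor-node distances.
    node_xy: (N, 2) node positions; ranges: (N,) measured distances."""
    node_xy = np.asarray(node_xy, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0 = node_xy[0]
    # Linearize by subtracting the first node's circle equation from the rest.
    A = 2.0 * (node_xy[1:] - node_xy[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(node_xy[1:] ** 2, axis=1) - (x0 ** 2 + y0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos  # estimated (x, y) of the robot
```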
        13.
2009.11 KCI-listed; service discontinued (viewing restricted)
This paper describes an algorithm that improves the 3D reconstruction result using a multi-sensor fusion disparity map. LRF (Laser Range Finder) 3D points can be projected onto image pixel coordinates using the camera-LRF extrinsic calibration matrix and the camera calibration matrix. The LRF disparity map can be generated by interpolating the projected LRF points. In the stereo reconstruction, invalid points caused by repeated patterns and textureless regions can be compensated using the LRF disparity map. The disparity map resulting from this compensation process is the multi-sensor fusion disparity map, which is used to refine the multi-sensor 3D reconstruction based on stereo vision and the LRF. The refinement algorithm for multi-sensor-based 3D reconstruction is described in four parts: virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity map generation, and the 3D reconstruction process. It has been tested with synchronized stereo image pairs and LRF 3D scan data.
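The projection step described above can be sketched as follows, assuming the camera-LRF extrinsics (R, t) and the camera calibration matrix K are known; the interface and variable names are chosen here for illustration.

```python
import numpy as np

def project_lrf_points(points_lrf, R, t, K):
    """Project LRF 3D points into pixel coordinates, the first step toward
    building the LRF disparity map. points_lrf: (N, 3), R: (3, 3), t: (3,),
    K: (3, 3) camera calibration matrix."""
    pts_cam = points_lrf @ R.T + t          # LRF frame -> camera frame
    in_front = pts_cam[:, 2] > 0            # keep points in front of the camera
    pts_cam = pts_cam[in_front]
    uvw = pts_cam @ K.T                     # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]           # normalize by depth
    depth = pts_cam[:, 2]
    return uv, depth                        # interpolate these to get a disparity map
```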
        15.
2003.08 KCI-listed; service discontinued (viewing restricted)
This paper proposes a sensor-fusion technique in which the data sets from previous moments are properly transformed and fused into the current data sets to enable accurate measurement of quantities such as the distance to an obstacle and the location of the service robot itself. In conventional fusion schemes, the measurement depends only on the current data sets. As a result, more sensors are required to measure a certain physical parameter or to improve the accuracy of the measurement. In this approach, however, instead of adding more sensors to the system, the temporal sequence of data sets is stored and utilized to improve the measurement. The theoretical basis is illustrated by examples, and the effectiveness is proved through simulation. Finally, the new space and time sensor fusion (STSF) scheme is applied to the control of a mobile robot in an indoor environment, and the performance is demonstrated by real experiments.
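A minimal sketch of the temporal part of the STSF idea is given below: a previous 2D range scan is re-expressed in the current robot frame using the odometry increment, so that old and new data can be fused as one measurement set. The interface is assumed here, not taken from the paper.

```python
import numpy as np

def transform_previous_scan(points_prev, dx, dy, dtheta):
    """Re-express points observed from the previous pose in the current robot
    frame, given the odometry increment (dx, dy, dtheta) of the robot's motion.
    points_prev: (N, 2) points in the previous robot frame."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    R = np.array([[c, -s], [s, c]])
    # Undo the translation of the motion, then rotate into the current frame:
    # p_curr = R(dtheta)^T (p_prev - t), written here for row vectors.
    return (points_prev - np.array([dx, dy])) @ R
```

Once transformed, these points can simply be appended to the current scan before estimating, for example, the distance to an obstacle, which is the "space and time" fusion the abstract refers to.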