PURPOSES : This study aimed to compare object detection performance across various analysis methods using point-cloud data collected from LiDAR sensors, with the goal of contributing to safer road environments. The findings provide essential information that enables automated vehicles to accurately perceive their surroundings and effectively avoid potential hazards. Furthermore, they serve as a foundation for applying LiDAR sensors to traffic monitoring, thereby enabling the collection and analysis of real-time traffic data in road environments. METHODS : Object detection was performed using models based on different point-cloud processing methods and the KITTI dataset, which consists of real-world driving environment data. The models were PointPillars for the voxel-based approach, PartA2-Net for the point-based approach, and PV-RCNN for the combined point-and-voxel approach. The performance of each model was compared using the mean average precision (mAP) metric. RESULTS : While all models performed strongly, PV-RCNN achieved the highest performance across the easy, moderate, and hard difficulty levels, outperforming the other models in bounding box (Bbox), bird's-eye view (BEV), and 3D object detection tasks. These results highlight PV-RCNN's ability to maintain high performance across diverse driving environments by combining the efficiency of the voxel-based method with the precision of the point-based method. The findings provide foundational insights not only for automated vehicles but also for traffic detection, enabling the accurate detection of various objects in complex road environments. In urban settings, models such as PV-RCNN may be more suitable, whereas in situations requiring real-time processing efficiency, the voxel-based PointPillars model could be advantageous. These findings thus offer important insights into which model is best suited to specific scenarios.
CONCLUSIONS : The findings of this study help enhance the safety and reliability of automated driving systems by enabling vehicles to perceive their surroundings accurately and avoid potential hazards at an early stage. Furthermore, the use of LiDAR sensors for traffic monitoring is expected to optimize traffic flow by collecting and analyzing real-time traffic data from road environments.
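The mAP comparison above rests on per-class average precision. A minimal sketch of how AP can be computed, assuming an 11-point interpolation (KITTI's original protocol; later revisions use 40 recall points) and that each detection has already been matched to ground truth by an IoU threshold:

```python
# Sketch: average precision (AP) for one class from ranked detections,
# in the KITTI style. The 11-point interpolation and the precomputed
# IoU matching are assumptions for illustration.

def average_precision(scores, matches, num_gt):
    """scores: confidence per detection; matches: True if the detection
    matched a ground-truth box (IoU above threshold); num_gt: number of
    ground-truth objects for this class."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    precisions, recalls = [], []
    for i in order:  # sweep detections from most to least confident
        if matches[i]:
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / num_gt)
    # 11-point interpolated AP: max precision at recall >= r, averaged
    ap = 0.0
    for r in [j / 10 for j in range(11)]:
        p = max((p for p, rec in zip(precisions, recalls) if rec >= r),
                default=0.0)
        ap += p / 11
    return ap
```

The mAP reported per difficulty level is then the mean of these per-class AP values.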
In the contemporary era, 3D printing technology has become widely utilized across diverse fields, including biomedicine, industrial design, manufacturing, food processing, aerospace, and construction engineering. The inherent advantages of automation, precision, and speed associated with 3D printing have progressively led to its incorporation into road engineering. Asphalt, a temperature-responsive material that softens at high temperatures and solidifies as it cools, presents distinctive challenges and opportunities in this context. For the effective implementation of 3D printing technology in road engineering, 3D printed asphalt (3DPA) must exhibit favorable performance and printability. This requires attributes such as good fluidity, extrudability, and buildability. Furthermore, materials utilizing 3DPA for crack repair should possess high viscosity, elasticity, toughness, superior high-temperature stability, and resistance to low-temperature cracking. These characteristics ultimately contribute to enhancing pavement longevity and ensuring worker safety.
PURPOSES : This study demonstrated the performance degradation of LiDAR for a vehicle and a pedestrian dummy under rainy and foggy road conditions. METHODS : On a full-scale road under rain and fog conditions, the distance between the LiDAR and the measurement target was adjusted from 10 m to 70 m at 10-m intervals, the LiDAR NPC (number of point clouds) and intensity were measured, and the resulting values were compared. RESULTS : The LiDAR NPC and intensity showed statistically significant differences across the weather conditions (normal, rain, fog), with values decreasing in the order normal > rain > fog. For the vehicle, under rain conditions, NPC and intensity were detected even at 70 m, as under normal conditions, but under fog conditions they were measured only up to 30 m. For the pedestrian dummy, whose reflective area is smaller than that of a vehicle, recognition was possible only up to 30 m under rain conditions, and NPC and intensity were measured only up to 20 m under fog conditions. CONCLUSIONS : It was confirmed that LiDAR performance deteriorates in rain and fog compared with normal conditions.
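The "statistically significant differences by weather condition" reported above are the kind of comparison a one-way ANOVA supports. A minimal pure-Python sketch of the F statistic, using invented NPC samples (the study's actual measurements and its specific test are not reproduced here):

```python
# Sketch: one-way ANOVA F statistic comparing LiDAR NPC across weather
# conditions (normal, rain, fog). Sample values are illustrative only.

def f_statistic(groups):
    k = len(groups)                       # number of conditions
    n = sum(len(g) for g in groups)       # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # between-group sum of squares (spread of condition means)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # within-group sum of squares (spread inside each condition)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

normal = [520, 510, 530]   # hypothetical NPC counts per scan
rain = [400, 390, 410]
fog = [150, 160, 140]
F = f_statistic([normal, rain, fog])  # large F -> condition means differ
```

A large F relative to the F distribution's critical value indicates that mean NPC differs across conditions, matching the ordering normal > rain > fog reported above.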
PURPOSES : This study aimed to develop algorithms for deriving and analyzing retroreflectivity influence factors through regression analysis. METHODS : An experimental road lane was created to examine the trends of retroreflectivity and LiDAR intensity values, and a controlled indoor experiment was conducted to identify influencing factors. The optimal algorithm was developed by regression analysis of the experimental data. RESULTS : The significance probabilities (p-values) from SPSS linear regression analysis were 0.000 for measurement height, 0.001 for perpendicular angle, 0.157 for vertical angle, and 0.000 for LiDAR intensity, indicating that measurement height, perpendicular angle, and LiDAR intensity are significant factors (p < 0.05), whereas the vertical angle is not. Because the NNR regression model performed best, the measurement data for heights (1.2 m, 2 m, 2.2 m) and vertical angles (11.3°, 12.3°, 13.5°) were analyzed to derive the optimal LiDAR intensity measurement height and vertical angle. CONCLUSIONS : Across the LiDAR measurement heights and vertical angles, the correlation between LiDAR intensity and retroreflectivity was highest at a measurement height of 1.2 m and a vertical angle of 12.3°, where the model's training accuracy (R²) was highest.
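The regression step above pairs LiDAR intensity with retroreflectivity and scores the fit by R². A minimal sketch of one-predictor least squares and R², with invented example values; the study itself used SPSS and an NNR model, so this is illustration only:

```python
# Sketch: simple linear regression of retroreflectivity on LiDAR
# intensity, reporting R^2 as the fit-accuracy measure. Data invented.

def fit_r2(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    # R^2 = 1 - residual sum of squares / total sum of squares
    ss_res = sum((yi - (slope * xi + intercept)) ** 2
                 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

# hypothetical intensity/retroreflectivity pairs
slope, intercept, r2 = fit_r2([10.0, 20.0, 30.0], [105.0, 198.0, 301.0])
```

Comparing R² across measurement heights and vertical angles, as the study does, then amounts to refitting on each condition's data and selecting the configuration with the highest R².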
PURPOSES : The purpose of this study was to quantitatively evaluate the variability of LiDAR performance indicators, such as intensity and the number of point clouds (NPC), according to various environmental factors and material characteristics.
METHODS : To consider the material characteristics of road safety facilities, various materials (a reference material (RM), reflective sheet, matte sheet, granite, plastic, and rubber) were tested in a darkroom, and the LiDAR performance indicators were repeatedly measured while varying the measurement distance, rainfall, and angle of observation.
RESULTS : For the standard reflective material, the measured intensity decreased as the measurement distance and rainfall increased. The NPC tended to decrease as the measurement distance increased, regardless of rainfall intensity. For materials with high intensity values, rainfall intensity and color had a negligible effect on the change in intensity compared with the measurement distance. However, for materials with low intensity values, the measurement distance, rainfall intensity, and color all had a significant effect on the change in intensity.
CONCLUSIONS : For materials with high intensity values, rainfall and color had a negligible effect on the change in intensity compared with the measurement distance. However, for materials with low intensity values, the measurement distance, rainfall, and color all had a significant effect on the change in intensity.
This paper discusses the development of a vision sensor that can be installed on an automatic mooring device to detect the berthing status of a ship. Although vessel speed is controlled and position is monitored to prevent accidents during berthing, ship collision accidents at quays still occur every year, causing substantial economic and environmental damage. Therefore, to ensure the safety of vessels berthing at a quay, it is important to develop a vision system that can rapidly acquire the position and speed of the vessel. In this study, a vision sensor was developed that observes a berthing vessel through images, much as a human observer would, and can appropriately assess the vessel's berthing status under the surrounding environmental conditions. First, to establish the suitability of the sensor to be developed, the characteristics of existing sensors were analyzed in terms of the information they provide, detection range, real-time capability, accuracy, and precision. Based on this analysis, a 3D vision module capable of acquiring target information in real time was developed through the conceptual design of a LiDAR-type 3D vision system, the design of its drive mechanism, and the design of force and position controllers for the motion drive. Finally, the performance of the control system and the scan speed were evaluated, and experiments confirmed the usefulness of the developed system.
Along with the current rapid development of technology, object classification is being researched, developed, and applied to security systems, autonomous driving, and other applications. A common technique is to use vision cameras to collect data on objects in the surrounding environment. Among many other methods, LiDAR sensors are used to collect spatial data for detecting and classifying objects. Using LiDAR sensors overcomes some disadvantages of image sensors, whose image quality is negatively affected by weather and lighting conditions. In this study, a volumetric 3D image descriptor is developed to handle 3D object data obtained from LiDAR sensors in urban environments and to convert it into image data before applying deep-learning algorithms for object classification. The study demonstrated the potential of the proposed approach and its further applications.
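The conversion from raw LiDAR points to an image-like descriptor can be sketched as a fixed-resolution occupancy grid; the bounds, resolution, and binary occupancy below are assumptions for illustration, not the study's actual descriptor:

```python
# Sketch: voxelizing a LiDAR point cloud into a fixed-size binary
# occupancy grid, an image-like input for a deep-learning classifier.
# Grid bounds and resolution are illustrative assumptions.

def voxelize(points, bounds=(-2.0, 2.0), res=8):
    """points: iterable of (x, y, z) in metres; returns a res^3
    nested-list grid with 1 where at least one point falls."""
    lo, hi = bounds
    cell = (hi - lo) / res
    grid = [[[0] * res for _ in range(res)] for _ in range(res)]
    for x, y, z in points:
        i, j, k = (int((v - lo) // cell) for v in (x, y, z))
        if all(0 <= idx < res for idx in (i, j, k)):  # drop out-of-bounds
            grid[i][j][k] = 1
    return grid
```

The resulting grid (or 2D slices of it) can then be fed to a convolutional network exactly like image data, which is the pipeline the abstract describes.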
Autonomous-driving simulators offer large cost savings over real-world testing and validation, but their heavy computational load demands substantial hardware. In some cases, games are used to obtain the training data needed for autonomous driving; although games serve as low-cost simulators, they are limited both in simulating specific scenarios outside the game and in acquiring the required data. As another approach, virtual-environment simulation using game engines has been studied; however, game engines require the user to build all the necessary models, which incurs a large development cost. In particular, a 3D LiDAR must cast rays over 360 degrees and acquire precise ranges in real time at 10 Hz or faster. Because real-time acquisition of 3D LiDAR data is GPU-intensive, a low-cost simulator requires a low-cost 3D LiDAR model. This paper proposes a C++-based 3D LiDAR simulation framework with low computational cost. The proposed 3D LiDAR was verified on an unpaved map consisting of multiple hills, and for this performance verification, a simple local path planning (LPP) method using the generated 3D LiDAR data is also introduced. The proposed 3D LiDAR framework is expected to be useful in autonomous-driving applications that require low-cost real-time simulation.
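A ray cast against flat ground is the simplest instance of the LiDAR model described above (the actual framework intersects rays with a hilly unpaved map and is written in C++); the sketch below, in Python with illustrative angles and sensor height, shows the basic sweep over azimuth and elevation:

```python
# Sketch: a minimal CPU-only 3D LiDAR simulation. Rays are swept over
# azimuth/elevation and intersected analytically with a flat ground
# plane (z = 0). Sensor height and angle sets are assumptions.

import math

def simulate_scan(sensor_h=1.5, n_azimuth=8,
                  elev_deg=(-15.0, -10.0), max_range=100.0):
    points = []
    for e in elev_deg:
        ez = math.sin(math.radians(e))      # vertical ray component
        ch = math.cos(math.radians(e))      # horizontal ray component
        if ez >= 0:
            continue                        # ray never reaches the ground
        for a in range(n_azimuth):
            az = 2 * math.pi * a / n_azimuth
            dx, dy = math.cos(az) * ch, math.sin(az) * ch
            t = sensor_h / -ez              # range where the ray hits z = 0
            if t <= max_range:
                points.append((dx * t, dy * t, 0.0))
    return points
```

Replacing the analytic ground plane with a heightmap lookup along each ray gives the hilly-terrain case at modest extra cost, which is the design point of a low-cost simulator.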
In this study, the practical test items of the powered water-leisure-craft operator license were analyzed so that the existing manual scoring method could be converted to electronic scoring without undermining the concept of the current practical test system, and an electronic scoring system using a LiDAR sensor and WAVE communication was developed. The results obtained are as follows. First, a system was built and verified in which the scoring data entered on the test boat through the LiDAR and the examiner's evaluation input device are transmitted from the integrated processing unit to the onshore control center via WAVE communication, where each candidate's data can be stored and managed. Second, in evaluating the slalom task, using LiDAR instead of the conventional visual observation or measuring pole enabled accurate distance measurement, and the accurate distance information was promptly displayed on the examiner's evaluation input device through the test boat's integrated processing unit. Third, it was confirmed that WAVE communication, used to transmit the evaluation score data from the test boat to the onshore control center at high speed without loss, enabled smooth transmission and processing.