Traffic accidents caused by road icing have recently become frequent, and various types of detection sensors are being introduced to detect black ice, which is difficult to recognize visually during road patrols. In this study, the applicability of commercially available vehicle-mounted road surface condition sensors to domestic road conditions was examined through field verification. To evaluate the performance of the vehicle-mounted sensors, three road surface states (ice, wet, and dry) were realized so as to be clearly distinguishable by eye at the road weather reproduction test facility in the Yeoncheon SOC Demonstration Research Center of the Korea Institute of Civil Engineering and Building Technology, and each type of sensor was mounted on a vehicle to measure the various road states. The evaluation showed that the accuracy of the road surface state measurements was high, but considerable differences arose in the accuracy of the other measurement items. Additional tests under various road environment conditions are expected to provide baseline data for the field application of vehicle-mounted road surface condition sensors.
PURPOSES : To demonstrate the performance degradation of LiDAR for a vehicle and a pedestrian dummy under rainy and foggy road conditions. METHODS : In a real-scale rain and fog road environment, the distance between the LiDAR and the measurement target was adjusted from 10 m to 70 m in 10-m intervals, the LiDAR NPC (number of point cloud) and intensity were measured, and the resulting values were compared. RESULTS : The LiDAR's NPC and intensity showed statistically significant differences across weather conditions (normal, rain, fog), with values decreasing in the order normal > rain > fog. For vehicles, NPC and intensity were recognized at up to 70 m under rain, as in normal conditions, but only up to 30 m under fog. For pedestrians, whose reflective area is smaller than that of vehicles, NPC and intensity were measured only up to 30 m under rain and only up to 20 m under fog. CONCLUSIONS : It was confirmed that LiDAR performance deteriorates in rain and fog compared with normal conditions.
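The detection-range comparison described above can be sketched as follows. The NPC counts per distance below are hypothetical placeholders (not the study's measurements); only the qualitative pattern, with fog truncating the usable range, mirrors the reported findings.

```python
# Hypothetical mean NPC (number of point-cloud returns) on a vehicle-sized
# target, keyed by distance in metres, for each weather condition.
npc_by_weather = {
    "normal": {10: 420, 20: 260, 30: 150, 40: 90, 50: 55, 60: 30, 70: 12},
    "rain":   {10: 380, 20: 220, 30: 120, 40: 70, 50: 40, 60: 18, 70: 6},
    "fog":    {10: 200, 20: 80,  30: 15,  40: 0,  50: 0,  60: 0,  70: 0},
}

def max_detection_range(npc_by_distance, min_points=5):
    """Farthest distance at which the target still yields enough returns."""
    detected = [d for d, n in sorted(npc_by_distance.items()) if n >= min_points]
    return max(detected) if detected else None

ranges = {w: max_detection_range(d) for w, d in npc_by_weather.items()}
```

With these toy numbers the vehicle remains detectable to 70 m in normal and rain conditions but only to 30 m in fog, matching the ordering the study reports.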
This research proposes a novel approach to categorizing unstructured customer complaints in the automotive industry. The goal is to identify potential vehicle defects from the algorithm's findings, helping automakers mitigate the significant losses and reputational damage caused by mass claims. To this end, our model uses the Word2Vec method to analyze large volumes of unstructured customer complaint data from the National Highway Traffic Safety Administration (NHTSA). By building a score dictionary for eight pre-selected criteria, the algorithm can efficiently categorize complaints, and by scoring each complaint it can surface patterns and correlations that indicate potential vehicle defects. A key benefit of this approach is its ability to handle large volumes of unstructured data, which are challenging for traditional methods; machine learning techniques let us extract meaningful insights from customer complaints so that automakers can prioritize and address potential defects before they become widespread. In conclusion, this research provides a promising approach to categorizing unstructured automotive customer complaints and identifying potential vehicle defects, helping automakers improve product quality and enhance customer satisfaction. Further studies can build on this approach to explore other applications and extend its scope to other industries.
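The scoring idea, embedding words, comparing each complaint's tokens to a seed word per criterion, and assigning the complaint to the best-scoring criterion, can be sketched as below. The toy two-dimensional vectors, the two criteria, and the seed words are all invented for illustration; the paper uses trained Word2Vec embeddings and eight criteria.

```python
import math

# Toy word vectors standing in for trained Word2Vec embeddings (hypothetical).
embeddings = {
    "brake": [0.9, 0.1], "stop": [0.85, 0.2], "engine": [0.1, 0.9],
    "stall": [0.15, 0.85], "noise": [0.5, 0.5],
}

# Score dictionary: one seed keyword per defect criterion (two illustrative
# criteria here; the paper pre-selects eight).
criteria_seeds = {"braking": "brake", "powertrain": "engine"}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def score_complaint(tokens, seed):
    """Average similarity of a complaint's known tokens to a criterion seed."""
    sims = [cosine(embeddings[t], embeddings[seed]) for t in tokens if t in embeddings]
    return sum(sims) / len(sims) if sims else 0.0

complaint = ["the", "brake", "would", "not", "stop", "the", "car"]
scores = {c: score_complaint(complaint, seed) for c, seed in criteria_seeds.items()}
best = max(scores, key=scores.get)
```

A complaint dominated by braking vocabulary scores highest on the braking criterion, which is the categorization signal the abstract describes.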
Korea has experienced several passenger ship accidents and operates various systems for passenger ship safety management. As of 2021, of the 162 passenger ships operating along the Korean coast, 105 (65 %) are car ferries with open vehicle decks. These car ferries typically operate routes calling at two to four islands. At the port of departure (home port), safety inspections are conducted by the crew together with the operation supervisors of the operation management office and maritime safety supervisors. At intermediate ports of call, inspections are sometimes carried out only as self-checks. Like any system, this one has institutional and practical limitations. This study was therefore conducted to propose a method that detects vehicles using image processing techniques and links the results to ship stability calculations. For vehicle detection, both a frame-difference method and a machine learning method were used, and the detected data were applied to the stability calculation. Vehicle detection by machine learning proved more stable at identifying vehicles than the frame-difference method. However, there were limitations: vehicles could not be identified when the camera was backlit, as at sunset, or at night under strong lighting from the pier and inside the ship. Securing sufficient image data and upgrading the program appear necessary for stable image processing.
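The frame-difference method mentioned above can be sketched in a few lines. The toy grayscale grids, threshold, and area gate below are hypothetical; a real deployment would work on camera frames (e.g., with OpenCV) and feed detected vehicle counts into the stability calculation.

```python
# Minimal frame-differencing sketch in pure Python. Frames are toy 2D
# grayscale grids; a "vehicle" appears as a bright block on the deck.

def frame_diff(prev, curr, threshold=30):
    """Binary change mask: 1 where pixel intensity changed beyond threshold."""
    return [[1 if abs(c - p) > threshold else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def changed_pixels(mask):
    return sum(sum(row) for row in mask)

prev = [[10] * 6 for _ in range(4)]          # empty deck
curr = [row[:] for row in prev]
for r in range(1, 3):                        # vehicle enters: 2x3 bright block
    for c in range(2, 5):
        curr[r][c] = 200

mask = frame_diff(prev, curr)
vehicle_present = changed_pixels(mask) >= 4  # crude area gate (assumed)
```

The sketch also hints at the reported failure mode: backlighting or strong night illumination changes pixel intensities globally, swamping the change mask, which is why the machine learning detector proved more stable.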
PURPOSES : This study uses deep learning image classification models and vehicle-mounted cameras to detect types of pavement distress — such as potholes, spalling, punch-outs, and patching damage — which require urgent maintenance.
METHODS : For the automatic detection of pavement distress, the optimal mounting location on a vehicle for a regular action camera was first determined. Using the orthogonal projection of obliquely captured surface images, morphological operations, and multi-blob image processing, candidate distressed pavement images were extracted from road surface images covering 16,036 km of in-lane driving. Next, the distressed pavement images classified by experts were trained and tested for evaluation by three deep learning convolutional neural network (CNN) models: GoogLeNet, AlexNet, and VGGNet. The CNN models were image classification tools used to identify and extract the combined features of the target images via deep layers. Here, a data augmentation technique was applied to produce a large distress dataset for training. Third, the dimensions of the detected distressed pavement patches were computed to estimate the quantity of repair materials needed.
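The multi-blob step of the candidate extraction above can be sketched as thresholding a surface patch and counting connected dark regions. This is a pure Python stand-in for the paper's morphological/blob pipeline; the pixel values and the darkness threshold are hypothetical.

```python
# Count connected dark regions (distress candidates) in a toy road patch.

def connected_blobs(binary):
    """4-connected component count via iterative flood fill."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and binary[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return blobs

patch = [                      # toy grayscale patch; low values = dark surface
    [200, 200,  40,  40, 200],
    [200, 200,  40, 200, 200],
    [200, 200, 200, 200,  30],
    [ 35, 200, 200, 200,  30],
]
binary = [[1 if v < 100 else 0 for v in row] for row in patch]
candidates = connected_blobs(binary)
```

Each blob would then be cropped and passed to the CNN classifiers, and its pixel extent (at the reported 1 cm per pixel) converted into patch dimensions for the repair-material estimate.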
RESULTS : It was found that installing cameras 1.8 m above the ground on the exterior rear of the vehicle could provide clear pavement surface images with a resolution of 1 cm per pixel. The sensitivity analysis results of the trained GoogLeNet, AlexNet, and VGGNet models were 93 %, 86 %, and 72 %, respectively, compared to 62.7 % for the dimensional computation. Following readjustment of the image categories in the GoogLeNet model, distress detection sensitivity increased to 94.6 %.
CONCLUSIONS : These findings support urgent maintenance by sending the detected distressed pavement images with the dimensions of the distressed patches and GPS coordinates to local maintenance offices in real-time.
The purpose of this study is to obtain data on the zones formed by the movement of livestock vehicles and to determine whether such zones can be used to establish quarantine activities and quarantine policies for livestock epidemics. For this purpose, the study used mobility data on poultry-related livestock vehicles generated in 2019. InfoMap, a community detection method, was used for regional classification, and the results of the analysis were visualized on a map using GIS. The results confirmed that zones of administrative units can be delineated based on the movement of livestock vehicles. In addition, the zones created by vehicle movement were seen to change depending on the purpose and timing of the operation of livestock vehicles. Some areas, such as Jeolla-do and Gyeongsang-do, form relatively stable zones, while others, such as Chungcheong-do, Gyeonggi-do, and Gangwon-do, change depending on the situation. Further, the zones derived for poultry differed from those derived for cattle and pigs in previous studies.
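The zoning idea can be illustrated with a much cruder stand-in for InfoMap: prune weak movement links and take connected components. InfoMap itself optimizes the flow-based map equation rather than doing this, and the region names and trip counts below are hypothetical; the sketch only shows how vehicle-movement data induce zones.

```python
# Toy regions linked by livestock-vehicle trip counts (hypothetical values).
trips = {("A", "B"): 120, ("B", "C"): 95, ("A", "C"): 80,
         ("D", "E"): 110, ("E", "F"): 70, ("D", "F"): 90,
         ("C", "D"): 3}                      # rare cross-zone movement

def zones(trips, min_trips=10):
    """Components of the graph after dropping links with few trips."""
    adj = {}
    for (u, v), w in trips.items():
        adj.setdefault(u, set())
        adj.setdefault(v, set())
        if w >= min_trips:
            adj[u].add(v)
            adj[v].add(u)
    seen, comps = set(), []
    for start in sorted(adj):
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                comp.add(n)
                stack += adj[n] - seen
        comps.append(comp)
    return comps

result = zones(trips)
```

With the rare A-to-F bridge pruned, the six regions fall into two zones, the kind of partition that was then mapped with GIS in the study.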
PURPOSES: This study was initiated to estimate expressway traffic congestion costs by using Vehicle Detection System (VDS) data. METHODS : The overall methodology for estimating expressway traffic congestion costs is based on the methodology used in a study conducted by a study team from the Korea Transport Institute (KOTI). However, this study uses VDS data, including conzone speeds and volumes, instead of the volume delay function for estimating travel times. RESULTS : The expressway traffic congestion costs estimated in this study are generally lower than those observed in KOTI's method. The expressway lines that ranked highest for traffic congestion costs are the Seoul Ring Expressway, Gyeongbu Expressway, and Youngdong Expressway; together they account for 64.54% of the entire expressway traffic congestion costs. In addition, this study estimates daily traffic congestion costs, which are highest on Saturdays. CONCLUSIONS : This study can be regarded as a new attempt to estimate expressway traffic congestion costs by using actual traffic data collected from an entire expressway system, in order to overcome the limitations of associated studies. In the future, the methodology is expected to be improved by utilizing associated big data gathered from other ITS facilities and car navigation systems.
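The core computation, pricing the delay implied by observed conzone speeds against a free-flow reference, can be sketched as follows. The value of time, free-flow speed, and conzone figures below are illustrative assumptions, not KOTI's or this study's parameters.

```python
# Hedged sketch: congestion cost = (observed travel time - free-flow travel
# time) per vehicle, times volume, times a value of time. All numbers assumed.
VALUE_OF_TIME = 15000          # won per vehicle-hour (assumed)
FREE_FLOW_KPH = 100            # congestion reference speed (assumed)

conzones = [                   # (name, length km, observed speed kph, volume)
    ("Z1", 5.0, 40.0, 3000),
    ("Z2", 8.0, 95.0, 2500),
    ("Z3", 3.0, 100.0, 4000),  # at/above free flow: no congestion cost
]

def congestion_cost(length_km, speed_kph, volume):
    if speed_kph >= FREE_FLOW_KPH:
        return 0.0
    delay_h = length_km / speed_kph - length_km / FREE_FLOW_KPH  # per vehicle
    return delay_h * volume * VALUE_OF_TIME

total = sum(congestion_cost(l, s, v) for _, l, s, v in conzones)
```

Summing this per conzone and per day over the network, using VDS-measured speeds rather than a volume delay function, is the substitution the abstract describes.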
Recently, real-time video image processing systems (VIPS) have become the most preferred traffic surveillance technology, and demand for them increases every year. In general, video detection systems use space-based detection algorithms and provide traffic information such as volume, speed, and occupancy. Most video detection systems already commercialized worldwide are built on tripwire-based algorithms that collect traffic information by judging the presence or absence of vehicles within a detection zone, but they are inherently unable to detect individual vehicles. Individual vehicle tracking systems, by contrast, can provide more specific spatial traffic information and thus diversify the available traffic data, enabling, for example, incident detection and detection of abrupt lane changes; however, their tracking range is limited to about 100 m, and observing beyond that requires an operator to zoom the camera in and magnify the image. To increase the effectiveness of vehicle tracking, this paper therefore proposes an algorithm that extends the tracking range from under 100 m to 200 m or more by using multiple CCTV systems, so that incidents and abnormal traffic flow can be detected.
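Extending tracking across cameras requires handing a track leaving one camera's view to a track entering the next camera's view. The matching rule below, gating by time gap and position along the road and picking the lowest-cost candidate, is an illustrative sketch with assumed thresholds, not the paper's algorithm.

```python
# Toy cross-camera handoff: match an exiting track to an entering one.

def handoff(exit_track, entry_tracks, max_gap_s=2.0, max_offset_m=15.0):
    """Pick the entering track closest in time and road position, or None."""
    best, best_cost = None, float("inf")
    for t in entry_tracks:
        dt = t["t"] - exit_track["t"]                 # time gap (s)
        dx = abs(t["pos_m"] - exit_track["pos_m"])    # road-position gap (m)
        if 0 <= dt <= max_gap_s and dx <= max_offset_m:
            cost = dt + dx / 10.0                     # assumed cost weighting
            if cost < best_cost:
                best, best_cost = t, cost
    return best

leaving = {"id": "cam1-7", "t": 100.0, "pos_m": 98.0}
entering = [{"id": "cam2-1", "t": 100.8, "pos_m": 103.0},
            {"id": "cam2-2", "t": 101.5, "pos_m": 140.0}]
match = handoff(leaving, entering)
```

Chaining such handoffs over several CCTV views is what lets a single vehicle identity persist beyond the roughly 100 m range of one camera.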
Recently, in-vehicle safety has also become a hot topic with the development of self-driving cars. Passive safety systems such as airbags and seat belts are giving way to active systems that actively grasp the status and behavior of the passengers, including the driver, to mitigate risk. Furthermore, occupant information is expected to enable customized services suited to each passenger, such as seat adjustment, air-conditioning operation, and DWD (distraction while driving) warnings. In this paper, we propose a robust vehicle occupant detection algorithm based on an RGB-Depth-Thermal camera for obtaining passenger information. The RGB-Depth-Thermal camera sensor system was configured to be robust against various environments. OpenPose, a deep learning algorithm, was used for occupant detection; it is advantageous in that it works not only on RGB images but also on thermal images, even with an existing trained model. The algorithm will be extended to acquire higher-level information, such as the passenger posture detection and face recognition mentioned in the introduction, and to provide customized active convenience services.
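The point of the multi-modal configuration is robustness: when one channel degrades, another still carries the occupant signal. The fallback rule below is a deliberately simple, hypothetical illustration (threshold and brightness values assumed); the paper runs OpenPose on whichever image is available, rather than switching this way.

```python
# Toy modality selection for a robust RGB-Depth-Thermal occupant detector.

def pick_modality(rgb_mean_brightness, dark_threshold=40):
    """Fall back to the thermal channel when the RGB frame is too dark."""
    return "thermal" if rgb_mean_brightness < dark_threshold else "rgb"

daytime = pick_modality(120)   # well-lit cabin: RGB is informative
night = pick_modality(12)      # dark cabin: thermal still sees the occupant
```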
Recent developments in robotics and intelligent vehicles have drawn attention to autonomous driving and advanced driver assistance systems. Fully automatic parking in particular is a key capability of intelligent vehicles, and accurate detection of parked vehicles is essential for it. In previous research, many types of sensors have been used to detect vehicles; 2D LiDAR is popular because it offers accurate range information without preprocessing. The L-shape feature is the most popular 2D feature for vehicle detection, but it is ambiguous for other objects such as buildings and bushes, which causes misdetections. We therefore propose an accurate vehicle detection method that applies a complete 3D vehicle model to 3D point clouds acquired from a forward-inclined 2D LiDAR. The proposed method is decomposed into two steps: vehicle candidate extraction and vehicle detection. By combining the L-shape feature with point cloud segmentation, we extract objects that are highly likely to be vehicles and then apply the 3D model to detect vehicles accurately. The method guarantees high detection performance and provides rich information for autonomous parking. To evaluate the method, we use data from various parking situations in complex urban scenes. Experimental results demonstrate its qualitative and quantitative performance.
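The L-shape cue used in the candidate-extraction step can be sketched as follows: take the scan point nearest the sensor as the corner and test whether the two arms of the cluster are roughly perpendicular. The cluster geometry and angle gate below are invented for illustration; the paper's full method additionally fits the 3D vehicle model.

```python
import math

# Minimal L-shape check on a 2D point cluster (sensor at the origin).

def l_shape_angle(points):
    """Angle (degrees) at the corner between the cluster's two arm endpoints."""
    corner = min(points, key=lambda p: math.hypot(*p))
    a, b = points[0], points[-1]                  # arm endpoints
    v1 = (a[0] - corner[0], a[1] - corner[1])
    v2 = (b[0] - corner[0], b[1] - corner[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

# Toy cluster: side and rear of a parked car, meeting at corner (2.0, 5.0).
side = [(3.8 - 0.2 * i, 5.0) for i in range(9)]        # along x at y = 5
rear = [(2.0, 5.0 + 0.2 * i) for i in range(1, 8)]     # along y at x = 2
angle = l_shape_angle(side + [(2.0, 5.0)] + rear)
is_l_shape = 75 <= angle <= 105                        # assumed tolerance
```

A building facade would produce a near-straight (about 180-degree) cluster and fail this gate, but bushes can still mimic an L, which is exactly the ambiguity the 3D model step is meant to resolve.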
In this paper, we propose a method that detects moving objects around an autonomous vehicle using LRF sensor data. Object detection and tracking methods are widely studied in areas such as safe driving and safe navigation of autonomous vehicles. The proposed method consists of three steps: data segmentation, mobility classification, and object tracking. To make the raw LRF sensor data useful, an occupancy grid is generated and the raw data are segmented according to their appearance. To classify whether an object is moving or static, trajectory patterns are analyzed. In the last step, a Markov chain Monte Carlo (MCMC) method is used to track the objects. Experimental results indicate that the proposed method can accurately detect moving objects.
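The occupancy-grid step of the pipeline above can be sketched by mapping each LRF beam's polar reading to the grid cell at its endpoint. The cell size and scan values are toy inputs; a full grid would also ray-trace the free cells along each beam, which this sketch omits.

```python
import math

# Mark the endpoint cell of each LRF beam (angle, range) as occupied.
CELL = 0.5                                   # grid resolution in metres (assumed)

def to_cell(angle_rad, range_m):
    """Polar beam endpoint -> integer grid cell indices."""
    x = range_m * math.cos(angle_rad)
    y = range_m * math.sin(angle_rad)
    return (int(x // CELL), int(y // CELL))

# Toy scan: two close returns ahead and one return to the left.
scan = [(math.radians(a), r) for a, r in [(0, 2.0), (5, 2.0), (90, 4.0)]]
occupied = {to_cell(a, r) for a, r in scan}
```

Comparing such grids over successive scans is what allows the later steps to segment objects and classify their trajectories as moving or static.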