This study investigates a vision-based autonomous landing algorithm using a VTOL-type UAV. VTOL (Vertical Take-Off and Landing) UAVs are hybrid systems that combine the forward flight capability of fixed-wing aircraft with the vertical take-off and landing functionality of multirotors, making them increasingly popular in drone-based industrial applications. Because control during the transition between multirotor mode and fixed-wing mode is complex, many companies rely on off-the-shelf autopilot software such as ArduPilot. However, when ArduPilot is used as-is, it does not support the velocity-based GUIDED commands commonly used in multirotor systems for vision-based landing. In addition, the GUIDED mode in the VTOL firmware is designed primarily for fixed-wing operation, so its control logic must be modified to enable position-based control in multirotor mode. In this study, we modified the control software to support vision-based landing with a VTOL UAV, validated the proposed algorithm in simulation using Gazebo, and further verified it through real-world experiments on actual hardware.
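For orientation, the snippet below is a minimal sketch of how a velocity-based GUIDED setpoint can be streamed to an ArduPilot vehicle over MAVLink with pymavlink; the connection string and commanded velocities are placeholders, and this is not the modified control logic described in the abstract above.

```python
# Minimal sketch: streaming a velocity setpoint in GUIDED mode via MAVLink (pymavlink).
# The endpoint and velocity values are illustrative assumptions.
from pymavlink import mavutil

master = mavutil.mavlink_connection("udpin:0.0.0.0:14550")  # assumed SITL/vehicle endpoint
master.wait_heartbeat()

def send_velocity_setpoint(vx, vy, vz):
    """Send a NED-frame velocity command (m/s); position/accel/yaw fields are masked out."""
    type_mask = (
        mavutil.mavlink.POSITION_TARGET_TYPEMASK_X_IGNORE
        | mavutil.mavlink.POSITION_TARGET_TYPEMASK_Y_IGNORE
        | mavutil.mavlink.POSITION_TARGET_TYPEMASK_Z_IGNORE
        | mavutil.mavlink.POSITION_TARGET_TYPEMASK_AX_IGNORE
        | mavutil.mavlink.POSITION_TARGET_TYPEMASK_AY_IGNORE
        | mavutil.mavlink.POSITION_TARGET_TYPEMASK_AZ_IGNORE
        | mavutil.mavlink.POSITION_TARGET_TYPEMASK_YAW_IGNORE
        | mavutil.mavlink.POSITION_TARGET_TYPEMASK_YAW_RATE_IGNORE
    )
    master.mav.set_position_target_local_ned_send(
        0,                                      # time_boot_ms
        master.target_system, master.target_component,
        mavutil.mavlink.MAV_FRAME_LOCAL_NED,
        type_mask,
        0, 0, 0,                                # x, y, z positions (ignored)
        vx, vy, vz,                             # velocities in NED (m/s); +vz is down
        0, 0, 0,                                # accelerations (ignored)
        0, 0)                                   # yaw, yaw_rate (ignored)

# Example: nudge toward the detected marker while descending at 0.5 m/s.
send_velocity_setpoint(0.2, 0.0, 0.5)
```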
In this paper, a water rescue mission system was developed for water safety management areas by utilizing unmanned mobility (drone systems) and AI-based visual recognition technology to automatically detect and localize drowning persons, allowing a timely response within the golden time. First, we detected suspected human subjects in daytime and nighttime videos, then estimated human skeleton-based poses and used LSTM models to learn the resulting features and motion patterns. After a drowning person is detected, the proposed algorithm obtains accurate GPS location information of the person to support rescue activities. In our experiments, drowning detection achieved an F1-score of 80.1%, and the average position estimation error was about 0.29 meters.
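As a reference point, the following is a minimal PyTorch sketch of an LSTM classifier over skeleton keypoint sequences of the kind the abstract above describes; the keypoint count, sequence length, and layer sizes are assumptions, not the paper's settings.

```python
# Minimal sketch: LSTM classifier over skeleton keypoint sequences (PyTorch).
# 17 COCO-style keypoints and two classes (normal / drowning) are assumed.
import torch
import torch.nn as nn

class DrownLSTM(nn.Module):
    def __init__(self, n_keypoints=17, hidden=128, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_keypoints * 2,   # (x, y) per keypoint
                            hidden_size=hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time, keypoints * 2)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # classify from the last time step

model = DrownLSTM()
dummy = torch.randn(8, 30, 34)         # 8 clips, 30 frames, 17 keypoints * 2 coords
logits = model(dummy)                  # -> (8, 2) class scores
```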
Ensuring operational safety and reliability in Unmanned Aerial Vehicles (UAVs) necessitates advanced onboard fault detection. This paper presents a novel, mobility-aware multi-sensor health monitoring framework, uniquely fusing visual (camera) and vibration (IMU) data for enhanced near real-time inference of rotor and structural faults. Our approach is tailored for resource-constrained flight controllers (e.g., Pixhawk) without auxiliary hardware, utilizing standard flight logs. Validated on a 40 kg-class UAV with induced rotor damage (10% blade loss) over 100+ minutes of flight, the system demonstrated strong performance: a Multi-Layer Perceptron (MLP) achieved an RMSE of 0.1414 and R² of 0.92 for rotor imbalance, while a Convolutional Neural Network (CNN) detected visual anomalies. Significantly, incorporating UAV mobility context reduced false positives by over 30%. This work demonstrates a practical pathway to deploying sophisticated, lightweight diagnostic models on standard UAV hardware, supporting real-time onboard fault inference and paving the way for more autonomous and resilient health-aware aerial systems.
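To make the regression step concrete, here is an illustrative scikit-learn sketch of an MLP regressing a rotor-imbalance score from vibration features; the synthetic data, feature layout, and network size are assumptions and do not reproduce the reported RMSE/R² figures.

```python
# Illustrative sketch: MLP regression of rotor-imbalance severity from IMU features.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# Placeholder data: rows = flight-log windows, columns = vibration features
# (e.g., per-axis RMS, dominant FFT magnitudes) plus a mobility-context feature.
X = np.random.rand(500, 8)
y = np.random.rand(500)                     # synthetic imbalance severity in [0, 1]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)

pred = mlp.predict(X_te)
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
print("R2:  ", r2_score(y_te, pred))
```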
Anomaly detection for Unmanned Aerial Vehicles (UAVs) is an important technique for ensuring airframe stability. There has been extensive research on anomaly detection using deep learning; however, most of it does not consider the limited computational power and energy available on UAVs. Compressing a deep learning model offers significant advantages in computational and energy efficiency. Therefore, this paper proposes a real-time anomaly detection model for UAVs obtained through model compression. The proposed model has three main layers: a convolutional neural network (CNN) layer, a long short-term memory (LSTM) layer, and an autoencoder (AE) layer. The model is compressed to increase computational efficiency, maintaining the same level of accuracy as the original model while reducing processing time on the UAV. The proposed model can increase the stability of UAVs from a software perspective and is expected to improve UAV efficiency by freeing up computational capacity on the hardware.
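The sketch below shows one way a CNN + LSTM + autoencoder stack and a compression step could be assembled in PyTorch; all layer sizes are illustrative, and dynamic int8 quantization stands in for whatever compression method the paper actually uses.

```python
# Minimal sketch: CNN + LSTM autoencoder with reconstruction-error anomaly scoring,
# followed by post-training dynamic quantization as one possible compression step.
import torch
import torch.nn as nn

class CnnLstmAE(nn.Module):
    def __init__(self, n_features=6, latent=16):
        super().__init__()
        self.cnn = nn.Sequential(                  # local patterns along the time axis
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU())
        self.encoder = nn.LSTM(32, latent, batch_first=True)
        self.decoder = nn.LSTM(latent, 32, batch_first=True)
        self.out = nn.Linear(32, n_features)

    def forward(self, x):                          # x: (batch, time, features)
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)
        z, _ = self.encoder(h)
        d, _ = self.decoder(z)
        return self.out(d)                         # reconstruction

model = CnnLstmAE()
x = torch.randn(4, 100, 6)                         # 4 windows, 100 samples, 6 sensor channels
recon = model(x)
score = torch.mean((recon - x) ** 2, dim=(1, 2))   # per-window anomaly score

# One compression option: dynamic quantization of LSTM/Linear layers to int8.
quantized = torch.quantization.quantize_dynamic(model, {nn.LSTM, nn.Linear}, dtype=torch.qint8)
```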
This study evaluated the safety impact of automated traffic enforcement cameras targeting tailgating behavior at signalized intersections by comparing traffic conditions shortly after installation and one year later. The Kukkiwon intersection in Gangnam-gu, Seoul, South Korea was selected as the study site. Individual vehicle speeds, accelerations, and following distances were extracted from video data using YOLOv8 and ByteTrack, which are advanced deep learning-based object detection and tracking algorithms. Surrogate safety measures (SSM), such as time to collision (TTC), modified time to collision (MTTC), and proportion of stopping distance (PSD), were calculated to assess changes in traffic safety. All SSMs indicated an improvement one year after the installation of enforcement cameras, suggesting a reduction in collision risks. In particular, the PSD indicator showed a notable improvement, reflecting better maintenance of safe following distances. These results highlight the effectiveness of automated enforcement in improving intersection safety and suggest its scalability to other intersections with similar tailgating issues. Future research should explore long-term and multisite effects using diverse intersection types and behavioral indicators.
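For readers unfamiliar with the surrogate safety measures named above, here is a hedged sketch of TTC and PSD computed from tracked vehicle states; the friction coefficient and reaction time are generic textbook assumptions rather than the study's parameters.

```python
# Hedged sketch of two surrogate safety measures from tracked trajectories.
def time_to_collision(gap_m, v_follow_ms, v_lead_ms):
    """TTC (s): time for the follower to close the gap at the current speed difference."""
    closing = v_follow_ms - v_lead_ms
    return gap_m / closing if closing > 0 else float("inf")

def proportion_of_stopping_distance(remaining_m, v_ms, mu=0.7, reaction_s=1.0, g=9.81):
    """PSD: remaining distance to the potential conflict point divided by the
    distance needed to stop (reaction distance + braking distance)."""
    stopping = v_ms * reaction_s + v_ms ** 2 / (2 * mu * g)
    return remaining_m / stopping

# Example: follower at 15 m/s, leader at 12 m/s, 20 m apart.
print(time_to_collision(20.0, 15.0, 12.0))          # about 6.7 s
print(proportion_of_stopping_distance(20.0, 15.0))  # < 1 means the follower cannot stop in time
```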
This study constructed a polygon-based map of uninhabited islands together with attribute information in order to build improved geographic data on uninhabited islands. As a result, 3,460 point records and 3,447 polygon-based island records were constructed. These include 463 inhabited islands, and the number of uninhabited islands is 80 more than in the existing uninhabited-island information service of the Ministry of Oceans and Fisheries. As attribute information, more than 59 alternate island names and 61 cases of uninhabited-island terrain change were compiled. Terrain changes were classified into four types: land-tying, disappearance, connection, and merging. The data construction process had some limitations in integrating coastline data, removing unnecessary polygon objects, and organizing alternate names. Despite these limitations, the data built in this study provide basic material for monitoring the spatial changes of uninhabited islands and for establishing conservation policies, and are expected to be used in a variety of future studies.
This study developed an unmanned autonomous mobile algae collection device (HAMA-bot) to remove the high-density algae that concentrate along the waterfront of urban agricultural reservoirs, and analyzed the algae removal effect after field application to a medium-sized urban reservoir. In terms of the algae reduction effect (Chl-a) at the study site, the treatment area averaged 18.5% higher than the control area before HAMA-bot operation, whereas after operation the treatment area showed an average reduction of 24%. In addition, the Chl-a removal rate, analyzed directly from the influent and effluent of the HAMA-bot, was very high, averaging 96.9% (94.7–99.2%). The device is currently optimized for urban reservoirs and built at a small scale, but it is a useful tool that could be applied at larger scale to large dams and rivers, and its field applicability is expected to improve once the scale is optimized.
This study was conducted to propose a more efficient technique for real-time identification of sorghum cultivation fields by training Seg-Net and U-Net models on UAV-based RGB imagery acquired in a major sorghum production area and then examining their generalization ability. Model training used 264 images of sorghum fields in Andong-si, Gyeongsangbuk-do, and testing used 14 images of sorghum fields in Jecheon-si, Chungcheongbuk-do. U-Net training took 4 h 37 min over 14 epochs, with Accuracy_C = 0.9426, Loss_C = 0.1593, Dice_coef_C = 0.9223, Accuracy_V = 0.6403, Loss_V = 1.9624, and Dice_coef_V = 0.6402; Seg-Net training took 1 h 55 min over 101 epochs, with Accuracy_C = 0.6363, Loss_C = 0.6573, Dice_coef_C = 0.5586, Accuracy_V = 0.5711, Loss_V = 0.6785, and Dice_coef_V = 0.5586. On the test set, U-Net achieved Accuracy_T = 0.6806, Loss_T = 0.7180, and Dice_coef_T = 0.5558, while Seg-Net achieved Accuracy_T = 0.7472, Loss_T = 0.5225, and Dice_coef_T = 0.6159. Although Seg-Net's calibration (training-fit) performance is lower, it generalizes better, trains faster, and requires less memory, so it is considered a more efficient model than U-Net for sorghum field segmentation.
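For reference, the Dice coefficient reported above is commonly implemented as the following Keras-style metric; this is a generic sketch assuming binary masks and a small smoothing term, not necessarily the exact formulation used in the study.

```python
# Generic Dice coefficient metric for binary segmentation masks (TensorFlow/Keras style).
import tensorflow as tf

def dice_coef(y_true, y_pred, smooth=1.0):
    y_true_f = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred_f = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth)
```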
North Korea has repeatedly carried out provocations using unmanned aerial vehicles (UAVs), and the threat posed by UAVs continues to escalate, as evidenced by recent provocations involving waste-laden balloons and the development of suicide drones. North Korea's small UAVs are difficult to detect due to their low radar cross-section (RCS) values, necessitating the efficient deployment and operation of assets for effective response. Against this backdrop, this study aims to predict the infiltration routes of enemy UAVs from their perspective, assuming they avoid key facilities and obstacles, and to propose deployment strategies that enable rapid detection and response during provocations. Building on previous studies that use the Markov Decision Process (MDP), this research presents a model that reflects both UAV flight characteristics and regional environments. Unlike previous models that designate a single starting point, this study addresses the practical uncertainty in initial infiltration points by incorporating multiple starting points into the scenarios. By aggregating the probability maps derived from these variations into a unified map, the model predicts areas with a high likelihood of UAV infiltration over time. Furthermore, based on case studies in the capital region, this research proposes deployment strategies tailored to the specifications of currently known anti-drone integrated systems. These strategies are expected to support military decision-making by enabling the efficient operation of assets in areas with a high probability of UAV infiltration.
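The toy example below illustrates the aggregation idea only: occupancy distributions propagated from several assumed start cells on a grid are summed into one map. The transition model is a placeholder biased random walk, not the study's MDP policy or terrain model.

```python
# Toy sketch: aggregate time-summed occupancy maps over multiple infiltration start points.
import numpy as np

H, W, T = 20, 20, 30                       # grid height, width, time steps
starts = [(0, 3), (0, 10), (0, 16)]        # assumed candidate entry cells

def step_distribution(p):
    """Placeholder dynamics: shift probability mass mostly forward, a little sideways."""
    nxt = np.zeros_like(p)
    nxt[1:, :]  += 0.6 * p[:-1, :]         # move forward (deeper into the area)
    nxt[:, 1:]  += 0.15 * p[:, :-1]        # drift right
    nxt[:, :-1] += 0.15 * p[:, 1:]         # drift left
    nxt         += 0.1 * p                 # loiter
    return nxt / nxt.sum()

aggregate = np.zeros((H, W))
for s in starts:
    p = np.zeros((H, W)); p[s] = 1.0
    for _ in range(T):
        p = step_distribution(p)
        aggregate += p                     # accumulate occupancy over time

aggregate /= len(starts)                   # unified map over all start points
hot_cells = np.argwhere(aggregate > aggregate.mean() + 2 * aggregate.std())
```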
The diversity of smart EV (electric vehicle)-related industries is increasing with the growth of battery-based eco-friendly electric vehicle component and material technology, and labor-intensive industries such as logistics, manufacturing, food, agriculture, and services have long invested in and studied automation. Accordingly, various types of robots, such as autonomous mobile robots and collaborative robots, are being used in each process to improve industrial engineering tasks such as optimization, productivity management, and work management. A technology that must accompany this trend toward unmanned automation is unmanned automatic charging: if autonomous mobile robots have to be charged manually, their utility cannot be maximized. In this paper, we study unmanned charging of autonomous mobile robots based on charging-terminal docking and undocking, using an unmanned charging system composed of a monocular camera, a multi-joint robot, a gripper, and a server. In experiments evaluating the system's performance, the average charging-terminal recognition rate was 98% and the average recognition time was 0.0099 seconds. An additional experiment evaluating the docking and undocking success rate of the charging terminal showed an average success rate of 99%.
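As a rough illustration of monocular pose estimation for such a docking task, the snippet below recovers the terminal pose from four known target points with OpenCV's solvePnP; the target geometry, pixel coordinates, and camera intrinsics are all assumptions, and the paper's actual recognition pipeline may differ.

```python
# Hedged sketch: charging-terminal pose from a single image via cv2.solvePnP,
# assuming four known corner points on the terminal and a calibrated camera.
import numpy as np
import cv2

# 3D corner coordinates of the terminal target in metres (assumed square target).
object_pts = np.array([[-0.05, -0.05, 0], [0.05, -0.05, 0],
                       [0.05, 0.05, 0], [-0.05, 0.05, 0]], dtype=np.float32)
# Matching pixel coordinates detected in the camera image (placeholder values).
image_pts = np.array([[310, 240], [410, 242], [408, 340], [308, 338]], dtype=np.float32)

K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)  # intrinsics
dist = np.zeros(5)                                                          # no distortion

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
if ok:
    print("terminal position in camera frame (m):", tvec.ravel())  # drives the docking motion
```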
In agriculture, a rough soil surface causes a variety of problems. It wastes resources such as water, soil, seeds, and fertilizer, raising production costs and harming the environment. Agricultural machinery also struggles to stay balanced on rough soil, which reduces farmers' convenience, equipment durability, and autonomous driving performance, hindering the realization of precision agriculture. A rough soil surface is likely to consist of coarse particles, which impede plant growth and affect drainage. Most existing approaches to measuring topsoil roughness do not cover the whole field; instead, they measure roughness at a few points and estimate it for the remaining area. This study therefore proposes a method for efficiently measuring topsoil roughness using an unmanned aerial vehicle (UAV). The experiment used 180 aerial photographs acquired at an altitude of 40 m with 86% overlap in Bubuk-myeon, Miryang-si, Gyeongsangnam-do. From the dense point cloud generated from the image data, a Python routine selected the eight nearest neighboring points within a fixed radius of each center point and computed the indices from the elevation differences. Three indicators, the TRI index, the Roughness index, and the standard deviation, were computed and visualized over the field. Unlike approaches that estimate roughness for the rest of a field from a few measured points, this study obtains roughness indices at every point directly at the point-cloud stage, improving measurement precision and supporting farmland management.
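A minimal Python sketch of the per-point computation is given below, using scipy's cKDTree to pick the eight nearest neighbours within a radius; the radius value and the synthetic points are placeholders for the photogrammetric dense cloud.

```python
# Minimal sketch: per-point TRI, Roughness, and elevation standard deviation from a point cloud.
import numpy as np
from scipy.spatial import cKDTree

points = np.random.rand(10000, 3)            # x, y, z (placeholder for the dense cloud)
tree = cKDTree(points[:, :2])                 # neighbour search in the horizontal plane

def roughness_indices(i, radius=0.5, k=8):
    dist, idx = tree.query(points[i, :2], k=k + 1, distance_upper_bound=radius)
    idx = idx[np.isfinite(dist) & (idx != i)] # drop the point itself and out-of-radius slots
    if len(idx) == 0:
        return np.nan, np.nan, np.nan
    dz = points[idx, 2] - points[i, 2]
    tri = np.sqrt(np.sum(dz ** 2))                         # Terrain Ruggedness Index
    rough = points[idx, 2].max() - points[idx, 2].min()    # Roughness (elevation range)
    std = np.std(points[idx, 2])                           # std. dev. of neighbour elevations
    return tri, rough, std

tri, rough, std = roughness_indices(0)
```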
PURPOSES : As the number of fixed traffic enforcement equipment increases rapidly, efficient operation and management plans need to be improved. The aim of this study is to evaluate the factors influencing fixed traffic enforcement equipment failure. METHODS : This study applied binary logistic regression analysis to the database provided by the Korean Road Traffic Authority to evaluate the factors affecting the failure of fixed traffic enforcement equipment. RESULTS : The main factors affecting failure were red-light cameras, old equipment, the Jeju region, national expressways, and equipment with low speed limits. CONCLUSIONS : This study can serve as basic data for improving operation and management plans for the maintenance of traffic enforcement equipment. Through this study, it will be possible to establish a step-by-step plan with high efficiency compared to the required manpower input.
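The sketch below only illustrates the general shape of such a binary logistic regression; the feature names and synthetic labels stand in for the actual database fields and are not the study's data.

```python
# Illustrative sketch: binary logistic regression on equipment-failure factors (scikit-learn).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.integers(0, 2, n),                    # assumed: red-light camera (1) vs other (0)
    rng.integers(1, 15, n),                   # assumed: equipment age in years
    rng.choice([50, 60, 80, 100, 110], n),    # assumed: posted speed limit (km/h)
])
# Synthetic failure labels loosely tied to age, only to make the example runnable.
y = (X[:, 1] + rng.normal(0, 3, n) > 8).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("coefficients:", clf.coef_[0])          # sign/magnitude indicate each factor's influence
print("odds ratios: ", np.exp(clf.coef_[0]))
```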
As technologies have developed rapidly in this 4th Industrial Revolution era, their application to the defense industry has also been growing. With these advanced technologies, Manned-Unmanned Teaming systems are being considered for various military operations. In this study, we consider the Location-Routing Problem for reconnaissance and surveillance missions of maritime manned-unmanned surface vehicles. As a solution technique, a two-phase method is presented. In the first (location) phase, the p-median problem is solved with Lagrangian relaxation and the subgradient method to determine which nodes serve as seeds for the manned vehicles. In the second (routing) phase, using the results of the location phase, Vehicle Routing Problems are solved with the Location Based Heuristic to determine the search routes of the unmanned vehicles. Computational experiments on three network data sets show the performance of the proposed two-phase method.
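To convey the two-phase decomposition, the following is a hedged toy sketch that substitutes a greedy p-median pass and nearest-neighbour routing for the paper's Lagrangian relaxation with the subgradient method and the Location Based Heuristic; the network is random and purely illustrative.

```python
# Toy sketch of the two-phase structure: greedy p-median seeds, then nearest-neighbour routes.
import numpy as np

rng = np.random.default_rng(1)
nodes = rng.random((30, 2))                       # surveillance nodes in the unit square
p = 3                                             # number of manned (seed) vehicles
dist = np.linalg.norm(nodes[:, None] - nodes[None, :], axis=2)

# Phase 1 (location): greedily add the seed that most reduces total assignment cost.
seeds = []
for _ in range(p):
    best = min((j for j in range(len(nodes)) if j not in seeds),
               key=lambda j: dist[:, seeds + [j]].min(axis=1).sum())
    seeds.append(best)

# Phase 2 (routing): assign nodes to the closest seed, then build a nearest-neighbour
# route for each unmanned vehicle starting from its seed.
assign = dist[:, seeds].argmin(axis=1)
routes = []
for s_idx, seed in enumerate(seeds):
    remaining = [i for i in range(len(nodes)) if assign[i] == s_idx and i != seed]
    route, cur = [seed], seed
    while remaining:
        cur = min(remaining, key=lambda j: dist[cur, j])
        remaining.remove(cur)
        route.append(cur)
    routes.append(route)
print(routes)
```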