Advanced Real-Time Object and Damage Detection in Disaster Management with Unmanned Aerial Vehicle-Assisted Deep Learning
In search and rescue operations following natural disasters, the effective use of unmanned aerial vehicle imagery is crucial for rapid and accurate damage assessment. This study presents a real-time multi-class object and damage detection approach built on the YOLOv8n and YOLOv11n base models for post-earthquake search, rescue, and damage assessment. A real-time, video-based, domain-generalized YOLO framework is proposed for disaster response with unmanned aerial vehicle monitoring; it combines multi-dataset fusion with adverse-condition augmentation and lightweight temporal integration while maintaining detection speed. A hybrid dataset of 8,754 images is constructed from VisDrone2019, VisDrone-Adverse Weather, DAWN, UAVDT, RescueNet, and earthquake imagery collected from other online sources, and is further augmented with images simulating low-visibility conditions such as fog, smoke, darkness, rain, and dust. The augmentation process applies geometric transformations (rotation, horizontal and vertical translation, and scaling), color-space manipulations (brightness, contrast, and HSV adjustments), simulated environmental conditions (fog, smoke, darkness, rain, and dust), and Gaussian noise with random pixel perturbations. The models are trained on this dataset to perform real-time video processing and object detection, with the aim of rapid adaptation to dynamic conditions in disaster areas. The YOLOv11n model outperforms YOLOv8n, reaching a mAP of 92.1% and an F1 score of 0.89, with results that are on average 15% more robust under challenging environmental conditions. The proposed approach operates directly on unmanned aerial vehicle video streams and provides temporal consistency through instantaneous adaptation in dynamic disaster scenes.
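The color-space and noise steps of the augmentation pipeline described above can be sketched as follows. This is a minimal pure-Python illustration, not the paper's actual implementation: the function name, parameter values, and the 8-bit grayscale pixel representation are all assumptions for the sake of the example.

```python
import random

def augment(image, brightness=20, contrast=1.2, noise_sigma=8.0, seed=42):
    """Apply contrast/brightness adjustment and additive Gaussian noise
    to a 2-D list of 8-bit pixel values (illustrative sketch only)."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    out = []
    for row in image:
        new_row = []
        for px in row:
            # Color-space manipulation: contrast scaling, then brightness shift.
            v = px * contrast + brightness
            # Noise simulation: additive zero-mean Gaussian noise.
            v += rng.gauss(0.0, noise_sigma)
            # Clamp back to the valid 8-bit range [0, 255].
            new_row.append(max(0, min(255, round(v))))
        out.append(new_row)
    return out

frame = [[10, 120, 250], [0, 60, 200]]
aug = augment(frame)
```

In a real pipeline these operations would typically be composed with the geometric transformations and weather simulations (fog, smoke, rain, dust) via an image-augmentation library rather than hand-written loops.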
The proposed method is comprehensively evaluated in terms of real-time object detection performance and damage classification accuracy. The results demonstrate that the model can be used effectively in disaster management processes.
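One lightweight way to obtain the temporal consistency mentioned above is to smooth per-class detection confidences across consecutive video frames. The sketch below uses a simple exponential moving average; the abstract does not specify the paper's actual temporal integration scheme, so the function, the `alpha` parameter, and the dictionary-based detection format are illustrative assumptions.

```python
def smooth_confidences(frames, alpha=0.6):
    """Exponentially smooth per-class detection confidences across
    consecutive frames, damping single-frame detection dropouts
    (illustrative sketch, not the paper's exact method)."""
    smoothed, state = [], {}
    for dets in frames:  # dets maps class name -> confidence in [0, 1]
        for cls, conf in dets.items():
            prev = state.get(cls, conf)  # first sighting: no history to blend
            state[cls] = alpha * conf + (1 - alpha) * prev
        smoothed.append(dict(state))
    return smoothed

# A flickering "person" detection is stabilized across three frames.
stream = [{"person": 0.9}, {"person": 0.2}, {"person": 0.85}]
result = smooth_confidences(stream)
```

Because each frame only blends new scores into a small running state, this kind of smoothing adds negligible per-frame cost, which is consistent with the abstract's goal of maintaining detection speed.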