AIRCRAFT DETECTION WITH DEEP NEURAL NETWORKS AND CONTOUR-BASED METHODS
DOI: https://doi.org/10.15588/1607-3274-2024-4-12
Keywords: machine learning, image and contour recognition, optical image preprocessing, high-resolution imagery, aircraft detection
Abstract
Context. Aircraft detection is an essential task in the military domain, as fast and accurate aircraft identification enables a timely response to potential threats, effective airspace control, and the protection of national security. The use of deep neural networks improves the accuracy of aircraft recognition, which is critical for modern defense and airspace monitoring needs.
Objective. The work aims to improve the accuracy of aircraft recognition in high-resolution optical satellite imagery by using deep neural networks and a method of sequential boundary traversal to detect object contours.
Method. A method for improving the accuracy of aircraft detection in high-resolution satellite images is proposed. The first stage involves collecting data from the HRPlanesv2 dataset, which contains high-resolution satellite images with aircraft annotations. The second stage consists of preprocessing the images with a sequential boundary traversal method to detect object contours. In the third stage, training data are created by integrating the obtained contours with the original HRPlanesv2 images. In the fourth stage, the YOLOv8m object detection model is trained separately on the original HRPlanesv2 dataset and on the preprocessed dataset, which allows evaluating the impact of the additional contour features on model performance.
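A minimal sketch of the preprocessing and data-integration stages (stages two and three) is given below, assuming OpenCV's findContours, which implements the Suzuki-Abe border-following algorithm listed in the references, as the sequential boundary traversal step. The file paths, Otsu thresholding, and green contour overlay are illustrative assumptions, not the authors' exact settings.

# Sketch of the contour-preprocessing stage, assuming OpenCV.
# Paths and thresholding choices are placeholders for illustration.
import cv2

def add_contour_overlay(image_path: str, output_path: str) -> None:
    """Extract object contours and fuse them with the original image."""
    image = cv2.imread(image_path)                       # original HRPlanesv2 tile
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # suppress sensor noise
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Sequential boundary traversal (border following) over the binary mask
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)
    enriched = image.copy()
    cv2.drawContours(enriched, contours, -1, (0, 255, 0), 1)  # overlay contours
    cv2.imwrite(output_path, enriched)

# Hypothetical file names for a single training tile
add_contour_overlay("hrplanesv2/train/img_0001.jpg",
                    "hrplanesv2_contours/train/img_0001.jpg")

The enriched images produced this way would then replace the originals in the preprocessed training set while keeping the HRPlanesv2 annotations unchanged.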
Results. Software implementing the proposed method was developed. Testing was conducted on the original data and on the data after preprocessing. The results confirmed the superiority of the proposed method over classical approaches, providing higher aircraft recognition accuracy: the mAP50 metric reached 0.994 and the mAP50-95 metric reached 0.864, which are 1% and 4.8% higher, respectively, than those of the standard approach.
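For reference, a hedged sketch of how the two training runs and the mAP50 / mAP50-95 comparison could be reproduced with the ultralytics YOLOv8 package is shown below; the dataset YAML names, epoch count, and image size are placeholders rather than the authors' reported configuration.

# Sketch of stage four: train YOLOv8m separately on the original and the
# contour-enriched dataset, then read mAP50 / mAP50-95 from validation.
from ultralytics import YOLO

for data_cfg in ("hrplanesv2_original.yaml", "hrplanesv2_contours.yaml"):  # hypothetical configs
    model = YOLO("yolov8m.pt")                          # pretrained YOLOv8m weights
    model.train(data=data_cfg, epochs=100, imgsz=640)   # placeholder hyperparameters
    metrics = model.val(data=data_cfg)                  # evaluate on the validation split
    print(data_cfg, metrics.box.map50, metrics.box.map) # mAP50 and mAP50-95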
Conclusions. The experiments confirm the effectiveness of the proposed aircraft detection method, which combines deep neural networks with sequential boundary traversal for object contour detection. The results indicate the high accuracy and efficiency of this approach, which allows it to be recommended for research related to aircraft recognition in high-resolution imagery. Further research could focus on improving image preprocessing methods and developing object recognition technologies in machine learning.
References
Tarmizi I. A., Aziz A. A. Vehicle Detection Using Convolutional Neural Network for Autonomous Vehicles, International Conference on Intelligent and Advanced System, ICIAS 2018. Kuala Lumpur, Malaysia, 2018, pp. 1–5. DOI: 10.1109/ICIAS.2018.8540563.
Li L., Mu X., Li S., Peng H. A Review of Face Recognition Technology, IEEE Access, 2020, Vol. 8, pp. 139110–139120. DOI: 10.1109/ACCESS.2020.3011028.
Galić I., Habijan M., Leventić H., Romić K. Machine Learning Empowering Personalized Medicine: A Comprehensive Review of Medical Image Analysis Methods, Electronics, 2023, Vol. 12, № 21, P. 4411. DOI: 10.3390/electronics12214411.
Hnatushenko V., Kashtan V. Automated pansharpening information technology of satellite images, Radio Electronics, Computer Science, Control, 2021, № 2, pp. 123–132. DOI: 10.15588/1607-3274-2021-2-13.
Kashtan V., Hnatushenko V. Machine learning for automatic extraction of water bodies using Sentinel-2 imagery, Radio Electronics, Computer Science, Control, 2024, № 1, pp. 118–127. DOI: 10.15588/1607-3274-2024-1-11.
Zhou F., Deng H., Xu Q., Lan X. CNTR-YOLO: Improved YOLOv5 Based on ConvNext and Transformer for Aircraft Detection in Remote Sensing Images, Electronics, 2023, Vol. 12, № 12, P. 2671. DOI: 10.3390/electronics12122671.
Zhou L., Yan H., Shan Y., Zheng Ch., Liu Y., Zuo X., Qiao B. Aircraft Detection for Remote Sensing Images Based on Deep Convolutional Neural Networks, Journal of Electrical and Computer Engineering, 2021, Vol. 2021, pp. 1–16. DOI: 10.1155/2021/4685644.
Kelm A. P., Rao V. S., Zoelzer U. Object Contour and Edge Detection with RefineContourNet, Computer Analysis of Images and Patterns. CAIP 2019, Salerno, 2–6 September, 2019. Springer, Cham, 2019, pp. 246–258. DOI: 10.1007/978-3-030-29888-3_20.
Berezina S., Solonets O., Lee K., Bortsova M. An information technique for segmentation of military assets in conditions of uncertainty of initial data, Information Processing Systems, 2021, №4(167), pp. 6–18. DOI: 10.30748/soi.2021.167.01.
Liu Z., Gao Y., Du Q., Chen M., Lv W. YOLO-Extract: Improved YOLOv5 for Aircraft Object Detection in Remote Sensing Images, IEEE Access, 2023, Vol. 11, pp. 1742–1751. DOI: 10.1109/ACCESS.2023.3233964.
Liu Z., Gao Y., Du Q. YOLO-Class: Detection and Classification of Aircraft Targets in Satellite Remote Sensing Images Based on YOLO-Extract, IEEE Access, 2023, Vol. 11, pp. 109179–109188. DOI: 10.1109/ACCESS.2023.3321828.
Chen J., Shen Y., Liang Y., Wang Z., Zhang Q. YOLOSAD: An Efficient SAR Aircraft Detection Network, Applied Sciences, 2024, Vol. 14, № 7, P. 3025. DOI: 10.3390/app14073025.
Unsal D. HRPlanesv2 – High Resolution Satellite Imagery for Aircraft Detection, Zenodo, 2022. DOI: 10.5281/ZENODO.7331974.
Suzuki S., Abe K. Topological structural analysis of digitized binary images by border following, Computer Vision, Graphics, and Image Processing, 1985, Vol. 30, № 1, pp. 32–46. DOI: 10.1016/0734-189X(85)90016-7.
Song X., Zhang S., Yang J., Zhang J. Research on Luggage Package Extraction of X-ray Images Based on Edge Sensitive Multi-Channel Background Difference Algorithm, Applied Sciences, 2023, Vol. 13, № 21, P. 11981. DOI: 10.3390/app132111981.
Ju R.-Y., Cai W. Fracture detection in pediatric wrist trauma X-ray images using YOLOv8 algorithm, Scientific Reports, 2023, Vol. 13, № 1, P. 20077. DOI: 10.1038/s41598-023-47460-7.
License
Copyright (c) 2024 Y. D. Radionov, V. Yu. Kashtan, V. V. Hnatushenko, O.V. Kazymyrenko
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Creative Commons Licensing Notifications in the Copyright Notices
The journal allows the authors to hold the copyright without restrictions and to retain publishing rights without restrictions.
The journal allows readers to read, download, copy, distribute, print, search, or link to the full texts of its articles.
The journal allows reuse and remixing of its content in accordance with a Creative Commons CC BY-SA license.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License CC BY-SA that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.