PREDICTION OF THE ACCURACY OF IMAGE INPAINTING USING TEXTURE DESCRIPTORS
DOI: https://doi.org/10.15588/1607-3274-2025-2-5

Keywords: image inpainting, accuracy prediction, LaMa network, texture descriptor, co-occurrence matrix

Abstract
Context. The problem of filling missing image regions with plausible content often arises in the processing of real scenes in computer vision and computer graphics. Various approaches, such as diffusion models, self-attention mechanisms, and generative adversarial networks, are applied to inpaint the missing regions of an image. Convolutional neural networks are used to restore real-scene images; although they have recently achieved significant success in image inpainting, they do not always provide high efficiency.
Objective. The paper aims to reduce time consumption in computer vision and computer graphics systems by predicting the accuracy of image inpainting performed with convolutional neural networks.
Method. Image inpainting accuracy can be predicted by analyzing image statistics without executing the inpainting itself, so the time and computing resources that the inpainting would consume are saved. We used the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) to evaluate image inpainting accuracy.
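To make the evaluation concrete, below is a minimal sketch of how the two accuracy metrics and a co-occurrence-based texture descriptor can be computed with scikit-image. The scikit-image calls are standard; the GLCM settings (distance of 1 pixel, two directions, 256 gray levels) are illustrative assumptions rather than the exact configuration used in the paper.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def inpainting_accuracy(original, inpainted):
        """PSNR and SSIM between an original 8-bit grayscale image and its
        inpainted version (both numpy arrays of dtype uint8, same shape)."""
        psnr = peak_signal_noise_ratio(original, inpainted, data_range=255)
        ssim = structural_similarity(original, inpainted, data_range=255)
        return psnr, ssim

    def glcm_homogeneity(image):
        """Homogeneity of the gray-level co-occurrence matrix (GLCM) of an
        8-bit grayscale image; distance and angles are illustrative choices."""
        glcm = graycomatrix(image, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        # graycoprops returns one value per (distance, angle) pair;
        # average over the two directions to get a single descriptor.
        return graycoprops(glcm, 'homogeneity').mean()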
Results. It is shown that the prediction performs well for a wide range of mask sizes and real-scene images collected in the Places2 database. As an example, we concentrated on the particular case of the LaMa network versions, although the proposed method can be generalized to other convolutional neural networks as well.
Conclusions. The results obtained by the proposed method show that this type of prediction can be performed with satisfactory accuracy if the dependencies of SSIM or PSNR on image homogeneity are used. It should be noted that the structural similarity of the original and inpainted images is predicted better than the error between the corresponding pixels of the original and inpainted images. To further reduce the prediction error, regression on several input parameters can be applied.
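As an illustration of such a prediction, the sketch below fits a first-order dependence of SSIM on GLCM homogeneity over a training set and then predicts the SSIM of a new image from its homogeneity alone, without running the network. The numerical values are hypothetical, and the paper may use a different functional form of the dependence.

    import numpy as np

    # Homogeneity of training images and the SSIM actually measured after
    # running the inpainting network on them (hypothetical values).
    homogeneity   = np.array([0.42, 0.55, 0.61, 0.70, 0.78, 0.85])
    measured_ssim = np.array([0.74, 0.80, 0.83, 0.87, 0.90, 0.93])

    # Fit SSIM ~ a * homogeneity + b by least squares.
    a, b = np.polyfit(homogeneity, measured_ssim, deg=1)

    # Predict the expected SSIM of a new image from its texture statistics
    # alone, i.e., without executing the inpainting itself.
    new_homogeneity = 0.66
    predicted_ssim = a * new_homogeneity + b
    print(f"Predicted SSIM: {predicted_ssim:.3f}")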
References
Xiang H., Zou Q., Nawaz M. A. et al. Deep learning for image inpainting: a survey, Pattern Recognition, 2023, Vol. 134, Article 109046. DOI: 10.1016/j.patcog.2022.109046.
Xu Z., Zhang X., Chen W. et al. A review of image inpainting method based on deep learning, Appl. Sci., 2023, Vol. 13, Article 11189. DOI: 10.3390/app132011189.
Ho J., Jain A., Abbeel P. Denoising diffusion probabilistic models, Advances in Neural Information Processing Systems, 2020, Vol. 33, pp. 6840–6851.
Lugmayr A., Danelljan M., Romero A. et al. Repaint: Inpainting using denoising diffusion probabilistic models, Computer Vision and Pattern Recognition: IEEE/CVF Conference, New Orleans, LA, USA, 19–20 June 2022 : proceedings. IEEE, 2022, pp. 11451–11461. DOI: 10.1109/CVPR52688.2022.0111.
Yu J., Yang J., Shen X., Lu X., Huang T. S. Generative image inpainting with contextual attention. Computer Vision and Pattern Recognition Workshops: IEEE/CVF Conference, CVPRW, Salt Lake City, UT, USA, 18–22 June, 2018 : proceedings. IEEE, 2018, pp. 5505–5514. DOI: 10.1109/CVPRW.2018.00577.
Mohite T. A., Phadke G. S. Image inpainting with contextual attention and partial convolution, Artificial Intelligence and Signal Processing: 2020 International Conference, AISP, Amaravati, India, 10–12 January 2020 : proceedings. IEEE, 2020, pp. 1–6. DOI: 10.1109/AISP48273.2020.9073008.
Guo Q., Li X., Juefei-Xu F. et al. JPGnet: Joint predictive filtering and generative network for image inpainting, Multimedia: 29th ACM International Conference, Chengdu, China, 20–24 October 2021 : proceedings. ACM, 2021, pp. 386–394. DOI: 10.1145/3474085.3475170.
Suvorov R., Logacheva E., Mashikhin A. et al. Resolution-robust large mask inpainting with Fourier convolutions, Applications of Computer Vision: IEEE Workshop/Winter Conference, WACV, Waikoloa, Hawaii, 4–8 January, 2022 : proceedings. IEEE, 2022, pp. 2149–2159. DOI: 10.1109/WACV51458.2022.00323.
Kolodochka D. O., Polyakova M. V. LaMa-Wavelet: image inpainting with high quality of fine details and object edges, Radio Electronics, Computer Science, Control, 2024, № 1, pp. 208–220. DOI: 10.15588/1607-3274-2024-1-19.
Jain S., Shivam V., Bidargaddi A. P., Malipatil S., Patil K. Image inpainting using YOLOv8 and LaMa model, Emerging Technology: 5th International Conference (INCET), Belgaum, India, 24-26 May 2024 : proceedings. IEEE, 2024, pp. 1–7. DOI: 10.1109/INCET61516.2024.10593536.
Kolodochka D., Polyakova M., Nesteriuk O., Makarichev V. LaMa network architecture search for image inpainting, Information Control Systems & Technologies: 12th International Conference, ICST, Odesa, Ukraine, 23–25 September, 2024 : proceedings. CEUR-WS, 2024, Vol. 3799, pp. 365–376.
Places365 Scene Recognition Demo [Electronic resource]. Access mode: http://places2.csail.mit.edu/
Mellor J., Turner J., Storkey A. J., Crowley E. J. Neural architecture search without training, Machine Learning: 38th International Conference, Virtual, 18–24 July 2021 : proceedings. PMLR, 2021, Vol. 139, pp. 7588–7598. DOI: 10.48550/arXiv.2006.04647.
Rubel O., Abramov S., Lukin V. et al. Is texture denoising efficiency predictable?, International Journal of Pattern Recognition and Artificial Intelligence, 2018, Vol. 32, Article 1860005. DOI: 10.1142/S0218001418600054.
Rubel O. S., Lukin V. V., de Medeiros F. S. Prediction of despeckling efficiency of DCT-based filters applied to SAR images, Distributed Computing in Sensor Systems: 2015 International Conference, (DCOSS), Fortaleza, Brazil, 10–12 June, 2015 : proceedings. IEEE, 2015, pp. 159–168. DOI: 10.1109/DCOSS.2015.16.
Abramov S., Abramova V., Lukin V., Egiazarian K. Prediction of signal denoising efficiency for DCT-based filter, Telecommunications and Radio Engineering, 2019, Vol. 78, № 13, pp. 1129–114. DOI: 10.1615/TelecomRadEng.v78.i13.10.
Zalasiński M., Cader A., Patora-Wysocka Z., Xiao M. Evaluating neural network models for predicting dynamic signature signals, Journal of Artificial Intelligence and Soft Computing Research, 2024, Vol. 14, № 4, pp. 361–372. DOI: 10.2478/jaiscr-2024-0019.
Cao L., Yang T., Wang Y., Yan B., Guo Y. Generator pyramid for high-resolution image inpainting, Complex & Intelligent Systems, 2023, Vol. 9, Article 7553. DOI: 10.1007/s40747-023-01080-w.
Yamashita Y., Shimosato K., Ukita N. Boundary-aware image inpainting with multiple auxiliary cues, Computer Vision and Pattern Recognition: IEEE/CVF Workshop/Conference, New Orleans, LA, USA, 19–20 June, 2022 : proceedings. IEEE, 2022, pp. 618–628. DOI: 10.1109/CVPRW56347.2022.00077.
Nazeri K., Ng E., Joseph T., Qureshi F., Ebrahimi M. EdgeConnect: structure guided image inpainting using edge prediction, Computer Vision Workshop: IEEE/CVF International Conference, ICCVW, Seoul, Korea (South), 27–28 October, 2019 : proceedings. IEEE, 2019, pp. 2462–2468. DOI: 10.1109/ICCVW.2019.00408.
Liao L., Xiao J., Wang Z., Lin C.-W., Satoh S. Guidance and evaluation: semantic-aware image inpainting for mixed scenes, Computer Vision: 16th European Conference, ECCV, Glasgow, UK, 23–28 August 2020 : proceedings. Springer, 2020, pp. 683–700. DOI: 10.1007/978-3-030-58583-9_41.
Liu G., Reda F. A., Shih K. J. et al. Image inpainting for irregular holes using partial convolutions, Computer Vision: European Conference, ECCV, Munich, Germany, 8–14 September, 2018 : proceedings. Springer, 2018, pp. 85–100. DOI: 10.1007/978-3-030-01252-6_6.
Yu J., Lin Z., Yang J. et al. Free-form image inpainting with gated convolution, Computer Vision: IEEE/CVF International Conference, ICCV, Seoul, Korea (South), 27 October – 2 November, 2019 : proceedings. IEEE, 2019, pp. 4471–4480. DOI: 10.1109/ICCV.2019.00457.
Xie C., Liu S., Li C. et al. Image inpainting with learnable bidirectional attention maps, Computer Vision: IEEE/CVF International Conference, ICCV, Seoul, Korea (South), 27 October – 2 November 2019 : proceedings. IEEE, 2019, pp. 8857–8866. DOI: 10.1109/ICCV.2019.00895.
Wang X., Girshick R., Gupta A., He K. Non-local neural networks, Computer Vision and Pattern Recognition: 2018 IEEE/CVF Conference, Salt Lake City, UT, USA, 18–22 June 2018 : proceedings. IEEE, 2018, pp. 7794–7803. DOI: 10.1109/CVPR.2018.00813.
Sara U., Akter M., Uddin M. S. Image quality assessment through FSIM, SSIM, MSE and PSNR – a comparative study, Journal of Computer and Communications, 2019, Vol. 7, № 3, pp. 8–18. DOI: 10.4236/jcc.2019.73002.
Gonzalez R. C., Woods R. E. Digital Image Processing, 4th Edition. NY: Pearson, 2017, 1192 p.
Copyright (c) 2025 D. O. Kolodochka, M. V. Polyakova, V. V. Rogachko

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.