PREDICTION THE ACCURACY OF IMAGE INPAINTING USING TEXTURE DESCRIPTORS

Authors

  • D. O. Kolodochka Odesa Polytechnic National University, Odesa, Ukraine
  • M. V. Polyakova Odesa Polytechnic National University, Odesa, Ukraine
  • V. V. Rogachko Odesa Polytechnic National University, Odesa, Ukraine

DOI:

https://doi.org/10.15588/1607-3274-2025-2-5

Keywords:

image inpainting, accuracy prediction, LaMa network, texture descriptor, co-occurrence matrix

Abstract

Context. The problem of filling missing image areas with realistic content often arises when processing real-scene images in computer vision and computer graphics. Various approaches are applied to inpaint the missing areas of an image, such as diffusion models, self-attention mechanisms, and generative adversarial networks. Convolutional neural networks are used to restore real-scene images, and although they have recently achieved significant success in image inpainting, they do not always provide high efficiency.
Objective. The paper aims to reduce the time consumption of computer vision and computer graphics systems by predicting the accuracy of image inpainting performed with convolutional neural networks.
Method. The accuracy of image inpainting can be predicted by analyzing image statistics without executing the inpainting itself, so that time and computing resources are not spent on the inpainting. We used the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) to evaluate image inpainting accuracy.
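The two quality measures named above can be sketched in plain NumPy. Note that the SSIM below is a simplified single-window variant (no 11×11 Gaussian sliding window as in the standard definition), so its values will differ somewhat from library implementations; all function names here are illustrative, not from the paper.

```python
import numpy as np

def psnr(original, restored, max_val=255.0):
    """Peak signal-to-noise ratio between two same-sized images, in dB."""
    mse = np.mean((original.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(original, restored, max_val=255.0):
    """Simplified SSIM computed over the whole image as one window."""
    x = original.astype(np.float64)
    y = restored.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2  # standard stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For identical images `psnr` returns infinity and `ssim_global` returns 1.0; both decrease as the inpainted image diverges from the original.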
Results. It is shown that the prediction performs well for a wide range of mask sizes and for real-scene images collected in the Places2 database. As an example, we concentrated on the particular case of the LaMa network versions, although the proposed method can be generalized to other convolutional neural networks as well.
Conclusions. The results obtained with the proposed method show that this type of prediction can be performed with satisfactory accuracy if the dependencies of the SSIM or PSNR on image homogeneity are used. Notably, the structural similarity of the original and inpainted images is predicted better than the error between the corresponding pixels of the original and inpainted images. To further reduce the prediction error, regression on several input parameters can be applied.
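As a rough illustration of how such a prediction might work, the sketch below computes a co-occurrence-matrix homogeneity descriptor in NumPy and fits a linear regression of SSIM on homogeneity. The (homogeneity, SSIM) pairs are invented placeholders, not values from the paper, and the choice of 8 gray levels, a horizontal pixel offset, and a degree-1 fit are all assumptions made for the example.

```python
import numpy as np

def glcm_homogeneity(image, levels=8, offset=(0, 1)):
    """Homogeneity of a normalized gray-level co-occurrence matrix:
    sum over (i, j) of P[i, j] / (1 + |i - j|)."""
    # Quantize the image to `levels` gray levels.
    q = (image.astype(np.float64) / 256.0 * levels).astype(int).clip(0, levels - 1)
    dr, dc = offset
    rows, cols = q.shape
    # Pair each pixel with its neighbour at the given offset.
    a = q[max(0, -dr):rows - max(0, dr), max(0, -dc):cols - max(0, dc)]
    b = q[max(0, dr):rows - max(0, -dr), max(0, dc):cols - max(0, -dc)]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)  # accumulate pair counts
    glcm /= glcm.sum()                          # normalize to probabilities
    i, j = np.indices((levels, levels))
    return float((glcm / (1.0 + np.abs(i - j))).sum())

# Placeholder (homogeneity, SSIM) pairs standing in for values measured
# on inpainted Places2 images; more homogeneous images are assumed to
# be inpainted more accurately.
homogeneity = np.array([0.35, 0.48, 0.57, 0.66, 0.74, 0.83])
ssim = np.array([0.62, 0.70, 0.76, 0.81, 0.86, 0.91])
slope, intercept = np.polyfit(homogeneity, ssim, 1)  # least-squares line
predicted = slope * 0.60 + intercept  # predicted SSIM for a new image
```

Once the regression coefficients are fitted, predicting the inpainting accuracy of a new image costs only one descriptor computation, with no run of the network itself.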

Author Biographies

D. O. Kolodochka, Odesa Polytechnic National University, Odesa, Ukraine

Post-graduate student of the Institute of Computer Systems

M. V. Polyakova, Odesa Polytechnic National University, Odesa, Ukraine

Dr. Sc., Associate Professor, Professor of the Department of Applied Mathematics and
Information Technologies

V. V. Rogachko, Odesa Polytechnic National University, Odesa, Ukraine

Student of the Institute of Computer Systems

References

Xiang H., Zou Q., Nawaz M. A. et al. Deep learning for image inpainting: a survey, Pattern Recognition, 2023, Vol. 134, Article 109046. DOI: 10.1016/j.patcog.2022.109046.

Xu Z., Zhang X., Chen W. et al. A review of image inpainting method based on deep learning, Appl. Sci., 2023, Vol. 13, Article 11189. DOI: 10.3390/app132011189.

Ho J., Jain A., Abbeel P. Denoising diffusion probabilistic models, Advances in Neural Information Processing Systems, 2020, Vol. 33, pp. 6840–6851.

Lugmayr A., Danelljan M., Romero A. et al. RePaint: Inpainting using denoising diffusion probabilistic models, Computer Vision and Pattern Recognition: IEEE/CVF Conference, New Orleans, LA, USA, 19–20 June 2022 : proceedings. IEEE, 2022, pp. 11451–11461. DOI: 10.1109/CVPR52688.2022.0111.

Yu J., Yang J., Shen X., Lu X., Huang T. S. Generative image inpainting with contextual attention. Computer Vision and Pattern Recognition Workshops: IEEE/CVF Conference, CVPRW, Salt Lake City, UT, USA, 18–22 June, 2018 : proceedings. IEEE, 2018, pp. 5505–5514. DOI: 10.1109/CVPRW.2018.00577.

Mohite T. A., Phadke G. S. Image inpainting with contextual attention and partial convolution, Artificial Intelligence and Signal Processing: 2020 International Conference, AISP, Amaravati, India, 10–12 January 2020 : proceedings. IEEE, 2020, pp. 1–6. DOI: 10.1109/AISP48273.2020.9073008.

Guo Q., Li X., Juefei-Xu F. et al. JPGnet: Joint predictive filtering and generative network for image inpainting, Multimedia: 29th ACM International Conference, Chengdu, China, 20–24 October 2021 : proceedings. ACM, 2021, pp. 386–394. DOI: 10.1145/3474085.3475170.

Suvorov R., Logacheva E., Mashikhin A. et al. Resolution-robust large mask inpainting with Fourier convolutions, Applications of Computer Vision: IEEE Workshop/Winter Conference, WACV, Waikoloa, Hawaii, 4–8 January, 2022 : proceedings. IEEE, 2022, pp. 2149–2159. DOI: 10.1109/WACV51458.2022.00323.

Kolodochka D. O., Polyakova M. V. LaMa-Wavelet: image inpainting with high quality of fine details and object edges, Radio Electronics, Computer Science, Control, 2024, № 1, pp. 208–220. DOI: 10.15588/1607-3274-2024-1-19.

Jain S., Shivam V., Bidargaddi A. P., Malipatil S., Patil K. Image inpainting using YOLOv8 and LaMa model, Emerging Technology: 5th International Conference (INCET), Belgaum, India, 24–26 May 2024 : proceedings. IEEE, 2024, pp. 1–7. DOI: 10.1109/INCET61516.2024.10593536.

Kolodochka D., Polyakova M., Nesteriuk O., Makarichev V. LaMa network architecture search for image inpainting, Information Control Systems & Technologies: 12th International Conference, ICST, Odesa, Ukraine, 23–25 September, 2024 : proceedings. CEUR-WS, 2024, Vol. 3799, pp. 365–376.

Places365 Scene Recognition Demo [Electronic resource]. Access mode: http://places2.csail.mit.edu/

Mellor J., Turner J., Storkey A. J., Crowley E. J. Neural architecture search without training, Machine Learning: 38th International Conference, Virtual, 18–24 July 2021 : proceedings. PMLR, 2021, Vol. 139, pp. 7588–7598. DOI: 10.48550/arXiv.2006.04647.

Rubel O., Abramov S., Lukin V. et al. Is texture denoising efficiency predictable?, International Journal of Pattern Recognition and Artificial Intelligence, 2018, Vol. 32, Article 1860005. DOI: 10.1142/S0218001418600054.

Rubel O. S., Lukin V. V., de Medeiros F. S. Prediction of despeckling efficiency of DCT-based filters applied to SAR images, Distributed Computing in Sensor Systems: 2015 International Conference, (DCOSS), Fortaleza, Brazil, 10–12 June, 2015 : proceedings. IEEE, 2015, pp. 159–168. DOI: 10.1109/DCOSS.2015.16.

Abramov S., Abramova V., Lukin V., Egiazarian K. Prediction of signal denoising efficiency for DCT-based filter, Telecommunications and Radio Engineering, 2019, Vol. 78, № 13, pp. 1129–114. DOI: 10.1615/TelecomRadEng.v78.i13.10.

Zalasiński M., Cader A., Patora-Wysocka Z., Xiao M. Evaluating neural network models for predicting dynamic signature signals, Journal of Artificial Intelligence and Soft Computing Research, 2024, Vol. 14, № 4, pp. 361–372. DOI: 10.2478/jaiscr-2024-0019.

Cao L., Yang T., Wang Y., Yan B., Guo Y. Generator pyramid for high-resolution image inpainting, Complex & Intelligent Systems, 2023, Vol. 9, Article 7553. DOI: 10.1007/s40747-023-01080-w.

Yamashita Y., Shimosato K., Ukita N. Boundary-aware image inpainting with multiple auxiliary cues, Computer Vision and Pattern Recognition: IEEE/CVF Workshop/Conference, New Orleans, LA, USA, 19–20 June, 2022 : proceedings. IEEE, 2022, pp. 618–628. DOI: 10.1109/CVPRW56347.2022.00077.

Nazeri K., Ng E., Joseph T., Qureshi F., Ebrahimi M. EdgeConnect: structure guided image inpainting using edge prediction, Computer Vision Workshop: IEEE/CVF International Conference, ICCVW, Seoul, Korea (South), 27–28 October, 2019 : proceedings. IEEE, 2019, pp. 2462–2468. DOI: 10.1109/ICCVW.2019.00408.

Liao L., Xiao J., Wang Z., Lin C.-W., Satoh S. Guidance and evaluation: semantic-aware image inpainting for mixed scenes, Computer Vision: 16th European Conference, ECCV, Glasgow, UK, 23–28 August 2020 : proceedings. Springer, 2020, pp. 683–700. DOI: 10.1007/978-3-030-58583-9_41.

Liu G., Reda F. A., Shih K. J. et al. Image inpainting for irregular holes using partial convolutions, Computer Vision: European Conference, ECCV, Munich, Germany, 8–14 September, 2018 : proceedings. Springer, 2018, pp. 85–100. DOI: 10.1007/978-3-030-01252-6_6.

Yu J., Lin Z., Yang J. et al. Free-form image inpainting with gated convolution, Computer Vision: IEEE/CVF International Conference, ICCV, Seoul, Korea (South), 27 October – 2 November, 2019 : proceedings. IEEE, 2019, pp. 4471–4480. DOI: 10.1109/ICCV.2019.00457.

Xie C., Liu S., Li C. et al. Image inpainting with learnable bidirectional attention maps, Computer Vision: IEEE/CVF International Conference, ICCV, Seoul, Korea (South), 27 October – 2 November 2019 : proceedings. IEEE, 2019, pp. 8857–8866. DOI: 10.1109/ICCV.2019.00895.

Wang X., Girshick R., Gupta A., He K. Non-local neural networks, Computer Vision and Pattern Recognition: 2018 IEEE/CVF Conference, Salt Lake City, UT, USA, 18–22 June 2018 : proceedings. IEEE, 2018, pp. 7794–7803. DOI: 10.1109/CVPR.2018.00813.

Sara U., Akter M., Uddin M. S. Image quality assessment through FSIM, SSIM, MSE and PSNR – a comparative study, Journal of Computer and Communications, 2019, Vol. 7, № 3, pp. 8–18. DOI: 10.4236/jcc.2019.73002.

Gonzalez R. C., Woods R. E. Digital Image Processing, 4th Edition. NY, Pearson, 2017, 1192 p.


Published

2025-06-29

How to Cite

Kolodochka, D. O., Polyakova, M. V., & Rogachko, V. V. (2025). PREDICTION THE ACCURACY OF IMAGE INPAINTING USING TEXTURE DESCRIPTORS. Radio Electronics, Computer Science, Control, (2), 56–67. https://doi.org/10.15588/1607-3274-2025-2-5

Issue

Section

Mathematical and computer modelling