AN IMPROVED MOVING OBJECTS DETECTION ALGORITHM IN VIDEO SEQUENCES
DOI: https://doi.org/10.15588/1607-3274-2020-3-8
Keywords: algorithm, method, video sequence, background subtraction, dynamic object, color model, pixel, background, ViBe.
Abstract
Context. The implementation of video analytics functions in video surveillance systems makes it possible to increase the efficiency of these systems. One of the functions of such intelligent video surveillance systems is the detection of dynamic objects in the surveillance sectors of video surveillance cameras. Existing methods of background subtraction and object recognition have significant disadvantages that limit their application in practice: under low contrast, algorithms cannot separate an object from the background; some moving objects may be recognized as background; the algorithms are sensitive to lighting conditions; and so on. Therefore, an important task is to develop and improve methods for detecting dynamic objects in video sequences.
Objective. The research is devoted to the development of an improved method for detecting dynamic objects in video sequences.
Method. For detecting moving objects in video sequences, we use background subtraction methods based on pixel-by-pixel analysis of frames, combined with elements of expert systems theory.
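As a point of reference for the pixel-by-pixel analysis, the sketch below illustrates the classification rule at the core of ViBe-style background subtraction: a pixel is labelled background if enough of its stored samples lie within a given distance of the current value. It is a minimal illustration only; the number of samples, the distance radius, and the required number of matches are assumed values, not the exact settings of the proposed method.

```python
import numpy as np

# Illustrative parameters (assumed, not the authors' exact values).
N_SAMPLES = 20      # stored samples per pixel
RADIUS = 20.0       # colour-distance threshold
MIN_MATCHES = 2     # matches required to label a pixel as background


def is_background(current_value, samples, radius=RADIUS, min_matches=MIN_MATCHES):
    """ViBe-style rule: a pixel is background if at least `min_matches`
    of its stored samples lie within `radius` of the current value."""
    distances = np.linalg.norm(samples - current_value, axis=1)
    return int(np.count_nonzero(distances < radius) >= min_matches)


# Toy example: a pixel model built from 20 similar colour samples.
rng = np.random.default_rng(0)
samples = rng.uniform(100, 110, size=(N_SAMPLES, 3))
print(is_background(np.array([105.0, 104.0, 106.0]), samples))  # 1 -> background
print(is_background(np.array([200.0, 30.0, 40.0]), samples))    # 0 -> foreground
```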
Results. In this paper, we propose an improved method for detecting dynamic objects in video sequences based on the ViBe algorithm. The proposed approach differs from the original in its use of the U*V*W* color model, double threshold levels, elements of expert systems theory for resolving ambiguities in pixel classification (Dempster-Shafer theory), and a dynamic method for updating the background pixel models. The proposed algorithm includes the following stages: initialization of the background model (for each pixel, a fixed number of previous values taken from the current frame is stored); foreground detection; calculation of the number of points belonging to the foreground and to the background. To resolve ambiguities in pixel classification, we use elements of Dempster-Shafer theory. After initialization of the background model and foreground detection, the next stage is updating the background model; for this, a three-level neighborhood of the studied pixel is constructed and uniformly distributed random values are used within each of the three levels.
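To illustrate how a double threshold and a Dempster-Shafer combination can remove ambiguity in pixel classification, the sketch below maps match counts to basic mass assignments over {background, foreground, uncertain} and fuses two such assignments with Dempster's rule. The specific mass values, threshold levels, and the choice of fusing two evidence sources per pixel are assumptions made for illustration; they follow the general idea rather than the exact formulation of the proposed method.

```python
# Hedged sketch: double threshold + Dempster-Shafer fusion for a single pixel.
# Mass values and thresholds are illustrative assumptions, not the paper's exact ones.

def masses_from_matches(n_matches, low_threshold=2, high_threshold=5):
    """Map a match count to a basic mass assignment over
    background (B), foreground (F) and the uncertain set (BF)."""
    if n_matches >= high_threshold:
        return {"B": 0.9, "F": 0.0, "BF": 0.1}    # strong background evidence
    if n_matches >= low_threshold:
        return {"B": 0.5, "F": 0.1, "BF": 0.4}    # ambiguous: between the two thresholds
    return {"B": 0.05, "F": 0.8, "BF": 0.15}      # foreground evidence


def dempster_combine(m1, m2):
    """Dempster's rule of combination on the two-hypothesis frame {B, F}."""
    conflict = m1["B"] * m2["F"] + m1["F"] * m2["B"]
    norm = 1.0 - conflict
    combined = {
        "B": (m1["B"] * m2["B"] + m1["B"] * m2["BF"] + m1["BF"] * m2["B"]) / norm,
        "F": (m1["F"] * m2["F"] + m1["F"] * m2["BF"] + m1["BF"] * m2["F"]) / norm,
    }
    combined["BF"] = 1.0 - combined["B"] - combined["F"]
    return combined


# Toy example: evidence from two independent sources for the same pixel
# (e.g. two colour components) is fused to resolve the ambiguity.
m1 = masses_from_matches(3)   # ambiguous count
m2 = masses_from_matches(1)   # foreground-leaning count
fused = dempster_combine(m1, m2)
label = "background" if fused["B"] > fused["F"] else "foreground"
print(fused, "->", label)
```

In this toy run, the ambiguous evidence from the first source is resolved toward the foreground once it is combined with the second source.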
Conclusions. An experimental study of the improved algorithm in comparison with the original ViBe was conducted using test frames from the CDNET set under various environmental conditions and with different levels of discriminability. The consolidated results indicate that the proposed method improves on the original ViBe by 6.7% on average.
References
Weiming H., Tieniu T., Liang W. et al. A Survey on Visual Surveillance of Object Motion and Behaviors, IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews), 2004, Vol. 34, No. 3, pp. 334–352. DOI : 10.1109/TSMCC.2004.829274.
Stauffer C., Grimson W. Adaptive background mixture models for real-time tracking, Computer Society Conference on Computer Vision and Pattern Recognition : proceedings. Ft. Collins, CO, USA, IEEE Computer Society, 1999, pp. 2246–2252. DOI : 10.1109/CVPR.1999.784637.
Hayman E., Eklundh J. Statistical background subtraction for a mobile observer, Proceedings Ninth IEEE International Conference on Computer Vision. Nice, France, IEEE, 2003, pp. 67–74. DOI : 10.1109/ICCV.2003.1238315.
Zivkovic Z., Van der Heijden F. Efficient adaptive density estimation per image pixel for the task of background subtraction, Pattern Recognition Letters, 2006, Vol. 27, pp. 773–780. DOI : https://doi.org/10.1016/j.patrec.2005.11.005.
Zivkovic Z. Improved Adaptive Gaussian Mixture Model for Background Subtraction, Pattern Recognition : proceedings of the 17th International Conference. Cambridge, UK, IEEE, 2004, Vol. 2, pp. 28–31. DOI : 10.1109/ICPR.2004.1333992.
Babaryka A. O. The justification of optimal algorithms index choice for the background subtraction in video sequences derived from stationary cameras of video surveillance systems, Modern Information Technologies in the Sphere of Security and Defence. Kyiv, National Defence University of Ukraine, 2019, No. 3 (36), pp. 97–102. DOI : http://dx.doi.org/10.33099/2311-7249/2019-36-3-97102.
Godbehere A., Matsukawa A., Goldberg K. Y. Visual Tracking of Human Visitors under Variable-Lighting Conditions for a Responsive Audio Art Installation, American Control Conference (ACC), proceedings. Montreal, QC, Canada, 2012, pp. 4305–4312. DOI : 10.1109/ACC.2012.6315174.
Barnich O., Van Droogenbroeck M. ViBe: A universal background subtraction algorithm for video sequences, IEEE Transactions on Image Processing, 2011, Vol. 20, No. 6, pp. 1709–1724. DOI : 10.1109/TIP.2010.2101613.
Barnich O., Van Droogenbroeck M. ViBe: a powerful random technique to estimate the background in video sequences, 2009 IEEE International Conference on Acoustics, Speech and Signal Processing : proceedings. Taipei, 2009, pp. 945–948. DOI : 10.1109/ICASSP.2009.4959741.
Gevers T., Smeulders A. W. Color-based object recognition, Pattern Recognition, 1999, Vol. 32, pp. 453–464. DOI : https://doi.org/10.1016/S0031-3203(98)00036-3.
Salvador E., Cavallaro A., Ebrahimi T. Shadow identification and classification using invariant color models, 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing : proceedings. Salt Lake City, UT, USA, 2001, pp. 1545–1548. DOI : 10.1109/ICASSP.2001.941227.
Scandaliaris J., Villamizar M., Andrade-Cetto J. et al. Robust Color Contour Object Detection Invariant to Shadows, Progress in Pattern Recognition, Image Analysis and Applications (CIARP’07) : proceedings of the 12th Iberoamerican Congress on Pattern Recognition. Berlin, Springer-Verlag, 2007, pp. 301–310. DOI : 10.1007/978-3-540-76725-1_32.
Salvador E., Cavallaro A., Ebrahimi T. Cast shadow segmentation using invariant color features, Computer Vision and Image Understanding, 2004, Vol. 95, pp. 238–259. DOI : 10.1016/j.cviu.2004.03.008.
Rasouli A., Tsotsos J. K. The effect of color space selection on detectability and discriminability of colored objects [Electronic resource]. Access mode: https://arxiv.org/abs/1702.05421.
MacAdam D. L. Projective transformations of I.C.I. color specifications, Journal of the Optical Society of America, 1937, Vol. 27, Issue 8, pp. 294–299. DOI : 10.1364/JOSA.27.000294.
Wyszecki G. Proposal for a New Color-Difference Formula, Journal of the Optical Society of America, 1963, Vol. 53, Issue 11, pp. 1318–1319. DOI : 10.1364/JOSA.53.001318.
Pearl J. Reasoning with Belief Functions: An Analysis of Compatibility, The International Journal of Approximate Reasoning, 1990, Vol. 4, No. 5/6, pp. 363–389. DOI : 10.1016/0888-613X(90)90013-R.
Yager R., Liu L. Classic Works of the Dempster-Shafer Theory of Belief Functions. Berlin, Springer, 2008, 806 p. DOI : 10.1007/978-3-540-44792-4.
Beynon M., Curry B., Morgan P. The Dempster-Shafer theory of evidence: an alternative approach to multicriteria decision modelling, Omega, 2000, Vol. 28(1), pp. 37–50. DOI : https://doi.org/10.1016/S0305-0483(99)00033-X.
Deng Y. Generalized evidence theory, Applied Intelligence, 2015, Vol. 43, pp. 530–543. DOI : https://doi.org/10.1007/s10489-015-0661-2.
Smets P., Kennes R. The Transferable Belief Model, Classic Works of the Dempster-Shafer Theory of Belief Functions. Berlin, Springer, 2008, pp. 693–736. DOI : https://doi.org/10.1007/978-3-540-44792-4_28.
License
Copyright (c) 2020 I. S. Katerynchuk, A. O. Babaryka
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Creative Commons Licensing Notifications in the Copyright Notices
The journal allows the authors to hold the copyright without restrictions and to retain publishing rights without restrictions.
The journal allows readers to read, download, copy, distribute, print, search, or link to the full texts of its articles.
The journal allows reuse and remixing of its content in accordance with a Creative Commons license CC BY-SA.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License CC BY-SA that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.