Radio Electronics, Computer Science, Control https://ric.zp.edu.ua/ <p dir="ltr" align="justify"><strong>Description:</strong> The scientific journal «Radio Electronics, Computer Science, Control» is an international academic peer-reviewed publication. It publishes scientific articles (works that extensively cover a specific topic, idea, question and contain elements of their analysis) and reviews (works containing analysis and reasoned assessment of the author's original or published book) that receive an objective review of leading specialists that evaluate substantially without regard to race, sex, religion, ethnic origin, nationality, or political philosophy of the author(s).<span id="result_box2"><br /></span><strong>Founder and </strong><strong>Publisher</strong><strong>:</strong> <a href="http://zntu.edu.ua/zaporozhye-national-technical-university" aria-invalid="true">National University "Zaporizhzhia Polytechnic"</a>. <strong>Country:</strong> Ukraine.<span id="result_box1"><br /></span><strong>ISSN</strong> 1607-3274 (print), ISSN 2313-688X (on-line).<span id="result_box3"><br /></span><strong>Certificate of State Registration:</strong> КВ №24220-14060ПР dated 19.11.2019. The journal is registered by the Ministry of Justice of Ukraine.<br /><span id="result_box4">By the Order of the Ministry of Education and Science of Ukraine from 17.03.2020 № 409 “On approval of the decision of the Certifying Collegium of the Ministry on the activities of the specialized scientific councils dated 06<br />March 2020”<strong> journal is included to the list of scientific specialized periodicals of Ukraine in category “А” (highest level), where the results of dissertations for Doctor of Science and Doctor of Philosophy may be published</strong>. <span id="result_box26">By the Order of the Ministry of Education and Science of Ukraine from 12.21.2015 № 1328 "On approval of the decision of the Certifying Collegium of the Ministry on the activities of the specialized scientific councils dated 15 December 2015" journal is included in the <strong>List of scientific specialized periodicals of Ukraine</strong>, where the results of dissertations for Doctor of Science and Doctor of Philosophy in Mathematics and Technical Sciences may be published.</span><br />The <strong>journal is included to the Polish List of scientific journals</strong> and peer-reviewed materials from international conferences with assigned number of points (Annex to the announcement of the Minister of Science and Higher Education of Poland from July 31, 2019: Lp. 16981). </span><span id="result_box27"><br /></span><strong> Year of Foundation:</strong> 1999. <strong>Frequency :</strong> 4 times per year (before 2015 - 2 times per year).<span id="result_box6"><br /></span><strong> Volume</strong><strong>:</strong> up to 20 conventional printed sheets. <strong>Format:</strong> 60x84/8. <span id="result_box7"><br /></span><strong> Languages:</strong> English, Ukrainian. Before 2022 also Russian.<span id="result_box8"><br /></span><strong> Fields of Science :</strong> Physics and Mathematics, Technical Sciences.<span id="result_box9"><br /></span><strong> Aim: </strong>serve to the academic community principally at publishing topical articles resulting from original research whether theoretical or applied in various aspects of academic endeavor.<strong><br /></strong><strong> Focus:</strong> fresh formulations of problems and new methods of investigation, help for professionals, graduates, engineers, academics and researchers disseminate information on state-of-the-art techniques according to the journal scope.<br /><strong>Scope:</strong> telecommunications and radio electronics, software engineering (including algorithm and programming theory), computer science (mathematical modeling and computer simulation, optimization and operations research, control in technical systems, machine-machine and man-machine interfacing, artificial intelligence, including data mining, pattern recognition, artificial neural and neuro-fuzzy networks, fuzzy logic, swarm intelligence and multiagent systems, hybrid systems), computer engineering (computer hardware, computer networks), information systems and technologies (data structures and bases, knowledge-based and expert systems, data and signal processing methods).<strong><br /></strong> <strong> Journal sections:</strong><span id="result_box10"><br /></span>- radio electronics and telecommunications;<span id="result_box12"><br /></span>- mathematical and computer modelling;<span id="result_box13"><br /></span>- neuroinformatics and intelligent systems;<span id="result_box14"><br /></span>- progressive information technologies;<span id="result_box15"><br /></span>- control in technical systems. <span id="result_box17"><br /></span><strong>Abstracting and Indexing:</strong> <strong>The journal is indexed in <a href="https://mjl.clarivate.com/search-results" target="_blank" rel="noopener">Web of Science</a></strong> (WoS) scientometric database. The articles, published in the journal, are abstracted in leading international and national <strong>abstractig journals</strong> and <strong>scientometric databases</strong>, and also are placed to the <strong>digital archives</strong> and <strong>libraries</strong> with free on-line access. <span id="result_box21"><br /></span><strong>Editorial board: </strong><em>Editor in chief</em> - S. A. Subbotin, D. Sc., Professor; <em>Deputy Editor in Chief</em> - D. M. Piza, D. Sc., Professor. The <em>members</em> of Editorial Board are listed <a href="http://ric.zntu.edu.ua/about/editorialTeam" aria-invalid="true">here</a>.<span id="result_box19"><br /></span><strong>Publishing and processing fee:</strong> Articles are published and peer-reviewed <strong>free of charge</strong>.<span id="result_box20"><br /></span><strong> Authors Copyright: </strong>The journal allows the authors to hold the copyright without restrictions and to retain publishing rights without restrictions. The journal allows readers to read, download, copy, distribute, print, search, or link to the full texts of its articles. The journal allows to reuse and remixing of its content, in accordance with a Creative Commons license СС BY -SA.<span id="result_box21"><br /></span><strong> Authors Responsibility:</strong> Submitting the article to the journal, authors hereby assume full responsibility for the copyright compliance of other individuals and organizations, the accuracy of citations, data and illustrations, nondisclosure of state and industrial secrets, express their consent to transfer for free to the publisher the right to publish, to translate into foreign languages, to store and to distribute the article materials in any form. Authors who have scientific degrees, submitting the article in the journal, thereby giving their consent to free act as reviewers of other authors articles at the request of the journal editor within the established deadlines. The articles submitted to the journal must be original, new and interesting to the reader audience of the journal, have reasonable motivation and aim, be previously unpublished and not be considered for publication in other journals and conferences. Articles should not contain trivial and obvious results, make unwarranted conclusions and repeat conclusions of already published studies.<span id="result_box22"><br /></span><strong> Readership: </strong>scientists, university faculties, postgraduate and graduate students, practical specialists.<span id="result_box23"><br /></span><strong> Publicity and Accessing Method :</strong> <strong>Open Access</strong> on-line for full-text publications<span id="result_box24">.</span></p> <p dir="ltr" align="justify"><strong><span style="font-size: small;"> <img src="http://journals.uran.ua/public/site/images/grechko/1OA1.png" alt="" /> <img src="http://i.creativecommons.org/l/by-sa/4.0/88x31.png" alt="" /></span></strong></p> en-US <h3 id="CopyrightNotices" align="justify"><span style="font-size: small;">Creative Commons Licensing Notifications in the Copyright Notices</span></h3> <p>The journal allows the authors to hold the copyright without restrictions and to retain publishing rights without restrictions.</p> <p>The journal allows readers to read, download, copy, distribute, print, search, or link to the full texts of its articles.</p> <p>The journal allows to reuse and remixing of its content, in accordance with a Creative Commons license СС BY -SA.</p> <p align="justify"><span style="font-family: Verdana, Arial, Helvetica, sans-serif; font-size: small;">Authors who publish with this journal agree to the following terms:</span></p> <ul> <li> <p align="justify"><span style="font-family: Verdana, Arial, Helvetica, sans-serif; font-size: small;">Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a <a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution License CC BY-SA</a> that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.</span></p> </li> <li> <p align="justify"><span style="font-family: Verdana, Arial, Helvetica, sans-serif; font-size: small;">Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.</span></p> </li> <li> <p align="justify"><span style="font-family: Verdana, Arial, Helvetica, sans-serif; font-size: small;">Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.</span></p> </li> </ul> subbotin.csit@gmail.com (Sergey A. Subbotin) subbotin@zntu.edu.ua (Sergey A. Subbotin) Thu, 10 Apr 2025 07:41:16 +0300 OJS 3.2.1.2 http://blogs.law.harvard.edu/tech/rss 60 APPLICATION OF SINGULAR SPECTRAL ANALYSIS IN CONTROL SYSTEMS OF TECHNOLOGICAL PROCESSES AND EXPLOSION SAFETY CONTROL OF FACILITIES https://ric.zp.edu.ua/article/view/324553 <p><span class="fontstyle0">Context. </span><span class="fontstyle2">The question of increasing the productivity of technological processes of extraction, processing and preparation of raw materials, improving product quality, reducing energy consumption, as well as creating safe working conditions during technological processes and preventing accidents is always quite relevant and requires the implementation of modern control and management systems. For the effective operation of such systems, it is important to pre-process and filter the data received from the sensors for monitoring the grinding processes and the explosive status of objects. One of the possible ways to increase the informativeness of data is the use of singular spectral analysis.<br /></span><span class="fontstyle0">Objective. </span><span class="fontstyle2">Increasing the efficiency of technological process control systems and the reliability of explosive control systems of coal mines and oil and fuel complex facilities by processing and pre-filtering data received from sensors for monitoring grinding processes and the state of facilities.<br /></span><span class="fontstyle0">Method. </span><span class="fontstyle2">To analyze the output signals of sensors used in control and management systems, the method of singular spectral analysis is used, which allows revealing hidden structures and regularities in time series by pre-filtering and data processing of acoustic, thermocatalytic, and semiconductor sensors.<br /></span><span class="fontstyle0">Results. </span><span class="fontstyle2">A new approach to the management of technological processes of grinding raw materials in jet mills and control of the explosiveness of coal mines and objects of the oil and fuel complex is proposed, based on methods that allow to speed up the processing speed of sensor output data and improve the quality of information. It is shown that one of the promising methods that can be used for the pre-processing of time series of output data of sensors in control and control systems is the method of singular spectral analysis, the use of which allows filtering data, revealing hidden structures and regularities, and forecasting changes based on the analysis of previous information , identify anomalies and unusual situations, make more informed decisions and improve the processes of managing technological processes.<br /></span><span class="fontstyle0">Conclusions. </span><span class="fontstyle2">The conducted experiments have confirmed the proposed software operability and allow recommending it for use in advancing both theoretical and practical aspects of process control systems through an enhanced singular spectral analysis (SSA) method for time series processing. This improved approach has been successfully demonstrated in real-world applications, including grinding processes in jet mills and explosion monitoring in coal mines and oil and fuel facilities. The implementation demonstrates a significant increase in data processing speed and information quality, which makes it particularly valuable for use in safety-critical industrial facilities.</span></p> O. V Holinko, M. O. Alekseev, V. I. Holinko, V. A. Zabelina Copyright (c) 2025 O. V Holinko, M. O. Alekseev, V. I. Holinko, V. A. Zabelina https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/324553 Thu, 10 Apr 2025 00:00:00 +0300 LIGHTWEIGHT MULTI-SCALE CONVOLUTIONAL TRANSFORMER FOR AIRCRAFT FAULT DIAGNOSIS USING VIBRATION ANALYSIS https://ric.zp.edu.ua/article/view/324172 <p><span class="fontstyle0">Context. </span><span class="fontstyle2">Fault diagnosis in rotating machinery, especially in aircraft, plays an important role in health monitoring systems. Early and accurate fault detection can significantly reduce the cost of repair and increase the lifetime of the mechanism. To detect the fault efficiently, intelligent methods based on traditional machine learning and deep learning techniques are used. The object of the research is the process of detecting faults in aircraft based on vibration analysis.<br /></span><span class="fontstyle0">Objective </span><span class="fontstyle2">of the work is the development of a deep learning method for fault diagnosis in rotating machinery with a high accuracy rate.<br /></span><span class="fontstyle0">Method. </span><span class="fontstyle2">The proposed method employs Transformer architecture. The first stage of processing the vibration signal is the multiscale feature extractor. This stage allows the model to examine input signals in different scales and reduce the impact of the noise.<br />The second stage is the Convolutional Transformer neural network. The convolution was introduced to the Transformer to combine locality and long-range dependencies feature extraction. The Self-attention mechanism of the Transformer was changed to Channel Attention, which reduces the number of parameters but maintains the strength of the attention. To maintain this idea, similar changes were made in the position-wise feed-forward network.<br /></span><span class="fontstyle0">Results. </span><span class="fontstyle2">The proposed method is tested on the aircraft vibration dataset. Two conditions were chosen for testing: limited data and noisy environment. The limited data condition is simulated by selecting a small number of samples into the training set (a maximum of 10 per class). The noisy environment condition is simulated by adding Gaussian noise to the raw signal. According to the obtained results, the proposed method achieves a high average precision metric rate with a small number of parameters. The experiments also show the importance of the proposed modules and changes, confirming the assumptions about the process of feature extraction.<br /></span><span class="fontstyle0">Conclusion. </span><span class="fontstyle2">The results of the conducted experiments show that the proposed model can detect faults with almost perfect accuracy, even with a small number of parameters. The proposed lightweight model is robust in limited data conditions and noisy environment conditions. The prospects for further research are the development of fast and accurate neural networks for fault diagnosis and the development of limited data training techniques.</span></p> Andrii Y. Didenko, Artem Y. Didenko, S. A. Subbotin Copyright (c) 2025 Andrii Y. Didenko, Artem Y. Didenko, S. A. Subbotin https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/324172 Thu, 10 Apr 2025 00:00:00 +0300 APPROACH TO DATA DIMENSIONALITY REDUCTION AND DEFECT CLASSIFICATION BASED ON VIBRATION ANALYSIS FOR MAINTENANCE OF ROTATING MACHINERY https://ric.zp.edu.ua/article/view/324317 <p><span class="fontstyle0">Context. </span><span class="fontstyle2">The actual problem of effective intelligent diagnostics of malfunctions of rotating equipment is solved. The object of study is the process of data dimensionality reduction and defect classification based on vibration analysis for maintenance of rotating machines. The subject of study is the methods of dimension reduction and defect classification by vibration analysis.<br /></span><span class="fontstyle0">Objective. </span><span class="fontstyle2">Development of an approach to data dimensionality reduction and defect classification based on vibration analysis for maintenance of rotating machines<br /></span><span class="fontstyle0">Method. </span><span class="fontstyle2">The comprehensive approach to data dimensionality reduction and defect classification based on vibration analysis is proposed, which solves the problem of data dimensionality reduction for training classifiers and defect classification, and also solves the problem of building a neural network classifier capable of ensuring the speed of fault classification without loss of accuracy on data of reduced dimensionality. The approach differs from the existing ones by the possibility of using optional union and intersection operators when forming a set of significant features, which provides flexibility and allows to adapt to different contexts and data types, ensuring classification efficiency in cases of large-dimensional data.<br />A denoising method allows to preserve important information, avoiding redundancy and improving the quality of data for further analysis. It involves calculating the signal-to-noise ratio, setting thresholds, and applying a fast Fourier transform that separates relevant features from noise. Applying the LIME method to a set of machine learning models allows to identify significant features with greater accuracy and interpretability. This contributes to more reliable results, as LIME helps to understand the influence of each feature on the final model solution, which is especially important when working with large datasets, where the importance of individual features may not be obvious. The implementation of optional operators of union and intersection of significant features provides additional flexibility in choosing an approach to defining important features. This allows the method to be adapted to different contexts and data types, ensuring efficiency even in cases with a large number of features.<br /></span><span class="fontstyle0">Results. </span><span class="fontstyle2">The developed method was implemented in software and examined when solving the problem of defect classification based on vibration analysis for maintenance of rotating machines.<br /></span><span class="fontstyle0">Conclusions. </span><span class="fontstyle2">The conducted experimental studies confirmed the high efficiency and workability of the proposed approach for<br />reducing the dimensionality of data and classifying defects based on vibration analysis in the aspect of maintenance of rotating machines. Prospects for further research will be directed to the search for alternative neural network architectures and their training to reduce training time</span></p> M. O. Molchanova , V. O. Didur, O. V. Mazurets Copyright (c) 2025 M. O. Molchanova , V. O. Didur, O. V. Mazurets https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/324317 Thu, 10 Apr 2025 00:00:00 +0300 KEYSTROKE DYNAMICS RECOGNITION USING NINE-VARIATE PREDICTION ELLIPSOID FOR NORMALIZED DATA https://ric.zp.edu.ua/article/view/324321 <p><span class="fontstyle0">Context. </span><span class="fontstyle2">Keystroke dynamics recognition is a crucial element in enhancing security, enabling personalized user authentication, and supporting various identity verification systems. This study investigates the influence of data distribution on the performance of one-class classification models in keystroke dynamics, focusing on the application of a nine-variate prediction ellipsoid. The object of research is the keystroke dynamics recognition process. The subject of the research is a mathematical model for keystroke dynamics recognition. Unlike typical approaches assuming a multivariate normal distribution of data, real-world keystroke datasets often exhibit non-Gaussian distributions, complicating model accuracy and robustness. To address this, the dataset underwent normalization using the multivariate Box-Cox transformation, allowing the construction of a more precise decision boundary based on the prediction ellipsoid for normalized data.<br /></span><span class="fontstyle0">The objective </span><span class="fontstyle2">of the work is to increase the probability of keystroke dynamics recognition by constructing a nine-variate prediction ellipsoid for normalized data using the Box-Cox transformation.<br /></span><span class="fontstyle0">Method. </span><span class="fontstyle2">This research involves constructing a nine-variate prediction ellipsoid for data normalized using the Box-Cox transformation to improve keystroke dynamics recognition. The squared Mahalanobis distance is applied to identify and remove outliers, while the Mardia test assesses deviations from normality in the multivariate distribution. Estimates for parameters of multivariate Box-Cox transformation are derived using the maximum likelihood method.<br /></span><span class="fontstyle0">Results. </span><span class="fontstyle2">The results demonstrate significant performance improvements after normalization, reaching higher accuracy and robustness compared to models built for non-normalized data. The application of the nine-variate Box-Cox transformation successfully accounted for feature correlations, enabling the prediction ellipsoid to better capture underlying data patterns.<br /></span><span class="fontstyle0">Conclusions. </span><span class="fontstyle2">For keystroke dynamics recognition, a mathematical model in the form of the nine-variate prediction ellipsoid for data normalized using the multivariate Box-Cox transformation has been developed, which enhances the probability of recognition compared to models constructed for non-normalized data. However, challenges remain in determining the optimal normalization technique and selecting the significance level for constructing the prediction ellipsoid. These findings underscore the importance of careful feature selection and advanced data normalization techniques for further research in keystroke dynamics recognition.</span></p> S. B. Prykhodko , A. S. Trukhov Copyright (c) 2025 S. B. Prykhodko , A. S. Trukhov https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/324321 Thu, 10 Apr 2025 00:00:00 +0300 METHOD OF NEURAL NETWORK DETECTION OF DEFECTS BASED ON THE ANALYSIS OF ROTATING MACHINES VIBRATIONS https://ric.zp.edu.ua/article/view/324350 <p>Context. The paper proposes a solution to the urgent problem of detecting equipment defects by analyzing the vibrations of rotating machines. The object of study is the process of detecting defects by analyzing the vibrations of rotating machines. The subject of<br />study is artificial intelligence methods for detecting defects by analyzing the vibrations of rotating machines.<br />Objective. Improving the accuracy of detecting defects in the analysis of rotating machine vibrations by creating a method for neural network detection of defects in the analysis of rotating machine vibrations and a corresponding neural network model that can<br />detect defects in the analysis of rotating machine vibrations without removal preliminary noise in order to preserve important features for more accurate classification.<br />Method. A method of neural network defect detection based on the analysis of vibrations of rotating machines is proposed, which is capable of predicting the presence or absence of a defect based on the input data of vibrations with the implementation of preliminary processing, namely the creation of a two-dimensional time-frequency image. The method differs from the existing ones<br />in that the defect analysis is performed without removing noise by fine-tuning the model parameters.<br />Results. The proposed method of neural network detection of defects based on the analysis of rotating machines vibrations is implemented in the form of a web application and the effectiveness of the neural network model obtained by performing the steps of the method is studied.<br />Conclusions. The study results show that the model has achieved high accuracy and consistency between training and validation data, which is confirmed by high values of such indicators as Accuracy, Precision, Recall і F1-Score on the validation dataset, as well<br />as minimal losses. The cross-validation confirmed the stable efficiency of the model, demonstrating high averaged metrics and insignificant deviations from the obtained metrics. Thus, the neural network model detects defects in rotating machines with high efficiency even without cleaning vibration signals from noise. Prospects for further research are to test the described method and the resulting neural network model on larger data sets.</p> O. V. Sobko, R. A. Dydo, O. V. Mazurets Copyright (c) 2025 O. V. Sobko, R. A. Dydo, O. V. Mazurets https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/324350 Thu, 10 Apr 2025 00:00:00 +0300 DATA-DRIVEN DIAGNOSTIC MODEL BUILDING FOR HELICOPTER GEAR HEALTH AND USAGE MONITORING https://ric.zp.edu.ua/article/view/324356 <p>Context. Modern technical objects (in particular vehicles) are extremely complex and place high demands on reliability. This requires automation of condition monitoring and fault diagnosis of objects and their components. The predictive maintenance improves operational readiness of technical objects. The object of study is a technical object health and usage monitoring process. The subject of study is a methods of computational intelligence for data-driven model building and related data processing tasks for health and usage monitoring system.<br />Objective. The purpose of the work is to formulate data processing problems, to form a data set for data-driven model building and construct simple method for automatic diagnostic model building on example of helicopter health and usage monitoring system.<br />Method. The method is proposed for the mapping of multidimensional data into a two-dimensional space preserving local properties of class separation, allowing for the visualization of multidimensional data and the production of simple diagnostic models for the automatic classification of diagnostic objects. The proposed method allows obtaining highly accurate diagnostic model with small training samples, provided that the frequency of classes in the samples is preserved. A method for synthesizing diagnostic models based on a two-layer feed-forward neural network is also proposed, which allows obtaining models in a non-iterative mode.<br />Results. A sample of observations of the state of helicopter gears was obtained, which can be used to compare data-driven diagnostic methods and data processing methods that solve the problems of data dimensionality reduction. The Software has been developed that allows displaying a sample from a multidimensional to a two-dimensional space, which makes it possible to visualize data and reduces the dimensionality of the data. Diagnostic models have been obtained that allow automating the decision-making process on whether the diagnosed object (helicopter gear) belongs to one of two classes of states.<br />Conclusions. The results of conducted experiments allow to conclude that the proposed method provides a significant reduction in the data dimensionality (in particular, for the considered problem of constructing a model for helicopter gear diagnosis, it reduces<br />the data dimensionality due to the compression of features by 46876 times). As the results of the conducted experiments for randomly selected instances in a two-dimensional system of artificial features obtained on the basis of the proposed method showed a significant reduction of the sample for individual tasks may allow to provide acceptable accuracy. And taking into account individual estimates of the instance significance will allow, even for small samples, to ensure the topological representativeness of the formed sample in relation to the original sample. The prospects for further research are to compare methods for constructing data-driven models, as well as methods for reducing the dimensionality of data based on the proposed sample. Additionally, it may be of interest to study a possible combination of the<br />proposed method with methods for sample forming using metrics of the value of instances.</p> S. A. Subbotin, E. Bechhoefer Copyright (c) 2025 S. A. Subbotin, E. Bechhoefer https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/324356 Thu, 10 Apr 2025 00:00:00 +0300 METHOD OF FORMING MULTIFACTOR PORTRAITS OF THE SUBJECTS SUPPORTING SOFTWARE COMPLEXES, USING A MULTILAYER PERCEPTRON https://ric.zp.edu.ua/article/view/324357 <p>Context. The problem of identification and determination of personalized comprehensive indicators of presence each of the impact factors in the processes of personal subjectivization of the researched supported object’s perception by the relevant subjects interacting with it and making influence on its support, is being considered in this research. The process of forming multifactor portraits of subjects supporting software complexes, using a multilayer perceptron, is an object of study. While methods and means of forming such multifactor portraits of subjects supporting software complexes is the subject of study respectively.<br />Objective. The goal of the work is the creation of a method of forming multifactor portraits of subjects supporting software complexes, using a multilayer perceptron.<br />Method. A method of forming multifactor portraits of subjects supporting software complexes is proposed, using artificial neural networks of the multilayer perceptron type, which provides possibility to form appropriate personalized multifactor portraits of subjects which, directly or indirectly, interact with the object of support (which can represent both the supported software complex itself as well as the processes associated with its complex support activities).<br />Results. The results of functioning of the developed method are the corresponding models of multifactor portraits of subjects supporting software complexes, which later are used to solve a cluster of scientific and applied problems of software complexes’<br />support automation, in particular, the problem of identification and determination of personalized comprehensive indicators of presence each of the impact factors (from appropriate pre-agreed and declared set of impact factors) in the processes of personal subjectivization of the researched supported object’s perception by the relevant subjects interacting (directly, or indirectly) with it and making influence on its support. As an example, of practical application and approbation of the developed method, the results of resolving the applied practical task of automated search and selection of a maximal relevant candidate (from among the members of the support team of the supported software complex) for best solving of a stack of specialized client’s requests (related to the support of this software complex), are given.<br />Conclusions. The developed method provides possibility to resolve the scientific and applied problem of identification and determination of personalized comprehensive indicators of presence each of the impact factors (from appropriate pre-agreed and declared set of impact factors) in the processes of personal subjectivization of the researched supported object’s perception by the relevant subjects interacting (directly, or indirectly) with it and making influence on its support. In addition, the developed method provides possibility for creating appropriate models of multifactor portraits of subjects supporting software complexes, which makes it possible to use them in solving problems, tasks, or issues related to the automation of search and selection of subjects supporting software complexes, which (subjects) meet the given criteria both in the context of subjectivization processes of personal perception<br />of the support objects (e.g. supported software complexes themselves, or processes directly related to their support), as well as in the context of compatibility in interaction with client’s users of these supported software products (as those users, in fact, are also subjects of interaction with the same researched supported object).</p> A. I. Pukach, V. M. Teslyuk Copyright (c) 2025 A. I. Pukach, V. M. Teslyuk https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/324357 Thu, 10 Apr 2025 00:00:00 +0300 METHOD OF PREVENTING FAILURES OF ROTATING MACHINES BY VIBRATION ANALYSIS USING MACHINE LEARNING TECHNIQUES https://ric.zp.edu.ua/article/view/324366 <p>Context. The problem of determining transitional conditions that precede the shift from an operating state to a non-operating state based on data obtained from the sensors of rotating machine elements is being solved. The object of the study is the process of detecting faults and states that indicate an approach to breakdown in rotating machine elements based on data obtained from sensors.<br />The subject of the study is the application of k-means and the elbow method algorithms for clustering and convolutional neural networks for classifying sensor data and detecting near-failure states of machine elements.<br />Objective. The purpose of the work is to create a method for processing sensor data from rotating machines using convolutional neural networks to accurately detect conditions close to failure in rotating machine elements, which will increase the efficiency of maintenance and prevent equipment failures.<br />Method. The proposed method of preventing failures of rotating machines by vibration analysis using machine learning techniques using a combination of clustering and deep learning methods. At the first stage, the sensor data undergoes preprocessing, including normalization, dimensionality reduction, and noise removal, after which the K-means algorithm is applied. To determine the<br />optimal number of clusters, the Elbow method is used, which provides an effective grouping of the states of rotating machine elements,<br />identifying states close to the transition to fault. A CNN model has also been developed that classifies clusters, allowing for the accurate separation of nominal, fault, and transitional conditions. The combination of clustering methods with the CNN model improves the accuracy of detecting potential faults and enables timely response, which is critical for preventing accidents and ensuring<br />the stability of equipment operation.<br />Results. A method of preventing failures of rotating machines by vibration analysis using machine learning techniques and a relevant software package have been developed. The implemented method allows us to identify not only normal and emergency<br />states but also to distinguish a third class – transitional, close to breakdown. The quality of clustering for the three classes is confirmed<br />by the value of the silhouette coefficient of 0.506, which indicates the proper separation of the clusters, and the Davis-Boldin index of 0.796, which demonstrates a high level of internal cluster coherence. Additionally, CNN was trained to achieve 99% accuracy for classifying this class, which makes the method highly efficient and distinguishes it from existing solutions.<br />Conclusions. A method of preventing failures of rotating machines by vibration analysis using machine learning techniques was<br />developed, the allocation of the third class – transitional, indicating a state close to breakdown – was proposed, and its effectiveness<br />was confirmed. The practical significance of the results lies in the creation of a neural network model for classifying the state of rotating elements and the development of a web application for interacting with these models.</p> O. O. Zalutska, O. V. Hladun, O. V. Mazurets Copyright (c) 2025 O. O. Zalutska, O. V. Hladun, O. V. Mazurets https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/324366 Thu, 10 Apr 2025 00:00:00 +0300 DEEPFAKE AUDIO DETECTION USING YOLOV8 WITH MEL-SPECTROGRAM ANALYSIS: A CROSS-DATASET EVALUATION https://ric.zp.edu.ua/article/view/324371 <p>Context. The problem of detecting deepfake audio has become increasingly critical with the rapid advancement of voice synthesis technologies and their potential for misuse. Traditional audio processing methods face significant challenges in distinguishing sophisticated deepfakes, particularly when tested across different types of audio manipulations and datasets. The object of study is<br />developing a deepfake audio detection model that leverages mel-spectrograms as input to computer vision techniques, focusing on improving cross-dataset generalization capabilities.<br />Objective. The goal of the work is to improve the generalization capabilities of deepfake audio detection models by employing<br />mel-spectrograms and leveraging computer vision techniques. This is achieved by adapting YOLOv8, a state-of-the-art object detection model, for audio analysis and investigating the effectiveness of different mel-spectrogram representations across diverse datasets.<br />Method. A novel approach is proposed using YOLOv8 for deepfake audio detection through the analysis of two types of melspectrograms:<br />traditional and concatenated representations formed from SincConv filters. The method transforms audio signals into visual representations that can be processed by computer vision algorithms, enabling the detection of subtle patterns indicative of<br />synthetic speech. The proposed approach includes several key components: BCE loss optimization for binary classification, SGD with momentum (0.937) for efficient training, and comprehensive data augmentation techniques including random flips, translations, and HSV color augmentations. The SincConv filters cover a frequency range from 0 Hz to 8000 Hz, with a step size of approximately<br />533.33 Hz per filter, providing detailed frequency analysis capabilities. The effectiveness is evaluated using the EER metric across multiple datasets: ASVspoof 2021 LA (25,380 genuine and 121,461 spoofed utterances) for training, and ASVspoof 2021 DF,<br />Fake-or-Real (111,000 real and 87,000 synthetic utterances), In-the-Wild (17.2 hours fake, 20.7 hours real), and WaveFake (117,985<br />fake files) datasets for testing cross-dataset generalization.<br />Results. The experiments demonstrate varying effectiveness of different mel-spectrogram representations across datasets. Concatenated<br />mel-spectrograms showed superior performance on diverse, real-world datasets (In-the-Wild: 34.55% EER, Fake-or-Real:<br />35.3% EER), while simple mel-spectrograms performed better on more homogeneous datasets (ASVspoof DF: 28.99% EER, Wave-<br />Fake: 34.55% EER). Feature map visualizations reveal that the model’s attention patterns differ significantly between input types, with concatenated spectrograms showing more distributed focus across relevant regions for complex datasets. The training process, conducted over 50 epochs with a learning rate of 0.01 and warm-up strategy, demonstrated stable convergence and consistent performance<br />across multiple runs.<br />Conclusions. The experimental results confirm the viability of using YOLOv8 for deepfake audio detection and demonstrate that<br />the effectiveness of mel-spectrogram representations depends significantly on dataset characteristics. The findings suggest that input<br />representation should be selected based on the specific properties of the target audio data, with concatenated spectrograms being<br />more suitable for diverse, real-world scenarios and simple spectrograms for more controlled, homogeneous datasets. The study provides<br />a foundation for future research in adaptive representation selection and model optimization for deepfake audio detection.</p> U. R. Zbezhkhovska Copyright (c) 2025 U. R. Zbezhkhovska https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/324371 Thu, 10 Apr 2025 00:00:00 +0300 SEGMENTATION OF LOW-CONTRAST IMAGES IN THE BASIS OF EIGEN SUBSPACES OF TYPE-2 FUZZY MEMBERSHIP FUNCTIONS https://ric.zp.edu.ua/article/view/324511 <p><span class="fontstyle0">Context. </span><span class="fontstyle2">The study addresses the current task of automating a sensitive image segmentation algorithm based on the Type-2 fuzzy clustering method. The research object is low-contrast greyscale images which are outcomes of standard research methods across various fields of human activity.<br /></span><span class="fontstyle0">Objective. </span><span class="fontstyle2">The aim of the work is to create a new set of informative features based on the input data, perform sensitive fuzzy<br />segmentation using a clustering method that employs Type-2 fuzziness, and implement automatic defuzzification in eigen subspace of membership functions.<br /></span><span class="fontstyle0">Method. </span><span class="fontstyle2">A method for segmenting low-contrast images is proposed. It consists of the following steps: expanding the feature space of the input data, applying singular value decomposition (SVD) to the extended dataset with subsequent automatic selection of the most significant components, which serve as input for fuzzy clustering using Type-2 fuzzy sets. Clustering is performed using the T2FCM method, which allows the automatic selection of the number of fuzzy clusters based on an initially larger guaranteed number, followed by the merging of close clusters (proximity was defined in the study using a weighted Euclidean distance). After fuzzy clustering, the proposed method integrates its results (fuzzy membership functions) with the input data for clustering, preprocessed using fuzzy transformations. The resulting matrix undergoes another fuzzy transformation, followed by SVD and the automatic selection of the most significant components. A grayscale image is formed based on the weighted sum of these selected components, to which the adaptive histogram equalization method is applied, resulting in the final segmentation output. The proposed segmentation method involves a small number of control parameters: the initial number of fuzzy clusters, the error of the T2FCM method, the maximum number of iterations, and the coefficient of applied fuzzy transformations. Adjusting these parameters to the processed images does not require significant effort.<br /></span><span class="fontstyle0">Results. </span><span class="fontstyle2">The developed algorithm has been implemented as software, and experiments have been conducted on real images of different physical nature.<br /></span><span class="fontstyle0">Conclusions. </span><span class="fontstyle2">The experiments confirmed the efficiency of the proposed algorithm and recommend its practical application for<br />visual analysis of low-contrast grayscale images. Future research prospects may include analyzing the informative potential of the algorithm when using other types of transformations of fuzzy membership functions and modifying the proposed algorithm for segmenting images of various types.</span></p> L. G. Akhmetshina, А. А. Yegorov, А. А. Fomin Copyright (c) 2025 L. G. Akhmetshina, А. А. Yegorov, А. А. Fomin https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/324511 Thu, 10 Apr 2025 00:00:00 +0300 ANALYSIS OF DATA ACCESS APPROACHES IN A MULTI-CLOUD ENVIRONMENT https://ric.zp.edu.ua/article/view/324539 <p><span class="fontstyle0">Context. </span><span class="fontstyle2">A multi-cloud system is characterized by the sequential or simultaneous use of services from different cloud providers to run applications. Such a system is a preferred infrastructure for the vast majority of IT businesses today. Currently, there are various approaches to combining cloud platforms from multiple vendors. This article explores practical approaches to achieve multi-cloud interoperability, focusing on abstract data access between different cloud storage providers<br />and multi-cloud computing resource allocation. Key technologies and methodologies for uninterrupted data management are presented, such as the use of multi-cloud storage gateways (using S3Proxy as an example), the implementation of data management platforms (Apache NiFi), and the use of cloud-agnostic libraries (Apache Libcloud). The paper highlights the advantages and disadvantages of the selected approaches and conducts experiments to determine the cost and performance of these strategies. The result of the research is to determine the cost and performance of different approaches to data access in multicloud environments.<br /></span><span class="fontstyle0">Objective. </span><span class="fontstyle2">To investigate different approaches to multi-cloud data access and determine the most optimal in terms of cost and performance.<br /></span><span class="fontstyle0">Method. </span><span class="fontstyle2">We propose the optimization of multi-cloud infrastructures based on experimental data. Experimental modeling includes empirical measurements of performance and comparison of storage costs. The determination of performance is based<br />on the measurement of data reading time and latency. The AWS S3 pricing model is used to estimate the cost. Optimization approaches are described, considering file sizes and data storage, combining the strengths of different multi-cloud approaches<br />and dynamic switching between solutions. An algorithm for selecting multi-cloud approaches is proposed, which takes into account the criteria of cost and performance, as well as their priority.<br /></span><span class="fontstyle0">Results. </span><span class="fontstyle2">The experiment yielded values for the cost of storing and downloading data of different sizes (100 GB, 1 TB, 10<br />TB), and the performance of transferring files of different sizes (100 KB, 1 MB, 10 MB) for multi-cloud gateway technologies, data management platforms, and cloud-agnostic libraries. S3Proxy was found to have the fastest file access for large data volumes, while Apache Libcloud showed better value for smaller volumes. Both approaches significantly outperformed Apache NiFi. This study can contribute to the development of methods for efficient resource management in multi-cloud environments.<br /></span><span class="fontstyle0">Conclusions. </span><span class="fontstyle2">The obtained results can assist in prioritizing the selection of these paradigms, aiding organizations in developing and deploying effective multi-cloud strategies. This approach enables them to leverage the distinctive features of each cloud provider while maintaining a unified, flexible, and efficient storage and computing environment.</span></p> А. Caceres , L. Globa Copyright (c) 2025 А. Caceres , L. Globa https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/324539 Thu, 10 Apr 2025 00:00:00 +0300 USE CASE METHOD IN IT PROJECT MANAGEMENT BASED ON AGILE METHODOLOGY https://ric.zp.edu.ua/article/view/324545 <p><span class="fontstyle0">Context. </span><span class="fontstyle2">The article considers the role and process of forming user requirements based on the Use Case method in assessing the complexity of an Agile project at the stage of preliminary assessment by the company’s management. Since the mid-70s, it has been known that errors in requirements are the most numerous, expensive, and time-consuming to correct in projects. In this regard, the importance of requirements management in IT projects using modern technologies and methods for their formation and evaluation is increasing.<br /></span><span class="fontstyle0">Objective. </span><span class="fontstyle2">Formation and evaluation of user requirements in IT project management based on the Use Case method and their impact on one of the project performance indicators at the planning stage, particularly labor intensity.<br /></span><span class="fontstyle0">Method</span><span class="fontstyle2">. The article proposes a new author’s approach to the formation and evaluation of user requirements in Agile projects, taking into account the impact of risks and system complexity assessment based on the Use Case method, and as a result of the study and proposals to achieve this goal, a mathematical model for estimating project complexity is proposed.<br />The mathematical template of the model allows us to consider additional variables that may affect the project, such as the number of user levels, available functionality, and technical and organizational risks. It is flexible and can be adapted to the different needs of a particular project, which aligns with the principles of the Agile methodology. The number of components in the formula can be changed to take into account the importance of different variables or expanded to take into account additional variables that may affect the project.<br /></span><span class="fontstyle0">Results. </span><span class="fontstyle2">A mathematical model for estimating project complexity based on the use case method has been developed and tested using the example of a mobile application, which contains a set of initial data for product development and constraints on changing user requirements and organizational and technical risks. The proposed mathematical model allows you to quickly, accurately, and efficiently determine scenarios of project labor intensity of various types and levels of complexity and can serve as an effective tool for making management decisions. A mathematical model for estimating project complexity based on the use case method has been developed and tested using the example of a mobile application, which contains a set of initial data for product development and constraints on changing user requirements and organizational and technical risks.<br />The proposed mathematical model allows you to quickly, accurately, and efficiently determine scenarios of project labor intensity of various types and levels of complexity and can serve as an effective tool for making management decisions.<br /></span><span class="fontstyle0">Conclusions. </span><span class="fontstyle2">The general findings obtained after analyzing the methods of forming and evaluating user requirements in Agile management are as follows. At the work planning stage, based on an expert assessment of each functional requirement, the primary project evaluation model has been replaced by a more modern and complex one based on the use case method and considering changes in user requirements and other product development risks. The new model uses graphical, analytical, and mathematical tools, including a use case diagram, adjustment factors considering the complexity of the actor and use case, and factors considering organizational and technical risks. As a result, we get a mathematical format for calculating the project’s complexity. This approach allows us to adapt to different types of projects quickly. With the correct initial data definition, the model will enable us to obtain reasonably accurate estimates early in project planning. The practical results of the study demonstrate the potential of the proposed mathematical model, which can be logically continued by verifying the model on a larger sample and assessing its resilience to different types of projects and risks.</span></p> O. M. Svintsycka, I. V. Puleko, M. S. Graf, R. V. Petrosian Copyright (c) 2025 O. M. Svintsycka, I. V. Puleko, M. S. Graf, R. V. Petrosian https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/324545 Thu, 10 Apr 2025 00:00:00 +0300 APPLICATION OF BINARY SEARCH TREE WITH FIXED HEIGHT TO ACCELERATE PROCESSING OF ONE-DIMENSIONAL ARRAYS https://ric.zp.edu.ua/article/view/324550 <p><span class="fontstyle0">Topicality. </span><span class="fontstyle2">Nowadays, binary search trees are widely used to speed up searching, sorting, and selecting array elements. But the computational complexity of searching using a binary tree is proportional to its height, which depends on the sequence of processing the elements of the array. In order to reduce the height of a tree, its balancing is periodically carried out, which is a long process,, thus, the development of alternative methods of controlling the height of a binary tree is currently an actual scientific task.<br></span><span class="fontstyle0">Objective. </span><span class="fontstyle2">Development of algorithms for the formation and use of a binary tree with a fixed height to accelerate the search for an element in an array and to determine arbitrary </span><span class="fontstyle3">i</span><span class="fontstyle2">-th order statistics, in particular, the median of the array.<br></span><span class="fontstyle0">Method. </span><span class="fontstyle2">In this study, it is proposed to set the fixed height of the binary search tree by one greater than the minimum possible height of the binary tree to accommodate all the elements of the array because increasing the fixed height leads to excessive RAM consumption, and decreasing it slows down tree modifications. The formation of such trees is similar to the balancing of trees but, unlike it, the recursive movement of nodes in them is performed only when the corresponding subtree is completely filled. For a binary search tree with a fixed height, RAM is allocated once when it is created, immediately under all possible nodes of a binary tree with a given height. This allows to avoid allocating and freeing memory for each node of the tree and store the values of the nodes in a one-dimensional array without using pointers.<br></span><span class="fontstyle0">The results. </span><span class="fontstyle2">Our experiments showed that in order to speed up the search of elements and to determine the </span><span class="fontstyle3">i</span><span class="fontstyle2">-th order statistics of frequently changing unordered arrays, it is advisable to additionally form a binary search tree with a fixed height. To initialize this tree, it is advisable to use a sorted copy of the keys of the array elements, and not to insert them one by one. For example, the use of a binary tree with a fixed height accelerates the search of medians of such arrays by more than 7 times compared to the method of two binary pyramids and additionally accelerates the redistribution of compressed data between modified DEFLATE-blocks in the process of progressive hierarchical lossless compression of images of the ACT set by an average of 2.92%.<br></span><span class="fontstyle0">Conclusions. </span><span class="fontstyle2">To determine medians or </span><span class="fontstyle3">i</span><span class="fontstyle2">-th order statistics of individual unrelated arrays and subarrays, instead of known sorting methods, it is advisable to use Hoare partitioning with exchange over long distances as it rearranges only individual elements and does not order the entire array completely. In order to determine the medians of the sequence of nested subarrays, ordered by the growth of their length, it is worth using the method of two binary pyramids because they are oriented to rapid addition of new elements. To find medians or </span><span class="fontstyle3">i</span><span class="fontstyle2">-th order statistics after changes or removal of elements of an unordered array, it is advisable to use a binary search tree for the keys of array elements with a fixed height as such fixing prevents uncontrolled growth of the number of comparison operations and makes it possible to process the tree without using instructions.</span> </p> A. V. Shportko, A. Ya. Bomba Copyright (c) 2025 A. V. Shportko, A. Ya. Bomba https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/324550 Thu, 10 Apr 2025 00:00:00 +0300 IMPLICIT CURVES AND SURFACES MODELING WITH PSEUDOGAUSSIAN INTERPOLATION https://ric.zp.edu.ua/article/view/324222 <p><span class="fontstyle0">Context. </span><span class="fontstyle2">With the contemporary development of topological optimization, and parametric and AI-guided design, the problem of implicit surface representation became prominent in additive manufacturing. Although more and more software packages use implicit modeling for design, there is no common standard way of writing, storing, or passing a set of implicit surfaces or curves over the network. The object of the study is one of the possible ways of such representation, specifically: modeling implicit curves and surfaces using pseudo-Gaussian interpolation.<br /></span><span class="fontstyle0">Objective. </span><span class="fontstyle2">The goal of the work is the development of a modeling method that improved the accuracy of the implicit object representation wothout significant increase in memory used or processing time spent.<br /></span><span class="fontstyle0">Method. </span><span class="fontstyle2">One of the conventional ways to model an implicit surface would be to represent its signed distance function (SDF) with its values defined on a regular grid. Then a continuous SDF could be obtained from the grid values by means of interpolation.<br />What we propose instead is to store not SDF values but the coefficients of a pseudo-Gaussian interpolating function in the grid, which would enable picking the exact interpolation points before the SDF model is written. In this way we achieve better accuracy in the regions we’re interested the most in with no additional memory overhead.<br /></span><span class="fontstyle0">Results. </span><span class="fontstyle2">The developed method was implemented in software for curves in 2D and validated against several primitive implicit curves of different nanture: circles, sqaures, rectangles with different parameters of the model. The method has shown improved accuaracy in general, but there were several classes of corner cases found for which it deserves further development.<br /></span><span class="fontstyle0">Conclusions. </span><span class="fontstyle2">Pseudo-Gaussian interpolation defined as a sum of radial basis functions on a regular grid with points of interpolation defined in the proximity of the grid points generally allows to model an implicit surface more accurately than a voxel model interpolation does. The memory intake or computational toll isn’t much different in these two approaches. However, the interpolating points selection strategy and the choice of the best modeling parameters for each particular modeling problem remain an open quesition.</span></p> N. M Ausheva, Iu. V. Sydorenko, O. S. Kaleniuk, O. V. Kardashov, M. V. Horodetskyi Copyright (c) 2025 N. M Ausheva, Iu. V. Sydorenko, O. S. Kaleniuk, O. V. Kardashov, M. V. Horodetskyi https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/324222 Thu, 10 Apr 2025 00:00:00 +0300 THE STATES’ FINAL PROBABILITIES ANALYTICAL DESCRIPTION IN AN INCOMPLETELY ACCESSIBLE QUEUING SYSTEM WITH REFUSALS AND WITH INPUT FLOW OF REQUIREMENTS’ GROUPS https://ric.zp.edu.ua/article/view/324249 <p><span class="fontstyle0">Context. </span><span class="fontstyle2">The basis for the creation and management of real queuing systems (QS) is the ability to predict their effectiveness. For the general case of such systems with refusals, with limited approachability of service devices and with a random composition of group requirements in the input flow, the prediction of their performance remains an unsolved problem.<br /></span><span class="fontstyle0">Objective. </span><span class="fontstyle2">The research has the aim to find an analytical representation for final probabilities in the above-mentioned case of Markov QS, which allows us to predict the efficiency of its operation depending on the values of the parameters in its structure and control.<br /></span><span class="fontstyle0">Method</span><span class="fontstyle2">. For the above-mentioned types of QS, the state probabilities can be described by a system of Kolmogorov’s differential equations, which for the stationary case is transformed into a homogeneous system of linearly dependent algebraic equations. For real QS in communication systems, the number of equations can be estimated by the degree set and amount to several thousand, which gives rise to the problem of their recording and numerical solution for a specific set of operating conditions parameters values. The predictive value of such a solution does not exceed the probability of guessing the numerical values of the QS operating conditions parameters set and for parameters with a continuous value, for example, for random time intervals between requests, is zero.<br />The method used is based on the analytical transition to the description of QS states groups with the same number of occupied devices. At the same time, the desire to obtain the final probabilities of states in a form close to the Erlang formulas remains. The influence of the above-mentioned QS properties can be localized in individual recurrent functions that multiplicatively distort Erlang formulas.<br /></span><span class="fontstyle0">Results. </span><span class="fontstyle2">For the above-mentioned types of QS, analytical calculation formulas for estimating the QS states final probabilities have been found for the first time, which makes it possible to predict the values of all known indicators of system efficiency. In this case, the deformation functions of the states groups’ probability distribution in QS have a recurrent form, which is convenient both for finding their analytical expressions and for performing numerical calculations.<br />When the parameters of the QS operating conditions degenerate, the resulting description automatically turns into a description of one of known QS with failures, up to the Erlang QS.<br /></span><span class="fontstyle0">Conclusions. </span><span class="fontstyle2">The analytical calculation expressions found for the final probabilities of the above-mentioned QS turned out to be applicable to all types of Markov QS with failures, which was confirmed by the results of a numerical experiment. As a result, it became possible to practically apply the obtained analytical description of the considered QS for operational assessments of developed and existing QS effectiveness in the possible range of their operating conditions.</span></p> V. P. Gorodnov, V. S. Druzhynin Copyright (c) 2025 V. P. Gorodnov, V. S. Druzhynin https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/324249 Thu, 10 Apr 2025 00:00:00 +0300 METHOD FOR DETERMINING THE STRUCTURE OF NONLINEAR MODELS FOR TIME SERIES PROCESSING https://ric.zp.edu.ua/article/view/324253 <p><span class="fontstyle0">Context. </span><span class="fontstyle2">The practice of today’s problems actualizes the increase in requirements for the accuracy, reliability and completeness of the results of time series processing in many applied areas. One of the methods that provides high-precision processing of time series with the introduction of a stochastic model of measured parameters is statistical learning methods. However, modern approaches to statistical learning are limited, for the most part, to simplified polynomial models. Practice proves that real data most often have a complex form of a trend component, which cannot be reproduced by polynomials of even a high degree. Smoothing of nonlinear models can be implemented by various approaches, for example, by the method of determining the parameters of nonlinear models using the differential spectra balance (DSB) in the scheme of differential-non-Taylor transformations (DNT). The studies proved the need for its modification in the direction of developing a conditional approach to determining the structure of nonlinear mathematical models for processing time series with complex trend dynamics.<br /></span><span class="fontstyle0">Objective. </span><span class="fontstyle2">The development of a method for determining the structure of nonlinear by mathematical models for processing time series using DSB in DNT transformations.<br /></span><span class="fontstyle0">Method. </span><span class="fontstyle2">The paper develops a method for constructing nonlinear mathematical models in the DNT transformation scheme. The modification of the method consists in controlling the conditions for the formation of a certain system of equations in the DSB scheme to search for the parameters of a nonlinear model with its analytical solutions. If the system is indeterminate, the nonlinear model is supplemented by linear components. In the case of an overdetermined system, its solution is carried out using the least squares norm. A defined system is solved by classical approaches. These processes are implemented with the control of stochastic and dynamic accuracy of models in the areas of observation and extrapolation. If the results of statistical learning are unsatisfactory in accuracy, the obtained values of the nonlinear model are used as initial approximations of numerical methods.<br /></span><span class="fontstyle0">Result. </span><span class="fontstyle2">Based on carried-out research, a method for determining the structure of nonlinear models for processing time series using BDS in the scheme of DNT transformations is proposed. Its application provides a conditional approach to determining the structure of models for processing time series and increasing the accuracy of estimation at the interval of observation and extrapolation.<br /></span><span class="fontstyle0">Conclusions. </span><span class="fontstyle2">The application of the proposed method for determining the structure of nonlinear models for processing time series allows obtaining models with the best predictive properties in terms of accuracy</span></p> O. O Pysarchuk, O. A. Tuhanskykh, D. R. Baran Copyright (c) 2025 O. O Pysarchuk, O. A. Tuhanskykh, D. R. Baran https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/324253 Thu, 10 Apr 2025 00:00:00 +0300 MATHEMATICAL MODELLING OF COMBAT OPERATIONS WITH THE POSSIBILITY OF REDISTRIBUTING COMBAT RESOURCES BETWEEN THE AREAS OF CONTACT AND DISTRIBUTING RESERVES https://ric.zp.edu.ua/article/view/324164 <p><span class="fontstyle0">Context. </span><span class="fontstyle2">Mathematical and computer models of the dynamics of combat operations are an important tool for predicting their outcome. The known Lanchester-type models were simulation models and did not take into account the ultimate goal and</span> <span class="fontstyle0">redistribution of resources during combat operations. This paper proposes an optimisation model of the dynamics of combat operations between parties A and B in two areas of collision, based on the method of dynamic programming with maximisation of the objective function as a function of enemy losses. The article develops a mathematical and computer model of a typical situation in modern warfare of combat operations between parties A and B in two areas of collision with the aim of inflicting maximum losses of combat resources on the enemy. This goal is achieved by redistributing resources between the areas of collision and introducing appropriate reserves to these areas.<br /></span><span class="fontstyle2">Objective. </span><span class="fontstyle0">To build a mathematical and computer model of the dynamics of combat operations between parties A and B in two areas of collision, in which the goal of party A is to maximise the losses of party B by using three resources (the first is the number of combat units that party A can distribute across the areas of collision at the initial moment of time; the second is the number of combat units that party A must transfer from one area to another at some subsequent moment of time; the third is the number of combat units that party A must distribute using the reserve) and by modelling the<br /></span><span class="fontstyle2">Method</span><span class="fontstyle0">. The mathematical model is based on the method of dynamic programming with the objective function as a function of enemy losses, and the parameters are units of combat resources in different areas of the clash. Their number is changed by redistributing them between these areas and introducing reserve combat units. The enemy’s losses are determined using Lanchester’s systems of differential equations. Given the complexity of the objective function, the Python programming language is used to find its maximum.<br /></span><span class="fontstyle2">Results. </span><span class="fontstyle0">A mathematical model of the problem has been constructed and implemented, based on a combination of the dynamic programming method with the solution of Lanchester’s systems of differential equations of battle dynamics with certain initial conditions at each of the three stages of the battle. With the help of a numerical experiment, the admissibility of the parameters of the optimisation problem (the number of combat units of side A, which are appropriately distributed, transferred from area to area or from the reserve at each stage of the battle) is analysed. The developed Python program allows, for any initial data, to give an answer to the optimal allocation of resources of party A, including from the reserve, at three stages of the battle and to calculate the corresponding largest enemy losses at a given time or to give an answer that there are no valid values of the problem parameters, i.e. the problem has no solution for certain initial data.<br /></span><span class="fontstyle2">Conclusions. </span><span class="fontstyle0">The scientific novelty lies in the development of mathematical and computer models of the dynamics of combat in two areas of collision, which takes into account the redistribution of combat resources and reserves in order to inflict maximum losses on the enemy. Numerical modelling made it possible to analyse the admissibility of redistribution and reserve parameters. Based on the examples considered, it is concluded that if the problem is unsolvable with certain data, it means that it is necessary to reduce the time of redeployment of combat units at one or more stages of the battle, i.e. to reduce the duration of the battle at a certain stage, thus allowing to predict the time of redeployment of combat resources.</span> <br /><br /></p> O.K. Fursenko, N.M. Chernovol Copyright (c) 2025 O.K. Fursenko, N.M. Chernovol https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/324164 Thu, 10 Apr 2025 00:00:00 +0300 THE RESERVES FOR IMPROVING THE EFFICIENCY OF RADAR MTI SYSTEM WITH BURST-TO-BURST PROBING PULSE REPETITION FREQUENCY STAGGER https://ric.zp.edu.ua/article/view/324200 <p><span class="fontstyle0">Context. </span><span class="fontstyle2">The development and improvement of technologies for creating unmanned aerial vehicles (UAVs) and their use in the military conflicts, particularly in the war in Ukraine, pose the task of effectively counteraction to UAVs. The most difficult targets for radar detection are small, low-speed UAVs flying at low altitudes. Therefore, the search for efficient methods of detecting, tracking, and identifying UAVs using both existing and new promising tools is a relevant task for scientific research.<br /></span><span class="fontstyle0">Objective. </span><span class="fontstyle2">The analysis of the operation algorithm of the moving target indication (MTI) system based on the discrete Fourier transform in radars with burst-to burst probing pulse repetition frequency stagger and to propose the modernisation of the MTI system to increase the efficiency of UAV detection against passive interferences<br /></span><span class="fontstyle0">Method. </span><span class="fontstyle2">The effectiveness of the methods is determined experimentally based on the results of simulation and their comparison with known results presented in the open literature.<br /></span><span class="fontstyle0">Results. </span><span class="fontstyle2">It is shown that in the MTI system with burst-to burst probe pulse repetition frequency stagger, a non-adaptive filter for suppressing reflections from ground clutters (GC) and incoherent energy accumulation of pulses of the input burst are realized. These circumstances cause the losses in the ratio signal/(interference + inner noise). The proposals for improving the efficiency of the MTI system by transition to the construction of the MTI system with the structure “suppression filter and integration filter” are substantiated. They consist in the inclusion of a special filter for suppressing reflections from GC and fully coherent processing of the input burst pulses. The latter is realized by using the standard discrete Fourier transform (DFT) only as a integrating filter with a slight correction of the DFT algorithm. An algorithm for energy accumulation of the burst pulses using the current estimate of the inter-pulse phase incursion of the burst pulses reflected from the target is proposed. It is shown that this accumulation algorithm is close to the optimal one. The effectiveness of these proposals is analyzed in terms of the achievable signal-to-(interference+inner noise) ratio and the detection area compression ratio. It is shown that their implementation potentially leads to an increase in the detection range and an improvement in the measurement of UAV coordinates by about two times. The proposed ways are quite simply realized by digital processing used in this MTI system<br /></span><span class="fontstyle0">Conclusions </span><span class="fontstyle2">The conducted research is a development of the existing theory and technique of radar detection and recognition of air targets. The scientific novelty of the obtained results is that the algorithms of inter-period signal processing in radar with burst-to burst probing pulse repetition frequency stagger, namely the accumulation of a bust by correcting the algorithm of the standard DFT, have been further developed. The practical value of the research lies in the fact that the implementation of the proposed proposals provides approximately twice the efficiency of detecting the signal reflected from the target, compared to the standard processing device</span></p> D.V. Atamanskiy, V. I. Vasylyshyn, V. Y. Klуmchenko, R. L. Stovba, L. V. Prokopenko Copyright (c) 2025 D.V. Atamanskiy, V. I. Vasylyshyn, V. Y. Klуmchenko, R. L. Stovba, L. V. Prokopenko https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/324200 Thu, 10 Apr 2025 00:00:00 +0300 ASSESSMENT OF THE QUALITY OF DETECTION OF A RADAR SIGNAL WITH NONLINEAR FREQUENCY MODULATION IN THE PRESENCE OF A NON-STATIONARY INTERFERING BACKGROUND https://ric.zp.edu.ua/article/view/324207 <p><span class="fontstyle0">Context. </span><span class="fontstyle2">Signals with long duration frequency modulation are widely used in radar, which allows increasing the radiated energy without degrading the range resolution and with peak power limitations. Increasing the product of the spectrum width by the radio pulse duration causes the passive interference zone to stretch out from the range, which leads to an interference with a more uniform intensity distribution in space and reduces the potential signal detection capabilities. Real passive obstacles have a non-stationary power distribution in space elements, so the signal reflected from the target can be detected in the gaps of passive obstacles or in areas with a lower level of them, provided that it is assessed (mapping of obstacles) and the detection threshold is adaptively set by space elements. Therefore, it is relevant to conduct research to assess the quality of detection of signals reflected from airborne targets depending on the level of non-stationarity of the interference background.<br /></span><span class="fontstyle0">Objective. </span><span class="fontstyle2">The aim of this work is to develop a methodology for assessing the influence of the level of the side lobes of signal correlation functions on the quality indicators of their detection in the presence of a non-stationary interference background of different intensity.<br /></span><span class="fontstyle0">Method. </span><span class="fontstyle2">The quality indicators of detection of frequency-modulated signals were studied. The problem of assessing the influence of the level of the lateral lobes of the correlation function on the quality indicators of signal detection against a non-stationary passive interference was solved by determining the parameters of the generalised gamma power distribution of such an interference, depending on the shape of the autocorrelation function of the signal.<br /></span><span class="fontstyle0">Results. </span><span class="fontstyle2">It is determined that for a high level of non-stationarity of the initial interference process for all signal models, the potential gain is almost the same and has a maximum value. In the case of reducing the level of non-stationarity of this process, the gain decreases. The traditional linear-frequency modulated signal gives a slightly worse result compared to nonlinear-frequency modulated signals. For all the studied frequency modulation laws, the gain is more noticeable when the requirements for signal detection quality are reduced.<br /></span><span class="fontstyle0">Conclusions. </span><span class="fontstyle2">A methodology for estimating the quality indicators of detecting echo signals on an interfering background with varying degrees of non-stationarity is developed. To improve the energy performance of detecting small-sized airborne objects against the background of non-stationary passive interference, it is advisable to use signals with nonlinear frequency modulation and reduce the probability of correct target detection.</span></p> A. A. Нryzo, O. O. Kostyria, A. V. Fedorov, А. А. Lukianchykov, Ye. V. Biernik Copyright (c) 2025 A. A. Нryzo, O. O. Kostyria, A. V. Fedorov, А. А. Lukianchykov, Ye. V. Biernik https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/324207 Thu, 10 Apr 2025 00:00:00 +0300