https://ric.zp.edu.ua/issue/feedRadio Electronics, Computer Science, Control2024-11-03T14:44:53+02:00Sergey A. Subbotinsubbotin.csit@gmail.comOpen Journal Systems<p dir="ltr" align="justify"><strong>Description:</strong> The scientific journal «Radio Electronics, Computer Science, Control» is an international academic peer-reviewed publication. It publishes scientific articles (works that extensively cover a specific topic, idea or question and contain elements of their analysis) and reviews (works containing an analysis and reasoned assessment of an original or published book), which receive an objective review by leading specialists who evaluate the content without regard to race, sex, religion, ethnic origin, nationality, or political philosophy of the author(s).<br /><strong>Founder and Publisher:</strong> <a href="http://zntu.edu.ua/zaporozhye-national-technical-university" aria-invalid="true">National University "Zaporizhzhia Polytechnic"</a>. <strong>Country:</strong> Ukraine.<br /><strong>ISSN</strong> 1607-3274 (print), ISSN 2313-688X (on-line).<br /><strong>Certificate of State Registration:</strong> КВ №24220-14060ПР dated 19.11.2019. The journal is registered by the Ministry of Justice of Ukraine.<br />By the Order of the Ministry of Education and Science of Ukraine of 17.03.2020 № 409 “On approval of the decision of the Certifying Collegium of the Ministry on the activities of the specialized scientific councils dated 06 March 2020” the <strong>journal is included in the list of scientific specialized periodicals of Ukraine in category “А” (highest level), in which the results of dissertations for Doctor of Science and Doctor of Philosophy may be published</strong>. By the Order of the Ministry of Education and Science of Ukraine of 21.12.2015 № 1328 "On approval of the decision of the Certifying Collegium of the Ministry on the activities of the specialized scientific councils dated 15 December 2015" the journal is included in the <strong>List of scientific specialized periodicals of Ukraine</strong> in which the results of dissertations for Doctor of Science and Doctor of Philosophy in Mathematics and Technical Sciences may be published.<br />The <strong>journal is included in the Polish List of scientific journals</strong> and peer-reviewed materials from international conferences with an assigned number of points (Annex to the announcement of the Minister of Science and Higher Education of Poland of July 31, 2019: Lp. 16981).<br /><strong>Year of Foundation:</strong> 1999. <strong>Frequency:</strong> 4 times per year (before 2015 – 2 times per year).<br /><strong>Volume:</strong> up to 20 conventional printed sheets. <strong>Format:</strong> 60x84/8.<br /><strong>Languages:</strong> English, Ukrainian.
Before 2022, also Russian.<br /><strong>Fields of Science:</strong> Physics and Mathematics, Technical Sciences.<br /><strong>Aim:</strong> to serve the academic community, principally by publishing topical articles resulting from original theoretical or applied research in various areas of academic endeavor.<br /><strong>Focus:</strong> fresh formulations of problems and new methods of investigation; helping professionals, graduates, engineers, academics and researchers to disseminate information on state-of-the-art techniques within the journal scope.<br /><strong>Scope:</strong> telecommunications and radio electronics, software engineering (including algorithm and programming theory), computer science (mathematical modeling and computer simulation, optimization and operations research, control in technical systems, machine-machine and man-machine interfacing, artificial intelligence, including data mining, pattern recognition, artificial neural and neuro-fuzzy networks, fuzzy logic, swarm intelligence and multiagent systems, hybrid systems), computer engineering (computer hardware, computer networks), information systems and technologies (data structures and bases, knowledge-based and expert systems, data and signal processing methods).<br /><strong>Journal sections:</strong><br />- radio electronics and telecommunications;<br />- mathematical and computer modelling;<br />- neuroinformatics and intelligent systems;<br />- progressive information technologies;<br />- control in technical systems.<br /><strong>Abstracting and Indexing:</strong> <strong>The journal is indexed in <a href="https://mjl.clarivate.com/search-results" target="_blank" rel="noopener">Web of Science</a></strong> (WoS) scientometric database. The articles published in the journal are abstracted in leading international and national <strong>abstracting journals</strong> and <strong>scientometric databases</strong>, and are also placed in <strong>digital archives</strong> and <strong>libraries</strong> with free on-line access.<br /><strong>Editorial board:</strong> <em>Editor in Chief</em> – S. A. Subbotin, D. Sc., Professor; <em>Deputy Editor in Chief</em> – D. M. Piza, D. Sc., Professor. The <em>members</em> of the Editorial Board are listed <a href="http://ric.zntu.edu.ua/about/editorialTeam" aria-invalid="true">here</a>.<br /><strong>Publishing and processing fee:</strong> Articles are published and peer-reviewed <strong>free of charge</strong>.<br /><strong>Authors' Copyright:</strong> The journal allows the authors to hold the copyright without restrictions and to retain publishing rights without restrictions. The journal allows readers to read, download, copy, distribute, print, search, or link to the full texts of its articles.
The journal allows reuse and remixing of its content in accordance with the Creative Commons license CC BY-SA.<br /><strong>Authors' Responsibility:</strong> By submitting an article to the journal, authors assume full responsibility for compliance with the copyright of other individuals and organizations, for the accuracy of citations, data and illustrations, and for the non-disclosure of state and industrial secrets, and express their consent to transfer to the publisher, free of charge, the right to publish, to translate into foreign languages, to store and to distribute the article materials in any form. Authors who hold scientific degrees, by submitting an article to the journal, thereby give their consent to act free of charge as reviewers of other authors' articles at the request of the journal editor within the established deadlines. The articles submitted to the journal must be original, new and interesting to the readership of the journal, have a reasonable motivation and aim, be previously unpublished and not under consideration for publication in other journals or conferences. Articles should not contain trivial or obvious results, draw unwarranted conclusions, or repeat the conclusions of already published studies.<br /><strong>Readership:</strong> scientists, university faculty, postgraduate and graduate students, practicing specialists.<br /><strong>Publicity and Access Method:</strong> <strong>Open Access</strong> on-line to full-text publications.</p> <p dir="ltr" align="justify"><strong><span style="font-size: small;"> <img src="http://journals.uran.ua/public/site/images/grechko/1OA1.png" alt="" /> <img src="http://i.creativecommons.org/l/by-sa/4.0/88x31.png" alt="" /></span></strong></p>https://ric.zp.edu.ua/article/view/312778OPTIMIZING AUTHENTICATION SECURITY IN INTELLIGENT SYSTEMS THROUGH VISUAL BIOMETRICS FOR ENHANCED EFFICIENCY2024-10-05T14:47:27+03:00T. Batiukrvv@zntu.edu.uaD. Dosynrvv@zntu.edu.ua<p>Context. The primary objective of this article is to explore aspects related to ensuring security and enhancing the efficiency of authentication processes in intelligent systems through the application of visual biometrics. The focus is on advancing and refining authentication systems by employing sophisticated biometric identification methods.</p> <p>Objective. A specialized intelligent system has been developed, utilizing a Siamese neural network to establish secure user authentication within the existing system. Beyond incorporating fundamental security measures such as hashing and secure storage of user credentials, the contemporary significance of implementing two-factor authentication is underscored. This approach significantly fortifies user data protection, thwarting most contemporary hacking methods and safeguarding against data breaches. The study acknowledges certain limitations in its approach, possibly affecting the generalizability of the findings. These limitations provide avenues for future research and exploration, contributing to the ongoing evolution of authentication methodologies in intelligent systems.</p> <p>Method. The two-factor authentication system integrates facial recognition technology, employing visual biometrics for heightened security compared to alternative two-factor authentication methods. Various implementations of the Siamese neural network, utilizing the Contrastive loss function and the Triplet loss function, were evaluated.
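<p>For reference, a minimal NumPy sketch of the two loss functions mentioned above is given below; the margin values, the Euclidean distance metric and the pair/triplet notation are illustrative assumptions and are not taken from the article.</p> <pre><code>import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull the positive embedding closer to the anchor than the
    negative one by at least `margin` (illustrative formulation)."""
    d_pos = np.linalg.norm(anchor - positive, axis=-1)
    d_neg = np.linalg.norm(anchor - negative, axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

def contrastive_loss(emb_a, emb_b, is_same, margin=1.0):
    """Keep matching pairs close and push non-matching pairs
    at least `margin` apart (illustrative formulation)."""
    d = np.linalg.norm(emb_a - emb_b, axis=-1)
    return np.mean(is_same * d**2 + (1 - is_same) * np.maximum(margin - d, 0.0)**2)
</code></pre>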
Subsequently, a neural network employing the Triplet loss function was implemented and trained.</p> <p>Results. The article emphasizes the practical implications of the developed intelligent system, showcasing its effectiveness in minimizing the risk of unauthorized access to user accounts. The integration of contemporary authentication methodologies ensures a secure and robust user authentication process.</p> <p>Conclusions. The implementation of facial recognition technology in authentication processes has broader social implications. It contributes to a more secure digital environment by preventing unauthorized access, ultimately safeguarding user privacy and data. The study’s originality lies in its innovative approach to authentication, utilizing visual biometrics within a Siamese neural network framework. The developed intelligent system represents a valuable contribution to the field, offering an effective and contemporary solution to user authentication challenges.</p>2024-11-03T00:00:00+02:00Copyright (c) 2024 T. M. Батюк, Д. Г. Досинhttps://ric.zp.edu.ua/article/view/312781INTELLIGENT VIDEO ANALYSIS TECHNOLOGY FOR AUTOMATIC FIRE CONTROL TARGET RECOGNITION BASED ON MACHINE LEARNING2024-10-05T17:30:20+03:00V. Vysotskarvv@zntu.edu.uaR. Romanchukrvv@zntu.edu.ua<p>Context. Target recognition is a priority task in military affairs. It is complicated by the fact that it is necessary to recognize moving objects, while different terrain and landscape create obstacles to recognition. Combat actions can take place at different times of the day; accordingly, it is necessary to take into account the lighting angle and the overall illumination. It is necessary to detect the object in the video by segmenting the video frames, and then recognize and classify it.</p> <p>Objective of the study is to develop a technology for recognizing targets in real time as a component of the fire control system, through the use of artificial intelligence, YOLO and machine learning.</p> <p>Method. The article develops a video stream analysis technology for automatic target recognition of the fire control system based on machine learning. The paper proposes the development of a target recognition module as a component of the fire control system within the framework of the proposed information technology using artificial intelligence. The YOLOv8 pattern recognition model family was used to develop the target recognition module. The following methods were applied to the formed dataset during the study:</p> <p>– Bounding Box: Noise – Up to 15% of pixels (bounding box: adding salt-and-pepper noise to the image – up to 15% of pixels).</p> <p>– Bounding Box: Blur – Up to 2.5px (bounding box: adding Gaussian blur to the image – up to 2.5 pixels).</p> <p>– Cutout – 3 boxes with 10% size each (cutting out parts of the image – 3 boxes of 10% size each).</p> <p>– Brightness – Between –25% and +25% (changing the brightness of the image to increase the resistance of the model to changes in lighting and camera settings – from –25% to +25%).</p> <p>– Rotation – Between –15 and +15 (rotation of the image object clockwise or counterclockwise by –15 to +15 degrees).</p> <p>– Flip – Horizontal (flipping the image object horizontally).</p> <p>Results. The data were collected from open sources, in particular from videos posted on the YouTube platform. The main task of data preprocessing is the classification of three classes of objects on video or in real time – APC, BMP and TANK.
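<p>To make the pipeline above more concrete, a minimal sketch of training and applying such a three-class detector with the Ultralytics YOLOv8 Python API is shown below; the checkpoint, dataset file name, image size and confidence threshold are assumptions made only for this example, while the three class labels and the 100 training epochs follow the abstract.</p> <pre><code>from ultralytics import YOLO

# start from a pretrained YOLOv8 checkpoint (assumed name)
model = YOLO("yolov8n.pt")

# targets.yaml is assumed to describe a dataset with the classes APC, BMP, TANK
model.train(data="targets.yaml", epochs=100, imgsz=640)

# run inference on a single frame and print the detected classes
results = model.predict("frame.jpg", conf=0.5)
for box in results[0].boxes:
    print(results[0].names[int(box.cls)], float(box.conf))
</code></pre>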
The dataset is formed using the Roboflow platform, first with the labeling tools and subsequently with the augmentation tools. The dataset consists of 1193 unique images, distributed approximately equally among the classes. The training was conducted using Google Colab resources. It took 100 epochs to train the model.</p> <p>Conclusions. Analysis is performed according to mAP50 (average precision of 0.85), mAP50-95 (0.6), precision (0.89) and recall (0.75). Large losses are due to the fact that the background was not taken into account during the research, i.e. the module was not additionally trained on confirmed background data (images) that contain no hardware.</p>2024-11-03T00:00:00+02:00Copyright (c) 2024 V. Vysotska, R. Romanchukhttps://ric.zp.edu.ua/article/view/312879MODEL OF MAXIMAL WEIGHTS INVERSE CHAINS FOR THE ANALYSIS OF THE INFLUENCE FACTORS OF THE SOFTWARE COMPLEXES SUPPORT2024-10-07T17:21:47+03:00A. I. Pukachrvv@zntu.edu.uaV. M. Teslyukrvv@zntu.edu.ua<p>Context. The problem of identification, formation and restoration of the boundaries of influencing factors, lost as a result of the implementation of multi-layer perceptron models into the models of subjective perception of the object of software complexes support, as well as the applied practical problem of primary monitoring of the frequency of manifestation of a given influencing factor in the post-real-time mode, is considered. The object of research is the influencing factors of the support of software complexes.</p> <p>Objective – the goal of the work is to develop a model of inverse chains of maximum weights for the analysis of influencing factors of the software complexes support.</p> <p>Method. A model of maximal weights inverse chains was developed for the analysis of the influencing factors of the software complexes support. The developed model provides the possibility to identify and form inverse chains of maximum weights for the identification and further analysis of influencing factors that are reflected in the results of the perception of the object (the supported software complex or its support processes) by the relevant subjects of interaction which directly or indirectly interact with it.</p> <p>Results. The results of solving the applied practical problem of primary monitoring of the frequency of manifestation of a given influencing factor in the post-real-time mode are provided as an example of the applied practical use of the developed model. The output results of the developed model’s functioning are the inverse chains of maximum weights. In the future, the results obtained by the developed model are used to solve the applied-scientific problem of identification, formation and restoration of the boundaries of influencing factors, lost as a result of the implementation of the appropriate multilayer perceptron models inside the models of subjective perception of the software complexes support. Thus, the developed model of maximal weights inverse chains for the analysis of the influence factors of the software complexes support resolves this applied-scientific problem, initially caused by the implementation of the corresponding multilayer perceptron models inside the model of the subjective perception of the object of software complexes support.
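<p>As an illustration of the idea of an inverse chain of maximum weights, the sketch below traces, layer by layer, the incoming connection with the largest absolute weight backwards from a chosen output neuron of a multilayer perceptron; this is only one possible reading given for illustration, with invented toy weights, and is not the authors’ algorithm.</p> <pre><code>import numpy as np

def max_weight_chain(weight_matrices, output_index):
    """Trace one candidate inverse chain of maximum weights: starting
    from the chosen output neuron, step backwards through the layers,
    each time following the incoming connection with the largest
    absolute weight.  Each matrix has shape (n_inputs, n_outputs)."""
    chain = [output_index]
    neuron = output_index
    for w in reversed(weight_matrices):
        neuron = int(np.argmax(np.abs(w[:, neuron])))
        chain.append(neuron)
    return list(reversed(chain))  # neuron indices from input layer to output

# toy 3-4-2 perceptron with random weights (illustrative only)
rng = np.random.default_rng(0)
layers = [rng.normal(size=(3, 4)), rng.normal(size=(4, 2))]
print(max_weight_chain(layers, output_index=1))
</code></pre>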
The developed model provides the possibility of carrying out a qualitative analysis of the transformation of the input characteristics of the object of support into the output resulting characteristics of its subjective perception.</p> <p>Conclusions. The developed model allows resolving the described problems. At the same time, the developed model improves the classical understanding of multilayer perceptron artificial neural networks, as it assigns an additional role to the neurons of the hidden layers, which are now able to act as markers of influencing factors, whereas in the classical understanding of multilayer perceptron artificial neural networks they did not perform any functions other than arithmetic ones that ensure the correct learning and functioning of a multilayer perceptron artificial neural network.</p>2024-11-03T00:00:00+02:00Copyright (c) 2024 A. I. Pukach, V. M. Teslyukhttps://ric.zp.edu.ua/article/view/312891ON-BOARD LOG AND COORDINATE TRANSFORMATION FOR DETECTED OBJECTS ON THE SURFACE OF WATER2024-10-07T21:32:00+03:00V. M. Smolijrvv@zntu.edu.uaN. V. Smolijrvv@zntu.edu.ua<p>Context. The relevance of the work lies in the demand for UAV technologies with integrated artificial intelligence in today’s conditions.</p> <p>Objective. The goal of the work is to develop a minimum working version of the UAV explorer and software for controlling the UAV data.</p> <p>Method. The proposed mathematical description, which calculates the coordinates of the object based on the dimensions of the original image from the camera, the dimensions of the image with which the neural network works, the angle of the field of view of the camera, the position of the UAV and the angles of roll, pitch and yaw, allows transferring the image coordinates of the object found by the neural network to geographic coordinates, thereby moving away from a rigid reference to the coordinates of the UAV.</p> <p>Results. The problem of systematization of objects detected during a mission on the surface of water bodies was solved by creating a flight log, organizing interaction with a neural network, applying post-processing of recognized objects, and mathematically transforming the coordinates of objects for display and visualization into geographic coordinates, thereby moving away from a rigid reference to the coordinates of the UAV.</p> <p>Conclusions. A workable logbook generation and storage system has been created, which takes into account the peculiarities of information presentation in the logbook and ensures effective interaction of the components of the created information system within the proposed hardware and software complex, which allows organizing the process of researching water bodies using the SITL environment from the flight controller developers.</p>2024-11-03T00:00:00+02:00Copyright (c) 2024 V. M. Smolij, N. V. Smolijhttps://ric.zp.edu.ua/article/view/312896INTELLECTUAL SUPPORT OF THE PROCESSES OF SEARCHING AND EXTRACTION OF PRECEDENTS IN CASE-BASED REASONING APPROACH2024-10-07T23:22:55+03:00A. V. Shvedrvv@zntu.edu.uaYe. O. Davydenkorvv@zntu.edu.uaH. V. Horbanrvv@zntu.edu.ua<p>Context. The situational approach is based on real-time decision-making methods for solving the current problem situation. An effective tool for implementing the concept of a situational approach is an experience-based technique widely known as the case-based reasoning approach.
Reasoning by precedents allows solving new problems using the knowledge and accumulated experience of previously solved problems. Since cases (precedents) describing a scenario for solving a certain problem situation are stored in the case library, their search and retrieval directly determine the system response time. Under these conditions, there is a need to find ways of solving a topical scientific and practical problem aimed at optimizing the case search and retrieval processes. The object of the study is the processes of searching for and retrieving cases from the case library.</p> <p>Objective. The purpose of the article is to improve the case search process in the CBR approach by narrowing down the set of cases admissible for solving the current target situation and excluding from further analysis the cases that do not correspond to the given set of parameters of the target situation.</p> <p>Method. The research methodology is based on the application of rough set theory methods to improve the decision-making procedure based on reasoning by precedents. The proposed two-stage procedure for narrowing the initial set of cases involves, at the first stage, preliminary filtering of precedents whose parameter values belong to the given neighborhoods of the corresponding parameters of the target situation and, at the second stage, additional narrowing of the obtained subset of cases by the methods of rough set theory. The determination of the R-lower and R-upper approximations of a given target set of cases within the notation of rough set theory allows dividing (segmenting) the original set of cases stored in the case library and available for solving the current problem into three subgroups (segments). The search for prototype solutions can be performed among the subset of cases that can be definitely classified as belonging to the given target set, among those that can be attributed to the given target set only with some degree of probability, or within the union of these two subsets. The third subset contains cases that definitely do not belong to the given target set and can be excluded from further consideration.</p> <p>Results. The problem of presentation and derivation of knowledge based on precedents has been considered. The procedure for searching for precedents in the case library has been improved in order to reduce the system response time required to find the solution closest to the current problem situation by narrowing the initial set of cases.</p> <p>Conclusions. The case-based reasoning approach has received further development: cases are segmented in terms of their belonging to a given target set of precedents using the methods of rough set theory, and the search for cases is then carried out within a given segment. The proposed approach, in contrast to the classic CBR framework, uses additional knowledge derived from the obtained case segments, allows modeling the uncertainty regarding the belonging or non-belonging of a case to a given target set, and removes from further consideration cases that do not correspond to the given target set.</p>2024-11-03T00:00:00+02:00Copyright (c) 2024 A. V. Shved, Ye. O. Davydenko, H. V. Horbanhttps://ric.zp.edu.ua/article/view/312751METHOD FOR SIGNAL PROCESSING BASED ON KOLMOGOROV-WIENER PREDICTION OF MFSD PROCESS2024-10-04T17:44:35+03:00V. N. Gorevrvv@zntu.edu.uaY. I. Shedlovskarvv@zntu.edu.uaI. S. Laktionovrvv@zntu.edu.uaG. G. Diachenkorvv@zntu.edu.uaV. Yu. Kashtanrvv@zntu.edu.uaK. S. Khabarlakrvv@zntu.edu.ua<p>Context.
We investigate a method of signal processing based on the Kolmogorov-Wiener filter weight function calculation for the prediction of a continuous stationary heavy-tail process in the MFSD (multifractal fractional sum-difference) model. Such a process may describe telecommunication traffic in some systems with data packet transfer; the consideration of the continuous filter may be reliable in the case of a large amount of data.</p> <p>Objective. The aim of the work is to obtain an approximate solution for the Kolmogorov-Wiener filter weight function and to show the applicability of the method of signal processing used in the paper.</p> <p>Method. The Galerkin method based on the orthogonal Chebyshev polynomials of the first kind is used for the calculation of the weight function under consideration. The approximations up to the thirteen-polynomial one are investigated. The corresponding integrals are calculated numerically on the basis of the Wolfram Mathematica package. The higher the packet rate, the higher the accuracy of the integral calculation that is needed.</p> <p>Results. It is shown that for a rather large number of polynomials the misalignment between the left-hand side and the right-hand side of the Wiener-Hopf integral equation under consideration is rather small for the obtained solutions. The corresponding mean absolute percentage errors of misalignment for different packet rates are calculated. The method of signal processing used in the paper leads to reliable results for the Kolmogorov-Wiener filter weight function for the prediction of a process in the MFSD model.</p> <p>Conclusions. The theoretical fundamentals of the continuous Kolmogorov-Wiener filter construction for the prediction of a random process in the MFSD model are investigated. The filter weight function is obtained as an approximate solution of the Wiener-Hopf integral equation with the help of the Galerkin method based on the Chebyshev polynomials of the first kind. It is shown that the obtained results for the filter weight function are reliable. The obtained results may be useful for practical telecommunication traffic prediction. The paper results may also be applied to the treatment of heavy-tail random processes in different fields of knowledge, for example, in agriculture.</p>2024-11-03T00:00:00+02:00Copyright (c) 2024 В. М. Горєв, Я. І. Шедловська, І. С. Лактіонов, Г. Г. Дяченко, В. Ю. Каштан, К. С. Хабарлакhttps://ric.zp.edu.ua/article/view/312754THE SOFTWARE IMPLEMENTATION FOR AUTOMATIC GENERATION OF PETRI NETS2024-10-04T18:24:07+03:00A. A. Gurskiyrvv@zntu.edu.uaS. M. Dubnarvv@zntu.edu.ua<p>Context. An important task was solved during this scientific research, related to the development and verification of the fundamental suitability of the software application that provides visualization of the automatic synthesis of Petri nets while setting up multi-level control systems. This task is topical because, for the first time, the integration of the means of discrete-continuous networks from the DC-Net environment into the Labview environment is realized through the implementation of automatic synthesis of Petri nets. This makes it possible to automate the processes of synthesis of control algorithms based on the development of appropriate intelligent systems.</p> <p>Objective.
The purpose of the scientific work is to minimize the time and to automate the process of synthesis of control algorithms by integrating the means of discrete-continuous networks and implementing the principles of automatic synthesis of Petri nets.</p> <p>Method. This scientific article proposes a principle for the automatic formation of Petri nets based on a logical algorithm for classifying various uncorrected algorithms. A multilayer neural network was implemented in the Labview 2009 software environment to realize the appropriate algorithm. This artificial neural network provides algorithm formation, automatic synthesis and operation of Petri nets. The article is devoted to the study of the operating principle of the software application implementing such automatic synthesis of Petri nets while setting up multi-level control systems.</p> <p>Results. A number of experiments were performed on the classification of algorithms and the formation of Petri nets based on the ready-made software application. The control system was automatically set up for the specified object based on the Labview 2009 environment application.</p> <p>As a result of these experiments we have determined the fundamental suitability of the software application for the synthesis of some multi-level automatic control systems. It was also shown during these experiments that all mismatch signals in the system and deviations from the ratios of the values of controlled variables are reduced to zero. All parameters of the control system settings were recorded on the front panel of the virtual stand after the multi-level system setup procedure.</p> <p>Conclusions. The task of developing a software application based on the Labview 2009 environment that provides automatic synthesis of Petri nets was solved in this scientific work. Thus, the method of automatic synthesis of Petri nets and the technology for developing certain algorithms based on the functioning of an artificial neural network were further developed.</p>2024-11-03T00:00:00+02:00Copyright (c) 2024 О. О. Гурський, С. М. Дубнаhttps://ric.zp.edu.ua/article/view/312763OPTIMIZATION OF THE PARAMETERS OF SYNTHESIZED SIGNALS USING LINEAR APPROXIMATIONS BY THE NELDER-MEAD METHOD2024-10-04T23:25:44+03:00V. P. Lysechkorvv@zntu.edu.uaO. M. Komarrvv@zntu.edu.uaV. S. Bershovrvv@zntu.edu.uaO. K. Veklychrvv@zntu.edu.ua<p>Context. The article presents the results of a study of the effectiveness of using the Nelder-Mead method to optimize the parameters of linear approximations of synthesized signals. Algorithms have been developed and tested that integrate spectral, temporal, and statistical analyses and provide reasonable optimization. The effectiveness of the application of the Nelder-Mead method was proven by experiment. The obtained results substantiate the improvement of the properties of the mutual correlation of signals and the reduction of the maximum deviations of the side lobes, which opens up prospects for the further application of the method in complex scenarios of signal synthesis.</p> <p>Objective. The purpose of the work is to evaluate the effectiveness of the application of the Nelder-Mead method when adjusting the parameters of linear approximations to optimize the mutual correlation and minimize side deviations of complex synthesized signals.</p> <p>Method.
The main research method is the comparison of various optimization algorithms for the selection of the most effective approaches in linear approximations of synthesized signals, taking into account such criteria as accuracy, speed and minimization of deviations. Scientific works [1, 2, 4–6, 8, 9] present algorithms, including the Nelder-Mead method and differential evolution. The effectiveness of these methods is achieved due to adaptive optimization procedures that improve the characteristics of signals.</p> <p>It is worth noting that the methods have disadvantages associated with high requirements for computing resources, especially when processing large data. This can be minimized using combined optimization methods that take into account the interaction of signal parameters. Another important direction of improvement is the optimization of methods for adaptation to dynamic changes in the characteristics of complex signals, which allows to achieve high adaptability and reliability of real-time systems.</p> <p>Results. As a result of the experiment using the Nelder-Mead method, an increase in the similarity of spectral densities was achieved from 0.52 in the first iteration to 0.90 in the fourth, with a significant decrease in the distance between the peaks of the spectrum from 1.2 to 0.4, which indicates high adaptability and the accuracy of the method in adjusting the parameters of the synthesized<br>signals.</p> <p>Conclusions. The effectiveness of the Nelder-Mead method for adjusting the specified parameters of the synthesized signals was experimentally proven, which is confirmed by a significant improvement in the similarity of the spectra with each iteration. This opens the way for additional optimizations and application of the method in various technological areas.</p>2024-11-03T00:00:00+02:00Copyright (c) 2024 В. П. Лисечко, О. М. Комар, В. С. Бершов, А. К. Векличhttps://ric.zp.edu.ua/article/view/312775ADAPTATION OF THE DECISION-MAKING PROCESS IN THE MANAGEMENT OF CRITICAL INFRASTRUCTURE2024-10-05T13:14:10+03:00V. I. Perederyirvv@zntu.edu.uaE. Y. Borchikrvv@zntu.edu.uaV. V. Zosimovrvv@zntu.edu.uaO. S. Bulgakovarvv@zntu.edu.ua<p>Context. The problem of human factor management in the process of making relevant decisions in the management of critical infrastructure facilities is currently very important and complex. This issue is becoming increasingly significant due to the dynamic and unpredictable nature of the environment in which these facilities operate. Effective management of CIF requires the development of new models and methods that are based on adaptive management principles. These models and methods must take into account the personal emotional and cognitive capabilities of the decision maker, who is often operating under the influence of destabilizing uncertain factors. The challenge is further compounded by the need to integrate these adaptive methods into existing human-machine systems, ensuring that they can respond in real-time to the rapidly changing conditions that can affect the decision-making process. The complexity and importance of this problem necessitate a multifaceted approach that combines probabilistic methods, intellectual technologies, and information-cognitive technologies. These technologies must be capable of providing real-time adaptation and assessment of the DM’s emotional and cognitive state, which is critical for making relevant and timely decisions. 
The current unresolved problems in the field of creating adaptive information technologies for decision support in the management of CIF highlight the urgent need for a promising approach that can address these issues effectively and efficiently.</p> <p>Objective. The objective is to propose a a comprehensive method for evaluating the process of relevant decision-making, which depends on the functional stability of critical infrastructure facilities and the adaptation of factors related to the emotional-cognitive state of the decision maker. This method aims to provide a systematic approach to understanding how various factors, including the psycho-functional state of the DM, influence the decision-making process. Additionally, the objective includes the development of adaptive information and intellectual technologies that can support real-time evaluation and adjustment of the DM’s emotional and cognitive states. This approach seeks to ensure that decisions are made efficiently and effectively, even under the influence of destabilizing uncertain factors. By addressing these aspects, the method aims to enhance the overall reliability and resilience of the CIF<br />management processes. Furthermore, the objective encompasses the integration of Bayesian networks and a comprehensive knowledge base to facilitate the decision support system in providing timely and accurate information for decision-making.</p> <p>Method. To implement this method, probabilistic methods, intellectual and information-cognitive technologies were used to provide acceptable adaptation and evaluation of the relevant decision-making process in real-time.</p> <p>Results. The proposed method, based on intellectual and information-cognitive technology, allows for real-time assessment and adaptation of the emotional and cognitive state of the decision maker during the process of making relevant decisions. The implementation of probabilistic methods and Bayesian networks has enabled the development of a robust decision support system that effectively integrates adaptive management principles. This system ensures that the decision-making process remains stable and reliable, even in the presence of destabilizing uncertain factors. The real-time capabilities of the system allow for prompt adjustments to the psycho-functional state of the DM, which is critical for maintaining the functional stability of critical infrastructure facilities. The results demonstrate that the use of intellectual technologies and a comprehensive knowledge base significantly enhances the DM’s ability to make informed decisions. Experiments have shown that this method improves the overall efficiency and effectiveness of CIF management, providing a promising approach for future applications in adaptive decision support processes. The results obtained from these experiments validate the potential of the proposed method to revolutionize the management of CIF by ensuring that decisions are both timely and appropriate, thereby contributing to the resilience and reliability of these essential facilities..</p> <p>Conclusions. The results of the experiments allow us to recommend the use of the proposed method of rapid assessment and adaptation of the emotional and cognitive state of the decision maker for the process of making relevant decisions in real-time. 
The integration of intellectual and information-cognitive technologies into the decision support system has proven to be effective in enhancing the stability and reliability of the decision-making process in the management of critical infrastructure facilities. The realtime capabilities of the system facilitate prompt adjustments to the psycho-functional state of the DM, ensuring that decisions are made efficiently and effectively, even under the influence of destabilizing uncertain factors. The experimental results demonstrate that the proposed method significantly improves the overall efficiency of CIF management by providing a robust framework for adaptive decision support. The results obtained can be used in the development of adaptive DSS in the management of CIF, offering a promising approach for future applications. This method not only enhances the decision-making capabilities of DMs but also contributes to the resilience and reliability of CIF, ensuring their functional stability in dynamic and uncertain environments.</p>2024-10-05T00:00:00+03:00Copyright (c) 2024 В. І. Передерій, Є. Ю. Борчик, В. В. Зосімов, О. С. Булгаковаhttps://ric.zp.edu.ua/article/view/312910CRITICAL CAUSAL EVENTS IN SYSTEMS BASED ON CQRS WITH EVENT SOURCING ARCHITECTURE2024-10-08T11:16:14+03:00O. A. Lytvynovrvv@zntu.edu.uaD. L. Hruzinrvv@zntu.edu.ua<p>Context. The article addresses the problem of causal events asynchrony which appears in the service-oriented information systems that does not guarantee that the events will be delivered in the order they were published. It may cause intermittent faults occurring at intervals, usually irregular, in a system that functions normally at other times.</p> <p>Objective. The goal of the work is the comparison and assessment of several existing approaches and providing a new approach for solving the causal events synchronization issue in application to the systems developed using Command Query Responsibility Segregation (CQRS) with Event Sourcing (ES) architecture approach.</p> <p>Methods. Firstly, the method of estimation of the likelihood of causal events occurring within the systems as the foundation for choosing the solution is suggested. Based on the results of the analysis of several projects based on CQRS with ES architecture it shows that the likelihood of critical causal events depends on the relationships among entities and the use-cases connected with the entities. Secondly, the Container of Events method, which represents a variation of event with full causality history, adapted to the needs of CQRS with ES architecture systems, was proposed in this work. The variants of its practical implementation have also been discussed. Also, the different solutions, such as Synchronous Event Queues and variation of Causal Barrier method were formalized and assessed. Thirdly, the methods described have been discussed and evaluated using performance and modification complexity criteria. To make the complexity-performance comparative assessment more descriptive the integrated assessment formula was also proposed.</p> <p>Results. The evaluation results show that the most effective solution of the issue is to use the Container of Events method. To implement the solution, it is proposed to make the modifications of the Event Delivery Subsystem and event handling infrastructure.</p> <p>Conclusions. The work is focused on the solution of the critical causal events issue for the systems based on CQRS with ES architecture. 
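<p>To illustrate the general idea behind an event that carries its full causality history, a minimal sketch is given below: each event lists the identifiers of the events it causally depends on, and a consumer defers handling until all of them have been observed. The field names and the buffering logic are assumptions made for this illustration and do not reproduce the Container of Events implementation described in the article.</p> <pre><code>from dataclasses import dataclass, field

@dataclass
class Event:
    event_id: str
    payload: dict
    causes: list = field(default_factory=list)  # ids of events this one causally depends on

class CausalConsumer:
    """Defer handling of an event until every event in its causal
    history has been handled, regardless of delivery order."""
    def __init__(self):
        self.handled = set()
        self.pending = []

    def receive(self, event):
        self.pending.append(event)
        self._drain()

    def _drain(self):
        progress = True
        while progress:
            progress = False
            for ev in list(self.pending):
                if all(c in self.handled for c in ev.causes):
                    self._handle(ev)
                    self.pending.remove(ev)
                    progress = True

    def _handle(self, event):
        self.handled.add(event.event_id)
        print("handled", event.event_id)

c = CausalConsumer()
c.receive(Event("b", {"step": 2}, causes=["a"]))  # delivered out of order, buffered
c.receive(Event("a", {"step": 1}))                # handled first, then "b" is drained
</code></pre>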
The method of estimation of the likelihood of critical causal events has been provided and different solutions of the problem have been formalized and evaluated. The most effective solution based on Container of Events method was suggested.</p>2024-11-03T00:00:00+02:00Copyright (c) 2024 О. А. Литвинов, Д. Л. Грузінhttps://ric.zp.edu.ua/article/view/312918ESTIMATION OF FORMANT INFORMATION USING AUTOCORRELATION FUNCTION OF VOICE SIGNAL2024-10-08T12:29:45+03:00M. S. Pastushenkorvv@zntu.edu.uaM. A. Pastushenkorvv@zntu.edu.uaT. А. Faizulaievrvv@zntu.edu.ua<p>Context. The current scientific problem of extracting biometric characteristics of a user of a voice authentication system, which can significantly increase its reliability, is considered. There has been performed estimation of formant information from the voice signal, which is a part of the user template in the voice authentication system and is widely used in the processing of speech signals in other applications, including in the presence of interfering noise components. The work is distinguished by the investigation of a polyharmonic signal.</p> <p>Objective. The purpose of the work is to develop procedures for generating formant information based on the results of calculating the autocorrelation function of the analyzed fragment of the voice signal and their subsequent spectral analysis.</p> <p>Method. The procedures for generating formant information in the process of digital processing of voice signal are proposed. Initially, the autocorrelation function of the analyzed fragment of the voice signal is calculated. Based on the results of the autocorrelation function estimation, the amplitude-frequency spectrum is calculated, from which the formant information is extracted, for example, by means of threshold processing. When the signal-to-noise ratio of the analyzed voice signal fragment is low, it is advisable to iteratively calculate the autocorrelation function. The latter allows increasing the signal-to-noise ratio and the efficiency of formant information extraction. However, each subsequent iteration of the autocorrelation function calculation is associated with an increase in the required computational resource. The latter is conditioned by the doubling of the amount of processed data at each iteration.</p> <p>Results. The developed procedures for generating formant information were investigated both in the processing of model and experimental voice signals. The model signals had a low signal-to-noise ratio. The proposed procedures allow to determine more precisely the width of the spectrum of extracted formant frequencies, significantly increase the number of extracted formants, including cases at low signal-to-noise ratio.</p> <p>Conclusions. The conducted model experiments have confirmed the performance and reliability of the proposed procedures for extracting formant information both in the processing of model and experimental voice signals. The results of the research allow to recommend their use in practice for solving problems of voice authentication, speaker differentiation, speech and gender recognition, intelligence, counterintelligence, forensics and forensic examination, medicine (diseases of the speech tract and hearing). Prospects for further research may include the creation of procedures for evaluating formant information based on phase data of the processed voice signal.</p>2024-11-03T00:00:00+02:00Copyright (c) 2024 М. С. Пастушенко, М. О. Пастушенко, Т. А. 
Файзулаєвhttps://ric.zp.edu.ua/article/view/312929METHOD OF DETERMINING THE PARAMETER OF QUALITATIVE EVALUATION OF A WEB FORUM2024-10-08T13:40:04+03:00Mykola Pikuliakrvv@zntu.edu.uaMykola Kuzrvv@zntu.edu.uaIhor Lazarovychrvv@zntu.edu.uaYaroslav Kuzykrvv@zntu.edu.uaVolodymyr Skliarovrvv@zntu.edu.ua<p>Context. The development of new types of virtual environments is an urgent task of informatisation of modern education, since such services allow enhancing the quality of educational services and contribute to a deeper assimilation of new knowledge by students. A web application proposed in this paper has been built using modern approaches to creating web pages using the .NET programming language, Bootstrap and ASP.NET MVC frameworks, Azure cloud solutions and Azure SQL databases, which has enabled the simplification of software development by distributing functions between the application modules and provided the flexibility, performance, and security necessary to work with relational data. The effectiveness of the application in the educational process has been experimentally tested using the method of determining the qualitative evaluation of the web forum usefulness parameter, which was developed by introducing an informative parameter of the discussion quality based on the h-index (sometimes called the Hirsch index or Hirsch number).</p> <p>Objective. To build a mathematical model of a web forum and develop a method of determining the qualitative evaluation of the parameter of usefulness of discussions in the created web application, which would allow improving the quality of educational and scientific activities in a higher education institution.</p> <p>Method. A method of determining the parameter of qualitative evaluation of a web forum using the h-index has been developed, which enabled analysing the interest in covering the trends of discussion on the forum pages and planning on its basis further work of the forum as a tool of a virtual learning environment.</p> <p>Results. Based on the analysis of the results of the implementation of the web application in the educational process of the Department of Information Technologies Vasyl Stefanyk Precarpathian National University, the user activity of posts has been analysed and the effectiveness of discussions of the proposed topics on the forum pages has been determined using the introduced activity parameter.</p> <p>Conclusions. A mathematical model of a web forum has been built, and the application has been implemented using modern approaches to software development using an optimised MVC architecture, which enabled simplification of creating a service by distributing responsibilities between the application modules and facilitating testing and technical support of the service.</p> <p>The scientific novelty of the study is the development of a method of evaluating the usefulness of discussions in a web forum by introducing a new informative quality parameter, the use of which allowed broadening the scope of existing limitations in quantitative analytics of discussions and feedbacks in popular services. Experimental studies carried out on the basis of a higher education institution have confirmed the effectiveness of the method application to improve the quality of educational services. The practical significance of the obtained results is the development of a software product as a tool of the virtual learning environment of a higher education institution.</p>2024-11-03T00:00:00+02:00Copyright (c) 2024 М. В. Пікуляк, М. В. Кузь, І. М. 
Лазарович, Я. М. Кузик, В. В. Скляровhttps://ric.zp.edu.ua/article/view/312936COST OPTIMIZATION METHOD FOR INFORMATIONAL INFRASTRUCTURE DEPLOYMENT IN STATIC MULTI-CLOUD ENVIRONMENT2024-10-08T14:12:25+03:00O. I. Rolikrvv@zntu.edu.uaS. D. Zhevakinrvv@zntu.edu.ua<p>Context. In recent years, the topic of deploying informational infrastructure in a multi-cloud environment has gained popularity. This is because a multi-cloud environment provides the ability to leverage the unique services of cloud providers without the need to deploy all infrastructure components inside them. Therefore, all available services across different cloud providers could be used to build up information infrastructure. Also, multi-cloud offers versatility in selecting different pricing policies for services across different cloud providers. However, as the number of available cloud service providers increases, the complexity of building a costoptimized deployment plan for informational infrastructure also increases.</p> <p>Objective. The purpose of this paper is to optimize the operating costs of information infrastructure while leveraging the service prices of multiple cloud service providers.</p> <p>Method. This article presents a novel cost optimization method for informational infrastructure deployment in a static multicloud environment whose goal is to minimize the hourly cost of infrastructure utilization. A genetic algorithm was used to solve this problem. Different penalty functions for the genetic algorithm were considered. Also, a novel parameter optimization method is proposed for selecting the parameters of the penalty function.</p> <p>Results. A series of experiments were conducted to compare the results of different penalty functions. The results demonstrated that the penalty function with the proposed parameter selection method, in comparison to other penalty functions, on average found the best solution that was 8.933% better and took 18.6% less time to find such a solution. These results showed that the proposed parameter selection method allows for efficient exploration of both feasible and infeasible regions.</p> <p>Conclusion. A novel cost optimization method for informational infrastructure deployment in a static multi-cloud environment is proposed. However, despite the effectiveness of the proposed method, it can be further improved. In particular, it is necessary to consider the possibility of involving scalable instances for informational infrastructure deployment.</p>2024-11-03T00:00:00+02:00Copyright (c) 2024 O. I. Rolik, S. D. Zhevakinhttps://ric.zp.edu.ua/article/view/312947IDENTIFICATION AND LOCALIZATION OF VULNERABILITIES IN SMART CONTRACTS USING ATTENTION VECTORS ANALYSIS IN A BERT-BASED MODEL2024-10-08T15:35:03+03:00O. I. Tereshchenkorvv@zntu.edu.uaN. O. Komlevarvv@zntu.edu.ua<p>Context. With the development of blockchain technology and the increasing use of smart contracts, which are automatically executed in blockchain networks, the significance of securing these contracts has become extremely relevant. Traditional code auditing methods often prove ineffective in identifying complex vulnerabilities, which can lead to significant financial losses. For example, the reentrancy vulnerability that led to the DAO attack in 2016 resulted in the loss of 3.6 million ethers and the split of the Ethereum blockchain network. This underscores the necessity for early detection of vulnerabilities.</p> <p>Objective. 
The objective of this work is to develop and test an innovative approach for identifying and localizing vulnerabilities in smart contracts based on the analysis of attention vectors in a model using the BERT architecture.</p> <p>Method. The methodology described includes data preparation and the training of a transformer-based model for analyzing smart contract code. The proposed attention vector analysis method allows for the precise identification of vulnerable code segments. The use of the CodeBERT model significantly improves the accuracy of vulnerability identification compared to traditional methods. Specifically, three types of vulnerabilities are considered: reentrancy, timestamp dependence, and tx.origin vulnerability. The data is preprocessed, which includes the standardization of variables and the simplification of functions.</p> <p>Results. The developed model demonstrated a high F-score of 95.51%, which significantly exceeds the results of contemporary approaches, such as the BGRU-ATT model with an F-score of 91.41%. The accuracy of the method in the task of localizing reentrancy vulnerabilities was 82%.</p> <p>Conclusions. The experiments conducted confirmed the effectiveness of the proposed solution. Prospects for further research include the integration of more advanced deep learning models, such as GPT-4 or T5, to improve the accuracy and reliability of vulnerability detection, as well as expanding the dataset to cover other smart contract languages, such as Vyper or LLL, to enhance the applicability and efficiency of the model across various blockchain platforms.</p> <p>Thus, the developed CodeBERT-based model demonstrates high results in detecting and localizing vulnerabilities in smart contracts, which opens new opportunities for research in the field of blockchain platform security.</p>2024-11-03T00:00:00+02:00Copyright (c) 2024 О. І. Терещенко, Н. О. Комлеваhttps://ric.zp.edu.ua/article/view/312726THE METHODS OF PROTECTION FROM THE PULSE DRFM JAMMING2024-10-04T11:14:09+03:00D. V. Atamanskyirvv@zntu.edu.uaV. P. Riabukharvv@zntu.edu.uaV. I. Vasylyshynrvv@zntu.edu.uaA. V. Semeniakarvv@zntu.edu.uaE. A. Katyushynrvv@zntu.edu.uaR. L. Stovbarvv@zntu.edu.ua<p>Context. Repeater reusable pulse jamming of the DRFM (Digital Radio Frequency Memory) type significantly complicates the radar situation for radars with LFM probing signals. In addition to the marks from existing targets, other marks appear on the radar PPI screen that simulate analogous false targets. The known methods of countering repeater reusable pulse jamming of the DRFM type are inefficient, which is caused by the specificity of this jamming. The synthesis of methods of countering such jamming is therefore a topical problem.</p> <p>Objective. To estimate the capabilities of the known noise-immunity methods for reducing the negative influence of DRFM-type pulse jamming on the processing of useful signals, and to suggest an alternative method of LFM signal processing against the background of DRFM jamming.</p> <p>Method. The efficiency of the methods is determined experimentally from the results of simulation and by comparing them with the known results presented in the literature.</p> <p>Results. The inefficiency of the known methods of protection against repeater pulse jamming for reducing the negative influence of DRFM jamming on the processing of the signal reflected from the target is substantiated.
The character of the negative influence of DRFM jamming on the processing of the signal reflected from the target is determined. Such jamming can both create a masking effect and imitate marks from non-existing targets. It is shown that the device with two-sided amplitude limitation at the input of the compression filter, which is traditionally used for suppression of repeater pulse jamming, is inefficient for suppression of DRFM jamming.</p> <p>It is shown that, as a compression filter for LFM signals with a small base, a filter matched with a large-base LFM signal can be used. However, such matched filters are not designed for LFM signals with small bases.</p> <p>The conditions of matched filtering of small-base pulsed LFM signals in filters matched with a large-base signal are determined. A sufficient condition for the matched filtering of a small-base signal is the coincidence of its phase-frequency characteristic with the corresponding area of the phase-frequency characteristic of the large-base signal. This fact explains the effect of the formation of maximums at the output of the compression filter for pulses of DRFM jamming and the effect of the formation of false target marks.</p> <p>It is shown that limiting the level of signals before their processing in the compression filter removes the energy advantage of the jamming over the useful signal, but does not affect the form of the phase-frequency characteristic of the jamming. This property of the amplitude limiter is the reason for the ineffective processing of the useful signal against the background of DRFM jamming in devices of the amplitude limiter–compression filter type.</p> <p>A method of suppression of repeater pulse jamming is proposed. This method is based on the natural assumption that the powerful samples of the input mixture correspond to jamming samples. In the case of digital processing this can be realised by nullifying the samples that exceed the defined limiter level. It is shown that processing devices that use such limitation provide effective processing of the useful signal against the background of DRFM jamming.</p> <p>Conclusions. The scientific novelty of the obtained results lies in the further development of the practice of noise immunity of radars with LFM probing signals; specifically, a device that detects the reflected signal against the background of repeater pulse jamming is proposed. The practice of matched filtering of complex signals is further developed, namely, the conditions of matched filtering of small-base LFM signals in filters matched with large-base signals are determined. The sufficient condition is the coincidence of the phase-frequency spectrum of the small-base signal with the corresponding area of the phase-frequency spectrum of the large-base signal.</p> <p>The practical importance of the investigation is that a processing device is proposed which, in most cases, provides approximately twice the probability of correct detection of the signal reflected from the target compared with the known processing devices.</p>2024-11-03T00:00:00+02:00Copyright (c) 2024 D. V. Atamanskyi, V. P. Riabukha, V. I. Vasylyshyn, A. V. Semeniaka, E. A. Katyushyn, R. L. Stovbahttps://ric.zp.edu.ua/article/view/312984MARGIN OF STABILITY OF THE TIME-VARYING CONTROL SYSTEM FOR ROTATIONAL MOTION OF THE ROCKET2024-10-08T19:39:37+03:00V. V. Avdieievrvv@zntu.edu.uaA. E. Alexandrovrvv@zntu.edu.ua<p>Context.
The rocket motion control system is time-varying, since its parameters during flight depend on the trajectory point and fuel consumption. Stability margin indicators are determined in a limited neighbourhood of individual trajectory points using algorithms developed only for linear stationary systems, which makes it necessary to build a safety margin into the hardware. In the available sources, due attention has not been paid to the development of methods for a quantitative assessment of the stability margin of a time-varying control system.</p> <p>Objective. To develop methodological support for constructing an algorithm that calculates the stability margin indicators of the time-varying system for controlling the rocket rotational motion in the yaw plane, using an equivalent stationary approximation on a selected trajectory section.</p> <p>Method. The mathematical model of the control system for the rocket rotational movement in one plane is adopted in the form of a linear differential equation, without considering the inertia of the actuator and other disturbing factors. The effect of the deviation of parameters from their average values over a certain trajectory section is treated as a disturbance, which makes it possible to pass from a non-stationary model to an equivalent approximate stationary one. The stability margin indicators are estimated with the Nyquist criterion, which is based on the analysis of the frequency response of the open-loop system, determined by means of the Laplace transform. To simplify the transition from functions of time in the differential equation of perturbed motion to functions of a complex variable in the Laplace transform, the time-varying model parameters are represented as a sum of exponential functions.</p> <p>Results. Methodological support was developed for building an algorithm that determines the stability margin of the rocket rotational motion control system with time-varying parameters on a given trajectory section.</p> <p>Conclusions. Using the example of the time-varying system for controlling the rocket rotational movement, the possibility of using the Laplace transform to determine the stability margin indicators is shown.</p> <p>The obtained results can be used at the initial stage of project work.</p> <p>The next stage of the research is an assessment of the level of algorithm complexity when the inertia of the actuator and the disturbed motion of the center of mass are taken into account.</p>2024-10-08T00:00:00+03:00Copyright (c) 2024 V. V. Avdieiev, A. E. Alexandrovhttps://ric.zp.edu.ua/article/view/312998DEVELOPMENT OF AUTOMATED CONTROL SYSTEM AND REGISTRATION OF METAL IN CONTINUOUS CASTING 2024-10-09T09:19:25+03:00S. V. Sotnikrvv@zntu.edu.ua<p>Context. Modern industrial enterprises face challenges that require the introduction of the latest technologies to improve efficiency and competitiveness. In metallurgy, one of the key stages is continuous casting, where the quality of products and the economic performance of the enterprise depend on the accuracy and efficiency of process control.
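A brief illustrative note on the preceding abstract about the rocket rotational-motion control system (our illustration, not the authors' derivation): representing a time-varying coefficient as a sum of exponentials is convenient because, if

$$a(t) \approx \sum_i c_i e^{-\lambda_i t},$$

then by the frequency-shift property of the Laplace transform each term maps a signal $x(t)$ to an argument-shifted copy of its transform,

$$\mathcal{L}\{c_i e^{-\lambda_i t} x(t)\}(s) = c_i X(s+\lambda_i),$$

so the product $a(t)x(t)$ in the perturbed-motion equation becomes a finite sum of shifted transforms rather than a general s-domain convolution.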
Products made using continuous casting technology are widely used in various industries due to their high mechanical properties, structural uniformity and cost-effectiveness.</p> <p>The development of an automated metal management and registration system is therefore becoming not only relevant but also necessary to ensure stable and efficient production.</p> <p>The problem of improving the quality of metal products has always been one of the most important tasks in the steel industry. Imperfect technological processes, human error and equipment malfunctions can lead to defects in finished metal products. This, in turn, affects the final characteristics of the products, their durability and reliability.</p> <p>To date, the available sources have not offered a complete solution to this problem. Therefore, it is necessary to formulate the problem and develop an algorithm for the operation of an automated system for controlling and registering metal in continuous casting.</p> <p>Objective. The goal of the work is to develop an automated metal management and registration system to improve the quality of metal products.</p> <p>Method. To achieve this goal, a parametric model formalized on the basis of set theory is proposed. The model takes into account the key parameters of the continuous casting process: material characteristics, structural features of the crystallizer, casting modes, the metal level in the crystallizer, and the position of the stopper.</p> <p>Results. The problem was formulated and the key parameters taken into account in the system’s algorithm were determined, which made it possible to develop a system for controlling the parameters of continuous casting and thus address the problem of improving the quality of metal products.</p> <p>Conclusions. To improve the quality of metal products and the stability of the casting process, a comprehensive parametric model was created that allows optimization of the key parameters and ensures accurate process control by integrating not only the product-formation modes but also the specific properties of the source material (chemical composition of the material grade, etc.) and the design features of the casting plant. An algorithm for the automated control system has been developed that takes into account the relationships between the key parameters and ensures optimal control of the casting process. Based on the proposed complex parametric model and algorithm, an automated metal control and registration system was created. The focus of the work is on the quality and efficiency of metal management and registration in continuous casting, based on modern methods of computer science and engineering. A comprehensive experimental comparison of the developed system with commercial analogs under real production conditions was carried out, which made it possible to objectively assess its efficiency and reliability.</p>2024-11-03T00:00:00+02:00Copyright (c) 2024 С. В. Сотникhttps://ric.zp.edu.ua/article/view/313061INFORMATION SYSTEM OF STREET LIGHTING CONTROL IN A SMART CITY2024-10-09T23:47:13+03:00R. I. Vaskivrvv@zntu.edu.uaO. M. Hrybovskyirvv@zntu.edu.uaN. E. Kunanetsrvv@zntu.edu.uaO. M. Dudarvv@zntu.edu.ua<p>Context. In the context of the rapid development of technologies and the implementation of the smart city concept, smart lighting is becoming a key element of a sustainable and efficient urban environment. The research covers the analysis of the use of sensors and intelligent lighting control systems based on modern information technologies, in particular the Internet of Things.
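As a purely illustrative aside on this abstract (not code from the described system; the thresholds and schedule are invented for the example), a sensor-driven dimming rule of the kind such street-lighting control relies on might look as follows in Python:

```python
# Illustrative sketch: choose a lamp dimming level from ambient light, motion and time of day.
from datetime import time

def lamp_level(ambient_lux: float, motion: bool, now: time) -> float:
    """Return a dimming level in [0, 1] for one luminaire."""
    night = now >= time(22, 0) or now < time(6, 0)
    if ambient_lux > 50:          # daylight or bright dusk: lamp off
        return 0.0
    if motion:                    # pedestrians or vehicles nearby: full brightness
        return 1.0
    return 0.3 if night else 0.6  # background level, lower late at night

print(lamp_level(ambient_lux=5, motion=False, now=time(23, 30)))  # -> 0.3
print(lamp_level(ambient_lux=5, motion=True,  now=time(23, 30)))  # -> 1.0
```

A real deployment would, of course, combine many such rules with the data-driven prediction of lighting needs mentioned in the abstract.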
The use of such technologies makes it possible to automate the regulation of lighting intensity depending on external conditions, the movement of people, or the time of day. This contributes to the efficient use of electricity and the reduction of emissions into the atmosphere.</p> <p>Objective. The purpose of the paper is to analyze the procedures for creating an information system as a tool for monitoring and evaluating the level of illumination in a smart city, with the aim of improving energy efficiency, safety, comfort and effective lighting management. The implementation of a smart lighting system for Lviv will help improve energy efficiency and community safety.</p> <p>Method. A content analysis of scientific publications was carried out that presented the results of research on the creation of street lighting monitoring systems in real urban environments. Data on street lighting in the city, such as energy consumption, illumination level, lamp operation schedules, and others, were collected and analyzed. Machine learning methods were used to analyze the data and predict lighting needs. Using the UML methodology, a conceptual model of the street lighting monitoring information system was developed based on the identified needs and requirements.</p> <p>Results. The role of data processing technologies in creating effective lighting management strategies for the optimal use of resources and meeting the needs of citizens is highlighted. The study draws attention to the challenges and opportunities of implementing smart lighting in cities and to maximizing the positive impact of smart lighting on modern urban environments. The peculiarities of the development and use of an information system for controlling street lighting in a smart city are analyzed. The potential advantages and limitations of using the developed system are determined.</p> <p>Conclusions. The project on the creation of an information system designed to provide an energy-efficient lighting system in a smart city will contribute to increasing security, in particular by ensuring the safety of the community through integration with security systems, and to reducing energy consumption by minimizing electricity usage in periods when lighting is not needed.</p> <p>It has been determined that, to implement an information system for remote monitoring and lighting control in a smart city, it is advisable to consider the use of a comprehensive lighting control system. Calculations of the city’s lighting needs were made using Lviv as an example. The use of motion sensors to determine the need to turn on lighting was analyzed. A conceptual model of the information system was developed using the object-oriented methodology of the UML notation. The main functionality of the information system is defined.</p>2024-11-03T00:00:00+02:00Copyright (c) 2024 Р. І. Васьків, О. М. Грибовський, Н. Е. Кунанець, О. М. Дудаhttps://ric.zp.edu.ua/article/view/313943METHODS FOR ANALYZING THE EFFECTIVENESS OF INFORMATION SYSTEMS FOR INVENTORY MANAGEMENT 2024-10-24T10:10:33+03:00D. V. Yanovskyrvv@zntu.edu.uaM. S. Grafrvv@zntu.edu.ua<p>Context. Information systems for inventory management are used to forecast, manage, coordinate, and monitor the resources needed to move goods smoothly, in a timely, cost-effective, and reliable manner. The more efficiently the system works, the better results a company can achieve.
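For readers less familiar with inventory performance indicators, a tiny illustrative Python computation (ours; the article’s own indicator set and factor-attribution rules are summarized below) of two common measures from order and delivery records:

```python
# Illustrative sketch: two common inventory-performance indicators from order/delivery records.
orders    = [120, 80, 150, 60]    # requested quantities per order line
delivered = [120, 70, 150, 50]    # quantities actually shipped on time

fill_rate = sum(delivered) / sum(orders)                  # share of ordered quantity delivered
stockout_lines = sum(d < o for o, d in zip(orders, delivered))  # order lines with a shortage

print(f"fill rate: {fill_rate:.2%}, lines with shortage: {stockout_lines}")
```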
A common problem with existing performance measurement methods is the difficulty of interpreting the relationship between performance indicators and the factors that influence them.</p> <p>Objective. The purpose of the study is to describe a method for evaluating the effectiveness of information systems that makes it possible to establish a link between performance indicators and the factors that influenced these indicators.</p> <p>Method. A set of indicators characterizing the effective operation of inventory management information systems is proposed. Rules for quantifying the factors that influence the performance indicators are proposed. The factors arise during events that affect changes in orders, deliveries, balances, the target inventory level, the parameters of the forecasting algorithm, etc. The proposed method iteratively distributes the quantitative value of the factors among the performance indicators and thus establishes the relationship between performance indicators and factors.</p> <p>Results. The proposed method was implemented in software, and calculations were performed on actual data.</p> <p>Conclusions. The calculations carried out on the basis of the method have demonstrated the dependence of the performance indicators on the factors. The use of the method makes it possible to identify the reasons for a decrease in efficiency and to improve the company’s management. Prospects for further research include detailing the factors, optimizing the software implementation, and applying the method in inventory management information systems in various areas of activity.</p>2024-11-03T00:00:00+02:00Copyright (c) 2024 Д. В. Яновський, М. С. Графhttps://ric.zp.edu.ua/article/view/313951STEWART PLATFORM MULTIDIMENSIONAL TRACKING CONTROL SYSTEM SYNTHESIS2024-10-24T11:29:23+03:00V. A. Zozulyarvv@zntu.edu.uaS. І. Osadchyrvv@zntu.edu.ua<p>Context. Creating guaranteed competitive motion control systems for complex multidimensional moving objects, including unstable ones, that operate under random controlled and uncontrolled disturbing factors, with minimal design costs, is one of the main requirements for achieving success in the market for this class of devices. Additionally, to meet modern demands for the accuracy of motion control processes along a specified or programmed trajectory, it is essential to synthesize an optimal control system based on experimental data obtained under conditions closely approximating the real operating mode of the test object.</p> <p>Objective. The research presented in this article aims to synthesize an optimal tracking control system for the Stewart platform’s working surface motion, taking into account its multidimensional dynamic model.</p> <p>Method. The article employs a method of structural transformation of a multidimensional tracking control system into an equivalent stabilization system for the motion of a multidimensional control object. It also utilizes an algorithm for synthesizing optimal stabilization systems for dynamic objects, whether stable or not, under stationary random external disturbances. The justified algorithm for synthesizing optimal stochastic stabilization systems is constructed using operations such as addition and multiplication of polynomial and fractional-rational matrices, Wiener factorization, Wiener separation of fractional-rational matrices, and the calculation of dispersion integrals.</p> <p>Results.
As a result of the conducted research, the problem of the analytical design of an optimal motion control system for the Stewart platform has been formalized. The results include the derived equations for transforming the tracking control system into the equivalent stabilization system for the motion of the Stewart platform’s working surface. Furthermore, the structure and parameters of the main controller transfer function matrix of this control system have been determined.</p> <p>Conclusions. The justified use of the analytical design concept for the optimal motion control system of the Stewart platform’s working surface formalizes and significantly simplifies the solution to the problem of synthesizing complex dynamic systems, applying the developed technology presented in [1]. The obtained structure and parameters of the main controller of the Stewart platform’s working surface motion control system, which is divided into three components W1, W2, and W3, improve the tracking quality of the program signal vector, account for the cross-connections within the Stewart platform, and increase the accuracy of executing the specified trajectory by increasing the degrees of freedom in choosing the controller structure.</p>2024-11-03T00:00:00+02:00Copyright (c) 2024 V. A. Zozulya, S. І. Osadchy
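To give a schematic sense of the tracking-to-stabilization transformation used above (a scalar, single-loop illustration of ours, not the article’s multidimensional derivation): with plant $P(s)$ and controller $C(s)$ acting on the tracking error $e = r - y$, so that $u = Ce$ and $y = Pu$, the error satisfies

$$e = r - PCe \quad\Longrightarrow\quad e = (1 + PC)^{-1} r,$$

so shaping the tracking behaviour for the program signal $r$ is equivalent to designing a stabilization loop in which $r$ acts as an external disturbance on the error channel; the abstract above describes the analogous transformation for matrix transfer functions of the Stewart platform.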