Radio Electronics, Computer Science, Control
https://ric.zp.edu.ua/
<p dir="ltr" align="justify"><strong>Description:</strong> The scientific journal «Radio Electronics, Computer Science, Control» is an international academic peer-reviewed publication. It publishes scientific articles (works that extensively cover a specific topic, idea, or question and contain elements of analysis) and reviews (works containing an analysis and reasoned assessment of an original or published book). All submissions receive objective review by leading specialists, who evaluate the substance of the work without regard to the race, sex, religion, ethnic origin, nationality, or political philosophy of the author(s).<span id="result_box2"><br /></span><strong>Founder and </strong><strong>Publisher</strong><strong>:</strong> <a href="http://zntu.edu.ua/zaporozhye-national-technical-university" aria-invalid="true">National University "Zaporizhzhia Polytechnic"</a>. <strong>Country:</strong> Ukraine.<span id="result_box1"><br /></span><strong>ISSN</strong> 1607-3274 (print), ISSN 2313-688X (on-line).<span id="result_box3"><br /></span><strong>Certificate of State Registration:</strong> КВ №24220-14060ПР dated 19.11.2019. The journal is registered by the Ministry of Justice of Ukraine.<br /><span id="result_box4">By the Order of the Ministry of Education and Science of Ukraine from 17.03.2020 № 409 “On approval of the decision of the Certifying Collegium of the Ministry on the activities of the specialized scientific councils dated 06 March 2020”, the<strong> journal is included in the List of scientific specialized periodicals of Ukraine in category “A” (highest level), in which the results of dissertations for Doctor of Science and Doctor of Philosophy may be published</strong>. 
<span id="result_box26">By the Order of the Ministry of Education and Science of Ukraine from 21.12.2015 № 1328 "On approval of the decision of the Certifying Collegium of the Ministry on the activities of the specialized scientific councils dated 15 December 2015", the journal is included in the <strong>List of scientific specialized periodicals of Ukraine</strong> in which the results of dissertations for Doctor of Science and Doctor of Philosophy in Mathematics and Technical Sciences may be published.</span><br />The <strong>journal is included in the Polish List of scientific journals</strong> and peer-reviewed materials from international conferences with an assigned number of points (Annex to the announcement of the Minister of Science and Higher Education of Poland from July 31, 2019: Lp. 16981). </span><span id="result_box27"><br /></span><strong> Year of Foundation:</strong> 1999. <strong>Frequency:</strong> 4 times per year (before 2015 – 2 times per year).<span id="result_box6"><br /></span><strong> Volume:</strong> up to 20 conventional printed sheets. <strong>Format:</strong> 60x84/8. <span id="result_box7"><br /></span><strong> Languages:</strong> English and Ukrainian (before 2022, also Russian).<span id="result_box8"><br /></span><strong> Fields of Science:</strong> Physics and Mathematics, Technical Sciences.<span id="result_box9"><br /></span><strong> Aim: </strong>to serve the academic community, principally by publishing topical articles resulting from original theoretical and applied research in various aspects of academic endeavor.<strong><br /></strong><strong> Focus:</strong> fresh formulations of problems and new methods of investigation; helping professionals, graduates, engineers, academics, and researchers disseminate information on state-of-the-art techniques within the journal scope.<br /><strong>Scope:</strong> telecommunications and radio electronics, software engineering (including algorithm and programming theory), computer science (mathematical modeling and computer simulation, optimization and operations research, control in technical systems, machine-machine and man-machine interfacing, artificial intelligence, including data mining, pattern recognition, artificial neural and neuro-fuzzy networks, fuzzy logic, swarm intelligence and multiagent systems, hybrid systems), computer engineering (computer hardware, computer networks), information systems and technologies (data structures and bases, knowledge-based and expert systems, data and signal processing methods).<strong><br /></strong> <strong> Journal sections:</strong><span id="result_box10"><br /></span>- radio electronics and telecommunications;<span id="result_box12"><br /></span>- mathematical and computer modelling;<span id="result_box13"><br /></span>- neuroinformatics and intelligent systems;<span id="result_box14"><br /></span>- progressive information technologies;<span id="result_box15"><br /></span>- control in technical systems. 
<span id="result_box17"><br /></span><strong>Abstracting and Indexing:</strong> <strong>The journal is indexed in the <a href="https://mjl.clarivate.com/search-results" target="_blank" rel="noopener">Web of Science</a></strong> (WoS) scientometric database. The articles published in the journal are abstracted in leading international and national <strong>abstracting journals</strong> and <strong>scientometric databases</strong>, and are also deposited in <strong>digital archives</strong> and <strong>libraries</strong> with free on-line access. <span id="result_box21"><br /></span><strong>Editorial board: </strong><em>Editor in chief</em> - S. A. Subbotin, D. Sc., Professor; <em>Deputy Editor in Chief</em> - D. M. Piza, D. Sc., Professor. The <em>members</em> of the Editorial Board are listed <a href="http://ric.zntu.edu.ua/about/editorialTeam" aria-invalid="true">here</a>.<span id="result_box19"><br /></span><strong>Publishing and processing fee:</strong> Articles are published and peer-reviewed <strong>free of charge</strong>.<span id="result_box20"><br /></span><strong> Authors Copyright: </strong>The journal allows the authors to hold the copyright without restrictions and to retain publishing rights without restrictions. The journal allows readers to read, download, copy, distribute, print, search, or link to the full texts of its articles. The journal allows reuse and remixing of its content in accordance with the Creative Commons license CC BY-SA.<span id="result_box21"><br /></span><strong> Authors Responsibility:</strong> By submitting an article to the journal, the authors assume full responsibility for compliance with the copyrights of other individuals and organizations, the accuracy of citations, data, and illustrations, and the non-disclosure of state and industrial secrets, and they consent to transfer to the publisher, free of charge, the right to publish, to translate into foreign languages, and to store and distribute the article materials in any form. 
Authors holding scientific degrees, by submitting an article to the journal, thereby consent to act free of charge as reviewers of other authors' articles at the request of the journal editor, within the established deadlines. The articles submitted to the journal must be original, new, and interesting to the journal's readership, have a clear motivation and aim, be previously unpublished, and not be under consideration for publication in other journals or conferences. Articles should not contain trivial or obvious results, draw unwarranted conclusions, or repeat the conclusions of already published studies.<span id="result_box22"><br /></span><strong> Readership: </strong>scientists, university faculties, postgraduate and graduate students, practical specialists.<span id="result_box23"><br /></span><strong> Publicity and Access Method:</strong> <strong>Open Access</strong> on-line for full-text publications<span id="result_box24">.</span></p> <p dir="ltr" align="justify"><strong><span style="font-size: small;"> <img src="http://journals.uran.ua/public/site/images/grechko/1OA1.png" alt="" /> <img src="http://i.creativecommons.org/l/by-sa/4.0/88x31.png" alt="" /></span></strong></p>National University "Zaporizhzhia Polytechnic"en-USRadio Electronics, Computer Science, Control1607-3274<h3 id="CopyrightNotices" align="justify"><span style="font-size: small;">Creative Commons Licensing Notifications in the Copyright Notices</span></h3> <p>The journal allows the authors to hold the copyright without restrictions and to retain publishing rights without restrictions.</p> <p>The journal allows readers to read, download, copy, distribute, print, search, or link to the full texts of its articles.</p> <p>The journal allows reuse and remixing of its content in accordance with the Creative Commons license CC BY-SA.</p> <p align="justify"><span style="font-family: Verdana, Arial, Helvetica, sans-serif; font-size: small;">Authors who publish with this journal agree to the 
following terms:</span></p> <ul> <li> <p align="justify"><span style="font-family: Verdana, Arial, Helvetica, sans-serif; font-size: small;">Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a <a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution License CC BY-SA</a> that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.</span></p> </li> <li> <p align="justify"><span style="font-family: Verdana, Arial, Helvetica, sans-serif; font-size: small;">Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.</span></p> </li> <li> <p align="justify"><span style="font-family: Verdana, Arial, Helvetica, sans-serif; font-size: small;">Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.</span></p> </li> </ul>EVALUATING FAULT RECOVERY IN DISTRIBUTED APPLICATIONS FOR STREAM PROCESSING APPLICATIONS: BUSINESS INSIGHTS BASED ON METRICS
https://ric.zp.edu.ua/article/view/338633
<p><strong> <span class="fontstyle0">Context. </span></strong><span class="fontstyle2">Stream processing frameworks are widely used across industries such as finance, e-commerce, and IoT to process real-time data streams efficiently. However, most benchmarking methodologies fail to replicate production-like environments, resulting in an incomplete evaluation of fault recovery performance. The object of this study is the evaluation of stream processing frameworks under realistic conditions, considering preloaded state stores and business-oriented metrics.<br /></span><span class="fontstyle0"><strong>Objective</strong>. </span><span class="fontstyle2">The aim of this study is to propose a novel benchmarking methodology that simulates production environments with varying disk load states and introduces SLO-based metrics to assess the fault recovery performance of stream processing frameworks.<br /></span><span class="fontstyle0"><strong>Method</strong>. </span><span class="fontstyle2">The methodology involves conducting a series of experiments. The experiments were conducted on synthetic data generated by an application using Kafka Streams in a Docker-based virtualized environment. The experiments evaluate system performance under three disk load scenarios: 0%, 50%, and 80% disk utilization. Synthetic failures are introduced during runtime, and key metrics such as throughput, latency, and consumer lag are tracked using JMX, Prometheus, and Grafana. The Business Fault Tolerance Impact (BFTI) metric is introduced to aggregate technical indicators into a simplified value reflecting the business impact of fault recovery.<br /></span><strong><span class="fontstyle0">Results. </span></strong><span class="fontstyle2">The approach for evaluating fault tolerance in distributed stream processing systems has been implemented, and the effect of different levels of disk utilization on system performance has been investigated.<br /></span><span class="fontstyle0"><strong>Conclusions</strong>. </span><span class="fontstyle2">The findings underscore the importance of simulating real-world production environments in stream processing benchmarks. The experiments demonstrate that disk load significantly affects fault recovery performance: systems with disk utilization exceeding 80% show recovery times increased by 2.7 times and latency degradation of up to fivefold compared to 0% disk load. The introduction of SLO-based metrics highlights the connection between system performance and business outcomes, providing stakeholders with more intuitive insights into application resilience. The BFTI metric provides a novel approach to translating technical performance into business-relevant indicators. Future work should explore adaptive SLO-based metrics, framework comparisons, and long-term performance studies to further bridge the gap between technical benchmarks and business needs.</span></p>A. V. Bashtovyi, A. V. Fechan
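The abstract introduces BFTI as an aggregation of technical indicators into a single business-facing value, but it does not give the formula. The sketch below illustrates one plausible SLO-based aggregation; the metric names, the compliance-ratio scheme, and the equal weights are assumptions for illustration, not the authors' actual definition.

```python
# Hypothetical sketch of an SLO-based aggregate in the spirit of the
# Business Fault Tolerance Impact (BFTI) metric; the weighting and
# compliance ratios below are illustrative assumptions.

def bfti_score(measured: dict, slo: dict, weights: dict) -> float:
    """Aggregate per-metric SLO compliance into a single 0-100 score.

    'throughput_rps' is higher-is-better; 'latency_ms' and 'recovery_s'
    are lower-is-better. 100 means no business impact.
    """
    total, wsum = 0.0, 0.0
    for name, w in weights.items():
        m, s = measured[name], slo[name]
        if name == "throughput_rps":      # higher is better
            compliance = min(m / s, 1.0)
        else:                             # lower is better
            compliance = min(s / m, 1.0)
        total += w * compliance
        wsum += w
    return 100.0 * total / wsum


score = bfti_score(
    measured={"throughput_rps": 900, "latency_ms": 200, "recovery_s": 30},
    slo={"throughput_rps": 1000, "latency_ms": 100, "recovery_s": 60},
    weights={"throughput_rps": 1, "latency_ms": 1, "recovery_s": 1},
)
```

The weights let stakeholders encode which SLO violations hurt the business most; in the example, doubled latency dominates the penalty while the fast recovery contributes full compliance.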
Copyright (c) 2025 A. V. Bashtovyi, A. V. Fechan
https://creativecommons.org/licenses/by-sa/4.0
2025-09-22 | 2025-09-22 | No. 3 | pp. 17–27 | DOI: 10.15588/1607-3274-2025-3-2
METHODS AND ALGORITHMS OF BUILDING A 3D MATHEMATICAL MODEL OF THE SURROUNDING SPACE FOR AUTOMATIC LOCALIZATION OF A MOBILE OBJECT
https://ric.zp.edu.ua/article/view/338876
<p><span class="fontstyle0"><strong>Context</strong>. </span><span class="fontstyle2">The task of automating the positioning of a mobile object in a closed space under the condition of its partial or complete autonomy is considered. The object of study is the process of automatic construction of a 3D model of the surrounding space.<br></span><span class="fontstyle0"><strong>Objective</strong>. </span><span class="fontstyle2">The goal of the work is to develop an algorithm for creating a 3D model of the surrounding space for the further localization of a mobile object under conditions of its partial or complete autonomy.<br></span><span class="fontstyle0"><strong>Method</strong>. </span><span class="fontstyle2">The results of the study of the problem of real-time localization of a mobile object in space are presented, together with the results of an analysis of existing methods and algorithms for creating mathematical models of the surrounding space. Algorithms that are widely used to solve the problem of localization of a mobile object in space are described. A wide range of methods for constructing a mathematical model of the surrounding space has been researched – from methods that compare successive point clouds of the surrounding space to methods that use a series of snapshots of characteristic points and compare information about them across snapshots at the points that are most similar according to the parameter vector.<br></span><span class="fontstyle0"><strong>Results</strong>. </span><span class="fontstyle2">A three-stage method for constructing a 3D model of the surrounding space is proposed for solving the problem of localization of a mobile object in a closed space.<br></span><strong><span class="fontstyle0">Conclusions</span></strong><span class="fontstyle0">. </span><span class="fontstyle2">The conducted experiments have confirmed the ability of the proposed algorithm for the three-stage construction of a mathematical model of the environment to determine the position of a mobile object in space. The methods used in the algorithm provide information about the surrounding space that makes it possible to localize a mobile object in a closed space. Prospects for further research may lie in the integration of information flows about the position of the object from different devices, depending on the type of data acquisition, into a centralized information base for solving a wide range of tasks performed by automatic mobile objects (robots).</span> </p>Ya. W. Korpan, O. V. Nechyporenko, E. E. Fedorov, T. Yu. Utkina
Copyright (c) 2025 Ya. W. Korpan, O. V. Nechyporenko, E. E. Fedorov, T. Yu. Utkina
https://creativecommons.org/licenses/by-sa/4.0
2025-09-22 | 2025-09-22 | No. 3 | pp. 28–36 | DOI: 10.15588/1607-3274-2025-3-3
THE METHOD OF ADAPTATION OF THE PARAMETERS OF ALGORITHMS FOR THE DETECTION AND CLEANING OF A STATISTICAL SAMPLE FROM ANOMALIES FOR DATA SCIENCE PROBLEMS
https://ric.zp.edu.ua/article/view/339011
<p><strong>Context.</strong> The popularization of Data Science for the tasks of e-commerce, the banking sector of the economy, and the control of dynamic objects raises the requirements for the efficiency of processing data in the Time Series format. This also applies to the preparatory stage of data analysis, at the level of detecting anomalies such as gross measurement errors and omissions and cleaning statistical samples of them.<br><strong>Objective</strong>. The development of a method for adapting the parameters of the algorithms for detecting and cleaning anomalies in a statistical sample of the Time Series format for Data Science problems.<br><strong>Method</strong>. The article proposes a method for adapting the parameters of algorithms for detecting and cleaning anomalies in a statistical sample for Data Science problems. The proposed approach differs from similar practices by introducing an optimization approach that minimizes the dynamic and statistical error of the model and thereby determines the tuning parameters of popular algorithms for cleaning a statistical sample of anomalies using the Moving Window Method.<br><strong>Results</strong>. The introduction of the proposed approach into Data Science practice allows the development of software components for cleaning data of anomalies whose parameters are trained purely from the structure and dynamics of the Time Series.<br><strong>Conclusions</strong>. The key advantage of the proposed method is its simple integration into existing algorithms for cleaning anomalies from a sample and the absence of any need for the developer to select the tuning parameters of the cleaning algorithms manually, which saves development time. The effectiveness of the proposed method is confirmed by the results of calculations.</p>O. O. Pysarchuk, S. O. Pavlova, D. R. Baran
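The abstract describes cleaning gross errors and omissions from a Time Series using a Moving Window. As a minimal illustration of that family of algorithms (not the authors' adaptive parameter-tuning method), the sketch below flags values more than k standard deviations from the mean of a sliding window of already-cleaned history and fills omissions with the window mean; the window width w=5 and the 3-sigma rule are assumed defaults.

```python
def moving_window_clean(xs, w=5, k=3.0):
    """Replace gross errors (> k sigma from the window mean) and fill
    omissions (None) with the mean of the last w cleaned values."""
    out = []
    for x in xs:
        ref = out[-w:]                      # already-cleaned history
        if len(ref) >= 2:
            mu = sum(ref) / len(ref)
            sd = (sum((v - mu) ** 2 for v in ref) / len(ref)) ** 0.5
            if x is None or (sd > 0 and abs(x - mu) > k * sd):
                out.append(mu)              # anomaly or omission -> window mean
            else:
                out.append(x)
        else:
            # not enough history yet: keep the value, or repeat the last one
            out.append(x if x is not None else (out[-1] if out else 0.0))
    return out


# A noisy series with one gross error (50.0) and one omission (None):
series = [1.0, 1.2, 0.9, 1.1, 1.0, 50.0, 1.1, 0.9, None, 1.0]
cleaned = moving_window_clean(series)
```

In the authors' setting, w and k are exactly the kind of parameters chosen by minimizing the model's dynamic and statistical error rather than being fixed by hand.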
Copyright (c) 2025 O. O. Pysarchuk, S. O. Pavlova, D. R. Baran
https://creativecommons.org/licenses/by-sa/4.0
2025-09-22 | 2025-09-22 | No. 3 | pp. 37–44 | DOI: 10.15588/1607-3274-2025-3-4
FAST NEURAL NETWORK AND ITS ADAPTIVE LEARNING IN CLASSIFICATION PROBLEMS
https://ric.zp.edu.ua/article/view/339035
<p><strong> <span class="fontstyle0">Context. </span></strong><span class="fontstyle2">To solve a wide class of information processing tasks, above all pattern recognition under conditions of significant nonlinearity, artificial neural networks have become widely used owing to their universal approximating properties and ability to learn from training samples. Deep neural networks have become the most widespread; they indeed demonstrate very high recognition quality but require extremely large amounts of training data, which are not always available. Under these conditions, so-called least squares support vector machines (LS-SVM) can be effective. They do not require large training samples but can be trained only in batch mode and are quite cumbersome in numerical implementation. Therefore, the problem of training LS-SVM in sequential mode under conditions of significant non-stationarity of the data fed online to the neural network for processing is quite relevant.<br></span><span class="fontstyle0"><strong>Objective</strong>. </span><span class="fontstyle2">The aim of the work is to introduce an approach to adaptive learning of LS-SVM that allows us to abandon the conversion of images into vector signals.<br></span><span class="fontstyle0"><strong>Method</strong>. </span><span class="fontstyle2">An approach to image recognition using a least squares support vector machine (LS-SVM) is proposed for conditions in which the data for processing arrive in a sequential online mode. The advantage of the proposed approach is that it reduces the time needed to solve the image recognition problem and allows the learning process to be implemented on non-stationary training samples. A feature of the proposed method is its computational simplicity and high speed, since the number of neurons in the network does not change over time, i.e., the architecture remains fixed during the tuning process.<br></span><strong><span class="fontstyle0">Results</span></strong><span class="fontstyle2">. The proposed approach to adaptive learning of LS-SVM simplifies the numerical implementation of the neural network and increases the speed of information processing and, above all, of the tuning of its synaptic weights.<br></span><span class="fontstyle0"><strong>Conclusions</strong>. </span><span class="fontstyle2">The problem of pattern recognition using the least squares support vector machine (LS-SVM) is considered under conditions in which the data for processing arrive in a sequential online mode. The training process is implemented on a sliding window, so the number of neurons in the network does not change over time, i.e., the architecture remains fixed during the tuning process. This approach simplifies the numerical implementation of the system and allows the training process to be implemented on non-stationary training samples. The possibility of training in situations where training images are given not only in vector form but also in matrix form allows us to abandon the conversion of images into vector signals.</span> </p>Ye. V. Bodyanskiy, Ye. O. Shafronenko, F. A. Brodetskyi, O. S. Tanianskyi
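The sliding-window idea can be illustrated with the standard batch LS-SVM formulation: on each window of the last W samples, solve the linear system [[0, yᵀ], [y, Ω + I/γ]]·[b; α] = [0; 1], where Ω_ij = y_i y_j K(x_i, x_j). The sketch below is a rough stand-in (a full re-solve per window rather than the authors' adaptive recursive update, and vector rather than matrix inputs); the RBF kernel and the γ, σ values are assumptions.

```python
import math

def rbf(a, b, sigma=1.0):
    """Gaussian (RBF) kernel."""
    return math.exp(-sum((p - q) ** 2 for p, q in zip(a, b)) / (2 * sigma ** 2))

def solve(A, rhs):
    """Gaussian elimination with partial pivoting (pure Python)."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1]."""
    n = len(X)
    A = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n):
        A[0][i + 1] = y[i]
        A[i + 1][0] = y[i]
        for j in range(n):
            A[i + 1][j + 1] = y[i] * y[j] * rbf(X[i], X[j], sigma)
        A[i + 1][i + 1] += 1.0 / gamma      # ridge term I/gamma
    sol = solve(A, [0.0] + [1.0] * n)
    return sol[0], sol[1:]                  # bias b, multipliers alpha

def lssvm_predict(x, X, y, b, alpha, sigma=1.0):
    s = b + sum(a * yi * rbf(xi, x, sigma) for a, yi, xi in zip(alpha, y, X))
    return 1 if s >= 0 else -1

# A sliding window of the last W samples keeps the model size fixed:
W = 6
stream = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1),
          ([3, 3], 1), ([3, 4], 1), ([4, 3], 1)]
X = [s[0] for s in stream][-W:]
y = [s[1] for s in stream][-W:]
b, alpha = lssvm_train(X, y)
```

Because the window length is fixed, the system size (and hence the "architecture") never grows as new samples arrive, which is the property the abstract emphasizes.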
Copyright (c) 2025 Ye. V. Bodyanskiy, Ye. O. Shafronenko, F. A. Brodetskyi, O. S. Tanianskyi
https://creativecommons.org/licenses/by-sa/4.0
2025-09-22 | 2025-09-22 | No. 3 | pp. 45–51 | DOI: 10.15588/1607-3274-2025-3-5
METHOD OF PARALLEL HYBRID SEARCH FOR LARGE-SCALE CODE REPOSITORIES
https://ric.zp.edu.ua/article/view/339142
<p><span class="fontstyle0"><strong>Context</strong>. </span><span class="fontstyle2">Modern software systems contain extensive and growing codebases, making code retrieval a critical task for software engineers. Traditional code search methods rely on keyword-based matching or structural analysis but often fail to capture the semantic intent of user queries or struggle with unstructured and inconsistently documented code. Recently, semantic vector search and large language models (LLMs) have shown promise in enhancing code understanding. The problem is to design a scalable, accurate, hybrid code search method capable of retrieving relevant code snippets based on both textual queries and semantic context, while supporting parallel processing and metadata enrichment.<br /></span><span class="fontstyle0"><strong>Objective</strong>. </span><span class="fontstyle2">The goal of the study is to develop a hybrid method for semantic code search by combining keyword-based filtering and embedding-based retrieval enhanced with LLM-generated summaries and semantic tags. The aim is to improve accuracy and efficiency in locating relevant code elements across large code repositories.<br /></span><strong><span class="fontstyle0">Method. </span></strong><span class="fontstyle2">A two-path search method with post-processing is proposed, in which textual keyword search and embedding-based semantic search are executed in parallel. Code blocks are preprocessed using the GPT-4o model to generate natural-language summaries and semantic tags.<br /></span><span class="fontstyle0"><strong>Results</strong>. </span><span class="fontstyle2">The method has been implemented and validated on a .NET codebase, demonstrating improved precision in retrieving semantically relevant methods. The combination of parallel search paths and LLM-generated metadata enhanced both result quality and responsiveness. Additionally, LLM post-processing was applied to the most relevant results, enabling more precise identification of the code lines matching the query within the retrieved snippets. Other results can be refined further on demand.<br /></span><span class="fontstyle0"><strong>Conclusions</strong>. </span><span class="fontstyle2">Experimental findings confirm the operability and practical applicability of the proposed hybrid code search framework. The system’s modular architecture supports real-time developer workflows, and its extensibility enables future improvements through active learning and user feedback. Further research may focus on optimizing embedding selection strategies, integrating automatic query rewriting, and scaling across polyglot code environments.</span></p>V. O. Boiko
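The two-path design (keyword and embedding search running in parallel, then merged) can be sketched as follows. This is a toy illustration: the three-entry corpus, the bag-of-words "embeddings", and the reciprocal-rank-fusion merge are stand-ins for the paper's actual GPT-4o summaries, vector index, and ranking pipeline.

```python
import math
from concurrent.futures import ThreadPoolExecutor

CORPUS = {  # hypothetical LLM-generated summaries keyed by method id
    "m1": "parse json config file",
    "m2": "send http request with retry",
    "m3": "read config settings from json",
}

def keyword_search(query):
    """Rank by the number of query terms present in the summary."""
    terms = query.split()
    scored = {d: sum(t in text.split() for t in terms) for d, text in CORPUS.items()}
    return [d for d, s in sorted(scored.items(), key=lambda kv: -kv[1]) if s > 0]

def embedding_search(query):
    """Rank by cosine similarity of bag-of-words vectors (toy 'embedding')."""
    q = set(query.split())
    def cos(text):
        d = set(text.split())
        return len(q & d) / (math.sqrt(len(q)) * math.sqrt(len(d)))
    scored = {doc: cos(text) for doc, text in CORPUS.items()}
    return [d for d, s in sorted(scored.items(), key=lambda kv: -kv[1]) if s > 0]

def hybrid_search(query, k=60):
    """Run both paths in parallel and merge with reciprocal rank fusion."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        kw, emb = pool.map(lambda f: f(query), (keyword_search, embedding_search))
    fused = {}
    for ranking in (kw, emb):
        for rank, doc in enumerate(ranking, start=1):
            fused[doc] = fused.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(fused, key=fused.get, reverse=True)

results = hybrid_search("json config")
```

Reciprocal rank fusion is one common way to merge heterogeneous rankings without calibrating their scores against each other; documents that appear in neither ranking are simply dropped.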
Copyright (c) 2025 V. O. Boiko
https://creativecommons.org/licenses/by-sa/4.0
2025-09-22 | 2025-09-22 | No. 3 | pp. 52–63 | DOI: 10.15588/1607-3274-2025-3-6
URBAN SCENE SEGMENTATION USING HOMOGENEOUS U-NET ENSEMBLE: A STUDY ON THE CITYSCAPES DATASET
https://ric.zp.edu.ua/article/view/339153
<p><span class="fontstyle0"><strong>Context</strong>. </span><span class="fontstyle2">Semantic segmentation plays a critical role in computer vision tasks such as autonomous driving and urban scene understanding. While designing new model architectures can be complex, improving performance through ensemble techniques applied to existing models has shown promising potential. This paper investigates ensemble learning as a strategy to enhance segmentation accuracy without modifying the underlying U-Net architecture.<br /></span><span class="fontstyle0"><strong>Objective</strong>. </span><span class="fontstyle2">The aim of this work is to develop and evaluate a homogeneous ensemble of U-Net models trained with distinct initialization and data augmentation techniques, and to assess the effectiveness of various ensemble aggregation strategies in improving segmentation performance on a complex urban dataset.<br /></span><span class="fontstyle0"><strong>Method</strong>. </span><span class="fontstyle2">The proposed approach constructs an ensemble of five structurally identical U-Net models, each trained with a unique weight initialization and augmentation scheme to ensure prediction diversity. Several ensemble strategies are examined, including softmax averaging, max voting, proportional weighting, exponential weighting, and optimized weighted voting. Evaluation is conducted on the Cityscapes dataset using a range of segmentation metrics.<br /></span><span class="fontstyle0"><strong>Results</strong>. </span><span class="fontstyle2">Experimental findings demonstrate that the ensemble models outperform individual U-Net instances and the baseline in terms of accuracy, mean IoU, and specificity. The optimized weighted ensemble achieved the highest accuracy (87.56%) and mean IoU (0.6504), exceeding the best individual model by approximately 3%. However, these improvements come with a notable increase in inference time, highlighting a trade-off between accuracy and computational efficiency.<br /></span><span class="fontstyle0"><strong>Conclusions</strong>. </span><span class="fontstyle2">The ensemble-based approach effectively enhances segmentation accuracy while leveraging existing model architectures. Although the increased computational cost presents a limitation for real-time applications, the method is well suited to high-precision tasks. Future research will focus on reducing inference time and extending the ensemble methodology to other architectures and datasets.</span></p>I. O. Hmyria, N. S. Kravets
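Some of the aggregation strategies named above (softmax averaging, max voting, weighted voting) can be illustrated at the level of a single pixel. The toy probability vectors below are hand-made for illustration; the actual models operate on full Cityscapes prediction maps, and the optimized weights are tuned on validation data rather than chosen by hand.

```python
from collections import Counter

def argmax(probs):
    return max(range(len(probs)), key=lambda c: probs[c])

def softmax_averaging(model_probs):
    """Average class probabilities over models, then take the argmax."""
    n = len(model_probs)
    avg = [sum(p[c] for p in model_probs) / n for c in range(len(model_probs[0]))]
    return argmax(avg)

def max_voting(model_probs):
    """Each model votes for its argmax class; the most common class wins."""
    votes = Counter(argmax(p) for p in model_probs)
    return votes.most_common(1)[0][0]

def weighted_voting(model_probs, weights):
    """Weighted average of probabilities, e.g. with weights tuned on validation data."""
    total = [sum(w * p[c] for w, p in zip(weights, model_probs))
             for c in range(len(model_probs[0]))]
    return argmax(total)

# Three models' class probabilities for one pixel (3 classes):
pixel = [[0.60, 0.30, 0.10],
         [0.20, 0.50, 0.30],
         [0.55, 0.35, 0.10]]
```

Note how the strategies can disagree: with equal influence the pixel is labeled class 0, but up-weighting the second model flips the decision to class 1, which is exactly the degree of freedom the optimized weighted ensemble exploits.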
Copyright (c) 2025 I. O. Hmyria, N. S. Kravets
https://creativecommons.org/licenses/by-sa/4.0
2025-09-22 | 2025-09-22 | No. 3 | pp. 64–76 | DOI: 10.15588/1607-3274-2025-3-7
A NEURAL NETWORK APPROACH TO SEMANTIC SEGMENTATION OF VEHICLES IN VERY HIGH RESOLUTION IMAGES
https://ric.zp.edu.ua/article/view/339302
<p><span class="fontstyle0"><strong>Context</strong>. </span><span class="fontstyle2">The semantic segmentation of vehicles in very high resolution aerial images is essential in developing intelligent transportation systems. It allows for the automation of real-time traffic management and the detection of congestion and emergencies.<br /></span><span class="fontstyle0"><strong>Objective</strong>. </span><span class="fontstyle2">This work aims to develop and evaluate the effectiveness of a neural network approach to semantic segmentation in very high resolution aerial images that provides high detail and the correct reproduction of object boundaries.<br /></span><span class="fontstyle0"><strong>Method</strong>. </span><span class="fontstyle2">The DeepLab architecture with ResNet-101 as a backbone is used for gradient preservation and multiscale feature analysis. We trained on DOTA data and retrained on specialized sets with the classes vehicles, green areas, buildings, and roads. A loss function based on the Dice coefficient was applied; it effectively mitigates the class imbalance problem and improves the accuracy of segmenting objects of different sizes. Using ResNet-101 instead of Xception in the backbone network allows the gradient to be maintained as the network depth increases.<br /></span><span class="fontstyle0"><strong>Results</strong>. </span><span class="fontstyle2">Experimental studies have confirmed the effectiveness of the proposed approach, which achieves a segmentation accuracy of more than 90%, outperforming existing analogs. The use of multiscale feature analysis preserves the texture features of objects and reduces false classifications. A comparative study with U-Net, SegNet, FCN8s, and other methods confirms the higher performance of the proposed approach in terms of mIoU (82.3%) and Pixel Accuracy (95.1%).<br /></span><span class="fontstyle0"><strong>Conclusions</strong>. </span><span class="fontstyle2">The experiments confirm the effectiveness of the proposed method for the semantic segmentation of vehicles in ultrahigh spatial resolution images. Using DeepLab v3+ with ResNet-101 significantly improves the quality of vehicle segmentation in an urbanized environment. The excellent metric performance makes it promising for infrastructure monitoring and traffic planning tasks. Further research will focus on adapting the model to new datasets.</span></p>V. Yu. Kashtan, V. V. Hnatushenko, I. M. Udovyk, O. V. Kazymyrenko, Y. D. Radionov
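The Dice-based loss used to counter class imbalance can be written compactly as 1 − 2|P∩T| / (|P| + |T|) over flattened masks. A minimal sketch follows; the smoothing constant is a common default assumed here, not necessarily the authors' exact choice.

```python
def dice_loss(pred, target, smooth=1.0):
    """Soft Dice loss over flattened masks.

    pred holds per-pixel foreground probabilities, target holds 0/1 labels.
    The smoothing term keeps the loss defined for empty masks.
    """
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2.0 * inter + smooth) / (sum(pred) + sum(target) + smooth)


perfect = dice_loss([1.0, 1.0, 0.0, 0.0], [1, 1, 0, 0])   # complete overlap
disjoint = dice_loss([1.0, 0.0], [0, 1])                  # no overlap
```

Because the loss is normalized by the total mask area rather than the image size, small objects such as vehicles contribute as strongly as large background regions, which is why it helps with class imbalance.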
Copyright (c) 2025 V. Yu. Kashtan, V. V. Hnatushenko, I. M. Udovyk, O. V. Kazymyrenko, Y. D. Radionov
https://creativecommons.org/licenses/by-sa/4.0
2025-09-22 | 2025-09-22 | No. 3 | pp. 77–85 | DOI: 10.15588/1607-3274-2025-3-8
MULTI-SCALE TEMPORAL GAN-BASED METHOD FOR HIGH-RESOLUTION AND MOTION-STABLE VIDEO ENHANCEMENT
https://ric.zp.edu.ua/article/view/339309
<p><strong> <span class="fontstyle0">Context</span></strong><span class="fontstyle1">. The problem of improving the quality of video images is relevant in many areas, including video analytics, film production, telemedicine, and surveillance systems. Traditional video processing methods often lead to loss of detail, blurring, and artifacts, especially when working with fast movements. The use of generative neural networks makes it possible to preserve textural features and improve the consistency between frames; however, existing methods have shortcomings in maintaining temporal stability and the quality of detail restoration.<br /></span><strong><span class="fontstyle0">Objective</span></strong><span class="fontstyle1">. The object of the study is the process of generating and improving video images using deep generative neural networks. The purpose of the work is to develop and study MST-GAN (Multi-Scale Temporal GAN), which preserves both the spatial and the temporal consistency of video using multi-scale feature alignment, optical flow regularization, and a temporal discriminator.<br /></span><strong><span class="fontstyle0">Method</span></strong><span class="fontstyle1">. A new method based on the GAN architecture is proposed, which includes: multi-scale feature alignment (MSFA), which corrects shifts between neighboring frames at different levels of detail; a residual feature boosting module to restore lost details after alignment; optical flow regularization, which minimizes sudden changes in motion and prevents artifacts; and a temporal discriminator that learns to evaluate the sequence of frames, providing consistent video without flickering and distortion.<br /></span><span class="fontstyle0"><strong>Results</strong>. </span><span class="fontstyle1">An experimental study of the proposed method was conducted on several datasets, and the method was compared with modern analogues using the SSIM, PSNR, and LPIPS metrics. The results show that the proposed method outperforms existing methods, providing better frame detail and more stable transitions between frames.<br /></span><span class="fontstyle0"><strong>Conclusions</strong>. </span><span class="fontstyle1">The proposed method provides improved video quality by combining detail recovery accuracy and temporal frame consistency.</span></p>M. R. Maksymiv, T. Y. Rak
Copyright (c) 2025 M. R. Maksymiv, T. Y. Rak
https://creativecommons.org/licenses/by-sa/4.0
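The optical-flow regularization and temporal discriminator in the abstract above both hinge on comparing a frame with its motion-compensated predecessor. A minimal NumPy sketch of such a temporal-consistency penalty is given below; this illustrates the general idea only, not the authors' MST-GAN loss, and the integer nearest-neighbor warp and L1 norm are assumptions:

```python
import numpy as np

def warp_with_flow(frame, flow):
    """Warp a frame by an integer optical-flow field (nearest-neighbor).

    frame: (H, W) array; flow: (H, W, 2) array of (dy, dx) displacements
    from the previous frame to the current one.
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys - flow[..., 0].round().astype(int), 0, h - 1)
    src_x = np.clip(xs - flow[..., 1].round().astype(int), 0, w - 1)
    return frame[src_y, src_x]

def temporal_consistency_loss(prev_frame, cur_frame, flow):
    """L1 penalty between the current frame and the flow-warped previous
    frame; large values indicate flicker between consecutive frames."""
    warped = warp_with_flow(prev_frame, flow)
    return float(np.abs(cur_frame - warped).mean())
```

A static scene with correct flow yields zero penalty, while ignoring motion leaves a residual that a training loop would penalize.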
2025-09-22 | No. 3, pp. 86–95 | DOI: 10.15588/1607-3274-2025-3-9 | HYBRID MACHINE LEARNING TECHNOLOGIES FOR PREDICTING COMPREHENSIVE ACTIVITIES OF INDUSTRIAL PERSONNEL USING SMARTWATCH DATA
https://ric.zp.edu.ua/article/view/339396
<p><span class="fontstyle0"><strong>Context</strong>. </span><span class="fontstyle2">In today’s industrial development, significant attention is paid to systems for recognizing and predicting human activity in real time. Such technologies are key to the transition from Industry 4.0 to Industry 5.0, as they improve interaction between humans and machines and ensure a higher level of safety, adaptability and efficiency of production processes. These approaches are particularly relevant in internal logistics, where cooperation with autonomous vehicles requires a high level of coordination and adaptability.<br /></span><span class="fontstyle0"><strong>Objective</strong>. </span><span class="fontstyle2">To create a technological solution for the prompt detection and prediction of complex human activity in the internal logistics environment using sensor data from smartwatches. The main goal is to improve cooperation between employees and automated systems and to increase occupational safety and the efficiency of logistics processes.<br /></span><span class="fontstyle0"><strong>Method</strong>. </span><span class="fontstyle2">A decentralized data collection system based on smartwatches has been developed. A mobile application in Kotlin was created to capture sensor readings while five workers performed a series of logistics actions. To process incomplete or distorted data, anomaly detection algorithms were applied, including STD, logarithmic transformation of STD, DBSCAN, and IQR, as well as smoothing methods such as the moving average, weighted moving average, exponential smoothing, local regression, and the Savitzky–Golay filter. The processed data were used to train models, employing such advanced techniques as transfer learning, continuous wavelet transform, and classifier stacking.<br /></span><span class="fontstyle0"><strong>Results</strong>. 
</span><span class="fontstyle2">The pre-trained deep model with the DenseNet121 architecture was chosen as the base classifier; it achieved an F1-metric of 91.01% in recognizing simple actions. Five neural network architectures (single- and multi-layer) with two data distribution strategies were tested for analyzing complex activity. The highest accuracy, an F1-metric of 87.44%, was demonstrated by the convolutional neural network with a joint approach to data distribution.<br /></span><span class="fontstyle0"><strong>Conclusions</strong>. </span><span class="fontstyle2">The results indicate that the proposed technology can be applied to real-time recognition of complex human activities in intra-logistics systems based on smartwatch sensor data, which will improve human-machine interaction and increase the efficiency of industrial logistics processes.</span></p>O. M. Pavliuk, M. O. Medykovskyy, M. V. Mishchuk, A. O. Zabolotna, O. V. Litovska
Copyright (c) 2025 O. M. Pavliuk, M. O. Medykovskyy, M. V. Mishchuk, A. O. Zabolotna, O. V. Litovska
https://creativecommons.org/licenses/by-sa/4.0
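The anomaly-detection and smoothing stages listed in the abstract above can be sketched in a few lines; below are the IQR outlier rule and a centered moving average in plain NumPy (the window size and IQR multiplier are illustrative defaults, not the paper's actual settings):

```python
import numpy as np

def iqr_outlier_mask(x, k=1.5):
    """Mark samples outside [Q1 - k*IQR, Q3 + k*IQR] as anomalies."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

def moving_average(x, window=5):
    """Centered moving average used to smooth raw sensor readings."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")
```

On a flat accelerometer trace with a single spike, the mask flags only the spike, and smoothing spreads its energy over the window.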
2025-09-22 | No. 3, pp. 96–111 | DOI: 10.15588/1607-3274-2025-3-10 | HIERARCHICAL MACHINE LEARNING SYSTEM FOR FUNCTIONAL DIAGNOSIS OF EYE PATHOLOGIES BASED ON THE INFORMATION-EXTREMAL APPROACH
https://ric.zp.edu.ua/article/view/339405
<p><span class="fontstyle0"><strong>Context</strong>. </span><span class="fontstyle2">The task of information-extremal machine learning for the diagnosis of eye pathologies based on the characteristic signs of diseases is considered. The object of the study is the process of hierarchical machine learning in the system for diagnosing ophthalmological diseases. The aging population and the increasing prevalence of eye diseases, such as glaucoma, optic nerve atrophy, retinal detachment, and diabetic retinopathy, necessitate effective methods for early diagnosis to prevent vision loss. Traditional diagnostic methods largely rely on the experience of the physician, which can lead to errors. The use of artificial intelligence (AI) and machine learning (ML) can significantly improve the accuracy and speed of diagnosis, making this topic highly relevant.<br /></span><span class="fontstyle0"><strong>Objective</strong>. </span><span class="fontstyle2">To enhance the functional efficiency of a computerized system for diagnosing eye pathologies based on image data.<br /></span><span class="fontstyle0"><strong>Method</strong>. </span><span class="fontstyle2">A method of information-extremal hierarchical machine learning for a system of eye pathology diagnosis based on the characteristic signs of diseases is proposed. The method is based on a functional approach to modeling cognitive processes of natural intelligence, ensuring the adaptability of the diagnostic system under any initial conditions for the formation of pathology images and allowing flexible retraining of the system when the recognition class alphabet expands. The foundation of the method is the principle of maximizing the criterion of functional efficiency based on a modified Kullback information measure, which is a functional of the diagnostic rule precision characteristics. 
The learning process is considered as an iterative procedure for optimizing the parameters of the diagnostic system’s operation according to this information criterion. Based on the proposed categorical functional model, an information-extremal machine learning algorithm with a hierarchical data structure in the form of a binary recursive tree is developed. This data structure enables the division of a large number of recognition classes into pairs of nearest neighbors, for which the machine learning parameters are optimized using a linear algorithm of the necessary depth.<br /></span><span class="fontstyle0"><strong>Results</strong>. </span><span class="fontstyle2">An intelligent technology for diagnosing eye pathologies has been developed, which includes a comprehensive set of information, algorithmic, and software components. A comparative analysis of the effectiveness of different methods for organizing decision rules during system training has been conducted. It was found that the use of recursive hierarchical classifier structures allows achieving higher diagnostic accuracy compared to binary classifiers.<br /></span><span class="fontstyle0"><strong>Conclusions</strong>. </span><span class="fontstyle2">The developed intelligent computer-based diagnostic system for eye pathologies demonstrates high efficiency and accuracy. The implementation of such a system in medical practice could significantly improve the quality of eye disease diagnostics, reduce the workload on physicians, and minimize the risk of misdiagnosis. Further research could focus on refining algorithms and expanding their application to other types of medical images.</span></p>I. V. Shelehov, D. V. Prylepa, Y. O. Khibovska, O. A. Tymchenko
Copyright (c) 2025 I. V. Shelehov, D. V. Prylepa, Y. O. Khibovska, O. A. Tymchenko
https://creativecommons.org/licenses/by-sa/4.0
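The abstract above optimizes a modified Kullback information measure as a functional of the precision characteristics of the diagnostic rule. One common form of such a criterion, expressed through the first- and second-kind error rates, can be sketched as follows; the exact formula is an illustrative assumption, not necessarily the authors' variant:

```python
import numpy as np

def kullback_criterion(alpha, beta, eps=1e-12):
    """Illustrative modified Kullback criterion used in
    information-extremal learning: it grows as the first-kind (alpha)
    and second-kind (beta) error rates of the decision rule shrink.
    The exact functional form here is an assumption."""
    e = alpha + beta  # total error of the decision rule
    return 0.5 * (2.0 - e) * np.log2((2.0 - e + eps) / (e + eps))
```

Training then iterates over the system parameters (e.g., control tolerances and container radii of the recognition classes) and keeps the values that maximize this criterion.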
2025-09-22 | No. 3, pp. 112–125 | DOI: 10.15588/1607-3274-2025-3-11 | AN INNOVATIVE APPROXIMATE SOLUTION METHOD FOR AN INTEGER PROGRAMMING PROBLEM
https://ric.zp.edu.ua/article/view/339539
<p><strong> <span class="fontstyle0">Context</span></strong><span class="fontstyle1">. There are exact methods for finding the optimal solution to integer programming problems. However, these methods cannot solve large-scale problems in real time. Therefore, fast approximate methods for these problems have been proposed, but the solutions they produce often differ significantly from the optimal one. This raises the problem of taking any known approximate solution as the initial solution and improving it further.<br /></span><span class="fontstyle0"><strong>Objective</strong>. </span><span class="fontstyle1">Initially, a certain approximate solution is found. Then, based on proven theorems, the coordinates of this solution that do not coincide with the optimal solution are determined. After that, new solutions are found by sequentially changing these coordinates. The one that gives the largest value of the objective functional among these solutions is accepted as the final solution.<br /></span><span class="fontstyle0"><strong>Method</strong>. </span><span class="fontstyle1">The method proposed in this work is implemented as follows: first, a certain approximate solution to the problem is established; then the indices of the coordinates of this solution that do not coincide with the optimal solution are determined. After that, new solutions are constructed by sequentially assigning values to these coordinates, one by one, within their intervals. The best of the solutions found in this process is accepted as the final solution.<br /></span><span class="fontstyle0"><strong>Results</strong>. </span><span class="fontstyle1">A problem was solved to visually illustrate the quality and effectiveness of the proposed method.<br /></span><span class="fontstyle0"><strong>Conclusions</strong>. 
</span><span class="fontstyle1">The method proposed in this article cannot give worse results than the underlying approximate method, is algorithmically simple, novel, easily programmed, and important for solving real practical problems.</span></p>K. Sh. Mamedov, R. R. Niyazova
Copyright (c) 2025 K. Sh. Mamedov, R. R. Niyazova
https://creativecommons.org/licenses/by-sa/4.0
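The idea of taking an approximate solution and improving it coordinate by coordinate can be sketched for a 0-1 knapsack instance. This is a simplified illustration: the greedy start and exhaustive single-coordinate re-assignment below stand in for the paper's theorem-guided choice of which coordinates to change, which is not reproduced here:

```python
def greedy_knapsack(values, weights, capacity):
    """Greedy value/weight heuristic giving the initial approximate solution."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    x, load = [0] * len(values), 0
    for i in order:
        if load + weights[i] <= capacity:
            x[i], load = 1, load + weights[i]
    return x

def improve_by_coordinates(x, values, weights, capacity):
    """Sequentially re-assign each coordinate of the approximate solution
    and keep any feasible neighbour with a larger objective value,
    repeating until no single coordinate change helps."""
    def score(sol):
        w = sum(wi * xi for wi, xi in zip(weights, sol))
        return sum(vi * xi for vi, xi in zip(values, sol)) if w <= capacity else -1
    best, best_val = list(x), score(x)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            cand = list(best)
            cand[i] = 1 - cand[i]  # 0-1 variables: re-assignment is a flip
            if score(cand) > best_val:
                best, best_val, improved = cand, score(cand), True
    return best, best_val
```

By construction the result is never worse than the starting approximate solution, matching the claim in the conclusions.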
2025-09-22 | No. 3, pp. 195–205 | DOI: 10.15588/1607-3274-2025-3-18 | METHOD FOR STUDYING THE TIME-SHIFTED MATHEMATICAL MODEL OF A TWO-FRAGMENT SIGNAL WITH NONLINEAR FREQUENCY MODULATION
https://ric.zp.edu.ua/article/view/338536
<p><span class="fontstyle0"><strong>Context</strong>. </span><span class="fontstyle2">The further development of the theory and techniques for forming and processing complex radar signals encompasses both the study of existing mathematical models of probing radio signals and the creation of new ones. One of the directions of such research focuses on reducing the maximum side lobe level in the autocorrelation functions of signals with intra-pulse modulation of frequency or phase. In this context, the instantaneous frequency may vary according to either a linear or nonlinear law. Nonlinear frequency modulation laws can reduce the maximum level of side lobes without introducing amplitude modulation in the output signal of the radio transmitting device and, consequently, without causing power loss in the sensing signals. The widespread implementation of nonlinear-frequency-modulated signals in radar technology is constrained by the insufficient development of their mathematical models. Therefore, the development of methods for analyzing existing mathematical models of signals with nonlinear frequency modulation remains an urgent scientific task.<br></span><span class="fontstyle0"><strong>Objective</strong>. </span><span class="fontstyle2">The purpose of this work is to develop a method for conducting research to evaluate the advantages and disadvantages of a mathematical model of a nonlinear-frequency-modulated signal consisting of two fragments with linear frequency modulation.<br></span><span class="fontstyle0"><strong>Method</strong>. </span><span class="fontstyle2">This study proposes a method for analyzing mathematical models of signals based on the transition from a shifted time scale to the current time scale. The methodology consists of the following main stages: a formalized description of mathematical models, transition to an alternative time scale, identification of components and determination of their physical essence, and a comparative analysis. 
The proposed method was validated through simulation modeling.<br></span><span class="fontstyle0"><strong>Results</strong>. </span><span class="fontstyle2">Using the proposed method, it has been determined that the mathematical operation of time-scale shifting is equivalent to introducing additional components into the mathematical model. These components simultaneously and automatically compensate for the frequency jump at the junction of the fragments and introduce an additional linear phase increment in the second linearly frequency-modulated fragment. This approach clearly illustrates the frequency-jump compensation mechanism in the studied mathematical model. The applied method enabled the identification of a drawback in the examined mathematical model, namely the absence of a compensating component for the instantaneous phase jump during the transition from the first LFM fragment to the second.<br></span><span class="fontstyle0"><strong>Conclusions</strong>. </span><span class="fontstyle2">A method has been developed to determine the essence and corresponding influence of the components of a mathematical model of a time-shifted nonlinear-frequency-modulated signal consisting of two fragments with linear frequency modulation. The model under study is not entirely accurate, as it lacks a component to compensate for the phase jump at the transition from the first signal fragment to the second. Introducing such a component ensures a further reduction in the maximum side-lobe level of the signal’s autocorrelation function.<br></span></p>O. O. Kostyria, A. A. Hryzo, I. M. Trofymov, O. I. Liashenko, Ye. V. Biernik
Copyright (c) 2025 O. O. Kostyria, A. A. Hryzo, I. M. Trofymov, O. I. Liashenko, Ye. V. Biernik
https://creativecommons.org/licenses/by-sa/4.0
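The compensation idea discussed above, carrying the junction frequency and the accumulated phase of the first LFM fragment into the second, can be illustrated with a minimal phase model. This sketch shows only the general mechanism of frequency- and phase-jump compensation at the junction; the specific compensating terms of the authors' time-shifted model are not reproduced:

```python
import numpy as np

def two_fragment_nlfm_phase(t, f0, b1, b2, t1):
    """Instantaneous phase (in cycles) of a two-fragment LFM signal.

    Fragment 1 (t < t1): phase1(t) = f0*t + b1*t^2/2.
    Fragment 2 reuses the junction frequency f0 + b1*t1 and the
    accumulated phase at t1, so both instantaneous frequency and phase
    stay continuous across the junction (illustrative model)."""
    t = np.asarray(t, dtype=float)
    phase1 = f0 * t + 0.5 * b1 * t ** 2
    f_junction = f0 + b1 * t1                       # frequency-jump compensation
    phase_junction = f0 * t1 + 0.5 * b1 * t1 ** 2   # phase-jump compensation
    tau = t - t1
    phase2 = phase_junction + f_junction * tau + 0.5 * b2 * tau ** 2
    return np.where(t < t1, phase1, phase2)
```

Dropping the `phase_junction` term reproduces exactly the drawback identified in the abstract: an instantaneous phase jump at the transition between fragments.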
2025-09-22 | No. 3, pp. 6–16 | DOI: 10.15588/1607-3274-2025-3-1 | SIMPLE, FAST AND SCALABLE RECOMMENDATION SYSTEMS VIA EXTERNAL KNOWLEDGE DISTILLATION
https://ric.zp.edu.ua/article/view/339416
<p><span class="fontstyle0"><strong>Context</strong>. </span><span class="fontstyle2">Recommendation systems are important tools for modern businesses: they generate additional income by proposing relevant goods to clients and help build customer loyalty. With the emergence of deep learning and the evolution of hardware capabilities, it became possible to capture customer behavioral patterns in a data-driven way. However, prediction accuracy depends on system complexity, which in turn increases the delay of the model’s output. The object of the study is the task of issuing sequential recommendations, namely the next most relevant product, subject to restrictions on system response time.<br></span><span class="fontstyle0"><strong>Objective</strong>. </span><span class="fontstyle2">The goal of the research is the synthesis of a deep neural network that can retrieve relevant items for a large portion of users with minimal delay.<br></span><span class="fontstyle0"><strong>Method</strong>. </span><span class="fontstyle2">The proposed method for building recommendation systems leverages a mixture of Attention-based deep learning model architectures and applies knowledge graphs to enhance prediction quality via explicit enrichment of the recommendation candidate pool; it demonstrates the benefits of decoder-only models and the knowledge distillation framework. The latter approach was shown to deliver outstanding performance on the recommendation retrieval task while responding quickly for large user batch processing.<br></span><span class="fontstyle0"><strong>Results</strong>. </span><span class="fontstyle2">A model of a recommender system and a method for its training are proposed, combining the knowledge distillation paradigm and learning on knowledge graphs. The proposed method was implemented as a two-tower deep neural network to solve the recommendation retrieval problem. 
A system for predicting the most relevant proposals for the user has been built, which includes the proposed model and its training method, as well as the ranking indicators MAP@k and NDCG@k to assess model quality. A program implementing the proposed architecture of the recommendation system has been developed and used to study the problem of issuing the most relevant proposals. Experiments on a large amount of real data from user visits to an online retail store showed that the proposed method for designing recommender systems guarantees high relevance of the issued recommendations and is fast and undemanding of computing resources at the stage of serving responses.<br></span><span class="fontstyle0"><strong>Conclusions</strong>. </span><span class="fontstyle2">A series of experiments confirmed that the proposed system effectively solves the problem in a short period of time, which is a strong argument in favor of its use in real conditions by large businesses that handle millions of visits per month and thousands of products. Prospects for further research within this topic include the use of other knowledge distillation methods, such as internal or self-distillation, the use of deep learning architectures other than the attention mechanism, and optimization of embedding vector storage.</span></p>D. V. Androsov, N. I. Nedashkovskaya
Copyright (c) 2025 D. V. Androsov, N. I. Nedashkovskaya
https://creativecommons.org/licenses/by-sa/4.0
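The distillation framework described above trains a small, fast student to match a larger teacher. The standard distillation objective, KL divergence between temperature-softened score distributions, can be sketched as follows (a generic formulation, not the authors' exact loss; the temperature value is an assumption):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over item scores."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0, eps=1e-12):
    """KL(teacher || student) over softened item-score distributions:
    the student two-tower model is pushed toward the teacher's soft
    ranking of candidate items."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```

The loss is zero when the student already reproduces the teacher's score distribution and positive otherwise, so gradient descent on it transfers the teacher's ranking behavior.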
2025-09-22 | No. 3, pp. 126–137 | DOI: 10.15588/1607-3274-2025-3-12 | INFORMATION TECHNOLOGY FOR DETECTION OF DISINFORMATION SOURCES AND INAUTHENTIC BEHAVIOR OF CHAT USERS BASED ON NLP AND MACHINE LEARNING METHODS
https://ric.zp.edu.ua/article/view/339471
<p><span class="fontstyle0"><strong>Context</strong>. </span><span class="fontstyle2">In the modern digital environment, the spread of disinformation and inauthentic behaviour of users in chat rooms poses a serious threat to society. Natural language processing and machine learning methods offer effective approaches to detecting and countering such threats.<br /></span><span class="fontstyle0"><strong>Objective</strong> </span><span class="fontstyle2"><strong>of the study</strong> is </span><span class="fontstyle2">to develop an information technology for automatically detecting sources of Ukrainian-language fake news and inauthentic behaviour of chat users, built using natural language processing methods and implemented with machine learning technologies.<br /></span><span class="fontstyle0"><strong>Method</strong>. </span><span class="fontstyle2">To implement the project, such feature construction methods as the TF-IDF statistical indicator, the Bag of Words vectorization model, and part-of-speech mark-up were used. For other experiments, the FastText, Word2Vec, and GloVe vectorization models were used to obtain vector representations of words, as well as to recognize trigger words (reinforcing words, absolute pronouns, and “shiny” words). The idea is to find messages that are similar in text or meaning (lexically or semantically), and to analyse how similar messages are distributed in time and space. Complement Naïve Bayes, Gaussian Naïve Bayes, HistGradientBoostingClassifier, MultinomialNB and Random Forest were used as the main modelling algorithms to identify sources of disinformation and inauthentic chat behavior.<br /></span><span class="fontstyle0"><strong>Results</strong>. </span><span class="fontstyle2">This article discusses the development of software for detecting propaganda messages in social networks based on the analysis of Twitter text data. 
The main attention is paid to methods of text pre-processing, data vectorization and machine learning for message classification. The process of collecting, preparing and cleaning data is described, and various approaches to training the model and evaluating its effectiveness are considered. Nine experiments were conducted for the selected combinations of pre-processing methods, vectorization models and modelling algorithms.<br /></span><span class="fontstyle0"><strong>Conclusions</strong>. </span><span class="fontstyle2">The created models show excellent results in recognizing sources of propaganda, fakes and disinformation in social networks and online media. The best results so far are shown by experiment 5, based on TF-IDF + Complement Naïve Bayes. The high recall for class 1 (0.8) means that the model finds positive samples well, but for class 0 it is less effective (0.56). The correspondingly high precision for class 1 (0.89) means that most samples predicted as class 1 are correct, while the low precision for class 0 (0.38) indicates a large number of false predictions. At the same time, certain anomalies are observed in the series of experiments (in particular, in experiment 7, based on GloVe + Random Forest), which require further research. The results obtained can be used to further improve algorithms for detecting sources of disinformation, inauthentic chat behaviour and malicious content, increasing information transparency in the country.</span></p>V. Vysotska
Copyright (c) 2025 V. Vysotska
https://creativecommons.org/licenses/by-sa/4.0
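The best-performing pipeline above pairs TF-IDF features with Complement Naïve Bayes. The TF-IDF stage can be sketched as a minimal dictionary-based vectorizer; this is a generic textbook formulation (the classifier stage and the paper's exact weighting variant are omitted):

```python
import math
from collections import Counter

def tfidf(docs):
    """Minimal TF-IDF vectorizer returning a dict of weights per document.

    tf = term count / document length; idf = ln(N / document frequency).
    Rare, message-specific words (e.g., trigger words) get higher weight
    than words shared across many messages."""
    n = len(docs)
    df = Counter(word for doc in docs for word in set(doc.split()))
    vectors = []
    for doc in docs:
        tf = Counter(doc.split())
        total = sum(tf.values())
        vectors.append({w: (c / total) * math.log(n / df[w])
                        for w, c in tf.items()})
    return vectors
```

These sparse weight dictionaries would then feed a classifier such as Complement Naïve Bayes for the class-imbalanced propaganda/non-propaganda split.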
2025-09-22 | No. 3, pp. 138–153 | DOI: 10.15588/1607-3274-2025-3-13 | CARDIAC SIGNAL PROCESSING WITH ALGORITHMS USING VARIABLE RESOLUTION
https://ric.zp.edu.ua/article/view/339496
<p><strong> <span class="fontstyle0">Context</span></strong><span class="fontstyle1">. The paper relates to the field of cardiac signal processing, in particular to the segmentation of the cardiac signal into cardiac cycles and to the detection of one of the most important features used in cardiac diagnosis, the </span><span class="fontstyle3">T</span><span class="fontstyle1">-wave end.<br /></span><span class="fontstyle0"><strong>Objective</strong>. </span><span class="fontstyle1">The purpose of the study is to develop an algorithm for processing the cardiac signal in the presence of interference that identifies the features necessary for diagnosis without distorting the original signal. Such distortion usually occurs when the signal is processed by band-pass digital filters to suppress interference, which can also cause the loss of diagnostic features.<br />The proposed </span><span class="fontstyle0"><strong>Method</strong> </span><span class="fontstyle1">involves representing the cardiac signal as part of an image contour. Cardiac signal processing begins with segmentation into cardiac cycles. Usually, </span><span class="fontstyle3">R</span><span class="fontstyle1">-waves are used for this purpose, i.e., the sequence of </span><span class="fontstyle3">R</span><span class="fontstyle1">-waves in the processed part of the cardiac signal is determined. When detecting an </span><span class="fontstyle3">R</span><span class="fontstyle1">-wave, a model is used that assumes an increase in the signal followed by a decrease, where the rate of increase (decrease) must be greater in absolute value than a certain predetermined threshold. For a selected segment of the cardiac signal, the sequence of </span><span class="fontstyle3">R</span><span class="fontstyle1">-waves is determined at different resolutions. 
The answer is the sequence repeated at the largest number of resolutions, and it is used to segment the cardiac signal into cardiac cycles. The </span><span class="fontstyle3">T</span><span class="fontstyle1">-wave model can be represented as a sequence of curved arcs without breaks. In one common case, the </span><span class="fontstyle3">T</span><span class="fontstyle1">-wave is determined by the largest maximum of the cardiac signal within the cardiac cycle following the R-wave, and the end of the </span><span class="fontstyle3">T</span><span class="fontstyle1">-wave by the first minimum after that maximum. As in segmentation, the </span><span class="fontstyle3">T</span><span class="fontstyle1">-wave maximum and the </span><span class="fontstyle3">T</span><span class="fontstyle1">-wave end are determined at different resolutions, and the answer is taken to be the values that coincide at the largest number of resolutions used.<br /></span><span class="fontstyle0"><strong>Results</strong>. </span><span class="fontstyle1">Algorithms for cardiac signal processing using variable resolution have been developed and experimentally verified, namely an algorithm for segmenting the cardiac signal into cardiac cycles and an algorithm for </span><span class="fontstyle3">T</span><span class="fontstyle1">-wave end detection, which is of great importance in cardiac diagnostics. Unlike traditional tools that filter the cardiac signal and thereby distort it and the processing result, tools based on the proposed algorithms do not change the processed signal.<br /></span><span class="fontstyle0"><strong>Conclusions</strong>. 
</span><span class="fontstyle1">The scientific novelty lies in the proposed algorithms for cardiac signal processing in the presence of interference using the variable resolution typical of visual perception. The practical significance is that tools using the proposed algorithms do not change the processed cardiac signal, unlike traditional filtering-based tools that distort the signal and hence the processing result. The use of the presented tools in clinical practice will improve the quality of cardiac diagnostics and, as a result, the quality of treatment.</span></p>V. G. Kalmykov, A. V. Sharypanov, V. V. Vishnevskey
Copyright (c) 2025 V. G. Kalmykov, A. V. Sharypanov, V. V. Vishnevskey
https://creativecommons.org/licenses/by-sa/4.0
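The R-wave model in the abstract above (a rise followed by a fall, both steeper than a threshold, confirmed across several resolutions by voting) can be sketched in NumPy. Decimation is used here as a stand-in for the paper's notion of resolution, and the factors and threshold are illustrative:

```python
import numpy as np

def rising_falling_peaks(x, min_slope):
    """Indices where the signal rises then falls, with slope magnitude
    above min_slope on both sides - the simple R-wave model."""
    d = np.diff(x)
    return [i for i in range(1, len(x) - 1)
            if d[i - 1] > min_slope and d[i] < -min_slope]

def peaks_at_resolutions(x, min_slope, factors=(1, 2, 4)):
    """Detect peaks at several resolutions (decimation factors) and keep
    the locations confirmed by the largest number of resolutions."""
    votes = {}
    for f in factors:
        for i in rising_falling_peaks(x[::f], min_slope):
            votes[i * f] = votes.get(i * f, 0) + 1  # map back to full scale
    if not votes:
        return []
    best = max(votes.values())
    return sorted(i for i, v in votes.items() if v == best)
```

Because no filtering is applied, the input samples are never modified, which is the key property the abstract emphasizes.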
2025-09-22 | No. 3, pp. 154–162 | DOI: 10.15588/1607-3274-2025-3-14 | METHODS FOR EVALUATING SOFTWARE ACCESSIBILITY
https://ric.zp.edu.ua/article/view/339503
<p><strong> <span class="fontstyle0">Context</span></strong><span class="fontstyle1">. The development and enhancement of methods for evaluating software accessibility is a relevant challenge in modern software engineering, as ensuring equal access to digital services is a key factor in improving their efficiency and inclusivity. The increasing digitalization of society necessitates the creation of software that complies with international accessibility standards such as ISO/IEC 25023 and WCAG. Adhering to these standards helps eliminate barriers to software use for individuals with diverse physical, sensory, and cognitive needs. Despite advancements in regulatory frameworks, existing accessibility evaluation methodologies are often generalized and fail to account for the specific needs of different user categories or the unique ways they interact with digital systems. This highlights the need for new, more detailed methods for defining metrics that influence the quality of user interaction with software products.<br /></span><strong><span class="fontstyle0">Objective</span></strong><span class="fontstyle1">. To build a classification and mathematical model and to develop software accessibility assessment methods based on it.<br /></span><strong><span class="fontstyle0">Methods</span></strong><span class="fontstyle1">. A method for assessing the quality subcharacteristic “Accessibility”, which is part of the “Usability” quality characteristic, has been developed. This enabled the analysis of a website’s inclusivity for individuals with visual impairments and the formulation of specific recommendations for further improvements, which is a crucial step toward creating an inclusive digital environment.<br /></span><strong><span class="fontstyle0">Results</span></strong><span class="fontstyle1">. Compared with standardized approaches, a more detailed and practically oriented accessibility assessment methodology has been proposed. 
Using this methodology, the accessibility of the main pages of Vasyl Stefanyk Precarpathian National University’s website was analyzed, and improvements were suggested to enhance its inclusivity.<br /></span><span class="fontstyle0"><strong>Conclusions</strong>. </span><span class="fontstyle1">This study presents the development of a classification and mathematical model, along with an accessibility assessment methodology for websites based on the ISO 25023 standard, and an analysis of the main pages of the university’s web portal. The identified quantitative accessibility indicators enable an evaluation of the web resource’s compliance with modern inclusivity requirements and provide recommendations for its improvement.<br />The scientific novelty of this research lies in the development of assessment methods for the “Accessibility” quality subcharacteristic by introducing new subproperties and attributes of software quality, based on clearly defined metrics specifically adapted for evaluating the accessibility level of digital products for individuals with visual impairments. This approach ensures a more precise and objective determination of web resources’ compliance with inclusivity requirements, contributing to their effectiveness and usability for this user group.<br />The practical significance of the obtained results lies in their applicability for objectively evaluating the accessibility of software products and web resources.</span></p>Mykola Kuz, Ivan Yaremiy, Hanna Yaremii, Mykola Pikuliak, Ihor Lazarovych, Mykola Kozlenko, Denys Vekeryk
Copyright (c) 2025 Mykola Kuz, Ivan Yaremiy, Hanna Yaremii, Mykola Pikuliak, Ihor Lazarovych, Mykola Kozlenko, Denys Vekeryk
https://creativecommons.org/licenses/by-sa/4.0
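Quantitative quality measures in the ISO/IEC 25023 style are typically ratios of passed checks to applicable checks, aggregated into a subcharacteristic value. A minimal sketch of such an aggregation is shown below; the attribute names and the unweighted mean are illustrative assumptions, not the paper's actual metric set:

```python
def accessibility_score(checks, weights=None):
    """ISO/IEC 25023-style measure: each attribute is the ratio of passed
    checks to applicable checks, and the subcharacteristic value is their
    (optionally weighted) mean in [0, 1]."""
    names = list(checks)
    if weights is None:
        weights = {n: 1.0 for n in names}
    total_w = sum(weights[n] for n in names)
    score = sum(weights[n] * (checks[n][0] / checks[n][1]) for n in names)
    return score / total_w

# Hypothetical audit of a page (passed, applicable) per attribute:
checks = {
    "text_alternatives": (18, 20),   # images with alt text / all images
    "contrast": (45, 50),            # elements meeting contrast ratio
    "keyboard_navigation": (9, 10),  # operable controls / all controls
}
```

Weighting individual attributes is where the paper's new subproperties for users with visual impairments would plug in.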
2025-09-22 | No. 3, pp. 163–172 | DOI: 10.15588/1607-3274-2025-3-15 | REDUNDANT ROBOTIC ARM PATH PLANNING USING RECURSIVE RANDOM INTERMEDIATE STATE ALGORITHM
https://ric.zp.edu.ua/article/view/339504
<p><span class="fontstyle0"><strong>Context</strong>. </span><span class="fontstyle2">Collision-free path planning in joint space for redundant robotic manipulators remains a challenging task due to the high-dimensional configuration space and dynamically changing environments. Existing methods often struggle to balance search time and path quality, which is crucial for real-time applications.<br /></span><span class="fontstyle0"><strong>Objective</strong>. </span><span class="fontstyle2">The aim of this study is to develop a new method to plan efficient, collision-free trajectories in real time for redundant robotic manipulators.<br /></span><span class="fontstyle0"><strong>Method</strong>. </span><span class="fontstyle2">A novel sampling-based algorithm for collision-free joint space path planning for redundant robotic manipulators presented in this study. The algorithm is called the Recursive Random Intermediate State (RRIS). The RRIS algorithm primarily works by generating a set of random intermediate states and iteratively selecting the optimal one based on the number of collisions along the discretized path. Furthermore, the paper proposes an axis-aligned bounding box generation strategy and an early exit strategy to improve algorithm speed. Finally, repeated calls of the algorithm are proposed to improve its reliability. The performance of the RRIS algorithm is evaluated through a set of comprehensive tests and compared with the popular RRT Connect algorithm implemented in Open Motion Planning Library.<br /></span><span class="fontstyle0"><strong>Results</strong>. </span><span class="fontstyle2">Experimental evaluations show that the RRIS algorithm under the test conditions produces collision-free paths with significantly shorter average lengths and reduces search time by approximately three times compared to the RRT Connect algorithm.<br /></span><span class="fontstyle0"><strong>Conclusions</strong>. 
</span><span class="fontstyle2">The proposed RRIS algorithm demonstrates a promising approach to real-time path planning for redundant robotic manipulators. By combining strategic intermediate state sampling with efficient collision evaluation and early termination mechanisms, the algorithm offers a robust alternative to known methods.</span></p>A. Y. Medvid, V. S. Yakovyna
Copyright (c) 2025 A. Y. Medvid, V. S. Yakovyna
https://creativecommons.org/licenses/by-sa/4.0
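The abstract above outlines the RRIS mechanism: recursively sample random intermediate joint states, score each candidate by the number of collisions along the discretized segments, keep the best one, and exit early when a collision-free candidate appears. A minimal Python sketch of that idea follows; it is an illustrative reading of the abstract, not the authors' implementation (the `in_collision` callback, joint `bounds`, step size, and recursion depth are all assumed):

```python
import random

def discretize(a, b, step=0.1):
    """Linearly interpolate joint states between configurations a and b."""
    n = max(2, int(max(abs(x - y) for x, y in zip(a, b)) / step) + 1)
    return [tuple(x + (y - x) * t / (n - 1) for x, y in zip(a, b))
            for t in range(n)]

def collisions(a, b, in_collision):
    """Count colliding states along the discretized segment a -> b."""
    return sum(1 for q in discretize(a, b) if in_collision(q))

def rris(start, goal, in_collision, bounds, depth=6, samples=20):
    """Sketch of a Recursive Random Intermediate State planner.

    Returns a list of joint states from start to goal, or None on failure.
    """
    if collisions(start, goal, in_collision) == 0:
        return [start, goal]            # direct segment is already free
    if depth == 0:
        return None                     # recursion budget exhausted
    best, best_cost = None, float("inf")
    for _ in range(samples):
        mid = tuple(random.uniform(lo, hi) for lo, hi in bounds)
        cost = (collisions(start, mid, in_collision)
                + collisions(mid, goal, in_collision))
        if cost < best_cost:
            best, best_cost = mid, cost
            if cost == 0:               # early-exit strategy from the abstract
                break
    left = rris(start, best, in_collision, bounds, depth - 1, samples)
    right = rris(best, goal, in_collision, bounds, depth - 1, samples)
    if left is None or right is None:
        return None
    return left[:-1] + right            # join halves, dropping duplicate mid
```

The paper's axis-aligned bounding box strategy would additionally restrict where `mid` is sampled; repeated top-level calls (also proposed in the abstract) can be layered on top to improve reliability.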
2025-09-22 | Issue 3, pp. 173–181 | DOI: 10.15588/1607-3274-2025-3-16
ENGINEERING SOCIAL COMPUTING
https://ric.zp.edu.ua/article/view/339529
<p><span class="fontstyle0"><strong>Context</strong>. </span><span class="fontstyle2">The relevance of the study stems from the need to eliminate contradictions between management and performers by introducing engineering social computing, which ensures moral management of social processes based on their metric monitoring.<br /></span><span class="fontstyle0"><strong>Objective</strong>. </span><span class="fontstyle2">The goal of the investigation is to develop engineering architectures for monitoring and managing social processes based on vector logic.<br /></span><span class="fontstyle0"><strong>Method</strong>. </span><span class="fontstyle2">The research is focused on the development of engineering vector-logical schemes and architectures for the management of social processes based on their comprehensive metric monitoring, in order to create comfortable conditions for creative work. Definitions of the main concepts of AI development are given, along with notable fragments of the history of computing. The computing equation is introduced as a transitive closure in a triad of relations – in the form of an error that creates new structures, processes or phenomena. Mechanisms of intelligent computing are developed that combine algorithms and data structures of deterministic and probabilistic AI computing. Mechanisms are proposed for constructing models based on the universe of primitives that have Similarity in relation to their use for process modeling (in-hardware synthesis, in-software programming, in neural network training, in-qubit quantization, in-memory modeling, in-truth table logic generation). An intelligent computing metric is introduced, which is used to select the architecture and models of computing processes in order to obtain effective solutions to practical problems.<br /></span><span class="fontstyle0"><strong>Results</strong>. 
</span><span class="fontstyle2">The following is proposed: 1) the computing equation as a transitive closure in a triad of relations – in the form of an error that creates new structures, processes or phenomena; 2) mechanisms of intelligent computing aimed at a significant reduction in time and energy costs when solving practical problems, by zeroing out algorithms for processing big data owing to the exponential redundancy of smart and redundant AI models; 3) mechanisms for constructing models based on the universe of primitives that have Similarity in relation to their use for modeling processes.<br /></span><span class="fontstyle0"><strong>Conclusions</strong>. </span><span class="fontstyle2">The scientific novelty consists of the following innovative solutions: 1) a triad of relations based on the xor operation is proposed for measuring processes and phenomena in the cyber-social world; 2) intelligent computing architectures are proposed for managing social processes based on their comprehensive monitoring; 3) the implementation of these schemes in the in-memory computing architecture makes it possible to avoid processor instructions, using only read-write transactions on logical vectors, which saves time and energy when executing big data analysis algorithms; 4) mechanisms are proposed for synthesizing vector-logical models of social processes or phenomena based on unitary coding of patterns on the universe of primitives, which are focused on verification, modeling and testing of decisions made. The practical significance of the study lies in the proposed metric of intelligent computing, which is used as a method for selecting the architecture and models of computing processes to obtain effective solutions to practical problems. Engineering social computing is designed to contribute to the construction of peaceful, fair and open societies to achieve the Sustainable Development Goals (SDG 16).</span></p>V. I. Hahanov, S. V. Chumachenko, E. I. Lytvynova, H. V. Khakhanova, I. V. Hahanov, V. I. Obrizan, I. V. Hahanova, N. G. Maksymova
Copyright (c) 2025 V. I. Hahanov, S. V. Chumachenko, E. I. Lytvynova, H. V. Khakhanova, I. V. Hahanov, V. I. Obrizan, I. V. Hahanova, N. G. Maksymova
https://creativecommons.org/licenses/by-sa/4.0
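The abstract above centers on a triad of relations built on the xor operation over logical vectors, with transitive closure as the composition law. A minimal sketch of that property (vector names and values are illustrative, not from the paper): for bit vectors, the relation a⊕b composed with b⊕c yields a⊕c, and the normalized weight of a⊕b serves as a simple metric requiring only elementwise read-write operations:

```python
def xor_vec(a, b):
    """Elementwise xor of two equal-length bit vectors: the 'relation' a -> b."""
    return [x ^ y for x, y in zip(a, b)]

def xor_metric(a, b):
    """Normalized Hamming distance via xor: 0.0 = identical, 1.0 = complementary."""
    return sum(xor_vec(a, b)) / len(a)

# Triad of relations: composing a->b with b->c closes transitively to a->c,
# since (a ^ b) ^ (b ^ c) == a ^ c for every bit.
a, b, c = [1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0]
closure = xor_vec(xor_vec(a, b), xor_vec(b, c))
```

Because xor and popcount need no arithmetic processor instructions, such a metric maps naturally onto the read-write-only in-memory computing style the abstract describes.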
2025-09-22 | Issue 3, pp. 182–194 | DOI: 10.15588/1607-3274-2025-3-17