Radio Electronics, Computer Science, Control https://ric.zp.edu.ua/ <p dir="ltr" align="justify"><strong>Description:</strong> The scientific journal «Radio Electronics, Computer Science, Control» is an international academic peer-reviewed publication. It publishes scientific articles (works that extensively cover a specific topic, idea, or question and contain elements of their analysis) and reviews (works containing an analysis and reasoned assessment of the author's original or published book). All submissions receive objective reviews from leading specialists, who evaluate them on their substance without regard to race, sex, religion, ethnic origin, nationality, or political philosophy of the author(s).<br /><strong>Founder and Publisher:</strong> <a href="http://zntu.edu.ua/zaporozhye-national-technical-university">National University "Zaporizhzhia Polytechnic"</a>. <strong>Country:</strong> Ukraine. Unified State Register of Enterprises and Organisations of Ukraine (<strong>EDRPOU</strong>): 02070849. <strong>ROR:</strong> https://ror.org/03aph1990<br /><strong>ISSN</strong> 1607-3274 (print), ISSN 2313-688X (online). <strong>DOI prefix:</strong> https://doi.org/10.15588/1607-3274-<br /><strong>Registration of an entity in the field of print media:</strong> Decision of the National Council of Ukraine on Television and Radio Broadcasting No. 3040 dated 07.11.2024. Media identifier: R30-05582. Certificate of State Registration: КВ №24220-14060ПР dated 19.11.2019 - the journal is registered with the Ministry of Justice of Ukraine.<br />By the Order of the Ministry of Education and Science of Ukraine No. 409 of 17.03.2020 "On approval of the decision of the Certifying Collegium of the Ministry on the activities of the specialized scientific councils dated 06 March 2020", <strong>the journal is included in the List of scientific specialized periodicals of Ukraine in category "А" (the highest level), in which the results of dissertations for Doctor of Science and Doctor of Philosophy may be published</strong>. By the Order of the Ministry of Education and Science of Ukraine No. 1328 of 21.12.2015 "On approval of the decision of the Certifying Collegium of the Ministry on the activities of the specialized scientific councils dated 15 December 2015", the journal is included in the <strong>List of scientific specialized periodicals of Ukraine</strong> in which the results of dissertations for Doctor of Science and Doctor of Philosophy in Mathematics and Technical Sciences may be published.<br />The journal is included in the <strong>Polish List of Scientific Journals</strong> and Peer-Reviewed Materials from International Conferences, with an assigned number of points (Annex to the announcement of the Minister of Science and Higher Education of July 31, 2019: Lp. 16981).<br /><strong>Year of Foundation:</strong> 1998. Published since 1999. <strong>Frequency:</strong> 4 times per year (before 2015 - 2 times per year).<br /><strong>Volume:</strong> up to 20 conventional printed sheets. <strong>Format:</strong> 60x84/8.<br /><strong>Languages:</strong> English, Ukrainian.
Before 2022, also Russian.<br /><strong>Fields of Science:</strong> Technical Sciences. <strong>Scientific profile of the publication (cluster name):</strong> Information technologies and electronics (specialties: F2 Software engineering, F3 Computer science, F6 Information systems and technologies, F7 Computer engineering, G5 Electronics, electronic communications, instrument making and radio engineering).<br /><strong>Aim:</strong> to serve the academic community, principally by publishing topical articles resulting from original research, whether theoretical or applied, in various aspects of academic endeavor.<br /><strong>Focus:</strong> fresh formulations of problems and new methods of investigation, helping professionals, graduates, engineers, academics, and researchers disseminate information on state-of-the-art techniques within the journal's scope.</p> <p dir="ltr" align="justify"><strong>Scope:</strong> telecommunications and radio electronics, software engineering (including algorithm and programming theory), computer science (mathematical modeling and computer simulation, optimization and operations research, control in technical systems, machine-machine and man-machine interfacing, artificial intelligence, including data mining, pattern recognition, artificial neural and neuro-fuzzy networks, fuzzy logic, swarm intelligence and multiagent systems, hybrid systems), computer engineering (computer hardware, computer networks), information systems and technologies (data structures and bases, knowledge-based and expert systems, data and signal processing methods).<br /><strong>Journal sections:</strong><br />- radio electronics and telecommunications;<br />- mathematical and computer modelling;<br />- neuroinformatics and intelligent systems;<br />- progressive information technologies;<br />- control in technical systems.<br /><strong>Abstracting and Indexing:</strong> <strong>The journal is indexed in the <a href="https://mjl.clarivate.com//search-results?issn=1607-3274&amp;hide_exact_match_fl=true&amp;utm_source=mjl&amp;utm_medium=share-by-link&amp;utm_campaign=search-results-share-this-journal" target="_blank" rel="noopener">Web of Science</a></strong> (WoS) scientometric database. The articles published in the journal are abstracted in leading international and national <strong>abstracting journals</strong> and <strong>scientometric databases</strong>, and are also placed in <strong>digital archives</strong> and <strong>libraries</strong> with free online access.<br /><strong>Editorial board:</strong> <em>Editor-in-Chief</em> - S. A. Subbotin, D. Sc., Professor.
The <em>members</em> of the Editorial Board are listed <a href="http://ric.zntu.edu.ua/about/editorialTeam">here</a>.<br /><strong>Publishing and processing fee:</strong> Articles are published and peer-reviewed <strong>free of charge</strong>.<br /><strong>Authors' Copyright:</strong> The journal allows the authors to hold the copyright without restrictions and to retain publishing rights without restrictions. The journal allows readers to read, download, copy, distribute, print, search, or link to the full texts of its articles. The journal allows the reuse and remixing of its content in accordance with the Creative Commons license CC BY-SA.<br /><strong>Authors' Responsibility:</strong> By submitting an article to the journal, the authors assume full responsibility for compliance with the copyright of other individuals and organizations, for the accuracy of citations, data, and illustrations, and for the non-disclosure of state and industrial secrets, and consent to transfer to the publisher, free of charge, the right to publish, to translate into foreign languages, to store, and to distribute the article materials in any form. Authors holding scientific degrees, by submitting an article to the journal, consent to act free of charge as reviewers of other authors' articles at the request of the journal editor within the established deadlines. Articles submitted to the journal must be original, new, and interesting to the journal's readership, have a reasonable motivation and aim, be previously unpublished, and not be under consideration for publication in other journals or conferences. Articles should not contain trivial or obvious results, make unwarranted conclusions, or repeat the conclusions of already published studies.<br /><strong>Readership:</strong> scientists, university faculty, postgraduate and graduate students, and practicing specialists.<br /><strong>Publicity and Access Method:</strong> <strong>Open Access</strong> online for full-text publications.</p> <p dir="ltr" align="justify"><img src="http://journals.uran.ua/public/site/images/grechko/1OA1.png" alt="" /> <img src="http://i.creativecommons.org/l/by-sa/4.0/88x31.png" alt="" /></p> en-US <h3 id="CopyrightNotices" align="justify">Creative Commons Licensing Notifications in the Copyright Notices</h3> <p>The journal allows the authors to hold the copyright without restrictions and to retain publishing rights without restrictions.</p> <p>The journal allows readers to read, download, copy, distribute, print, search, or link to the full texts of its articles.</p> <p>The journal allows the reuse and remixing of its content in accordance with the Creative Commons license CC BY-SA.</p> <p align="justify">Authors who publish with this journal agree to the following terms:</p> <ul> <li> <p align="justify"><span style="font-family: Verdana, Arial, Helvetica, sans-serif; font-size: small;">Authors retain copyright and grant the journal the right of first publication, with the work simultaneously licensed under a <a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution License CC BY-SA</a> that allows others to share the
work with an acknowledgement of the work's authorship and initial publication in this journal.</span></p> </li> <li> <p align="justify"><span style="font-family: Verdana, Arial, Helvetica, sans-serif; font-size: small;">Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.</span></p> </li> <li> <p align="justify"><span style="font-family: Verdana, Arial, Helvetica, sans-serif; font-size: small;">Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) as it can lead to productive exchanges, as well as earlier and greater citation of published work.</span></p> </li> </ul> subbotin.csit@gmail.com (Sergey A. Subbotin) subbotin@zntu.edu.ua (Sergey A. Subbotin) Wed, 24 Dec 2025 10:40:51 +0200 OJS 3.2.1.2 http://blogs.law.harvard.edu/tech/rss 60 LOGIC-ONTOLOGICAL RECONSTRUCTION OF SCIENTIFIC DISCOURSE AND ITS IMPLEMENTATION IN AN AI-BASED REVIEWING SYSTEM https://ric.zp.edu.ua/article/view/346111 <p><strong> <span class="fontstyle0">Context. </span></strong><span class="fontstyle2">The growing number of scientific publications and the emergence of tools based on large language models (LLMs) highlight the need for automated verification of the structural quality of scientific texts. Most existing solutions focus on surfacelevel linguistic analysis and do not account for logical-discursive integrity – specifically, whether the text includes a hypothesis, method, results, conclusions, and whether these elements are connected by normative relationships.<br /></span><span class="fontstyle0"><strong>Objective.</strong> </span><span class="fontstyle2">The aim of this study is to develop an ontology-driven approach for the formalized verification of scientific text structures by constructing an ontological knowledge graph and evaluating its compliance with a predefined normative model of scientific discourse.<br /></span><strong><span class="fontstyle0">Method. </span></strong><span class="fontstyle2">A model is proposed based on two interrelated ontologies: “Scientific Publication” (defining node types and their roles) and “Reviewing” (defining logical-discursive requirements). The text is represented as a graph where nodes are formed through semantic markup using an LLM, and connections are verified according to a set of normative rules. A specialized GPT agent capable of dynamically applying ontological knowledge during analysis and review generation is employed for implementation.<br /></span><span class="fontstyle0"><strong>Results.</strong> </span><span class="fontstyle2">The model enables automatic detection of discourse structure violations: the absence of key elements, logical discontinuities, substitution of scientific novelty with practical significance, and incorrect interpretation of results. The proposed metrics quantitatively capture the level of structural completeness and consistency. Provided examples of graphs and reviews demonstrate that the system can detect non-obvious, latent logical inconsistencies even in formally complete texts.<br /></span><strong><span class="fontstyle0">Conclusions. 
</span></strong><span class="fontstyle2">The scientific novelty of the study lies in introducing the ontological graph as an interpretable model of scientific argumentation, used in tandem with a large language model. The practical significance lies in establishing a foundation for semiautomated reviewing, structural analysis of publications, and academic writing training. The methodology is scalable to other genres of scientific texts and can potentially be integrated into editorial platforms.</span></p> L. P. Bedratyuk Copyright (c) 2025 L. P. Bedratyuk https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/346111 Wed, 24 Dec 2025 00:00:00 +0200 METAHEURISTIC FRAMEWORKS FOR PARAMETER ESTIMATION IN APPROXIMATION MODELS https://ric.zp.edu.ua/article/view/346117 <p><strong> <span class="fontstyle0">Context. </span></strong><span class="fontstyle2">To enhance the performance of numerical optimization techniques, hybrid approaches integrating probabilistic modeling algorithms with annealing simulation have been introduced. These include Bayesian optimization, Markov-based strategies, and extended compact genetic algorithms, each augmented by annealing mechanisms. Such methods enable more precise search trajectories without requiring fitness function transformation, owing to their ability to explore the global search space in early iterations and refine the directionality of search in later stages.<br></span><strong><span class="fontstyle0">Objective. </span></strong><span class="fontstyle2">The research aims to improve the effectiveness of parameter identification within approximation models of financial indicators by applying metaheuristic algorithms that incorporate probabilistic modeling and annealing-based simulation in intelligent computing systems.<br></span><strong><span class="fontstyle0">Method. </span></strong><span class="fontstyle2">This study employs metaheuristic techniques grounded in probabilistic modeling and annealing-based simulation to enhance the accuracy and efficiency of parameter estimation within economic indicator approximation frameworks. Specifically, it introduces three hybrid strategies: Bayesian-based optimization integrated with annealing simulation, Markov-driven optimization enhanced by annealing, and an extended compact genetic algorithm coupled with annealing mechanisms. These methods enhance the accuracy of the search process by exploring the entire search space in initial iterations and refining the search direction in final iterations. The Bayesian optimization method employs a Bayesian network for structured search and solution refinement. The Markov optimization method integrates Gibbs quantization within a Markov network to improve search precision. The extended compact genetic algorithm utilizes limit distribution models to generate optimal solutions. These methods eliminate the need for fitness function transformation, optimizing computational efficiency. The proposed techniques expand the application of metaheuristics in intelligent economic computer systems.<br></span><strong><span class="fontstyle0">Results. </span></strong><span class="fontstyle2">The implemented optimization strategies significantly enhanced the precision of parameter estimation within intelligent financial computing frameworks. The combination of probabilistic models and annealing simulation enhanced search efficiency without requiring fitness function transformation.<br></span><span class="fontstyle0"><strong>Conclusions</strong>. 
</span><span class="fontstyle2">The proposed method expands the application of metaheuristics in economic modeling, increasing computational effectiveness. Further research should explore their implementation across diverse artificial intelligence problems.</span> </p> O. O. Grygor, E. E. Fedorov, M. M. Leshchenko, K. S. Rudakov, T. A. Sakhno Copyright (c) 2025 O. O. Grygor, E. E. Fedorov, M. M. Leshchenko, K. S. Rudakov, T. A. Sakhno https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/346117 Wed, 24 Dec 2025 00:00:00 +0200 FORMALIZED METHODOLOGY FOR COMPATIBILITY AND ADAPTATION OF REQUIREMENTS IN INTELLIGENT DIAGNOSTIC SYSTEMS https://ric.zp.edu.ua/article/view/346194 <p><strong> <span class="fontstyle0">Context. </span></strong><span class="fontstyle2">Ensuring the consistency and adaptability of requirements in systems operating under dynamic conditions and limited resources is a pressing issue in modern requirements engineering, especially in intelligent diagnostic and decision-making environments. These systems must process conflicting, outdated, or ambiguous requirements while operating in environments characterized by high uncertainty and dynamic conditions.<br></span><span class="fontstyle0"><strong>Objective.</strong> </span><span class="fontstyle2">This work introduces a formalized methodology for analyzing and managing the compatibility of system requirements. The proposed approach integrates logical consistency, functional interaction, resource feasibility, and priority alignment to support system stability and responsiveness.<br></span><span class="fontstyle0"><strong>Method.</strong> </span><span class="fontstyle2">The methodology is implemented as a multi-level framework that incorporates formal representations of functional,<br>non-functional, and data-related requirements. It employs scenario-based modeling, a set of compatibility assessment models, and a dynamic algorithm for integrating new requirements. The integration process includes compatibility checks, adaptive refinement, expert-based weighting, and real-time feedback. The methodology’s applicability is demonstrated through a hypothetical intelligent medical diagnostic system.<br></span><strong><span class="fontstyle0">Results. </span></strong><span class="fontstyle2">The proposed methodology enables systematic identification and resolution of requirement conflicts, ensuring consistent execution and effective prioritization under resource constraints. Scenario-driven modeling and the formalization of core requirements establish a foundation for adaptive system behavior and real-time decision-making.<br></span><strong><span class="fontstyle0">Conclusions. </span></strong><span class="fontstyle2">The developed methodology, which includes models and algorithms, enhances the reliability of intelligent systems operating in critical contexts. Future work will focus on extending the framework by incorporating fuzzy logic, machine learning techniques, and developing software tools for automated compatibility analysis and adaptive requirements management.</span> </p> N. O. Komleva, V. V. Liubchenko Copyright (c) 2025 N. O. Komleva, V. V. Liubchenko https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/346194 Wed, 24 Dec 2025 00:00:00 +0200 RESEARCH ON A HYBRID LSTM-CNN-ATTENTION MODEL FOR TEXTBASED WEB CONTENT CLASSIFICATION https://ric.zp.edu.ua/article/view/346199 <p><strong> <span class="fontstyle0">Context. 
</span></strong><span class="fontstyle2">Text-based web content classification plays a pivotal role in various natural language processing (NLP) tasks, including fake news detection, spam filtering, content categorization, and automated moderation. As the scale and complexity of textual data on the web continue to grow, traditional classification approaches – especially those relying on manual feature engineering or shallow learning techniques – struggle to capture the nuanced semantic relationships and structural variability of modern web content. These limitations result in reduced adaptability and poor generalization performance on real-world data. Therefore, there is a clear need for advanced models that can simultaneously learn local linguistic patterns and understand the broader contextual meaning of web text.<br /></span><strong><span class="fontstyle0">Method. </span></strong><span class="fontstyle2">This study presents a hybrid deep learning architecture that integrates Long Short-Term Memory (LSTM) networks, Convolutional Neural Networks (CNN), and an Attention mechanism to enhance the classification of web content based on text. Pretrained GloVe embeddings are used to represent words as dense vectors that preserve semantic similarity. The CNN layer extracts local </span><span class="fontstyle3">n</span><span class="fontstyle2">-gram patterns and lexical features, while the LSTM layer models long-range dependencies and sequential structure. The integrated Attention mechanism enables the model to focus selectively on the most informative parts of the input sequence. The model was evaluated using the dataset, which consists of over 10,000 HTML-based web pages annotated as legitimate or fake. A 5-fold cross-validation setup was used to assess the robustness and generalizability of the proposed solution.<br /></span><strong><span class="fontstyle0">Results. </span></strong><span class="fontstyle2">Experimental results show that the hybrid LSTM-CNN-Attention model achieved outstanding performance, with an accuracy of 0.98, precision of 0.94, recall of 0.92, and F1-score of 0.93. These results surpass the performance of baseline models based solely on CNNs, LSTMs, or transformer-based classifiers such as BERT. The combination of neural network components enabled the model to effectively capture both fine-grained text structures and broader semantic context. Furthermore, the use of GloVe embeddings provided an efficient and effective representation of textual data, making the model suitable for integration into systems with real-time or near-real-time requirements.<br /></span><strong><span class="fontstyle0">Conclusions. </span></strong><span class="fontstyle2">The proposed hybrid architecture demonstrates high effectiveness in text-based web content classification, particularly in tasks requiring both syntactic feature extraction and semantic interpretation. By combining convolutional, recurrent, and attention-based mechanisms, the model addresses the limitations of individual architectures and achieves improved generalization. These findings support the broader use of hybrid deep learning approaches in NLP applications, especially where complex, unstructured textual data must be processed and classified with high reliability.</span></p> M. V. Kuz , I. M. Lazarovych, M. I. Kozlenko, M. V. Pikuliak, A. D. Kvasniuk Copyright (c) 2025 M. V. Kuz , I. M. Lazarovych, M. I. Kozlenko, M. V. Pikuliak, A. D. 
Kvasniuk https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/346199 Wed, 24 Dec 2025 00:00:00 +0200 CLASSIFICATION OF HISTOLOGICAL IMAGES BASED ON CONVOLUTIONAL NEURAL NETWORKS https://ric.zp.edu.ua/article/view/346250 <p><strong> <span class="fontstyle0">Context. </span></strong><span class="fontstyle2">Automated classification of histological images is of great importance for speeding up and improving the accuracy of<br />diagnostics in medicine. Taking into account the complexity and high variability of histological structures, the use of deep learning and convolutional neural networks, in particular, is a promising direction in solving this problem.<br />The object of study is the process of classifying histological images using convolutional neural networks to determine and optimize the architecture with the highest accuracy rate.<br /></span><strong><span class="fontstyle0">Objective. </span></strong><span class="fontstyle2">The aim of this work is to develop an effective approach to histological image classification using convolutional neural networks that provides high accuracy through step-by-step optimization of the architecture and application of data expansion methods using affine transformations and synthesis based on diffusion models.<br /></span><strong><span class="fontstyle0">Method. </span></strong><span class="fontstyle2">The study includes four main stages. The first stage involves a comparative analysis of 12 well-known convolutional neural network architectures on a basic histological dataset. The second and third stages involve comparative analysis on extended data, including affine transformations and synthetic images generated by the diffusion model, respectively. The final, fourth stage involves a neuroevolutionary search for the optimal architectural cell. Once it is found, it is integrated into the model architecture, where for each layer a choice is made between certain blocks and the found cell. This approach allows to automatically form the optimal sequence of blocks in the model, which ensures the highest classification accuracy.<br /></span><strong><span class="fontstyle0">Results. </span></strong><span class="fontstyle2">The proposed approach improved the accuracy of histological image classification compared to the initial architectures. The addition of synthetic images to the training set provided an increase in model performance. The search for the optimal cell and its integration into the model with further optimization demonstrated an additional improvement in classification quality, increasing the accuracy to 94.9, 96.1, and 99.8% on each dataset, respectively.<br /></span><strong><span class="fontstyle0">Conclusions. </span></strong><span class="fontstyle2">The proposed approach allows achieving high accuracy of histological image classification through a step-by-step process that includes the use of classical convolutional neural network architectures, generation of synthetic data, and search for optimal architectural and hyperparametric configurations. A software module for classifying histological images has been developed that can be used in an automatic diagnostic system.</span></p> P. B. Liashchynskyi Copyright (c) 2025 P. B. 
Liashchynskyi https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/346250 Wed, 24 Dec 2025 00:00:00 +0200 ALZHEIMER’S DISEASE PREDICTION BY USING DEEP STACKED ENSEMBLE MODEL ENHANCED WITH SQUEEZE-AND-EXCITATION ATTENTION MECHANISM https://ric.zp.edu.ua/article/view/346263 <p><strong> <span class="fontstyle0">Context. </span></strong><span class="fontstyle2">Alzheimer’s disease (AD) is a progressive, neurological degenerative disease causing memory loss, impaired cognition, and dementia. Timely identification of AD is crucial for the provision of effective treatment and intervention. Magnetic Resonance Imaging (MRI) has also become a critical tool in understanding the structural changes in the brain that occur during Alzheimer’s development. Nonetheless, the manual processing of MRI scans is time-consuming, subjective, and susceptible to human error. As such, there is increasing demand for automated and precise diagnostic technology that can support clinicians in the earlier detection and staging of Alzheimer’s disease based on medical imaging data.<br /></span><strong><span class="fontstyle0">Objective. </span></strong><span class="fontstyle2">The present study focuses on developing and evaluating a deep learning-based stacked ensemble model for the classification and staging of Alzheimer’s disease brain MRI scans. The primary objective is to improve the diagnosis accuracy and reliability through a combination of the strengths of several pre-trained convolutional neural network (CNN) architectures, combined with sophisticated attention mechanisms and meta-learning techniques.<br /></span><strong><span class="fontstyle0">Method. </span></strong><span class="fontstyle2">The proposed approach utilizes a deep stacked ensemble learning framework composed of three well-performing CNN architectures: MobileNetV2, ResNet50, and DenseNet121. These models are pre-trained on the ImageNet dataset, benefiting from robust feature extraction capabilities. To further improve their performance, each CNN model is enhanced with a Squeeze-andExcitation (SE) attention module, which adaptively recalibrates channel-wise feature responses, emphasizing important features while suppressing irrelevant ones. The extracted high-level features from all three SE-augmented CNNs are then concatenated and fed into a meta-learner consisting of fully connected layers. This meta-classifier incorporates dropout and batch normalization techniques to prevent overfitting and improve generalization. The overall architecture is trained and validated on a dataset of brain MRI images categorized into different stages of Alzheimer’s disease, including normal control, mild cognitive impairment, and various stages of dementia.<br /></span><span class="fontstyle0"><strong>Results.</strong> </span><span class="fontstyle2">The experimental evaluation demonstrated exceptional performance, achieving an Accuracy of 99%, a Precision of 99%, a Recall of 98%, and an F1-score of 99%. These metrics indicate the model’s strong predictive capability and reliability in<br />distinguishing between different stages of Alzheimer’s disease.<br /></span><strong><span class="fontstyle0">Conclusions. </span></strong><span class="fontstyle2">The experimental outcomes highlight the effectiveness and robustness of the proposed deep stacked ensemble model in the automated diagnosis and staging of Alzheimer’s disease using MRI scans. 
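<p>To make the squeeze-and-excitation (SE) recalibration step described in the Method above concrete, the sketch below shows a minimal SE block of the kind typically appended to a CNN backbone's feature maps. It is an illustrative PyTorch implementation with a common default reduction ratio; the class and variable names are ours, and it is not the authors' code.</p> <pre><code>
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: recalibrates the channel responses of a feature map."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)        # global average pool -> (B, C, 1, 1)
        self.excite = nn.Sequential(                  # bottleneck MLP over channels
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                             # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.squeeze(x).view(b, c)                # "squeeze" step
        w = self.excite(w).view(b, c, 1, 1)           # "excitation" step
        return x * w                                  # reweight channels, suppress irrelevant ones

# Example: recalibrate the 1280-channel output of a MobileNetV2-style backbone
features = torch.randn(4, 1280, 7, 7)
print(SEBlock(1280)(features).shape)                  # torch.Size([4, 1280, 7, 7])
</code></pre> <p>In a stacked ensemble such as the one described, the SE-reweighted feature maps of each backbone would then be globally pooled, concatenated, and fed into the fully connected meta-learner.</p>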
The integration of multiple CNNs with<br />attention mechanisms and meta-learning significantly enhances classification performance. These findings suggest that the model can serve as a reliable decision-support system for neurologists, aiding in early diagnosis, timely intervention, and improved patient outcomes in clinical settings.</span></p> Junaid Iqbal Muhammad, Mahmood Mudasir, Farhan Muhammad, Sajjad Hussain Muhammad Copyright (c) 2025 Junaid Iqbal Muhammad, Mahmood Mudasir, Farhan Muhammad, Sajjad Hussain Muhammad https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/346263 Wed, 24 Dec 2025 00:00:00 +0200 THE “PRISM AND RAYS METHOD” FOR RESEARCHING SUBJECTIVIZATION OF PERCEPTION OF HUMAN-MACHINE INTERACTION OBJECTS https://ric.zp.edu.ua/article/view/346305 <p><strong> <span class="fontstyle0">Context</span></strong><span class="fontstyle1"><strong>.</strong> The task of representation formalizing as well as possibility of further research of the scientific and applied problem of perception subjectivization of the human-machine interaction objects by relevant interaction subjects, in the context of global problems of automation and intellectualization of the component processes and environments of human-machine interaction, is considered, investigated, and resolved in scope of current research. </span><span class="fontstyle0">The object of research </span><span class="fontstyle1">are processes of perception subjectivization of the human-machine interaction objects by relevant subjects of same interaction. </span><span class="fontstyle0">The subject of research </span><span class="fontstyle1">are methods and means of mathematical, computer, and simulation modeling. </span><span class="fontstyle0">Objective </span><span class="fontstyle1">– development of a method for researching the perception subjectivization of the human-machine interaction objects by relevant subjects of the same interaction.<br /></span><strong><span class="fontstyle0">Method</span></strong><span class="fontstyle1"><strong>.</strong> The development of the “prism and rays method” (author’s name of the method) is proposed and performed, which provides the possibility to resolve the scientific and applied problem of representation formalizing as well as possibility of further research into the processes of perception subjectivization of the human-machine interaction objects by relevant interaction subjects (of the same interaction).<br /></span><strong><span class="fontstyle0">Results</span></strong><span class="fontstyle1"><strong>.</strong> The results of the developed method – are corresponding models that represent and allow to investigate the researched processes of perception subjectivization of the human-machine interaction objects by relevant interaction subjects (of the same interaction). The developed method provides the possibility of both formalization as well as further interpretation and investigation of the researched processes of perception subjectivization of human-machine interaction objects. 
As a practical approbation, the developed method has been applied for synthesis of the basic model of perception subjectivization of the object of complex (comprehensive) support (one of the most highlighted examples of HMI/HCI) of software product(s), using the example case of an experimental task of estimating the approximate processing time of client request by the department of customer-and-technical support of the given software product.<br /></span><strong><span class="fontstyle0">Conclusions</span></strong><span class="fontstyle1"><strong>.</strong> The developed “prisms and rays method” solves the declared task of representation formalizing as well as possibility of further research of the scientific and applied problem of perception subjectivization of the human-machine interaction objects by relevant interaction subjects, in the context of global problems of automation and intellectualization of the component processes and environments of human-machine interaction. At the same time, the obtained results of experimental practical approbation of the developed method confirm its effectiveness and efficiency in the context of solving relevant practical applied tasks of the scientific and applied problem of human-machine interaction objects’ perception subjectivization.</span></p> A. I. Pukach, V. M. Teslyuk Copyright (c) 2025 A. I. Pukach, V. M. Teslyuk https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/346305 Wed, 24 Dec 2025 00:00:00 +0200 OPTIMIZED MODEL FOR PREDICTING THE AVAILABILITY OF OBJECTS BASED ON DEEP LEARNING AND GEOSPATIAL FEATURES https://ric.zp.edu.ua/article/view/346358 <p><strong> <span class="fontstyle0">Context. </span></strong><span class="fontstyle2">Today, predicting the availability of objects in spatially distributed systems remains one of the areas of computer science that constantly attracts the attention of researchers. There are many reasons for this. There is an increase in the amount of spatial information. New types of infrastructure networks are emerging, as well as the need for rapid decision-making in changing conditions. At the same time, traditional analysis methods do not always cope with the tasks of processing multidimensional data. This is especially true when it comes to complex or unstable environments. This opens up opportunities for applying deep learning methods that demonstrate high efficiency where classical approaches fail.<br /></span><strong><span class="fontstyle0">Objective. </span></strong><span class="fontstyle2">The study aims to optimize the model for predicting the availability of objects in spatially distributed systems by defining an efficient deep learning architecture that uses spatial and other infrastructure features to improve prediction accuracy and generalizability.<br /></span><strong><span class="fontstyle0">Method. </span></strong><span class="fontstyle2">To achieve this goal, deep learning architectures were used, including feed-forward models (FNN), convolutional neural networks (CNN), and recurrent neural networks (RNN, GRU, LSTM). During the modeling, methods of data normalization, training regularization, and a comprehensive system for evaluating the accuracy of forecasts using the mean square error, mean absolute error, and coefficient of determination were used.<br /></span><strong><span class="fontstyle0">Results. 
</span></strong><span class="fontstyle2">An optimized architecture of a recurrent neural network was built for the study, which includes a combination of two recurrent layers, Dropout regularization layers, and a fully connected layer. The analysis has shown that the proposed model provides high accuracy in predicting the availability of objects, demonstrating stability over a wide range of spatial data. Comparison of actual and predicted values confirmed the effectiveness of the proposed solution.<br /></span><strong><span class="fontstyle0">Conclusions. </span></strong><span class="fontstyle2">The proposed approach to building an optimized deep learning model for predicting the availability of objects provides a high level of generalization and accuracy, which creates prerequisites for its use in systems of intelligent decision support in spatially distributed environments.</span></p> A. M. Tryhuba, R. T. Ratushny, L. S. Koval, A. R. Ratushnyi Copyright (c) 2025 A. M. Tryhuba, R. T. Ratushny, L. S. Koval, A. R. Ratushnyi https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/346358 Wed, 24 Dec 2025 00:00:00 +0200 DEEP REINFORCEMENT LEARNING OPTIMIZATION METHODS FOR TRAFFIC LIGHTS AT SIGNALIZED INTERSECTIONS https://ric.zp.edu.ua/article/view/346708 <p><strong> <span class="fontstyle0">Context. </span></strong><span class="fontstyle2">Intersections are the most critical areas of a road network, where the largest number of collisions and the longest waiting times are observed. The development of optimal methods for traffic light control at signalized intersections is necessary for improving the flow of traffic at existing urban intersections, reducing the chance of traffic collisions, the time it takes to cross the intersection, and increasing the safety for drivers and pedestrians. Developing such an algorithm requires simulating and comparing the work of different approaches in a simulated environment.<br /></span><strong><span class="fontstyle0">Objective. </span></strong><span class="fontstyle2">The aim of this study is to develop an effective deep reinforcement-learning model aimed at optimizing traffic light<br />control at intersections.<br /></span><strong><span class="fontstyle0">Method. </span></strong><span class="fontstyle2">A custom simulation environment is designed, which is compatible with the OpenAI Gym framework, and two types of algorithms are compared: Deep Q-Networks and Proximal Policy Optimization. The algorithms are tested on a range of scenarios, involving ones with continuous and discrete action spaces, where the set of actions the agent may take are represented either by different states of the traffic lights, or by the length of traffic light signal phases. During training, various hyperparameters were also tuned, and different reward metrics were considered for the models: average wait time and average queue length. The developed environment rewards the agent during training according to one of the metrics chosen, while also penalizing it for any traffic rule violations.<br /></span><strong><span class="fontstyle0">Results. </span></strong><span class="fontstyle2">A detailed analysis of the test results of deep Q network and Proximal Policy Optimization algorithms was provided. In general, the Proximal Policy Optimization algorithms show more consistent improvement during training, while deep Q network algorithms suffer more from the problem of catastrophic forgetting. 
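<p>The custom OpenAI Gym-compatible environment referred to in the Method above can be pictured with the minimal skeleton below: a discrete action selects the active signal phase, and the reward is the negative mean queue length minus a penalty for rule violations. This is a hedged toy sketch written against the Gymnasium API; the state variables, arrival and service dynamics, and the "minimum green" rule are our own placeholders, not the authors' simulator.</p> <pre><code>
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class ToyIntersectionEnv(gym.Env):
    """Minimal signalized-intersection environment in the spirit of the setup described."""

    def __init__(self, n_phases: int = 4, max_steps: int = 300):
        super().__init__()
        self.action_space = spaces.Discrete(n_phases)   # action = choose the active phase
        # observation: queue length per approach + one-hot of the current phase (illustrative)
        self.observation_space = spaces.Box(0.0, np.inf, shape=(4 + n_phases,), dtype=np.float32)
        self.max_steps = max_steps

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.queues = self.np_random.integers(0, 10, size=4).astype(np.float32)
        self.phase, self.t = 0, 0
        return self._obs(), {}

    def step(self, action):
        violation = 1.0 if action != self.phase and self.t % 5 != 0 else 0.0  # toy "min green" rule
        self.phase = int(action)
        self.queues += self.np_random.poisson(0.5, size=4).astype(np.float32)  # arrivals
        served = np.zeros(4, dtype=np.float32)
        served[self.phase % 4] = 3.0                    # departures on the green approach
        self.queues = np.maximum(self.queues - served, 0.0)
        self.t += 1
        reward = -float(self.queues.mean()) - 10.0 * violation  # queue metric + rule penalty
        return self._obs(), reward, False, self.t >= self.max_steps, {}

    def _obs(self):
        onehot = np.eye(self.action_space.n, dtype=np.float32)[self.phase]
        return np.concatenate([self.queues, onehot]).astype(np.float32)
</code></pre> <p>An environment with this interface can then be trained with off-the-shelf DQN or PPO implementations, which is essentially the comparison reported above; swapping the queue-length term for a waiting-time term changes which metric the agent minimizes.</p>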
Changing the reward function allows the algorithms to minimize different metrics during training. The developed simulation environment can be used in the future for testing other types of algorithms on the same task, and it is much less computationally expensive compared to existing solutions. The results underline the need to study other methods of traffic light control that may be integrated with real-life traffic light systems for a more optimal and safer traffic flow.<br /></span><strong><span class="fontstyle0">Conclusions. </span></strong><span class="fontstyle2">The study has provided a valuable comparison of different methods of traffic light control in a signalized urban intersection, tested different ways of rewarding models during training and reviewed the effects this has on the traffic flow. The developed environment was sufficiently simple for the purposes of the research, which is valuable due to the large computational requirements of the models themselves, but can be improved in the future by expanding it with more complex simulation features, such as various types of intersections that aren’t urban, creating a road network of intersections that would all be connected to each other, adding pedestrian crossings, etc. Future work may be done to refine the simulation environment, expand the range of considered algorithms, consider the use of models for vehicle control in addition to traffic light control.</span></p> N. I. Boyko, Y. L. Mokryk Copyright (c) 2025 N. I. Boyko, Y. L. Mokryk https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/346708 Wed, 24 Dec 2025 00:00:00 +0200 PSEUDO-RANDOM ENCODING OF STATES IN THE ALGORITHM FOR ALGEBRAIC SYNTHESIS OF A FINITE STATE MACHINE https://ric.zp.edu.ua/article/view/346009 <p><strong> <span class="fontstyle0">Context. </span></strong><span class="fontstyle2">The problem of algebraic synthesis of finite state machine with datapath of transitions is considered. The circuit of this state machine may require less hardware expenses and have a lower cost compared to circuits of other classes of digital control units. The object of research is the process of finding complete and partial solutions of the problem of algebraic synthesis of finite state machine using specialized algorithms. One of such algorithms is the previously known algorithm of complete sequential enumeration of state coding variants with a fixed set of transition operations. In the vast majority of cases, complete sequential enumeration is performed too long, which makes its practical application in the process of synthesizing of finite state machines with operational transformation of state codes impossible. This paper proposes a new approach, which consists in replacing complete sequential enumeration of state coding variants with pseudo-random coding. This allows you to increase the number of state codes that change in each iteration of the algorithm and can contribute to a faster search for satisfactory solutions to the algebraic synthesis problem.<br /></span><strong><span class="fontstyle0">Objective. </span></strong><span class="fontstyle2">Development and research of an algorithm for finding solutions to the algebraic synthesis problem of a finite state machine with datapath of transitions based on pseudo-random selection of state codes.<br /></span><span class="fontstyle0"><strong>Method.</strong> </span><span class="fontstyle2">The research is based on the structure of finite state machine with datapath of transitions. 
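<p>The two code-assignment strategies compared in this article can be illustrated with a small Python sketch (the authors note that their own implementation is also in Python). Here states receive binary codes, each transition is checked against a small set of candidate operations on codes, and the score counts transitions that no operation can implement; the state set, transitions, and operation set below are hypothetical placeholders, not the paper's abstract control algorithm.</p> <pre><code>
import itertools
import random

# Hypothetical placeholders: 3-bit codes, 4 states, a few transitions,
# and a small set of candidate arithmetic/logic operations on state codes.
R, STATES = 3, ["s0", "s1", "s2", "s3"]
TRANSITIONS = [("s0", "s1"), ("s1", "s2"), ("s2", "s3"), ("s3", "s0"), ("s1", "s3")]
OPS = {
    "inc":  lambda c: (c + 1) % (1 << R),
    "shl":  lambda c: (c << 1) % (1 << R),
    "xor1": lambda c: c ^ 1,
}

def unimplemented(encoding: dict) -> int:
    """Number of transitions whose target code is not reachable by any given operation."""
    return sum(
        all(op(encoding[src]) != encoding[dst] for op in OPS.values())
        for src, dst in TRANSITIONS
    )

def sequential_search(budget: int):
    """Baseline: enumerate code assignments in lexicographic order within a fixed budget."""
    best = None
    for i, codes in enumerate(itertools.permutations(range(1 << R), len(STATES))):
        if i >= budget:
            break
        score = unimplemented(dict(zip(STATES, codes)))
        best = score if best is None else min(best, score)
    return best

def pseudo_random_search(budget: int, seed: int = 1):
    """Proposed idea: draw distinct state codes pseudo-randomly within the same budget."""
    rng, best = random.Random(seed), None
    for _ in range(budget):
        codes = rng.sample(range(1 << R), len(STATES))
        score = unimplemented(dict(zip(STATES, codes)))
        best = score if best is None else min(best, score)
    return best

print(sequential_search(200), pseudo_random_search(200))
</code></pre> <p>Under a fixed iteration budget, the pseudo-random strategy samples codings spread over the whole space rather than a lexicographic prefix of it, which is the effect the authors exploit to reach better solutions in the same time.</p>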
The synthesis of the finite state machine circuit involves a mandatory stage of algebraic synthesis, the result of which is the combination of a certain way of states encoding with the assignment of arithmetic-logical operations to state machine transitions. Such combination is called the solution to the algebraic synthesis problem. In the general case, there are many solutions for a given finite state machine, each of which can be either complete (when operations are mapped to all transitions) or partial (when part of transitions cannot be implemented using any of the given operations). The more transitions are implemented by given operations, the less hardware expenses will be required to implement the state machine circuit and the better solution found. The search for the best solution requires consideration of a large number of possible variants of states encoding. The paper includes a modification of a previously known algorithm, which consists in replacing the complete sequential enumeration of variants of states encoding with pseudorandom code generation. Both algorithms were implemented in the form of software using the Python language and tested on the example of a finite state machine that implements an abstract control algorithm. In the course of the experiments, it was investigated which of the algorithms would find the best solution to algebraic synthesis problem in a fixed time. The experiments were repeated for different sets of transition operations. The purpose of the experiments was to evaluate which of state code assignment strategies is more effective: sequential enumeration of state codes or their pseudo-random generation.<br /></span><strong><span class="fontstyle0">Results. </span></strong><span class="fontstyle2">Using the example of an abstract control algorithm, it is demonstrated that in general, pseudo-random assignment of state codes allows finding better solutions to the algebraic synthesis problem in the same time than sequential enumeration of state codes. Factors such as computer speed or the method of pseudo-random generation of state codes do not have a significant impact on the results of the experiments. The advantage of pseudo-random generation of state codes is preserved when using different sets of transition operations.<br /></span><span class="fontstyle0"><strong>Conclusions.</strong> </span><span class="fontstyle2">The basis of the algebraic synthesis of finite state machine with datapath of transitions is an algorithm for finding solutions to the algebraic synthesis problem. The article proposes an algorithm for finding such solutions based on pseudo-random encoding of finite state machine states. The software implementation of this algorithm has proven that such approach is generally better than sequential enumeration for state encoding variants, since it allows finding better solutions (solutions with fewer operationally unimplemented transitions) in the same time. The pseudo-random assignment of state codes can be the basis of future algorithms for the algebraic synthesis of finite state machines.</span></p> R. M. Babakov, A. A. Barkalov, L. A. Titarenko Copyright (c) 2025 R. M. Babakov, A. A. Barkalov, L. A. Titarenko https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/346009 Wed, 24 Dec 2025 00:00:00 +0200 A METHOD FOR DETERMINING THE FUZZY DISCRETE FRÉCHET DISTANCE https://ric.zp.edu.ua/article/view/346012 <p><strong> <span class="fontstyle0">Context. 
</span></strong><span class="fontstyle2">The article addresses the problem of image similarity assessment based on the Fréchet distance metric and its modifications. In this context, images are approximated by polygonal curves. The problem arises from the need to quantitatively evaluate image similarity for tasks such as image generation, clustering, and recognition. Quantitative assessment of the proximity of biomedical images supports decision-making in automated diagnostic systems. The object of the study is the process of image similarity evaluation. The subject of the study is the Fréchet distance metric and its modifications.<br></span><span class="fontstyle0"><strong>Objective.</strong> </span><span class="fontstyle2">To develop a method for determining the fuzzy discrete Fréchet distance, to evaluate the computational complexity of the proposed method, to implement the algorithm for determining the fuzzy discrete Fréchet distance in software, and to conduct computational experiments to evaluate the fuzzy discrete Fréchet distance between polygons.<br></span><span class="fontstyle0"><strong>Method</strong>. </span><span class="fontstyle2">The article presents a method for determining the fuzzy discrete Fréchet distance based on the fuzzy Fréchet metric between polygonal curves. The fuzzy Fréchet metric is grounded in the classical Fréchet distance defined on the space of parameterized curves. The required approximation for practical applications is achieved through the discretization of the fuzzy Fréchet metric. The developed method estimates the fuzzy discrete Fréchet distance between polygonal curves by adapting the algorithm for computing the classical discrete Fréchet distance.<br></span><strong><span class="fontstyle0">Results. </span></strong><span class="fontstyle2">The computer experiments were conducted on a set of predefined regions approximated by polygonal curves. Based on the proposed method, an algorithm was developed to evaluate the discrete fuzzy Fréchet distance. The developed algorithm exhibits low computational complexity, equal to the product of the discretized segments of the polygonal curves: </span><span class="fontstyle3">O</span><span class="fontstyle2">(</span><span class="fontstyle3">Cm</span><span class="fontstyle2">·</span><span class="fontstyle3">n</span><span class="fontstyle2">). This enables the estimation of the discrete Fréchet distance with a specified similarity threshold. The software implementation of the method is intended to be integrated into an automatic medical diagnostic system.<br></span><strong><span class="fontstyle0">Conclusions. </span></strong><span class="fontstyle2">The results obtained in the study allow recommending the developed method for evaluating image similarity based on the fuzzy discrete Fréchet distance for broad application in computer vision systems, including image generation, clustering, and recognition.</span></p> O. M. Berezsky, M. O. Berezkyi, M. M Zarichnyi Copyright (c) 2025 O. M. Berezsky, M. O. Berezkyi, M. M Zarichnyi https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/346012 Wed, 24 Dec 2025 00:00:00 +0200 ABOUT SPECIAL CASES OF LAGRANGIAN INTERSTRIPATION OF APPROXIMATIONS OF FUNCTIONS OF TWO VARIABLES https://ric.zp.edu.ua/article/view/346064 <p><strong> <span class="fontstyle0">Context. 
</span></strong><span class="fontstyle2">The problem of approximating the values of continuous functions of two variables based on known information about them on stripes, the boundaries of which are parallel to the coordinate axes, is considered. The object of the study is the process of approximating the values of functions based on incomplete information about them, which is given on the system of stripes.<br /></span><strong><span class="fontstyle0">Objective. </span></strong><span class="fontstyle2">The goal of the work is the review of information operators of Lagrangian interstripation and features of the construction of information approximation operators for some cases of the mutual arrangement of stripes in some region, which allow to significantly simplify the calculation of approximate values of the function in unknown subregions of the region.<br /></span><span class="fontstyle0"><strong>Method.</strong> </span><span class="fontstyle2">Methods for approximating the values of continuous functions of two variables with incomplete information about them on some limited area are proposed. Information about the function is known only on a system of stripes limited by straight lines parallel to the coordinate axes. A method for approximating the values of continuous functions of two variables, information about which is known on two stripes, as a result of union of which only some rectangular subregion remains unknown in the region, is proposed. A method for approximating the values of continuous functions of two variables, information about which is known on three stripes, as a result of union of which only some rectangular subregion remains unknown in the region, is proposed. A method for approximating the values of continuous functions of two variables, information about which is known on four stripes, as a result of union of which only some rectangular subregion remains unknown in the region, is proposed. A method for approximating the values of continuous functions of two variables, the information about which is known on two stripes, as a result of union of which four rectangular subregions remain unknown in the region, is proposed. For all the considered cases, approximation operators are given that allow calculating the approximate form of the function in the unknown subregions in the analytical form.<br /></span><span class="fontstyle0"><strong>Results.</strong> </span><span class="fontstyle2">The information operators of Lagrangian interstripation are implemented programmatically and investigated in problems of approximating the values of functions of two variables from known information about them on the systems of stripes.<br /></span><strong><span class="fontstyle0">Conclusions. </span></strong><span class="fontstyle2">The experiments confirmed the accuracy of approximation of the values of continuous functions of two variables<br />of the proposed information interstripation operators for different systems of stripes. Approximation operators are given for special cases of the location of stripes in the region, the difference of which from the information interstripation operators of the general form lies in the significant simplification of the approximation operators without losing the accuracy of the approximation with a smaller number of arithmetic operations, which can be a decisive factor in some cases. 
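<p>As a worked illustration of the simplest case described above (the function is known everywhere except on a single rectangular subregion [a, b] x [c, d], so its values on the four bounding lines are available), a standard two-direction linear Lagrange blending operator of Coons type reconstructs the interior from those boundary traces with a few arithmetic operations per point. This generic textbook operator is shown for orientation only; it is not necessarily the paper's exact formula.</p> <pre><code>
import math

def blend_rectangle(f, a, b, c, d):
    """Coons-type linear Lagrange blending: approximates f inside [a,b] x [c,d]
    from its (known) values on the four bounding lines x=a, x=b, y=c, y=d."""
    def approx(x, y):
        u, v = (x - a) / (b - a), (y - c) / (d - c)
        return ((1 - u) * f(a, y) + u * f(b, y)                  # blend across x
                + (1 - v) * f(x, c) + v * f(x, d)                # blend across y
                - ((1 - u) * (1 - v) * f(a, c) + u * (1 - v) * f(b, c)
                   + (1 - u) * v * f(a, d) + u * v * f(b, d)))   # subtract doubly counted corners
    return approx

# Quick check on a smooth test function (its boundary values stand in for the "known" strip data):
f = lambda x, y: math.sin(x) * math.cos(y)
approx = blend_rectangle(f, a=1.0, b=2.0, c=0.0, d=1.5)
print(abs(approx(1.4, 0.7) - f(1.4, 0.7)))   # small but nonzero error inside the unknown rectangle
</code></pre> <p>The operator is exact on the rectangle boundary and for any function of the form g(x) + h(y), which is why such simplified special-case formulas can match the general operators in accuracy while using fewer arithmetic operations.</p>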
Prospects for further research lie in the application of the proposed information operators to problems of digital image processing, the processing of seismic mineral exploration data and remote sensing data, etc.</span></p> O. Slavik Copyright (c) 2025 O. Slavik https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/346064 Wed, 24 Dec 2025 00:00:00 +0200 OPTIMIZATION OF PERMANENT DECOMPOSITION PROCEDURES USING PARALLELIZATION ALGORITHM https://ric.zp.edu.ua/article/view/346071 <p><strong> <span class="fontstyle0">Context. </span></strong><span class="fontstyle2">The problem of efficiently finding all permutations of a list of </span><span class="fontstyle3">N </span><span class="fontstyle2">elements is a key problem in many areas of computer science, such as combinatorics, optimization, cryptography, and machine learning. The object of the study was to analyze the procedure of permanent decomposition and propose an algorithm for its parallelization using the modern thread-handling features of C#.<br /></span><strong><span class="fontstyle0">Objective. </span></strong><span class="fontstyle2">The goal of the work is the creation of an algorithm for parallelizing the generation of permutations using permanent decomposition processes.<br /></span><strong><span class="fontstyle0">Method. </span></strong><span class="fontstyle2">The main research method is the comparison of various algorithms with the proposed parallelized algorithm, taking into account such criteria as accuracy and speed. Scientific works [10, 9, 17] present algorithms, including the regular permanent decomposition algorithm and the Johnson-Trotter algorithm. The Johnson-Trotter algorithm is one of the most effective, so it was taken as a benchmark.<br />It is worth mentioning that every parallelization has its own disadvantages, including the additional resources needed for data synchronization between threads. This overhead can be minimized using both the technical capabilities of modern programming languages and optimization of the algorithm itself.<br /></span><strong><span class="fontstyle0">Results. </span></strong><span class="fontstyle2">The developed parallelized algorithm has improved the performance of the regular permanent decomposition algorithm for solving the problem of finding all permutations.<br /></span><strong><span class="fontstyle0">Conclusions. </span></strong><span class="fontstyle2">The conducted experiments have confirmed that the proposed parallelized version of the algorithm is better from a performance standpoint than the regular one. Prospects for further research may include the application of the parallelized version of the algorithm to practical tasks and a comparison of the results.</span></p> Y. V. Turbal, A. Y. Moroziuk Copyright (c) 2025 Y. V. Turbal, A. Y. Moroziuk https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/346071 Wed, 24 Dec 2025 00:00:00 +0200 THEORETICAL FOUNDATIONS OF THE STRUCTURE OF MULTI-ANTENNA RADIO DIRECTION FINDERS OPTIMISATION FOR DETERMINING THE STOCHASTIC SIGNAL SOURCES POSITION https://ric.zp.edu.ua/article/view/345769 <p><span class="fontstyle0"><strong>Context</strong>. </span><span class="fontstyle2">The relevance of the topic lies in the need to improve radio direction finders to increase their accuracy, resistance to interference, and adaptation to changing operating conditions.
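<p>Returning to the permutation-generation study above (OPTIMIZATION OF PERMANENT DECOMPOSITION PROCEDURES USING PARALLELIZATION ALGORITHM): the parallelization idea can be conveyed by partitioning the permutations by their first element, so that each block corresponds to one cofactor of a permanent-style expansion and is produced by a separate worker. The authors work with C# threads; the sketch below is our own Python illustration of the partition-by-prefix idea only, not their permanent-decomposition code.</p> <pre><code>
from concurrent.futures import ProcessPoolExecutor
from itertools import permutations

def block(args):
    """All permutations that start with a fixed element (one 'cofactor' of the expansion)."""
    first, rest = args
    return [(first, *p) for p in permutations(rest)]

def parallel_permutations(items, workers=4):
    """Partition the search space by the first element and expand each block in parallel.
    Assumes the items are distinct."""
    tasks = [(x, tuple(y for y in items if y != x)) for x in items]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(block, tasks)
    return [p for part in results for p in part]

if __name__ == "__main__":
    perms = parallel_permutations([1, 2, 3, 4, 5])
    print(len(perms))   # 120 = 5!
</code></pre> <p>The synchronization overhead mentioned in the Method shows up here as the cost of returning each block to the parent process; for small N a serial generator is faster, which is why timing experiments of the kind reported in the paper are needed.</p>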
Modern scientific achievements require the development of methods for statistical synthesis and analysis of stochastic signal processing in multi-antenna systems, which will make it possible to take into account the uncertainty of real conditions. It is important to expand the capabilities of such systems for use in radar, radio navigation, communications and other industries. This will facilitate the creation of new approaches for direction finding of unknown signal sources in complex operating scenarios.<br></span><span class="fontstyle0"><strong>Objective.</strong> </span><span class="fontstyle2">The goal of the study is to improve the accuracy of measuring the angular position of sources of stochastic radio signals.<br></span><span class="fontstyle0"><strong>Method</strong>. </span><span class="fontstyle2">The approach is based on the statistical theory of optimization of radio remote sensing and radar systems. Signal and noise models are constructed for stochastic signal sources, and the likelihood functional in the spectral domain is formulated, taking into account the structure of inverse correlation matrices. The Cramer-Rao inequality is used to determine the limiting errors of estimation of the angular position of the radio source.<br></span><span class="fontstyle0"><strong>Results</strong>. </span><span class="fontstyle2">For the first time the approach to statistical optimization of the structure of multi-antenna radio systems for direction finding of stochastic radiation sources is theoretically justified, making it possible to take into account the spatial orientation, antenna array geometry and radiation pattern. An optimal method of processing the observation equations for estimating the angular position of stochastic signal sources is constructed. A generalized structure of a single-antenna direction finder containing a matched filter, a decoherence filter and a digital calculator is proposed. It is proved that the use of decorrelating processing makes it possible to increase the estimation accuracy by increasing the number of independent signal samples. Analytical expressions for estimation and limiting errors, which take into account the spectrum width and directional pattern parameters, are obtained.<br></span><span class="fontstyle0"><strong>Conclusions</strong>. </span><span class="fontstyle2">This paper presents the latest theoretical foundations for the synthesis of radio direction finders of arbitrary configuration, which take into account the variety of radiation pattern shapes, spatial location and orientation of direction finders. The developed models of signals and noise using the maximum likelihood function criterion for the first time allow solving optimisation problems of synthesis with consideration of the physical content of correlation matrices. The obtained results are confirmed by solving the problem of measuring the radiation source angular position, which proves the effectiveness of the proposed approaches.</span> </p> S. S. Zhyla, E. O. Tserne, O. V. Zhyla Copyright (c) 2025 S. S. Zhyla, E. O. Tserne, O. V.
Zhyla https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/345769 Wed, 24 Dec 2025 00:00:00 +0200 USE OF THE PARAMETERS OF THE LAW OF DISTRIBUTION OF THE MEASUREMENT ERRORS OF THE PULSE OXIMETER TO SELECT THE SENSOR SETTINGS https://ric.zp.edu.ua/article/view/345786 <p><strong> <span class="fontstyle0">Context </span></strong><span class="fontstyle2">is due to the need to develop a methodology for optimising the parameters of pulse oximeter sensor settings based on the practical determination of the accuracy of heart rate and blood oxygen saturation measurements, which, unlike existing methodologies, does not require analysis of the amplitude of the LED current.<br>A pulse oximeter is one of the sensors that monitor the patient’s vital signs, heart rate and blood oxygen saturation in particular. These indicators are determined based on the analysis of the values of the variable and constant components of the current of the red and infrared LEDs of the pulse oximeter sensor. Therefore, the accuracy of determining vital signs depends on the correct choice of the brightness and duration of the LEDs’ radiation. It is possible to select the current and duration of the LEDs’ radiation, as well as the ADC parameters of the sensor, using software. In this case, the final conclusion regarding the correctness of the selected sensor settings is made based on the practical determination of the accuracy of heart rate and blood oxygen saturation measurements.<br></span><span class="fontstyle0"><strong>The object is </strong></span><span class="fontstyle2">to develop a methodology for assessing the correctness of the pulse oximeter sensor settings based on the analysis of the stationarity of errors in heart rate and blood oxygen saturation measurements.<br></span><span class="fontstyle0"><strong>Method.</strong> </span><span class="fontstyle2">An experimental study of the accuracy of heart rate and blood oxygen saturation measurements was carried out by statistical analysis of the measurement errors of the developed pulse oximeter model.<br></span><strong><span class="fontstyle0">Results. </span></strong><span class="fontstyle2">The practical application of the proposed methodology for determining the optimal parameters of the pulse oximeter sensor settings was tested using the example of heart rate measurements.<br></span><strong><span class="fontstyle0">Conclusion. </span></strong><span class="fontstyle2">A methodology has been developed to assess the correctness of the choice of sensor setting parameters based on analysing the stationarity of errors in measuring heart rate and oxygen saturation in the patient’s blood. With the help of the developed methodology, the optimal setting parameters of the MAX30102 sensor of the pulse oximeter developed based on the ESP32 board were selected, which ensures the minimum error in measuring heart rate and blood oxygen saturation.</span> </p> T. A. Vakaliuk, O. V. Andreiev, T. M. Nikitchuk, O.L. Korenivska, S. M. Nikitchuk Copyright (c) 2025 T. A. Vakaliuk, O. V. Andreiev, T. M. Nikitchuk, O.L. Korenivska, S. M. Nikitchuk https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/345786 Wed, 24 Dec 2025 00:00:00 +0200 PHOTOGRAMMETRIC MOTION CAPTURE SUBSYSTEM FOR CHANGE OF BODY POSITION ANALYSIS IN THE FRONTAL AND SAGITTAL PLANES https://ric.zp.edu.ua/article/view/346389 <p><strong> <span class="fontstyle0">Context.
</span></strong><span class="fontstyle2">The increase in other orthopedic injuries, particularly among military personnel, requires new innovative solutions to assess posture changes and monitor rehabilitation effectiveness. Existing systems have limitations in terms of portability, cost, and flexibility of use, which necessitates the development of hybrid systems that combine computer vision and sensory analysis methods.<br /></span><strong><span class="fontstyle0">Objective. </span></strong><span class="fontstyle2">To assess the effectiveness of combining non-contact computer vision and accelerometric sensors for detecting changes in human posture under different lighting, background, and movement speeds.<br /></span><strong><span class="fontstyle0">Method. </span></strong><span class="fontstyle2">The study implemented a photogrammetric subsystem that includes MediaPipe Holistic for markerless tracking of key body points and WitMotion WT9011DCL-BT50 accelerometers for analyzing inertial motion parameters. The system model was built in IDEF0 notation. The accuracy was assessed by comparing the obtained values of the blade inclination angle and the asymmetry coefficient with the specified norms.<br /></span><strong><span class="fontstyle0">Results. </span></strong><span class="fontstyle2">The combined use of visual and sensory data made it possible to reduce the error to 5.05% under normal conditions and ensure the stability of the results under conditions of changes in the external environment. Image modification (contrast, noise filtering) increased the accuracy of computer vision. Threshold values of the asymmetry coefficient corresponding to normal, mild and severe postural disorders were determined.<br /></span><strong><span class="fontstyle0">Conclusions. </span></strong><span class="fontstyle2">The proposed system demonstrates high potential effectiveness in telemedical rehabilitation support for patients with musculoskeletal disorders. Its practical significance lies in the creation of an affordable, portable, and accurate diagnostic and monitoring tool suitable for further integration into personalized medicine systems with built-in artificial intelligence modules.</span></p> O. Y. Barkovska, A. A. Kovalenko, V. O. Diachenko, L. D. Bukharova, V. Y. Korobko Copyright (c) 2025 O. Y. Barkovska, A. A. Kovalenko, V. O. Diachenko, L. D. Bukharova, V. Y. Korobko https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/346389 Wed, 24 Dec 2025 00:00:00 +0200 USE OF GENETIC ALGORITHMS IN ADAPTIVE COMPILERS FOR CROSS-PLATFORM OPTIMIZATION https://ric.zp.edu.ua/article/view/346565 <p><span class="fontstyle0"><strong>Context.</strong> </span><span class="fontstyle2">Modern software is developed under conditions of continuously increasing complexity of computing systems. Today, developers must consider a vast diversity of platforms, ranging from resource-constrained mobile devices to servers with highperformance processors and specialized architectures such as GPUs, FPGAs and even quantum computers. This heterogeneity requires software to operate efficiently across various hardware platforms. However, portability remains one of the most challenging tasks, especially when high performance is required. One of the promising directions to address this problem is the use of adaptive compilers that can automatically optimize code for different architectures. 
This approach allows developers to focus on the functional part of the software, minimizing the effort spent on optimization and configuration. Genetic algorithms (GAs) play a special role among the methods used to create adaptive compilers. These are powerful evolutionary techniques that allow finding optimal solutions in complex and multidimensional parameter spaces.<br /></span><strong><span class="fontstyle0">Objective. </span></strong><span class="fontstyle2">The objective of this research is to apply genetic algorithms in the process of adaptive compilation to enable automatic optimization of software across different hardware platforms.<br /></span><strong><span class="fontstyle0">Method. </span></strong><span class="fontstyle2">The approach is based on genetic algorithms to automate the compilation process. The key stages include: population initialization – creation of an initial set of compilation parameters; fitness function evaluation – assessment of the efficiency of each parameter combination; evolutionary operations – applying crossover, mutation and selection to generate the next generation of parameters; termination criteria – stopping the iterative process upon reaching an optimal result or stabilization of metrics.<br /></span><strong><span class="fontstyle0">Results. </span></strong><span class="fontstyle2">The developed algorithm was implemented in Python using the numpy, multiprocessing and subprocess libraries.<br />Performance evaluation of the algorithm was carried out using execution time, energy consumption and memory usage metrics.<br /></span><strong><span class="fontstyle0">Conclusions. </span></strong><span class="fontstyle2">The scientific novelty of the study lies in the development of: an innovative approach to automatic compilation parameter optimization based on genetic algorithms; a method for dynamic selection of optimization strategies based on performance metrics for different architectures; integration of GAs with modern compilers such as LLVM for automatic analysis of intermediate representation and code optimization; and methods for applying adaptive compilers to solve cross-platform optimization problems. The practical significance is determined by the use of genetic algorithms in adaptive compilers, which significantly improves the efficiency of the compilation process by automating the selection of optimal parameters for various architectures. The proposed approach can be successfully applied in the fields of mobile computing, cloud technologies and high-performance systems.</span></p> M. G. Berdnyk, I. P. Starodubskyi Copyright (c) 2025 M. G. Berdnyk, I. P. Starodubskyi https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/346565 Wed, 24 Dec 2025 00:00:00 +0200 ANALYSIS OF PROCEDURES FOR VOICE SIGNAL NORMALIZATION AND SEGMENTATION IN INFORMATION SYSTEMS https://ric.zp.edu.ua/article/view/346581 <p><strong> <span class="fontstyle0">Context. </span></strong><span class="fontstyle2">The current task of evaluating formant data (formant frequencies, their spectral density level, amplitude-frequency spectrum envelope, formant frequency spectrum width) in voice authentication systems is considered. The object of the study is the process of digital preprocessing of the voice signal when extracting formant data.<br /></span><strong><span class="fontstyle0">Objective. 
</span></strong><span class="fontstyle2">Evaluation of the effectiveness of traditional procedures for digital preprocessing of a user voice signal and development of proposals for improving the quality of formant data extraction.<br /></span><strong><span class="fontstyle0">Method. </span></strong><span class="fontstyle2">A mathematical model for extracting formant data from an experimental voice signal has been developed to study the influence of normalization and segmentation procedures on the quality of the resulting estimates. By modeling the process of extracting formant data, the results of digital processing of normalized and non-normalized voice signals are compared. The influence of the processed frame duration of the experimental voice signal on the quality of the formant frequencies assessment is estimated. The results are obtained for the experimental phoneme and morpheme.<br /></span><strong><span class="fontstyle0">Results. </span></strong><span class="fontstyle2">The obtained results show that when processing a voice signal with a sufficient signal-to-noise ratio, normalization procedures are not mandatory when extracting formant data. Moreover, normalization leads to a less accurate measurement of the spectrum width of formant frequencies. It is also unacceptable to use a processed frame duration of less than 40 ms. These results allow us to modify the traditional method of voice signal preprocessing. The use of the modeling method in the study of the experimental voice signal confirms the reliability of the results obtained.<br /></span><span class="fontstyle0"><strong>Conclusions.</strong> </span><span class="fontstyle2">The scientific novelty of the research results lies in the modification of the voice signal preprocessing methodology in authentication systems. Eliminating normalization procedures at high signal-to-noise ratios of the voice signal, which occurs in user authentication systems, makes it possible to increase the speed of formant data extraction and more accurately estimate the width of the formant frequency spectrum. Selecting a frame duration of at least 40 ms for the processed signal significantly improves the accuracy of formant frequency determination. Otherwise, the estimates of the formant frequencies will be high. Moreover, when processing phonemes, the processed voice signal cannot be divided into frames. Practical application of research results allows to increase the efficiency and accuracy of the formant data generation. Prospects for further research may be studies of the influence of normalization and framing procedures on other elements of a template of the authentication system user.</span></p> M. S. Pastushenko, О. М. Pastushenko, T. А. Faizulaiev Copyright (c) 2025 M. S. Pastushenko, О. М. Pastushenko, T. А. Faizulaiev https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/346581 Wed, 24 Dec 2025 00:00:00 +0200 AN ANALYTICAL APPROACH TO MULTI-CRITERIA CHOOSING TECHNOLOGICAL SCHEME FOR INFORMATION PROCESSING https://ric.zp.edu.ua/article/view/346665 <p><strong> <span class="fontstyle0">Context. </span></strong><span class="fontstyle2">Today, effective information processing is critically important for making strategic, tactical, and operational management decisions. The increasing volumes of information and the need for its rapid analysis necessitate the development and implementation of new methods for multi-criteria choosing technological schemes for information processing. 
The application of analytical approaches, such as the Ordered Weighted Averaging (OWA) method, allows for the improvement of the quality of final information products, which is relevant for analysis and research in various fields.<br /></span><span class="fontstyle0"><strong>Objective.</strong> </span><span class="fontstyle2">The aim of the research is to develop an analytical approach to the multi-criteria selection of an information processing technological scheme using the OWA operator.<br /></span><strong><span class="fontstyle0">Method. </span></strong><span class="fontstyle2">The paper uses an analytical approach based on the multi-criteria decision-making method. Specifically, the Ordered Weighted Averaging (OWA) operator is applied, which allows taking into account the weight coefficients of the criteria and their ranking significance to determine the optimal information processing technological scheme.<br /></span><strong><span class="fontstyle0">Results. </span></strong><span class="fontstyle2">The research results show that the application of the OWA operator makes it possible to effectively aggregate the evaluations of alternatives and to select the technological scheme that best meets the specified criteria for the quality of the end information product. The conducted experiments confirmed the effectiveness of the proposed approach in evaluating alternative information processing schemes.<br /></span><span class="fontstyle0"><strong>Conclusions.</strong> </span><span class="fontstyle2">The proposed approach to multi-criteria selection of a technological scheme for processing intelligence data allows for improved quality of the end information product and considers the importance of various criteria. Further research could be focused on the development of automated decision support systems taking into account the metadata of intelligence data.</span></p> М. Popov, О. Zaitsev, S. Stefantsev Copyright (c) 2025 М. Popov, О. Zaitsev, S. Stefantsev https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/346665 Wed, 24 Dec 2025 00:00:00 +0200 MODIFY SHADERS AND RENDER TEXTURES ON CURVED SURFACES https://ric.zp.edu.ua/article/view/346677 <p><strong> <span class="fontstyle0">Context. </span></strong><span class="fontstyle2">The display of curvilinear surfaces on flat screens is a complex task. The development of an interface for such surfaces is a relevant task that requires the solution of numerous issues. This paper presents an approach to UI development for curvilinear surfaces and shader modifications for the creation of realistic landscape elements. The object of the research is the development of an interface system based on a custom raycaster to ensure interactivity and create an immersive effect within the game environment.<br /></span><strong><span class="fontstyle0">Objective. </span></strong><span class="fontstyle2">The primary objective of this research is to create, optimise and adapt shaders on curved surfaces to achieve more efficient rendering with high-quality visualisation.<br /></span><strong><span class="fontstyle0">Method</span></strong><span class="fontstyle2"><strong>.</strong> The development of user interfaces (UI) for curvilinear surfaces requires consideration of geometric parameters. To resolve this issue, a custom component based on BaseRaycaster was developed, enabling the computation of the intersection between the camera ray and the physical surface.
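<p>The ray-intersection step described in this Method can be sketched independently of the game engine. A minimal editorial illustration (in Python rather than the article's Unity/C# component; a vertical cylinder stands in for the curved canvas, and the mapping to canvas coordinates is an assumption):</p>
<pre><code>
import math

def ray_cylinder_hit(origin, direction, radius):
    """Intersect a camera ray with a vertical cylinder of given radius centred
    on the y axis (a simple stand-in for a curved canvas).  Returns the hit
    point or None."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    a = dx * dx + dz * dz
    b = 2.0 * (ox * dx + oz * dz)
    c = ox * ox + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc &lt; 0.0:
        return None
    roots = [(-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)]
    ts = [t for t in roots if t &gt; 0.0]
    if not ts:
        return None
    t = min(ts)                       # nearest intersection in front of the camera
    return (ox + t * dx, oy + t * dy, oz + t * dz)

def to_canvas_uv(point, radius, height):
    """Map the 3D hit point to 2D canvas coordinates: u from the angle around
    the cylinder, v from the height along its axis."""
    x, y, z = point
    u = (math.atan2(z, x) + math.pi) / (2 * math.pi)
    v = (y + height / 2) / height
    return u, v

hit = ray_cylinder_hit(origin=(0, 0, -5), direction=(0, 0, 1), radius=2.0)
if hit:
    print(to_canvas_uv(hit, radius=2.0, height=3.0))
</code></pre>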
To provide correct and efficient interaction with the canvas, this component identifies the intersection point of the camera ray with the canvas surface. The implementation involves an algorithm for detecting the camera’s ray intersections with colliders, using a mathematical model to process the detected elements and accounting for their depth to ensure proper interaction.<br /></span><strong><span class="fontstyle0">Results. </span></strong><span class="fontstyle2">This approach facilitates the creation of interfaces on arbitrary static curved surfaces that are applicable in various gaming and interactive scenarios.<br /></span><strong><span class="fontstyle0">Conclusions</span></strong><span class="fontstyle2"><strong>.</strong> The use of splines and modified shaders ensures the placement of text on curvilinear surfaces and the natural arrangement of roads and other landscape elements according to the terrain contours. This approach is important for developing open-world games or games with complex geometry, where the UI on curvilinear surfaces appears natural and integrated into the environment.</span></p> I. Suhoniak, G. Marchuk, O. Oleksiuk Copyright (c) 2025 I. Suhoniak, G. Marchuk, O. Oleksiuk https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/346677 Wed, 24 Dec 2025 00:00:00 +0200 MECHANISMS OF SCHEMATIC MODELING BASED ON VECTOR LOGIC https://ric.zp.edu.ua/article/view/347219 <p><strong> <span class="fontstyle0">Context. </span></strong><span class="fontstyle2">This paper addresses issues relevant to the EDA market – reducing the cost and time of testing and verification of digital projects by synthesizing the logic vector of a digital circuit, which significantly simplifies the algorithms of good-value simulation and reduces the synthesis of the test map to three matrix operations.<br /></span><strong><span class="fontstyle0">Objective. </span></strong><span class="fontstyle2">The aim of the study is to reduce the cost and time of testing and verification of digital projects by synthesizing the logic vector of a digital circuit, which allows for simplifying the algorithm for constructing the test map to three matrix operations.<br /></span><span class="fontstyle0"><strong>Method.</strong> </span><span class="fontstyle2">The synthesis of a logic vector for a combinational circuit is proposed for good-value and fault modeling, with values treated as memory addresses, within the architecture of intelligent in-memory computing. The logic vector is the most technological, compact, and exhaustive representation of the circuit for efficiently solving all design and testing tasks. Cartesian logic is proposed as an effective intelligent mechanism for solving combinatorial problems (modeling, simulation, testing, diagnostics) using algorithms of linear computational complexity, thanks to its exponential redundancy 2<sup><em>n</em>+<em>m</em></sup>. Mechanisms are proposed for constructing a logic vector of a process or phenomenon, function or structure, based on matrix structures of Cartesian logic for solving Modeling for Simulation tasks. An engineering method of direct parallel good-value modeling of the circuit is proposed, based on the use of logic vectors of elements and truth tables.
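<p>A minimal editorial sketch (the circuit, the gate encoding, and the bit ordering are assumptions, not the article's software) of good-value modeling with logic vectors: each gate is stored as its truth-table vector, and the logic vector of the whole circuit is assembled by reading gate outputs at addresses formed from the input bits.</p>
<pre><code>
# Each gate is stored as its truth-table "logic vector": the output bit for
# every input combination, indexed by the inputs read as a binary address.
AND = (0, 0, 0, 1)
OR  = (0, 1, 1, 1)
XOR = (0, 1, 1, 0)

def gate(vec, *inputs):
    """Read one bit of a gate's logic vector at the address formed by its inputs."""
    addr = 0
    for bit in inputs:
        addr = (addr &lt;&lt; 1) | bit
    return vec[addr]

def circuit_logic_vector(n_inputs, net):
    """Good-value modeling: evaluate the circuit for every input combination
    and collect the outputs into the logic vector of the whole circuit."""
    out = []
    for addr in range(2 ** n_inputs):
        bits = [(addr &gt;&gt; (n_inputs - 1 - i)) &amp; 1 for i in range(n_inputs)]
        out.append(net(*bits))
    return tuple(out)

# hypothetical two-level circuit: y = (a AND b) XOR (b OR c)
net = lambda a, b, c: gate(XOR, gate(AND, a, b), gate(OR, b, c))
print(circuit_logic_vector(3, net))
</code></pre>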
Cartesian logic is a logic vector (matrix) obtained as the result </span><span class="fontstyle2">of modeling Cartesian logical relations between the bits of logic vectors or truth table addresses. Cartesian logic solves the following tasks: 1. Good-value modeling of a circuit logic vector without the need for a functional behavior modeling algorithm. 2. Fault modeling (construction of the test map of the logic) without a fault simulation algorithm. </span><strong><span class="fontstyle0">Results. </span></strong><span class="fontstyle2">A mathematical apparatus of Cartesian logic is proposed, represented as a logic vector (matrix), which is the result of modeling Cartesian logical relations between the bits of logic vectors or addresses of the truth table. Based on the theory of vector logic, mechanisms and software have been developed for efficiently solving design and test tasks within the in-memory computing architecture.<br /></span><span class="fontstyle0"><strong>Conclusions.</strong> </span><span class="fontstyle2">The practical value of the study lies in addressing all design and test tasks using simple models of vector logic, oriented toward economical in-memory computing based on read-write transactions and free from processor instructions. It is proposed to use the specification logic vector within the in-memory computing architecture for testing, verification, diagnostics, and operation. In-memory vector logic computing is an economical solution to many computational problems in terms of energy, time, and cost efficiency. The novelty of the study lies in the use of vector logic within the in-memory computing architecture based on read-write transactions, which reduces resource consumption in terms of time and energy. Implementing vector logic in in-memory computing makes it scalable and energy-efficient, and frees it from complex big data analysis algorithms.</span></p> V. I. Hahanov, V. I. Obrizan, D. Y. Rakhlis, H. V. Khakhanova, I. V. Hahanov, O. I. Demchenko, A. О. Voronov Copyright (c) 2025 V. I. Hahanov, V. I. Obrizan, D. Y. Rakhlis, H. V. Khakhanova, I. V. Hahanov, O. I. Demchenko, A. О. Voronov https://creativecommons.org/licenses/by-sa/4.0 https://ric.zp.edu.ua/article/view/347219 Wed, 24 Dec 2025 00:00:00 +0200