https://ota-new.donntu.edu.ua/issue/feed Scientific Papers of Donetsk National Technical University. Series: "Computer Engineering and Automation" 2025-12-02T00:00:00+02:00 Iaroslav Dorohyi yaroslav.dorohyi@donntu.edu.ua Open Journal Systems <p>The all-Ukrainian scientific collection <strong>"Scientific Papers of Donetsk National Technical University. Series: Computer Engineering and Automation"</strong> is a specialist scientific publication of Ukraine in which the results of scientific research in the field of technical sciences are published. The collection publishes articles by scientists, postgraduate and master's students of higher education institutions, as well as practicing scientists and engineers of leading enterprises, containing the results of theoretical and practical research and development in the following <strong>thematic sections</strong>:</p> <p>1. Automation of technological processes.</p> <p>2. Information technologies and telecommunications.</p> <p>3. Information and measurement systems, electronic and microprocessor devices.</p> https://ota-new.donntu.edu.ua/article/view/344511 COMPENSATION OF RANDOM DELAYS IN NETWORKED CONTROL SYSTEMS USING ADAPTIVE MPC 2025-11-25T09:41:52+02:00 Ivan Chystyk ivan.chystyk@donntu.edu.ua <p>The article addresses the pressing issue of compensating for random network-induced delays in Networked Control Systems (NCS), which arise from imperfect communication channels.
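The destabilizing effect of such delays can be sketched numerically. The following toy simulation (all parameters and function names are illustrative, not taken from the article) regulates a first-order plant with a proportional controller acting through a channel with a random measurement delay, and compares the Integral Absolute Error (IAE) against the delay-free case:

```python
import random

def simulate_iae(delay_fn, n=200, dt=0.1, kp=4.0):
    """Regulate a first-order plant x' = -x + u toward zero with
    proportional control acting on a (possibly delayed) measurement;
    return the Integral Absolute Error (IAE) of the run."""
    x = 1.0                   # initial plant state
    history = [x]             # past outputs as seen through the network
    iae = 0.0
    for _ in range(n):
        d = delay_fn()                                # delay, in steps
        meas = history[max(0, len(history) - 1 - d)]  # stale measurement
        u = -kp * meas                                # P control law
        x += dt * (-x + u)                            # Euler plant step
        history.append(x)
        iae += abs(x) * dt
    return iae

rng = random.Random(0)
iae_ideal = simulate_iae(lambda: 0)                      # perfect channel
iae_delayed = simulate_iae(lambda: rng.randint(0, 10))   # random 0..10-step delay
```

With the average delay exceeding the loop's delay margin, the delayed run oscillates and accumulates a much larger IAE than the delay-free run, which is the degradation the MPC scheme is designed to compensate.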
These stochastic delays, caused by packet queues, collisions, and variable network load, lead to system destabilization, degraded accuracy, and reduced performance. This is especially critical in high-stakes applications such as industrial automation and telemedicine. Traditional compensation methods, including PID control and deterministic approaches (e.g., the Smith predictor), prove ineffective against the random nature of these delays. The proposed solution employs Model Predictive Control (MPC), which leverages a system model to predict future behavior and optimize control actions to compensate for anticipated signal distortions. The work aims to develop and verify a compensation method based on adaptive MPC. It presents a mathematical model of an NCS that accounts for random delays in the sensor-controller and controller-actuator channels, formalizing an extended state vector and constructing an observer. Practical significance is confirmed through a MATLAB/Simulink simulation model with reproducible stochastic delays and a quantitative analysis of control performance degradation. The study proposes and analyzes a hybrid solution combining a custom state prediction algorithm, an adaptive control buffer, and tools from the MPC Toolbox. Comparative simulations demonstrate the superiority of the hybrid approach over both standard MPC tools and custom predictors in key metrics: Integral Absolute Error (IAE), settling time, control energy, and robustness to increased delays. The results show a significant improvement in system stability and performance, confirming the promise of adaptive MPC for building reliable next-generation control systems capable of operating in non-ideal communication environments.</p> 2025-12-02T00:00:00+02:00 Copyright (c) 2025 https://ota-new.donntu.edu.ua/article/view/344525 ENHANCED IMAGE STEGANOGRAPHY WITH KERAS-STEGANOGAN: A TENSORFLOW-BASED GAN 2025-11-25T11:07:47+02:00 D.Yu. Khoma dmytro.khoma@donntu.edu.ua Y.O. 
Bashkov dmytro.khoma@donntu.edu.ua <p>Image steganography is the process of embedding secret information within digital images such that the very presence of the message remains undetectable. Recent advances in deep learning, particularly in generative adversarial networks (GANs), have significantly improved both the payload capacity and perceptual quality of steganographic systems. The original SteganoGAN, implemented in PyTorch, achieved state-of-the-art performance by embedding up to 4.4 bits per pixel while maintaining strong resistance to steganalysis methods. However, the influence of the critic network on steganographic quality and learning stability remains insufficiently explored. This paper presents Keras-SteganoGAN, a TensorFlow-based reimplementation and extension of SteganoGAN, designed to systematically analyze the role of the critic in adversarial steganographic training. Two variants of the model, one incorporating a critic and one without, were trained and compared across three encoder architectures: basic convolutional, residual, and dense. Each configuration was trained over five epochs with message depths ranging from 1 to 6 bits, allowing a comprehensive study of trade-offs between payload capacity, image distortion, and decoding accuracy. Quantitative evaluation was conducted using standard image quality and steganographic metrics, including PSNR, SSIM, RS-BPP, and decoder accuracy. The results indicate that the inclusion of a critic improves perceptual quality and visual similarity at lower payloads, but its contribution diminishes as the message depth increases.
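The PSNR metric named above can be illustrated with a minimal sketch (toy pixel values, not data from the paper): lighter embedding perturbs the cover image less and therefore scores a higher PSNR.

```python
import math

def psnr(cover, stego, peak=255.0):
    """Peak Signal-to-Noise Ratio between a cover image and its stego
    version, both given as flat lists of 8-bit pixel intensities."""
    mse = sum((c - s) ** 2 for c, s in zip(cover, stego)) / len(cover)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

cover = [100, 120, 130, 140]
stego_light = [100, 121, 130, 141]   # LSB-style tweaks: small distortion
stego_heavy = [90, 140, 110, 160]    # heavier embedding: large distortion
```

Here `psnr(cover, stego_light)` is around 51 dB while the heavy variant falls to roughly 23 dB, mirroring the payload-versus-distortion trade-off the abstract describes.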
These findings provide new insights into the interaction between encoder complexity, critic dynamics, and steganographic performance, offering guidance for the design of future GAN-based steganography systems.</p> 2025-12-02T00:00:00+02:00 Copyright (c) 2025 https://ota-new.donntu.edu.ua/article/view/344520 COMPARATIVE ANALYSIS OF MODELING METHODS AND TECHNOLOGIES IN CYBERSECURITY 2025-11-25T10:31:43+02:00 Vladyslav Kravchuk yaroslav.dorohyi@donntu.edu.ua Tatiana Altukhova yaroslav.dorohyi@donntu.edu.ua Iaroslav Dorohyi yaroslav.dorohyi@donntu.edu.ua <p>The cybersecurity landscape is characterized by high complexity and dynamism, necessitating advanced modeling methods for threat analysis, risk prediction, and evaluation of protective measures. This article presents a detailed comparative analysis of traditional and contemporary modeling approaches in cybersecurity, including mathematical, logical, and hierarchical modeling, attack simulations and Breach and Attack Simulation (BAS), agent-based modeling, digital twins, as well as methods based on machine learning, deep learning, game theory, graph structures, and large language models (LLMs). Each method is examined in terms of its operational principles, key advantages, limitations, and practical applications. Particular attention is given to the synergy and complementarity of these approaches, which are critical for developing comprehensive and adaptive cybersecurity systems. Traditional methods, such as mathematical modeling, provide a formal basis for analysis but may oversimplify real-world scenarios. Contemporary approaches, including machine learning and digital twins, enable the processing of large data volumes and modeling of complex dynamic interactions, though they require significant computational resources and accurate data. 
Game theory and graph models offer strategic and contextual analysis, while large language models open new possibilities for automating threat analysis, despite their reliability limitations. The integration of these methods forms the foundation for hybrid solutions that mitigate the shortcomings of individual approaches, enhancing overall protection efficacy. The article also highlights challenges related to computational complexity, uncertainty, and ethical considerations, and outlines future directions, such as improving explainable AI, resilience to adversarial attacks, and simulation realism.</p> 2025-12-02T00:00:00+02:00 Copyright (c) 2025 https://ota-new.donntu.edu.ua/article/view/344522 COMPARATIVE ANALYSIS OF METHODS AND TECHNOLOGIES FOR PENETRATION TESTING MODELING 2025-11-25T10:59:33+02:00 Vitaly Kravchuk naukovipracidntu@gmail.com Iaroslav Dorohyi naukovipracidntu@gmail.com <p>The article conducts a detailed comparative analysis of contemporary methods and technologies for modeling penetration testing (pentesting), a fundamental aspect of ensuring cybersecurity in the digital world. The authors trace the evolution of these approaches: from classical manual techniques that require high expertise from specialists to innovative automated systems integrating artificial intelligence (AI) and machine learning (ML). Specifically, various vulnerability simulation models are compared, such as the popular Metasploit Framework for exploit emulation, virtualized environments based on VirtualBox, VMware, or containerization with Docker, which enable the creation of isolated test networks for simulating real attacks. Special attention is given to hybrid technologies that combine traditional tools with AI algorithms for attack prediction and automation, for example, using libraries like TensorFlow, PyTorch, or Scapy packages for generating network traffic. 
The analysis is performed based on key efficiency criteria: accuracy in vulnerability detection (considering false positives and false negatives), test execution speed, scalability for large systems, computational resource costs, error rates, and ease of integration. The advantages of each method are discussed (for instance, manual methods provide deep contextual understanding, while AI approaches enable real-time processing of large data volumes), as are their disadvantages, such as vulnerability to evolving threats or the need for continuous model training. Particular emphasis is placed on adapting these technologies to modern scenarios, including cloud platforms (AWS, Microsoft Azure, Google Cloud), Internet of Things (IoT devices with limited resources), and mobile applications. The research is grounded in empirical data from tests on standardized models, such as the OWASP Top 10 for web vulnerabilities and the NIST Cybersecurity Framework, where it is shown that hybrid methods increase overall efficiency by 30-50% compared to traditional ones, reducing vulnerability detection time and minimizing risks. The authors offer practical recommendations for selecting optimal technologies for different types of organizations, from small businesses to large corporations, considering ethical aspects (e.g., adherence to ethical hacking principles), regulatory requirements (GDPR for data protection, ISO 27001 for information security management), and potential risks, such as unauthorized tool usage.
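The detection-accuracy criterion above (false positives and false negatives) can be made concrete with a small sketch; the vulnerability names and numbers are hypothetical, chosen only to illustrate the computation.

```python
def detection_metrics(reported, actual):
    """Given the set of vulnerabilities a tool reported and the
    ground-truth set actually present, compute detection-quality metrics."""
    tp = len(reported & actual)      # true positives: real issues found
    fp = len(reported - actual)      # false positives: spurious findings
    fn = len(actual - reported)      # false negatives: missed issues
    precision = tp / (tp + fp) if reported else 0.0
    recall = tp / (tp + fn) if actual else 0.0
    return {"tp": tp, "fp": fp, "fn": fn,
            "precision": precision, "recall": recall}

actual = {"sqli", "xss", "csrf", "idor"}   # hypothetical ground truth
reported = {"sqli", "xss", "lfi"}          # one false positive, two misses
m = detection_metrics(reported, actual)
```

Comparing tools on precision and recall over a standardized target set is one way the efficiency criteria above can be operationalized.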
The article serves as a valuable resource for cybersecurity professionals, software developers, IT project managers, and researchers, contributing to the development of more resilient strategies for protection against cyber threats in a dynamic digital technology environment.</p> 2025-12-02T00:00:00+02:00 Copyright (c) 2025 https://ota-new.donntu.edu.ua/article/view/344513 CLASSIFICATION OF NON-INTERACTIVE KNOWLEDGE ARGUMENT PROOF SYSTEMS 2025-11-25T09:55:31+02:00 Yurii Paslavskyi mykhailo.paslavskyi@nltu.edu.ua Ihor Kroshnyi mykhailo.paslavskyi@nltu.edu.ua <p>Zero-knowledge proofs are an important cryptographic mechanism that guarantees confidentiality (the zero-knowledge property) and ensures that a false statement cannot be proved to the verifier. A popular implementation of zero-knowledge proofs is short, non-interactive proofs that can be verified quickly and do not require interaction between the parties after the initial setup. The main direction in the development of modern proof systems is a two-step construction: the first step builds a polynomial interactive oracle proof (IOP), and the second instantiates its oracles with a polynomial commitment scheme, a well-defined cryptographic method for evaluating committed polynomials. Verifying that the same coefficients are used in each linear combination requires checking both polynomial consistency and variable consistency. To construct general schemes of succinct non-interactive zero-knowledge arguments of knowledge, a polynomial IOP is used that models prover messages as polynomial oracles. All checks are proved using polynomial commitment schemes and then evaluated with zero knowledge at a point chosen by the verifier.
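The idea of checking polynomial consistency by evaluation at a verifier-chosen random point can be illustrated with a toy Schwartz-Zippel style check (a deliberately simplified sketch, not the article's actual commitment scheme; the field modulus and names are illustrative):

```python
import random

P = 2**61 - 1  # a large Mersenne prime, used as an illustrative field modulus

def evaluate(coeffs, x):
    """Evaluate a polynomial given by its coefficient list at x mod P (Horner)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def probably_equal(f, g, rng):
    """Two low-degree polynomials that agree at a random field point are
    identical with overwhelming probability (Schwartz-Zippel lemma)."""
    r = rng.randrange(P)             # the verifier's random challenge point
    return evaluate(f, r) == evaluate(g, r)

rng = random.Random(42)
f = [3, 0, 5]            # 3 + 5x^2
g = [3, 0, 5]            # same polynomial
h = [3, 1, 5]            # differs in the x coefficient
```

A single random evaluation thus replaces a coefficient-by-coefficient comparison, which is what makes commit-then-evaluate proof systems succinct.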
The soundness and confidentiality of all checks rest on three main categories of polynomial commitment schemes, namely pairing-based, inner-product-argument-based, and code-based schemes. Protocols for succinct non-interactive zero-knowledge arguments of knowledge are implemented through high-level programs (compilers), which are converted into an intermediate representation, i.e. a circuit defined by a system of constraints. The compilers in use fall into domain-specific languages, embedded domain-specific languages, and zero-knowledge virtual machines. Specialized domain-specific hardware description or programming languages offer an adapted syntax for efficiently expressing constraints in arithmetic circuits. Embedded domain-specific languages are implemented as functions in general-purpose programming languages and inherit the overhead of the host language. Zero-knowledge virtual machines process opcodes in a fetch-decode-execute cycle, replicating the computation trace of general programs and generating corresponding zero-knowledge proofs. They are compatible with existing high-level programming languages and can use the features of existing compilers. Compilers are evaluated for cross-compatibility and syntactic compatibility. In general, the biggest obstacle to using non-interactive proof libraries is the lack of documentation. Standardization can help developers compare important features across libraries and establish a more consistent performance baseline. Library documentation for these core features is implicit, and developers need to understand the underlying cryptographic techniques to choose an appropriate scheme.
Compiler options likewise lack standardization, which makes it difficult to reuse existing tools.</p> 2025-12-02T00:00:00+02:00 Copyright (c) 2025 https://ota-new.donntu.edu.ua/article/view/344519 INTELLIGENT TECHNOLOGIES FOR ANALYSIS, CLASSIFICATION AND RECOMMENDATIONS IN COLLECTION MANAGEMENT SYSTEMS 2025-11-25T10:20:00+02:00 Daniil Popereshniak spopereshnyak@gmail.com Svitlana Popereshnyak spopereshnyak@gmail.com <p>The article is devoted to the development and investigation of intelligent technologies for analysis, classification, and recommendation in collection management systems. The study addresses the challenge of organizing and processing collection data, which are characterized by rapid growth in volume and increasing heterogeneity. These factors complicate efficient storage, search, and structuring of collections. It is shown that traditional collection management systems are usually limited to isolated tasks such as basic storage or search and often fail to integrate similarity analysis, classification, and recommendation algorithms into a unified framework. The use of intelligent approaches enables overcoming these limitations and provides opportunities for building universal systems capable of handling large-scale datasets. The purpose of the research is to achieve enhanced efficiency and scalability in collection management by constructing a model of an intelligent system that integrates multimodal object representation, classification methods, duplicate detection, and recommendation mechanisms. To achieve this goal, existing approaches were analyzed, a model for representing collection objects was formalized, similarity computation techniques were developed, and an architecture combining classification and recommendation modules was proposed. Experimental evaluation demonstrated that the integration of these components ensures high classification accuracy, effective duplicate detection, and relevant personalized recommendations.
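The similarity-based duplicate detection mentioned above can be sketched with cosine similarity over feature vectors (a minimal illustration with made-up vectors and a hypothetical threshold, not the system's actual representation):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors of collection objects."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def find_duplicates(items, threshold=0.98):
    """Flag index pairs of objects whose feature vectors are near-identical."""
    pairs = []
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if cosine_similarity(items[i], items[j]) >= threshold:
                pairs.append((i, j))
    return pairs

library = [
    [0.9, 0.1, 0.0],    # object 0
    [0.9, 0.1, 0.01],   # object 1: near-duplicate of object 0
    [0.0, 1.0, 0.2],    # object 2: distinct
]
dups = find_duplicates(library)
```

The same similarity scores can also feed the recommendation module, where the nearest non-duplicate neighbors of an object become its suggested related items.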
Particular attention was paid to scalability: the system maintained a stable response time even under significant growth in the number of collection objects. The practical significance of the research lies in the universality of the proposed model, which can be applied both in private multimedia and digital collections and in corporate or scientific infrastructures, such as libraries, archives, and databases. The scientific novelty is defined by the creation of a comprehensive architecture that integrates several intelligent methods into a unified system. Future research perspectives include the application of deep learning approaches for multimodal feature processing, the improvement of recommendation algorithms, and the integration with security and access control mechanisms to ensure robustness in large-scale environments.</p> 2025-12-02T00:00:00+02:00 Copyright (c) 2025