Technology audit and production reserves <p align="justify"><strong>The aim</strong> of the «Technology audit and production reserves» journal is to publish research papers dealing with the search for opportunities to reduce costs and improve the competitiveness of products in industry. The peculiarity is that <strong>each problem is considered from two sides – the economist’s and the engineer’s</strong>, for example, in the context of forming the «price – quality» criterion, in which the first component concerns research in the field of business economics and the second concerns engineering. Research results at the intersection of these disciplines can be used in actual production to identify reserves, providing the opportunity to reduce costs and improve product competitiveness.</p> en-US <p>The consolidation and conditions for the transfer of copyright (identification of authorship) are set out in the License Agreement. In particular, the authors retain the right of authorship of their manuscript and grant the journal the right of first publication of this work under the terms of the Creative Commons CC BY license. At the same time, they retain the right to enter independently into additional agreements for the non-exclusive distribution of the work in the form in which it was published by this journal, provided that the link to the first publication of the article in this journal is preserved.</p> (Liliia Frolova) Fri, 05 Apr 2024 00:00:00 +0300 OJS 60 Exploring an LSTM-SARIMA routine for core inflation forecasting <p><em>The object of the research is core inflation forecasting. The paper investigates the performance of a novel model routine for this task. The routine aggregates 300+ components into 6 groups by the similarity of their dynamics, using an updated DTW algorithm fine-tuned for monthly time series and the K-Means algorithm for grouping. 
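The distance step of this grouping can be sketched in a few lines. This is a minimal, illustrative version: a classic dynamic-programming DTW with an absolute-difference cost and toy series instead of real inflation components; the paper's monthly-series fine-tuning and the K-Means step are not reproduced.

```python
# Classic dynamic-programming DTW distance between two series.
def dtw_distance(a, b):
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])  # local mismatch cost
            # Best of insertion, deletion, or match from the previous cells.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Toy components: pairwise DTW distances let similar dynamics be grouped.
series = {
    "bread": [1.0, 2.0, 3.0, 2.0],
    "milk":  [1.1, 2.1, 3.1, 2.1],   # nearly the same shape as "bread"
    "fuel":  [5.0, 1.0, 5.0, 1.0],   # very different dynamics
}
dist = {(x, y): dtw_distance(series[x], series[y]) for x in series for y in series}
```

On the resulting distance matrix, any standard clustering step (such as K-Means on a suitable embedding) can form the component groups.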
Then the SARIMA model extracts linear and seasonal components, followed by an LSTM model that captures non-linearities and interdependencies. This solves the problem of high-quality inflation forecasting using a disaggregated dataset.</em> <em>While standard econometric techniques focus on limited datasets consisting of just a couple of variables, the proposed methodology is able to capture a richer part of the volatility by incorporating more information. The model is compared with a large pool of alternatives, from simple ones such as Random Walk and SARIMA to ML models such as XGBoost, Random Forest and a plain LSTM. While all the data science models show decent performance, the DTW+K-Means+SARIMA+LSTM routine gives the best RMSE for 1-month-ahead and 2-month-ahead forecasts, which demonstrates the high quality of the proposed forecasting model and solves the key problem of the paper.</em> <em>This is explained by the model’s capability to capture both the linear and seasonal patterns in the data, using the SARIMA part, and the non-linear and interdependent ones, using the LSTM part. The models are fitted for the case of Ukraine, as they have been estimated on the corresponding data, and may be actively used for further inflation forecasting.</em></p> Dmytro Krukovets Copyright (c) 2024 Dmytro Krukovets Fri, 05 Apr 2024 00:00:00 +0300 Development of decision-making technology for the provision of services in project implementation <p><em>The object of research is decision-making processes regarding the provision of services within the framework of cross-border projects.</em></p> <p><em>To achieve the aim of the research, an analysis of the service provision market was first conducted, its features were revealed and problems arising in the course of its functioning were identified. The main problem is to find the optimal distribution of services between performers in the service management system. 
A mathematical model of the single- and multi-criteria optimization problem has been developed, in which the problem is decomposed into independent sub-problems. The problem is presented in the form of a linear programming problem. Various efficiency criteria for the found distributions are proposed. Depending on the number of criteria, the problem is either a single-criterion Boolean programming problem or a multi-criteria optimization problem. An iterative method for finding the optimal distribution of services has been created, and the individual methods are laid out in the form of production rules, which makes them easy to understand and allows new knowledge to be gained.</em></p> <p><em>Based on the obtained data, a decision-making technology has been developed for the distribution of service consumers between performers. Decision-making methods that allow the service provision processes to be optimized were used. A systematic approach was applied when designing the information technology. This made it possible to create an effective and problem-relevant technology that helps in making informed decisions about the distribution of services between participants of cross-border projects. A structural and functional diagram of the decision support system has been developed, and its structural elements are detailed.</em></p> <p><em>The obtained results reflect a thorough analysis of the current state of the services market and the development of an effective decision-making technology, which contributes to the optimization of work in the field of cross-border projects. 
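For a sense of scale, the single-criterion Boolean version of this distribution problem can be illustrated by exhaustive search over assignments. The cost matrix, capacity limit and names below are made-up illustrations, not data from the study; real instances would use the iterative method described above rather than brute force.

```python
from itertools import product

# Illustrative only: each service goes to exactly one performer, and a
# performer may take at most `capacity` services; minimize total cost.
costs = {  # costs[service] = cost per performer (made-up numbers)
    "s1": [4, 2, 8],
    "s2": [6, 3, 5],
    "s3": [1, 9, 7],
}
capacity = 2
services = sorted(costs)
n_perf = 3

best_cost, best_plan = float("inf"), None
for plan in product(range(n_perf), repeat=len(services)):
    if any(plan.count(p) > capacity for p in range(n_perf)):
        continue  # capacity constraint violated
    total = sum(costs[s][p] for s, p in zip(services, plan))
    if total < best_cost:
        best_cost, best_plan = total, plan
```

Each feasible `plan` is a Boolean assignment in disguise (service i assigned to performer plan[i]); the production-rule formulation in the paper replaces this exponential scan with an iterative method.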
This approach can be useful for various actors involved in the implementation and coordination of international projects.</em></p> Oksana Mulesa, Evgen Yakob, Petro Valko, Oleksandra Sviezhentseva, Dmytro Marhitych Copyright (c) 2024 Oksana Mulesa, Evgen Yakob, Petro Valko, Oleksandra Sviezhentseva, Dmytro Marhitych Tue, 09 Apr 2024 00:00:00 +0300 Designing an Internet of Things solution for monitoring vital signs <p><em>The object of study is the process of monitoring vital signs using an automated system based on an Internet of Things (IoT) solution. The study investigates and analyzes the best existing solutions for continuous monitoring of human health. The research is important in the context of a possible pandemic and general health monitoring.</em></p> <p><em>An IoT model of a solution for monitoring and analyzing vital signs in patients</em> <em>is proposed. The project involves the creation of hardware and software for tracking vital signs. The interaction of the two parts ensures the main task: obtaining and analyzing the indicators of the vital functions of the human body. The hardware is implemented using devices for scanning data on heart rate, temperature and saturation, with the ability to track electrocardiograms. It is possible to transmit data on the state of the body. The position of the sensors attached to the body is taken into account in case they come off. The device itself should be placed on the human body in the area of the front chest wall, wrists, and ankles. The device is also programmed to respond to sudden changes in these values. The software implementation is based on a web interface. The design of the final solutions for the interaction between the local and intermediate server was implemented using Django and Python. The administration of the intermediate server, including the client's time zone, was written using HTML, CSS, and JavaScript. 
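Such a monitoring rule can be sketched as follows. The indicator names, normal ranges and jump threshold are illustrative assumptions, not the actual firmware logic of the device.

```python
# Hypothetical normal ranges for the tracked vital signs.
NORMAL = {"heart_rate": (50, 110), "temperature": (35.5, 37.5), "spo2": (92, 100)}

def check_vitals(sample):
    """Return the names of readings that fall outside their normal range."""
    alerts = []
    for name, value in sample.items():
        low, high = NORMAL[name]
        if not low <= value <= high:
            alerts.append(name)
    return alerts

def sudden_change(prev, curr, name, max_delta):
    """Flag an abrupt jump between two consecutive readings of one indicator."""
    return abs(curr[name] - prev[name]) > max_delta
```

In a full system, a positive result from either check would be what the intermediate server forwards to the web interface as an alert.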
The use of the IoT solution makes it possible to monitor and analyze the indicators of the vital functions of the body. A scheme of information exchange in the system for monitoring health indicators has been built.</em></p> Iryna Zinko, Olha Kravchenko, Dmytro Syvoglaz Copyright (c) 2024 Iryna Zinko, Olha Kravchenko, Dmytro Syvoglaz Tue, 09 Apr 2024 00:00:00 +0300 Development of high-speed algorithm for binomial arithmetic addition <p><em>The object of research is the method and algorithm of arithmetic addition of binomial numbers generated by binary binomial counting systems. The lack of binomial arithmetic, in particular of an operation for adding binary binomial numbers, to a certain extent hinders their introduction into information systems and the construction on their basis of information and communication technologies for combinatorial optimization, generation of combinatorial objects, data compression and encryption.</em></p> <p><em>In the framework of the proposed approach, instead of operating with binomial coefficients, only operations with their upper and lower parameters are carried out. The weighting coefficients of the binary binomial numbers being added are represented in the form of two-component tuples. Taking this into account, this paper presents an algorithm for binomial arithmetic addition using dynamic arrays.</em></p> <p><em>The main idea behind the structure of the algorithm of binomial arithmetic addition based on dynamic arrays is the transition from a two-dimensional summation model to a one-dimensional one. Only the available, existing binomial coefficients are placed in the dynamic array. Accordingly, the search for binomial coefficients equal to or greater than the quantitative equivalent takes place in much smaller areas. 
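The dynamic-array idea can be sketched as follows: each weight is kept only as its upper and lower parameters (n, k), and a linear search scans just the coefficients that actually exist. The parameter values are illustrative, and the full summation logic of the algorithm is not reproduced here.

```python
from math import comb

# Two-component tuples (n, k) standing for the weights C(n, k); only
# coefficients that actually exist are stored in the dynamic array (list).
weights = [(7, 3), (6, 2), (5, 2), (4, 1)]  # illustrative parameters

def find_geq(weights, target):
    """Return the first (n, k) whose value C(n, k) >= target, else None."""
    for n, k in weights:
        if comb(n, k) >= target:
            return (n, k)
    return None

hit = find_geq(weights, 15)  # scans only the stored tuples
```

Because the search never touches absent coefficients, its cost is bounded by the number of stored tuples rather than by the dimensions of a full matrix model.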
In comparison with the algorithm based on matrix models, this significantly reduces the time spent performing the summation operation and also reduces the amount of memory required for placing the two-component tuples of the assembly array.</em></p> <p><em>In the course of the research, a several-fold decrease in the number of machine cycles required to search for the necessary elements in the dynamic array was confirmed in practice. This leads to an increase in the performance of the presented algorithm of binomial arithmetic addition based on dynamic arrays. In turn, this accelerates the solution of information tasks of combinatorial optimization, generation of combinatorial objects, data compression and encryption, in which the operation of adding binary binomial numbers is used.</em></p> Igor Kulyk, Maryna Shevchenko, Anatolii Melnyk, Tetyana Protasova Copyright (c) 2024 Igor Kulyk, Maryna Shevchenko, Anatolii Melnyk, Tetyana Protasova Tue, 09 Apr 2024 00:00:00 +0300 Study of the process of identifying the authorship of texts written in natural language <p><em>The object of the research is the process of identifying the authorship of a text using computer technologies with the application of machine learning. The full process of solving the problem, from text preparation to evaluation of the results, was considered. Identification of the authorship of a text is a complex and time-consuming task that requires maximum attention. This is because the identification process always requires taking into account a very large number of different factors and pieces of information related to each specific author. 
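One family of such measurable factors is the author's use of function words. The sketch below is a toy stylometric profile with an assumed word list and a nearest-profile rule; it is not the machine-learning model used in the work.

```python
# Toy stylometric profile: relative frequencies of a few function words.
FUNCTION_WORDS = ("the", "of", "and", "to", "in")

def profile(text):
    """Frequency vector of the function words in a whitespace-tokenized text."""
    words = text.lower().split()
    return [words.count(w) / max(len(words), 1) for w in FUNCTION_WORDS]

def nearest_author(text, known):
    """Pick the known author whose profile is closest in squared distance."""
    p = profile(text)
    return min(known, key=lambda a: sum((x - y) ** 2 for x, y in zip(p, known[a])))

known = {
    "author_a": profile("the cat of the house sat by the door"),
    "author_b": profile("run fast and jump high and land well"),
}
guess = nearest_author("the dog of the yard", known)
```

A real system would use far richer features (character n-grams, syntax, vocabulary richness) and a trained classifier, but the pipeline shape — extract features, compare to author profiles — is the same.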
As a result, various problems and errors related to the human factor may arise in the identification process, which may ultimately lead to a deterioration of the results obtained.</em></p> <p><em>The subject of the work is the methods and means of analyzing the process of identifying the authorship of a text using existing computer technologies.</em> <em>As part of the work, the authors have developed a web application for identifying the authorship of a text. The software application was written using machine learning technologies, has a user-friendly interface and an advanced error tracking system, and can recognize both text written by a single author and text written in collaboration.</em></p> <p><em>The effectiveness of different types of machine learning models and data fitting tools is analyzed, and computer technologies for identifying the authorship of a text are defined.</em> <em>The main advantages of using computer technology to identify text authorship are:</em></p> <p><em>– speed: computer algorithms can analyze large amounts of text in an extremely short period of time;</em></p> <p><em>– objectivity: computer algorithms rely only on proven procedures to analyze text features and are not subject to emotional influence or preconceived opinions during the analysis process.</em></p> <p><em>The result of the work is a web application for identifying the authorship of a text, developed on the basis of research into the process of identifying text authorship using computer technology.</em></p> Yuliia Ulianovska, Oleksandr Firsov, Victoria Kostenko, Oleksiy Pryadka Copyright (c) 2024 Yuliia Ulianovska, Oleksandr Firsov, Victoria Kostenko, Oleksiy Pryadka Mon, 15 Apr 2024 00:00:00 +0300 Investigation of approaches to designing complex database structures in systems of integrated monitoring of environmental, economic, energy and social parameters of the territory <p><em>The object of research is the traditional and universal approaches to designing 
the database structure in systems of integrated monitoring of ecological, economic, energy and social parameters of the territory, which include diverse data from various subject areas. In the course of the study, an analysis was performed based on a set of criteria such as scalability, ease of updating data, absence of empty fields, volume of the database, number of tables and fields, and the ease and speed of executing queries that sample a set of indicators of the research object. The comparison of these approaches was carried out on the example of water resources monitoring, since it has several subsystems and a large number of indicators that are used for assessment.</em> <em>It is established that the proposed universal approach to designing complex database structures made it possible to reduce the volume of the database by 2.25 times due to the absence of empty fields. In particular, in the considered example, the filling factor of the database with the traditional approach is 1.75 times lower than with the proposed universal approach. It should be noted that the rate of table filling for the traditional design approach can vary depending on the number of indicator values, while the table filling rate for the universal approach is always close to 100 %. The proposed database design approach also makes it possible to speed up data loading and processing. For example, with the same volume of significant information, the minimum speed of sampling the characteristics of one research object is 3.87 times greater in a database developed according to the principles of the universal approach than in one built according to the rules of the traditional approach. The proposed structure of the database is successfully used in the system of complex eco-energy-economic monitoring. 
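The no-empty-fields property of the universal approach comes from storing one row per measured value instead of one column per indicator. A minimal sketch with SQLite follows; the table, object and indicator names are illustrative, not the monitoring system's actual schema.

```python
import sqlite3

# "Universal" design sketch: a single narrow table of
# (object, indicator, value) rows, so nothing is stored for
# indicators that were never measured for a given object.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE measurement (
    object_id INTEGER, indicator TEXT, value REAL)""")
rows = [  # only the indicators that were actually measured
    (1, "pH", 7.2),
    (1, "nitrates", 0.8),
    (2, "pH", 6.9),
]
con.executemany("INSERT INTO measurement VALUES (?, ?, ?)", rows)

# Sampling one indicator across research objects is a single query.
ph_values = con.execute(
    "SELECT value FROM measurement WHERE indicator = 'pH' ORDER BY object_id"
).fetchall()
```

Under the traditional design, the same data would occupy a wide table with one column per indicator, leaving a NULL wherever object 2 lacks a nitrates reading.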
The developed structure of the database can serve as an effective basis for the formation of an electronic data bank at the level of an enterprise, a region or the country.</em></p> Volodymyr Slipchenko, Liubov Poliahushko, Olha Krush, Volodymyr Rudyk Copyright (c) 2024 Volodymyr Slipchenko, Liubov Poliahushko, Olha Krush, Volodymyr Rudyk Mon, 22 Apr 2024 00:00:00 +0300 Testing the suitability of vector normalization procedure in topsis method: application to wheel loader selection <p><em>The object of the research is testing the suitability of the vector normalization procedure (NP) in the Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS) method.</em> <em>One of the most problematic steps of the Multi-Criteria Decision Making (MCDM) process is the application of NPs, by default, to transform the different measurement units of criteria into a comparable unit. This is because of the absence of a universal agreement defining which NP is the most suitable for a given MCDM method. In the literature, there are thirty-one available NPs; each of them has its strengths and weaknesses and, accordingly, can be applied efficiently to one MCDM method and poorly to another. Note that many NPs (e.g., the NPs of sum, max-min, vector, and max) have been used by default (i.e., without a suitability study) in the TOPSIS method. Consequently, the outcomes of multi-criteria evaluation and the rankings of alternatives considered in decision problems could have led to inconsistent solutions, and, therefore, decision-makers could have made irrational or inappropriate decisions. 
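Mechanically, the vector NP divides each criterion column of the decision matrix by its Euclidean norm. A minimal sketch with a made-up 3×2 decision matrix (not the wheel loader data):

```python
from math import sqrt

# Made-up decision matrix: 3 alternatives (rows) x 2 criteria (columns).
matrix = [[8.0, 6.0],
          [4.0, 3.0],
          [1.0, 2.0]]

def vector_normalize(m):
    """Divide each column by its Euclidean norm (the vector NP)."""
    norms = [sqrt(sum(row[j] ** 2 for row in m)) for j in range(len(m[0]))]
    return [[row[j] / norms[j] for j in range(len(row))] for row in m]

normalized = vector_normalize(matrix)
# Every normalized column now has unit Euclidean norm, so criteria
# measured in different units become directly comparable.
```

In TOPSIS this normalized matrix is then weighted and each alternative is ranked by its relative closeness to the ideal and anti-ideal solutions.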
That’s why suitability studies of NPs become indispensable.</em> <em>A description of the methodology proposed in this research is outlined as follows:</em></p> <p><em>1) a method of weighting based on an ordinal ranking of criteria and a Lagrange multiplier (for determining criteria weights);</em></p> <p><em>2) the TOPSIS method (for ranking the considered alternatives);</em></p> <p><em>3) a statistical approach with 3-estimate (for comparing the effects generated by the used NPs).</em></p> <p><em>In the research, twelve different NPs are compared to each other in the TOPSIS method via a numerical example, which deals with the wheel loader selection problem.</em> <em>The results of the comparison indicate that, amongst the twelve different NPs analyzed in this suitability study, the vector NP has the least effect on the evaluation outcomes of the considered alternatives when used with the TOPSIS method.</em> <em>The vector NP-TOPSIS approach can therefore be applied to solve multi-criteria decision problems. Its application further allows decision-makers and users to better select efficient solutions and, consequently, to make conclusive decisions.</em></p> Mohamed Bouhedja, Samir Bouhedja, Aissa Benselhoub Copyright (c) 2024 Mohamed Bouhedja, Samir Bouhedja, Aissa Benselhoub Tue, 16 Apr 2024 00:00:00 +0300 Development of a routing method for ground-air Ad-Hoc network of special purpose <p><em>The object of the study is the process of forming control decisions to ensure the operation of the routing subsystem of a ground-air communication network based on neural network algorithms. The research is based on the application of a numerical-analytical approach to the selection of modern scientific and applied solutions for building management models for promising Ad-Hoc communication networks. 
In the Google Colab simulation environment, using the Python programming language, it was possible, firstly, to simulate the operation of a ground-to-air communication network based on previously obtained models and a routing process management system based on the FA-OSELM algorithm; and secondly, in accordance with the scenario of route construction and maintenance described in the article, to determine experimentally the communication metrics of the proposed method of intelligent routing for a ground-air Ad-Hoc special-purpose network, in order to assess its efficiency, adequacy and the reliability of the results obtained. To evaluate the effectiveness of the proposed solutions, a comparative analysis was conducted of three existing routing methods used in Ad-Hoc networks (FLCA, Q-Routing, Neuro Routing) relative to the developed method.</em></p> <p><em>The result of the experiment showed that the proposed routing method MAODV-FA-OSELM provides significant advantages over its analogs. The method exhibits the best network throughput (2.12e+06), the lowest average network latency (0.12), the lowest packet loss (6.32), the lowest bit error rate (2.41), and the lowest overhead (0.10e+06). However, a promising direction for further research may be the study of the computational complexity of the routing management process and the determination of the minimum allowable representative sample of initial data to ensure online decision-making.</em></p> Robert Bieliakov Copyright (c) 2024 Robert Bieliakov Sat, 20 Apr 2024 00:00:00 +0300
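To make the learning mechanism concrete, here is a bare-bones OS-ELM fit on a toy regression task: a random sigmoid hidden layer, a batch least-squares initialization, then one-sample recursive updates. This is a simplified sketch only; it omits the FA extension and everything network-specific (MAODV routing, communication metrics), and all sizes and data are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 20                                    # hidden neurons
W = rng.normal(size=(1, L))               # random input weights (1 feature)
b = rng.normal(size=L)                    # random hidden biases

def hidden(x):
    """Sigmoid hidden-layer output for inputs of shape (n, 1)."""
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))

# Batch initialization on an initial block of data.
x0 = rng.uniform(-1, 1, size=(50, 1))
t0 = np.sin(3 * x0)                       # toy target function
H0 = hidden(x0)
P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(L))   # inverse correlation matrix
beta = P @ H0.T @ t0                      # output weights

# Online phase: recursive least-squares update, one sample at a time.
xs = rng.uniform(-1, 1, size=(200, 1))
for x in xs:
    h = hidden(x.reshape(1, 1))           # (1, L) hidden activations
    t = np.sin(3 * x).reshape(1, 1)
    Ph = P @ h.T                          # (L, 1)
    P = P - (Ph @ (h @ P)) / (1.0 + (h @ Ph).item())
    beta = beta + P @ h.T @ (t - h @ beta)

rmse = float(np.sqrt(np.mean((hidden(xs) @ beta - np.sin(3 * xs)) ** 2)))
```

Only the output weights `beta` are updated online, which is what makes OS-ELM-style learners cheap enough for per-node routing decisions; the FA variant additionally down-weights stale samples.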