DEVELOPMENT OF A NEW LIGHTWEIGHT ENCRYPTION ALGORITHM

Lightweight encryption algorithms are a relatively new direction in the development of private-key cryptography, driven by the emergence of a large number of devices with little computing power and memory. It therefore became necessary to develop algorithms that provide a sufficient level of security with minimal use of resources. The paper presents a new lightweight encryption algorithm, LBC. LBC is a 64-bit symmetric block algorithm that supports an 80-bit secret key; the number of rounds is 20. The algorithm has a Feistel network structure. The developed lightweight algorithm has a simple implementation scheme, and the transformations used in it have good cryptographic properties. This was verified by studying the cryptographic properties of the algorithm using the "avalanche effect" and statistical tests. The avalanche property was checked for each round when each bit of the source text was changed. Based on the work carried out, it was found that the proposed encryption algorithm ensures a good avalanche effect and that the binary sequence obtained after encryption is close to random. Its security against linear and differential cryptanalysis was also evaluated; the results of the research revealed good cryptographic properties of this algorithm. The algorithm is intended for devices with small hardware resources, and for information and communication systems where confidential information circulates and it is essential to exchange information in protected form in an operationally acceptable time.


Introduction
There is a trend of mass transition from the Internet of personal computers to the Internet of Things. Electronic devices aimed at improving life and interacting with the Internet in one way or another are increasingly entering the life of society. Mobile phones, electronic keys, and household appliances are all equipped with a processor and operate as part of a large Internet network. With the growing demand, lightweight ciphers are gaining more and more attention. In 2018, NIST issued a call for lightweight ciphers, accepting submissions of lightweight authenticated encryption and hashing schemes for environments with software or hardware limitations. In this regard, the development and analysis of lightweight cryptographic algorithms have gained popularity among researchers. These studies focus on the development of lightweight block ciphers, security analysis, and performance evaluation. Such lightweight algorithms are designed to reduce power and memory consumption to meet the requirements of resource-constrained applications such as RFID and Internet of Things (IoT) devices.
There are many encryption algorithms that are focused on solving a variety of tasks. In the case of limited resources, each of them has different advantages in software and hardware efficiency. The trade-off between security, cost, and efficiency is a key issue that needs to be addressed when developing a lightweight block cipher. In particular, the key length of a lightweight block cipher, the total number of rounds, and the structure of the algorithm affect security, cost, and efficiency, but the security of the cryptographic algorithm itself plays the most significant role. At the same time, strengthening security consumes significant resources, leading to a high cost of hardware implementation and a significant decrease in efficiency and performance.
Block encryption algorithms are structured in two groups: SP networks and Feistel networks. The advantage of an SP network is its diffusion level: its round function can modify the entire message block in each iteration, hence its security is relatively high. By contrast, ciphers with the traditional Feistel structure usually require many rounds, which increases energy consumption.
The intensive development of information technology capabilities, including computing power, contributes to the emergence of new modifications of existing attacks, which requires constant development and updating of protection systems.
Thus, the area of research under consideration is relevant. A comprehensive study of the block encryption components used in the development of lightweight algorithms, as well as their compliance with modern technologies, requires continuous and breakthrough scientific investigation.

Literature review and problem statement
Article [1] discusses SIT (Secure Internet of Things), a new lightweight encryption algorithm proposed by a group of Pakistani cryptographers. The structure of the algorithm is a combination of Feistel and SP (substitution-permutation) networks, and it uses cryptographic transformations such as addition modulo 2 (xor), its negation (xnor), substitution (S-box), and permutation. According to the results of the study, the algorithm provides security even after five rounds of encryption, and, according to the authors, differential and linear cryptanalysis will not succeed in a crypto attack. However, since a 4-bit S-block is used in each round, it would be better to demonstrate the complexity of the attack taking into account a differential characteristic of sufficient probability.
Paper [2] proposes a new permutation-based streaming algorithm for encrypting video data. The developed stream algorithm is resistant to known-plaintext attacks, which are a common problem for modern video data encryption algorithms. As the authors emphasize, encryption before compression is used to prevent the loss of video data in the event of packet loss during transmission. Experiments were carried out to evaluate performance, taking into account the probability of data loss in case of packet loss and the encryption speed. Still, the authors propose further research to reduce the increased encryption time to the level of post-compression encryption algorithms, which is an urgent task for any stream or block algorithm aimed at IoT.
LT10 is a symmetric block encryption algorithm that uses a 128-bit key and plaintext. The key generation algorithm consists of rounds of encryption that use the Kasumi encryption algorithm. The authors note that increasing the number of rounds provides higher security but directly affects encryption performance. However, the cited article describes only the performance of the proposed algorithm; the analysis of its cryptographic strength and the protection of information was not considered [3].
Speck and Simon are two families of lightweight block ciphers that were introduced by the US National Security Agency in 2013 and in 2014 were proposed for inclusion in the international standard ISO/IEC 29192. Both families support different block and key sizes: the input block sizes are 32, 48, 64, 96, and 128 bits, and the secret key sizes are 64, 72, 96, 128, 144, and 256 bits. The number of encryption rounds depends on the selected block and key sizes. Article [4] gives a complete description of the algorithms themselves, their key schedules, and a comparative analysis of their performance, as well as their software and hardware implementations with different encryption parameters. The performance assessment is a practical benchmark in the development of new lightweight encryption algorithms.
Paper [5] reports attacks on the PRESENT encryption algorithm, including fault analysis combined with statistical cryptanalysis methods. The analysis applies statistical cryptanalysis techniques in practice and exploits, in a fault attack, the weakness of the bit permutation adopted by many lightweight block ciphers. The results of the work show that about a fifth of the iterative rounds is needed to protect such bit-permutation lightweight ciphers. However, an attack often requires faulty encryption of the penultimate or last round, although faults in the middle rounds are also suitable for conducting an attack.
In recent years, the technologies of cyber-physical systems (CPS) and the Internet of Things (IoT) have grown exponentially. This has led to the development of low-resource encryption algorithms. Paper [6] presents the lightweight BRISI cipher, which uses modular addition, shifts, and XOR, as well as a Feistel structure. The encryption algorithm uses a 32-bit block of plaintext and a 64-bit key. Such resource-constrained devices are designed to meet the requirements of heterogeneous IoT and CPS applications, and privacy and security have become their most difficult issues. However, the 32-bit block used is not suitable for today's technological capabilities.
A lightweight encryption scheme based on Attribute-Based Encryption is reported in [7], where elliptic curves are used to solve the security problem. A central authority is used to generate keys, attributes, and users, which can be a problem for multi-authority applications. However, the approach seems fair in terms of computation and communication costs for IoT devices.
In [1] a lightweight cipher called Secure IoT (SIT) is presented; it is a 64-bit block cipher with a 64-bit key. The architecture of the algorithm is a mixture of a Feistel network and a single substitution-permutation network. The results of the work show that the algorithm provides significant security in just five rounds of encryption and that an encrypted image is completely covered, i.e., no traces remain. However, known algorithms cannot achieve such a result without the use of encryption modes, which casts doubt on this claim. Therefore, a detailed examination of the algorithm's encryption process is required.

The aim and objectives of the study
The aim of this study is to develop a lightweight data encryption algorithm.
To accomplish the aim, the following tasks have been set:
- to develop a block scheme of the algorithm that meets the basic requirements for cryptographic transformations and provides high performance in software and hardware implementation;
- to investigate the reliability of the proposed encryption algorithm by methods of cryptographic analysis and the avalanche effect.

1. The object and hypothesis of the study
The object of our study is a lightweight block data encryption algorithm. As part of this study, the main hypothesis was put forward that by using simple cryptographic transformations, it is possible to create a lightweight algorithm that meets the basic requirements for cryptographic ciphers and is not inferior in cryptographic strength to known lightweight algorithms. When developing a lightweight algorithm, it was assumed that the structure of the created algorithm could improve the cryptographic properties of previously created lightweight algorithms.
Eastern-European Journal of Enterprise Technologies ISSN 1729-3774 3/9 (123) 2023

The function f satisfies the strict avalanche criterion if each output bit changes with probability 1/2 whenever a single input bit is inverted. The function f has correlation immunity of order k (CI(k)) if W(u)=0 for all u with 1≤hw(u)≤k.
A balanced correlation-immune function of order t is called a t-stable function. The analysis of systems of equations is an important and complex scientific task. Therefore, the function f cannot be selected at random.

3. "Avalanche effect" in encryption algorithms
The avalanche effect is the dependence of all output bits of the ciphertext on each input bit of the plaintext.
When assessing the avalanche effect, the following statistical security indicators are considered: the average number of output bits that change when one input bit changes (avalanche effect); the degree of completeness (d_c); the degree of avalanche effect (d_a); and the degree of strict avalanche criterion (d_sa). They are considered for different numbers of rounds and randomly chosen encryption keys.
A cryptographic algorithm satisfies the avalanche criterion if an average of half of the output bits changes when one bit of the input sequence changes.
To characterize the degree of the avalanche effect in a transformation, an avalanche parameter is defined and used: the numerical value of the deviation of the probability of a change in the bits of the output sequence, when bits of the input sequence change, from the required probability value of 0.5.
For the avalanche criterion, the value of the avalanche parameter is determined by the following formula:

k_a = max_i |2k_i − 1|,

where i is the number of the bit to be changed at the input, and k_i is the probability of a change in the bits of the output sequence when the i-th bit of the input sequence changes. The values of the avalanche parameter lie in the range from 0 to 1 inclusive. The lower the value of the avalanche parameter, the stronger the avalanche effect in the transformation [14].
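As an illustration, the per-bit change probabilities k_i and the avalanche parameter above can be estimated empirically by flipping each input bit in turn and counting output-bit changes. The sketch below is a minimal measurement harness; the `toy_cipher` mixing function is a hypothetical stand-in, not the LBC cipher.

```python
import random

def avalanche_stats(cipher, n_bits, trials=200):
    """Estimate k_i (probability an output bit flips when input bit i flips)
    and the avalanche parameter max_i |2*k_i - 1|."""
    ks = []
    for i in range(n_bits):
        flipped, total = 0, 0
        for _ in range(trials):
            x = random.getrandbits(n_bits)
            diff = cipher(x) ^ cipher(x ^ (1 << i))  # flip bit i, compare outputs
            flipped += bin(diff).count("1")
            total += n_bits
        ks.append(flipped / total)
    return ks, max(abs(2 * k - 1) for k in ks)

def toy_cipher(x, rounds=8, n_bits=16):
    """Hypothetical 16-bit mixing function (NOT the LBC cipher)."""
    mask = (1 << n_bits) - 1
    for r in range(rounds):
        x = ((x << 5) | (x >> (n_bits - 5))) & mask  # rotate
        x = (x + 0x9E37 + r) & mask                  # add round constant
        x ^= x >> 3                                  # xor-shift diffusion
    return x

ks, param = avalanche_stats(toy_cipher, 16)
```

For a well-mixing transformation each k_i is close to 0.5, so the parameter stays close to 0.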
Verification of the statistical security indicators of the cipher begins with the construction of the dependence matrix A and the distance matrix B of the cipher, described by the function f: GF(2)^n → GF(2)^m for the set X of inputs, where X is the whole set GF(2)^n or a randomly selected subset of GF(2)^n [15].
The dependence matrix A = (a_ij) and the distance matrix B = (b_ij) are defined as:

a_ij = #{x ∈ X : f(x ⊕ e_i)_j ≠ f(x)_j},
b_ij = #{x ∈ X : hw(f(x ⊕ e_i) ⊕ f(x)) = j},

for i = 1,…,n and j = 1,…,m, where e_i is the i-th unit vector and X is a "suitable" randomly selected subset of GF(2)^n.

2. Nonlinear transformation in encryption algorithms
Every modern encryption algorithm uses a nonlinear transformation in the form of a lookup table. It is proved that it is a strong cryptographic primitive against linear and differential cryptanalysis [8]. In lightweight block algorithms, table substitutions (S-blocks) are also used for nonlinear transformations. But according to the requirement for lightweight algorithms, S-blocks should not occupy a large amount of memory (ROM and RAM). To store a table that replaces 8-bit pieces of data, you need 256 bytes, and to store a 4-bit S-block, one requires only 16 bytes [9].
To increase the resistance of a block cipher to linear and differential cryptanalysis, two main approaches are used: 1) an increase in the number of active S-blocks; 2) the use of S-blocks with strong cryptographic properties [10].
The S-block transformation in a cipher provides scattering and mixing of plaintext bytes, which significantly increases the resistance of the cipher to various cryptanalytic attacks [11-13]. The most important cryptographic properties that an S-block should have are balance, high nonlinearity, high algebraic degree, low differential uniformity, and low autocorrelation.
The following are the basic definitions that are used in evaluating the cryptographic properties of nonlinear replacement nodes.
A Boolean function of n variables is an arbitrary mapping of the form f: F_2^n → F_2. The Hamming distance hd(f, g) between two Boolean functions f and g is defined as the number of vectors of the space F_2^n on which these functions take different values:

hd(f, g) = #{x ∈ F_2^n : f(x) ≠ g(x)}.

The nonlinearity of a Boolean function f of n variables is the minimum Hamming distance between f and all affine functions:

N_f = min{hd(f, φ) : φ ∈ A_n},

where A_n is the set of affine functions.
The Walsh transform W_f(ω) of the function f over the field F_2^n is defined as:

W_f(ω) = Σ_{x ∈ F_2^n} (−1)^(f(x) ⊕ ⟨ω,x⟩),

where ⟨ω, x⟩ is the scalar product of ω and x. The function f is called correlation-immune of order m, 0 < m ≤ n, if for any vector u ∈ F_2^n such that 1 ≤ w(u) ≤ m, the equality W_f(u) = 0 is satisfied.
The autocorrelation function and the SSI (sum-of-squares indicator) are defined as follows:

r_f(a) = Σ_{x ∈ F_2^n} (−1)^(f(x) ⊕ f(x ⊕ a)), SSI(f) = Σ_{a ∈ F_2^n} r_f(a)^2.

If the matrices A and B are constructed, then the degree of completeness is defined as:

d_c = 1 − #{(i, j) : a_ij = 0} / (n·m).

The degree of avalanche effect is determined as follows:

d_a = 1 − (1 / (n·m)) · Σ_{i=1}^{n} |(Σ_{j=1}^{m} 2j·b_ij / #X) − m|.

The degree of strict avalanche criterion is defined as:

d_sa = 1 − (1 / (n·m)) · Σ_{i=1}^{n} Σ_{j=1}^{m} |2·a_ij / #X − 1|. (10)

Ciphers that have a good degree of completeness, a good avalanche effect, and satisfy the strict avalanche criterion must have values d_c, d_a, d_sa close to 1.
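The nonlinearity defined above can be computed directly from a truth table via the Walsh transform, using the standard identity N_f = 2^(n−1) − max_ω|W_f(ω)|/2. A short sketch:

```python
def walsh_transform(truth_table, n):
    """Walsh spectrum: W_f(w) = sum over x of (-1)^(f(x) XOR <w,x>)."""
    size = 1 << n
    return [sum((-1) ** (truth_table[x] ^ (bin(w & x).count("1") & 1))
                for x in range(size))
            for w in range(size)]

def nonlinearity(truth_table, n):
    """N_f = 2^(n-1) - max_w |W_f(w)| / 2."""
    return (1 << (n - 1)) - max(abs(v) for v in walsh_transform(truth_table, n)) // 2

# Example: the bent function f(x) = x3*x2 XOR x1*x0 attains the maximum
# possible nonlinearity for n = 4, namely 2^3 - 2^1 = 6.
f = [(((x >> 3) & (x >> 2)) ^ ((x >> 1) & x)) & 1 for x in range(16)]
```

The same spectrum also yields the correlation-immunity check: f is CI(m) when W_f(u) = 0 for every mask u of weight 1 through m.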

4. Differential cryptanalysis of symmetric block encryption algorithms
A block cipher is considered secure enough for practical use after it has undergone extensive cryptanalysis. One of the main methods used for cryptanalysis of block ciphers is differential cryptanalysis. A successful differential attack is based on the discovery of a highly probable differential characteristic, which will be used as a statistical characteristic for key recovery [16].
The differential attack introduced in [17] assumes the existence of ordered pairs (α, β) of binary strings such that, for a plaintext block m with ciphertext c and for the plaintext m ⊕ α with ciphertext c′, the bitwise difference c ⊕ c′ takes the value β with higher probability than if c and c′ were randomly selected binary strings. Such an ordered pair (α, β) is called a differential. The greater the probability of a differential, the more effective the attack. A related criterion for an (n, m)-function F used as an S-block in the round functions of a cipher is that the outputs of its derivatives D_a(x) = F(x) ⊕ F(x ⊕ a), x, a ∈ F_2^n, should be distributed as evenly as possible [18].
Protection against differential and linear cryptanalysis is built into modern block ciphers at the design stage [19].
Differential cryptanalysis proceeds in four stages:
- analysis of the differential properties of the transformations used in the algorithm;
- finding the most probable value of the difference;
- search for the right pairs of texts;
- analysis of the correct pairs of texts to determine the bits of the key.

5. Linear cryptanalysis of symmetric block encryption algorithms
Linear cryptanalysis has been used to analyze many ciphers [20]. It focuses on a linear approximation between plaintext, ciphertext, and key.
A linear approximation shows what kind of linear relationship exists between some bits of the plaintext, bits of the ciphertext, and bits of the unknown key:

A[a_1] ⊕ A[a_2] ⊕ … ⊕ A[a_n] ⊕ C[c_1] ⊕ C[c_2] ⊕ … ⊕ C[c_m] = K[k_1] ⊕ K[k_2] ⊕ … ⊕ K[k_i], (11)

where a_1, a_2, …, a_n, c_1, c_2, …, c_m and k_1, k_2, …, k_i denote fixed bit positions, and equation (11) is satisfied with probability p ≠ 1/2 for an arbitrarily given plaintext A, the corresponding ciphertext C, and the key K [21, 22]. First, approximations are found for individual operations within the cipher, then they are combined into approximations that are valid for one round of the cipher. By appropriately concatenating single-round approximations, the attacker eventually obtains an approximation for the entire cipher [23].
To determine the complexity of the attack, the probability of a linear characteristic is estimated. The linear approximation of a single round can be thought of as a random variable of the form

A[α_1] ⊕ … ⊕ A[α_n] ⊕ C[β_1] ⊕ … ⊕ C[β_m],

which takes either the value zero or one (depending on the bits of the key). The linear characteristic is the xor of these random variables, and its probability can be calculated using the piling-up lemma (Lemmas 1, 2 [21]).
Consider two independent random variables X_1 and X_2 with P(X_i = 0) = p_i and P(X_i = 1) = 1 − p_i for i ∈ {1, 2}. Then, from the independence of X_1 and X_2 it follows that P(X_1 ⊕ X_2 = 0) = p_1·p_2 + (1 − p_1)(1 − p_2). Lemma 1. Let X_i (1 ≤ i ≤ n) be independent random variables taking values from Z_2 that are equal to zero with probability 1/2 + ε_i. Then:

P(X_1 ⊕ X_2 ⊕ … ⊕ X_n = 0) = 1/2 + 2^(n−1)·ε_1·ε_2·…·ε_n.

Let N be the number of random plaintexts given, let p be the probability that equation (12) holds, and let |p − 1/2| be sufficiently small. Then the probability of success of the algorithm grows with N·|p − 1/2|^2.
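Lemma 1 (the piling-up lemma) can be checked numerically by exhaustively enumerating the outcomes of a few independent biased bits; the bias values here are arbitrary illustrative numbers.

```python
from itertools import product

def xor_zero_prob(biases):
    """Exact P(X_1 xor ... xor X_n = 0) for independent bits X_i
    with P(X_i = 0) = 1/2 + biases[i]."""
    total = 0.0
    for bits in product([0, 1], repeat=len(biases)):
        p = 1.0
        for bit, eps in zip(bits, biases):
            p *= (0.5 + eps) if bit == 0 else (0.5 - eps)
        if sum(bits) % 2 == 0:  # the xor of the bits equals zero
            total += p
    return total

# Piling-up lemma prediction: 1/2 + 2^(n-1) * prod(eps_i)
eps = [0.1, 0.2, 0.05]
lemma = 0.5 + 2 ** (len(eps) - 1) * 0.1 * 0.2 * 0.05
```

The enumerated probability matches the closed-form prediction exactly, which is why an attacker can chain single-round biases into a multi-round characteristic.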

6. Calculation of conditional logical elements
The area of a chip is usually measured in μm^2, but this parameter is highly dependent on the technology used and the standard cell libraries. In order to compare microcircuits manufactured using different technologies, there is a unification that allows measuring dimensions in conditional logic elements (gate equivalents, GE).
Conventionally, a gate equivalent refers to the number of equivalent gates needed to perform a specific logical operation. To count equivalent gates, one must first determine the types of gates used (e.g., AND, OR, NOT, NAND) and then use a standard set of equivalent logic gates.
To count the equivalent gates of a particular logic function, it is necessary to determine the number of gates of each type required to implement the function and then sum the equivalent gates for each gate type. By the number of equivalent gates required for implementation, ciphers are classified as:
- ultra-lightweight: requiring less than 1000 GE;
- low-cost: requiring no more than 2000 GE;
- lightweight: requiring no more than 3000 GE.
The most compact hardware implementation of the PRESENT algorithm requires 1570 GE.

1. Development of a principal scheme of an encryption algorithm

1. 1. Overview of the encryption algorithm scheme
The LBC algorithm is designed to encrypt data in blocks of 64 bits with an 80-bit key. LBC performs 20 rounds of encryption. Each round includes four types of transformation: S, RL, L, and K.
The encryption process. In encryption, the original block of plaintext is divided into 4 subblocks of 16 bits. Encryption begins by adding the first 64 bits of the master key to the source text (Fig. 1). Next, the round transformations are performed.
If we designate the round encryption process as E, we get E = K ∘ L ∘ RL ∘ S, i.e., a round applies the transformations S, RL, L, and K in sequence.
Transformation S. The S transformation was designed to replace block bits with other bits using a lookup table (S-block). This nonlinear transformation of the algorithm operates on nibbles (4 bits). A nonlinear bijective substitution, specified by a one-dimensional array of 16 elements (Table 1), is applied to each nibble.
A 16-bit subblock is fed to the input of the S transformation and divided into 4 groups of 4 bits. Each nibble of the input block is an index of a value in the lookup table (Table 1). Thus, the output of the transformation S is the set of nibbles located at the corresponding indexes in the lookup table, where A is the input 16 bits and a_i are the nibbles of the subblock.

Table 1. Lookup table

The S-block is generated by taking the inverse element modulo x^4 + x^3 + 1. After that, an affine transformation is applied to each element of the table, mapping the bits b_i of a half-byte to new bits b'_i.
Transformation RL. The linear transformation of the LBC algorithm is a cyclic shift of bits and subblocks to certain positions. The 64 bits of the input block are divided into 4 subblocks of 16 bits each. The linear transformation at the subblock level is carried out by the RL function, which is applied only to the first subblock (Fig. 1). The result of the RL function is summed with the second subblock modulo 2 (xor operation). The RL function is built from cyclic shifts of the subblock bits to the left, where «≪» denotes the operator of cyclic left shift of the bits of a subblock.
Transformation L. This linear transformation is carried out over the whole block. The subblocks are cyclically shifted to the left by one position, i.e., the second subblock moves to the first position, the third subblock takes the place of the second, the fourth subblock takes the place of the third, and the first subblock moves to the fourth position (Fig. 1).
Transformation K. After the cyclic shifts of the subblocks, round keys are added to the data block modulo 2. The round key, consisting of 64 bits, is divided into 4 parts of 16 bits each and summed with the corresponding data subblocks.
The decryption process. Decryption is performed in reverse order. In this case, instead of the S and L transformations, the inverse transformations S^(-1) and L^(-1) are used, while the RL and K transformations remain the same as in encryption.
Transformation S^(-1). This is the inverse of the tabular nibble replacement. It is carried out using Table 2.

Table 2. Reverse lookup table

At the input of the S^(-1) transformation, the half-bytes of the input block are supplied. These nibbles serve as indexes of values located in the reverse lookup table (Table 2); the output is the corresponding set of nibbles.
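To make the round structure concrete, here is a minimal Python sketch of one LBC-style round. It is a structural illustration only: the S-box values are placeholders (PRESENT's well-known 4-bit S-box, since the paper's Table 1 is not reproduced here), and the RL shift amount and the subblock that S acts on are assumptions, not the published parameters.

```python
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # placeholder, NOT Table 1
SHIFT = 3                                        # placeholder RL shift amount

def rot16(x, s):
    """Cyclic left shift of a 16-bit word."""
    return ((x << s) | (x >> (16 - s))) & 0xFFFF

def sub16(x):
    """Transformation S: the 4-bit S-box applied to each nibble."""
    return (SBOX[(x >> 12) & 0xF] << 12 | SBOX[(x >> 8) & 0xF] << 8 |
            SBOX[(x >> 4) & 0xF] << 4 | SBOX[x & 0xF])

def round_lbc(subblocks, round_key):
    """One LBC-style round: S, then RL, then L, then K."""
    x1, x2, x3, x4 = subblocks
    x1 = sub16(x1)                     # S (assumed here to act on subblock 1)
    x2 ^= rot16(x1, SHIFT)             # RL on subblock 1, xored into subblock 2
    x1, x2, x3, x4 = x2, x3, x4, x1    # L: subblocks rotate left by one position
    ks = [(round_key >> s) & 0xFFFF for s in (48, 32, 16, 0)]
    return [b ^ k for b, k in zip((x1, x2, x3, x4), ks)]  # K: round-key addition
```

Because every step (bijective S-box, rotation, xor) is invertible, decryption simply applies the inverse steps in reverse order, as described above.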

1. 2. Generation of round keys
The LBC algorithm has a secret key that is 80 bits long. From this sequence of bits, the round keys of the algorithm are generated. The key generation process consists of several transformations. The scheme for generating round keys is shown in Fig. 2.
The master key of 80 bits is divided into 5 subblocks of 16 bits. At the first stage, the first subblock of the key is replaced with other bits using the S-block lookup table (Table 1), and the constant Nr is added modulo 2 to the fourth subblock. Here, Nr is the number of the round, taking values from 1 to 20.
In the second stage, the bits of each subblock are cyclically shifted to the left by a certain number of positions. This is performed by the function R_i, where i is the number of positions by which the bits of the subblock are shifted. As shown in Fig. 2, the bits of the first subblock are shifted cyclically to the left by 6 positions, the second subblock by 7, the third by 8, and the remaining subblocks by 9 and 10, respectively.
At the third stage, the subblocks are summed up modulo 2 with the adjacent right subblock: the first with the second, the second with the third, the third with the fourth subblock.
At the last stage of the transformation, the subblocks are cyclically shifted to the left by one position, that is, the subblocks are swapped entirely.
After the above transformations of the 5 subblocks (80 bits), the first 4 subblocks (64 bits) from the left side of the sequence are taken and used as the round encryption (decryption) key. The 80 bits obtained from these transformations serve as the basis for the next round key. This process continues until all round encryption keys are obtained.
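The four key-schedule stages above can be sketched as follows. The S-box is again a placeholder (PRESENT's, since Table 1 is not reproduced), and the exact order of the stage-3 xors is an assumption; the master key used in the example is the one from the avalanche-effect experiment ("30 B5 13 CE BB 69 0B F3 95 33").

```python
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # placeholder, NOT Table 1

def rot16(x, s):
    return ((x << s) | (x >> (16 - s))) & 0xFFFF

def sub16(x):
    return (SBOX[(x >> 12) & 0xF] << 12 | SBOX[(x >> 8) & 0xF] << 8 |
            SBOX[(x >> 4) & 0xF] << 4 | SBOX[x & 0xF])

def round_keys(master_key, rounds=20):
    """Derive 64-bit round keys from an 80-bit master key."""
    k = [(master_key >> s) & 0xFFFF for s in (64, 48, 32, 16, 0)]
    keys = []
    for nr in range(1, rounds + 1):
        k[0] = sub16(k[0])                 # stage 1: S-box on subblock 1 ...
        k[3] ^= nr                         # ... and round number into subblock 4
        k = [rot16(b, s) for b, s in zip(k, (6, 7, 8, 9, 10))]  # stage 2
        k[0] ^= k[1]; k[1] ^= k[2]; k[2] ^= k[3]  # stage 3 (xor order assumed)
        k = k[1:] + k[:1]                  # stage 4: subblocks rotate left
        keys.append(k[0] << 48 | k[1] << 32 | k[2] << 16 | k[3])  # left 64 bits
    return keys

keys = round_keys(0x30B513CEBB690BF39533)
```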

2. Investigation of the security of the proposed lightweight encryption algorithm

2. 1. Investigation of the properties of the S-block
The LBC algorithm uses a 4-bit S-block. The 4-bit S-block contains 16 distinct elements, ranging from 0 to F in hexadecimal format (Table 1). In the study of the characteristics of the S-block, such properties as balance, weight and Hamming distance, nonlinearity, and correlation value were examined. The results are given in Table 3. These indicators of the effectiveness of the S-block affect the level of resistance of the developed cipher to various crypto attacks; therefore, conclusions about its strength are based on their values.

2. 2. Estimation of the "avalanche effect" of the lightweight LBC encryption algorithm
The LBC encryption algorithm splits text into blocks that are 64 bits long. For the practical assessment of the avalanche effect, the avalanche criterion from [25, 26] was used; the results are given in Tables 4, 5.
The lower the value of the avalanche parameter, the stronger the avalanche effect in the transformation. From Tables 4, 5 it follows that the LBC encryption algorithm satisfies the requirements of the avalanche criterion after the seventh round.
Let us look at an example of the spread of the avalanche effect. To do this, select two plaintexts in hexadecimal form. Plaintext1: «AA AA AA AA AA AA AA AA» consists only of «AA» characters. Plaintext2: «AA AA AA AA AA BA AA AA» differs from the first plaintext by only one bit, in the character «B». The following sequence of characters is selected as the key: «30 B5 13 CE BB 69 0B F3 95 33». Table 6 shows the avalanche effect of the LBC algorithm from one to seven rounds. The LBC algorithm uses the following transformations: the S-block substitution (S), the RL function, and the bitwise key addition (xor). The xor operation does not affect the propagation of the changes.
Column A indicates the number of changed bits, column B the approximate percentage; the changed positions of the ciphertexts are marked in gray. The avalanche effect reaches 100 % after the seventh round.
A similar but stronger criterion, called the strict avalanche criterion (SAC), requires that for any i and j, when input bit i is inverted, output bit j changes with a probability of 1/2. Table 7 shows the statistical security indicators, where d_c is the degree of completeness, d_sa is the degree of strict avalanche criterion, and d_a is the degree of avalanche effect. The data given in Table 7 show that the full avalanche effect of the LBC algorithm is reached in the seventh round.

2. 3. Differential cryptanalysis of the LBC algorithm
For differential cryptanalysis, two pre-selected plaintexts A_1 and A_2 are taken; the attacker calculates the differential ∆A = A_1 ⊕ A_2 and, with its help, tries to determine what the differential of the ciphertexts ∆B = B_1 ⊕ B_2 should be.
In most cases, the probability of an attacker guessing the exact value of ∆B is extremely low. An attacker is able to determine the frequency of returning ∆B for a given ∆A, which in turn gives him the opportunity to obtain part of the key or the whole key.
The operations of bitwise cyclic shift and xor have no effect on the difference, which is easy to check. However, in order to construct multi-round characteristics, it is important to determine exactly how the values of the desired bytes obtained after the other transformations will be converted. When the round key is added modulo 2, its bits cancel out in the difference. Therefore, the value of the key also does not affect the differentials. Hence, the change in the difference depends mainly on the differential properties of the S-blocks.
The LBC algorithm uses a 4-bit S-block, for which it is necessary to build a difference distribution table. The methodology for constructing such a table is given in [27, 28]. When conducting differential cryptanalysis, it is necessary to consider all combinations of binary vectors, xor all possible input and output elements of the S-block, count the occurrences of each output difference, and enter the results into the difference distribution table (Table 8).

Table 8. Table of the distribution of the difference for the S-block

In the difference distribution table (Table 8), the maximum value of the probability is 4/16 = 1/4. For convenience, Table 9 shows the input differences and the corresponding output differences with maximum values (in hexadecimal form).

Table 9. Result of difference conversion for the S-block

∆X → ∆Y: 0x01→0x0b, 0x02→0x03, 0x03→0x0d, 0x04→0x09, 0x05→0x0f, 0x06→0x0a, 0x07→0x04, 0x08→0x0c, 0x09→0x08, 0x0a→0x01, 0x0b→0x0a, 0x0c→0x07, 0x0d→0x06, 0x0e→0x02, 0x0f→0x05.

In Table 9, ∆X is the input to the substitution block, and ∆Y is the value obtained at the output of the substitution block corresponding to ∆X.
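The difference distribution table described above can be computed for any 4-bit S-box with a few lines of code. The S-box below is a stand-in (PRESENT's, since the paper's Table 1 is not reproduced); it happens to have the same maximum differential probability of 4/16 = 1/4 mentioned above.

```python
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # PRESENT S-box as a stand-in

def ddt(sbox):
    """ddt[dx][dy] = number of inputs x with S(x) ^ S(x ^ dx) == dy."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for dx in range(n):
        for x in range(n):
            table[dx][sbox[x] ^ sbox[x ^ dx]] += 1
    return table

T = ddt(SBOX)
# Maximum differential probability over nonzero input differences:
max_prob = max(T[dx][dy] for dx in range(1, 16) for dy in range(16)) / 16
```

Each row of the table sums to 16, and row 0 is trivially concentrated at ∆Y = 0; the attack exploits the largest entries in the nonzero rows.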
The next encryption step (RL transform) performs linear transformations using a cyclic shift and an xor operation. They do not have any effect on the change in the probability of the difference transformation since these shifts are performed only within one subblock.
Based on the differential properties of the transformations used in the encryption algorithm, we construct several round characteristics and obtain their probabilities. The main task is to find the segment of the ciphertext where the fewest active S-blocks are affected; the probability of obtaining the correct pair of texts for a given characteristic depends on this. Then, it is necessary to determine the number of rounds for which the ciphertext can be analyzed faster than by the brute-force method. The brute-force complexity for a 64-bit data block and an 80-bit key is 2^80.
The 64-bit input block is divided into 4 sub-blocks of 16 bits each, designated X1, X2, X3, X4. During the analysis, it was established that the smallest mixing of bits occurs in the second sub-block (X2). As noted earlier, all output bits depend on all input bits after the seventh round. Therefore, to find the correct pairs, the output equation for each Xi, i = 1,…,4, after the seventh round is needed. Two transformations are used in the structure of the algorithm, the S-block substitution S and the shift transformation RS, so after seven rounds each output sub-block Yi is expressed as a nested composition of S and RS applied to the input sub-blocks.
From the obtained equations, it can be seen that the expression for the second sub-block (Y1) involves the smallest number of transformations. Therefore, going through the rounds of encryption step by step, we show one pair with the highest probability.
Any pair of input and output differences whose cell in the distribution table (Table 8) contains the value 4 can be selected. If the value found so far belongs to the first sub-block (RS(X1)), these differences can be chosen as the correct pair for the second sub-block. In general, any pair from Table 9 can be chosen, since they are all considered correct: each row of the table contains one cell with the value 4, i.e., each input has a corresponding output element with equal probability. Thus, 0x888A→0x3331 is a correct pair for the expression S(X2)⊕RS(X1), and, since all four S-blocks are active, its probability is (4/16)^4 = (1/4)^4 = 1/2^8. Since the S-block substitution is applied at the next encryption step, according to Table 9, for 0x3331 the most likely difference will be 0xDDDB.
According to the scheme of the encryption algorithm, choosing high-probability pairs as indicated above, we obtain the following input and output sequences for seven rounds: for the input difference 0x1111, the most likely output difference is 0xA745. It is not difficult to calculate that the S-block occurs 13 times in the output expression for the second sub-block, i.e., the probability of the desired difference is (1/4)^13 = 1/2^26. Then, by analyzing the movement of each element of the plaintext and monitoring the substitutions performed at each step, it was determined that the probability of finding the key for twelve rounds of the cipher is 1/2^98.
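The probability bookkeeping behind these figures is simple multiplication over active S-blocks. A minimal sketch, where the active-S-block counts 4 and 13 come from the text and the per-S-block probability 4/16 = 1/4 comes from Table 8:

```python
from fractions import Fraction

# Each active S-block passes the chosen difference with probability
# 4/16 = 1/4 (the maximum entry of Table 8); assuming the rounds are
# independent, the per-S-block probabilities multiply.
P_SBOX = Fraction(4, 16)

def characteristic_probability(active_sboxes):
    """Probability of a differential characteristic with the given
    number of active S-blocks, each holding with probability 1/4."""
    return P_SBOX ** active_sboxes

print(characteristic_probability(4))   # pair 0x888A -> 0x3331: 1/2^8
print(characteristic_probability(13))  # seven-round characteristic: 1/2^26
```

The same multiplication, continued over twelve rounds with maximum-probability differentials, yields the 1/2^98 figure quoted above.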

2.4. Linear cryptanalysis of the LBC algorithm
To start the analysis, we fill in the linear approximation table (LAT) for the S-block.
During the construction of the table, all possible combinations of input and output binary vectors are traced. Each pair of vectors is used as a mask applied to all possible input-output pairs of the replacement block, and each table entry is determined by the relation LAT(α, β) = #{x : α·x ⊕ β·S(x) = 0}, where α, β ∈ Z16 and the multiplication sign denotes the scalar product operation [13,29,30]. The obtained values are shown in Table 10.
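This tracing can be sketched directly. As before, the LBC S-block is not reproduced in this section, so the PRESENT S-box serves as a stand-in; an entry counts how many of the 16 inputs satisfy the masked parity relation α·x ⊕ β·S(x) = 0.

```python
# Sketch: linear approximation table (LAT) for a 4-bit S-box.
# Stand-in S-box for illustration; the LBC S-block (Table 1) is not
# reproduced in this excerpt.
PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def dot(a, b):
    """Scalar (dot) product of two 4-bit vectors over GF(2)."""
    return bin(a & b).count("1") & 1

def lat(sbox):
    """LAT[a][b] = number of x with a.x == b.S(x); 8 means balanced."""
    n = len(sbox)
    return [[sum(dot(a, x) == dot(b, sbox[x]) for x in range(n))
             for b in range(n)] for a in range(n)]

table = lat(PRESENT_SBOX)
print(table[0][0])  # 16: the trivial mask pair always holds
```

Entries of 0 or 16 would hand the cryptanalyst an exact linear relation; entries equal to 8 correspond to probability 1/2 and carry no usable information.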
In the linear approximation table (Table 10), the first column contains the input masks and the first row contains the output masks. If a 4-bit linear equation is satisfied 0 times, this particular linear relation is absent for the S-block; if it is satisfied 16 times, the relation always holds. In both cases, the cryptanalyst obtains full information. The result is better for the cryptanalyst if the probability of a unique 4-bit linear equation holding is far from 1/2, i.e., close to 0 or 1. If the probabilities of all unique 4-bit linear relations are equal or close to 1/2, then linear cryptanalysis of the 4-bit S-block is considered difficult. Accordingly, the result for the cryptanalyst is better when there are few entries equal to eight in the table; if the number of eights considerably exceeds the other values, the 4-bit cryptographic S-block is considered more resistant to linear cryptanalysis [12,31,32].
In the LAT, the cells with values farthest from 8 contain 12 or 4. Therefore, the effective linear equations needed for further analysis are those with probability 3/4, i.e., with the largest deviation from 1/2; here j is the position number in the block, j = 0,…,64. As in differential cryptanalysis, we analyze the encryption algorithm over seven rounds, since all output bits depend on all input bits after the seventh round.

From among the effective linear equations obtained from the S-blocks, we choose x[1] ⊕ s[1] = 0. Since the input and output variables are involved only once, this equation is convenient for further analysis. We write it for the first sub-block in the first round (taking into account that the key is added). Fig. 3 shows the R-function for the first sub-block.
According to the scheme of the R-function shown in Fig. 3, and then according to Lemma 1, the probability of the given effective equation is obtained; to perform an effective attack using linear cryptanalysis, 2^64 plaintext/ciphertext pairs are needed.
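"Lemma 1" here is presumably the standard piling-up lemma, which combines several independent linear approximations into one. A minimal sketch of that combination, using the probability 3/4 found above (the number of chained approximations is illustrative, not taken from the paper):

```python
from fractions import Fraction

def piling_up(probabilities):
    """Piling-up lemma: probability that the XOR of n independent
    linear approximations holds, given their individual probabilities.
    p = 1/2 + 2^(n-1) * prod(p_i - 1/2)."""
    half = Fraction(1, 2)
    bias = Fraction(2) ** (len(probabilities) - 1)
    for p in probabilities:
        bias *= (p - half)
    return half + bias

# Example: two independent approximations, each holding with
# probability 3/4 (the largest LAT deviation from 1/2):
print(piling_up([Fraction(3, 4), Fraction(3, 4)]))  # 5/8
```

The further the combined probability drifts toward 1/2, the more plaintext/ciphertext pairs the attack requires, which is how the 2^64-pair data requirement arises.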

2. Investigation of the hardware performance of the LBC algorithm
The main characteristics of the hardware implementation of the algorithm are given in Table 11. The number of conditional logic elements (gate equivalents, GE) was calculated according to the methodology presented in [33]. From Table 11 it follows that the main part of the area is occupied by storage for the key, the data state, and the S-block, followed by the key addition. The permutation can be implemented with simple wiring and therefore requires no GE. It follows that the total GE count of the developed algorithm is 1966.32, so the LBC cipher can be attributed to the low-cost group. This architecture is therefore suitable for low-cost devices and can also be used to create a cryptographic coprocessor with a small area. A comparison of the obtained results with other ciphers is given in Table 12. The developed LBC algorithm, like Present-80, can be effectively used in hardware implementations for devices with low hardware resources.
The results presented in Table 13 indicate that the encryption time of LBC is much less than that of Present.

Discussion of results of the study of the LBC algorithm
In the development and research of the cryptographic properties of the LBC algorithm, the lightweight PRESENT algorithm was taken as the main reference. In 2012, the PRESENT algorithm was included in the international standard for lightweight encryption ISO/IEC 29192-2:2012.
The developed lightweight LBC algorithm is structurally different from the lightweight algorithms that exist today.
The LBC algorithm has its own 4-bit lookup table (Table 1). For the linear transformation, the RL function was used. This function is added to achieve a good quality of the "avalanche effect". And the cyclic shift of subblocks provides good bit shuffling along with the RL function.
The number of rounds of the LBC algorithm is 20. According to the results of the study, a good avalanche effect is achieved starting from the 7 th round (Table 7). Therefore, 20-round conversions provide sufficient and good crypto strength.
The most distinctive feature of the LBC algorithm compared to Present is the generation of round keys. It uses the shift functions R_i, similar to the RL function used in the transformations of the main encryption algorithm. Whereas in the Present algorithm only the leftmost 4 bits of the master key pass through the S-block, in the LBC algorithm 16 bits of the leftmost sub-block of the key are replaced using the S-block employed in the main encryption process. Here, the R_i function and the cyclic shift of the sub-blocks of the master key provide a good avalanche effect, thereby improving the properties of the round keys. The algorithm has a simple but flexible structure: if necessary, the block size and the master key size can be increased by adding an additional 16-bit sub-block, while the basic structure of the algorithm is preserved.
Theoretical and experimental tests have shown that the algorithm fully complies with the basic cryptographic requirements.
The first study tested the indicators of the avalanche effect. For this, a special program was created that computed the avalanche criterion and the values of the avalanche parameter as each bit of the input data block was changed. Table 4 shows the results of the study of the avalanche effect for each round when the first bit of the input block is changed. From Table 4 it can be seen that in the first round the average value of the avalanche parameter is 0.8766 and the maximum value is 0.969, while after the seventh round the average value is 0.0992. Starting from the 7th round, the average value approaches 0.09, which indicates a good avalanche effect: the smaller the value of the avalanche parameter, the stronger the avalanche effect of the transformation. From Tables 4, 5 it follows that the LBC encryption algorithm satisfies the requirements of the avalanche criterion after the seventh round. Therefore, we believe that 20-round transformations provide sufficient cryptographic strength. In the study of the avalanche effect, various variants of the input block were also taken; the results are shown in Table 6.

During the research, the per-round values of the statistical security indicators of the LBC algorithm were also calculated, for which another computer program was created. To compare the values of the statistical security indicators, the per-round values for the Present algorithm were additionally calculated. The value of the degree of completeness (d_c) for the Present algorithm reaches a good indicator starting from the second round (Table 5), while for the LBC algorithm the degree of completeness (d_c) takes the value 1 from the 8th round (Table 7). This indicator also confirms the good quality of the avalanche effect.
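The measurement underlying such tables reduces to flipping one input bit at a time and averaging the Hamming distance between outputs. A minimal sketch of the standard avalanche measurement, with a placeholder cipher function since the LBC round function is not reproduced in this section (note the paper's avalanche parameter is defined differently, with smaller values being better):

```python
import hashlib

BLOCK_BITS = 64

def toy_cipher(block):
    """Placeholder for LBC encryption: any 64-bit -> 64-bit mapping can
    be plugged in here; SHA-256 truncation is used only for the demo."""
    digest = hashlib.sha256(block.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:8], "big")

def avalanche(cipher, block):
    """Average fraction of output bits that change when each of the
    64 input bits is flipped in turn (ideal value: 0.5)."""
    base = cipher(block)
    total = 0
    for i in range(BLOCK_BITS):
        flipped = cipher(block ^ (1 << i))
        total += bin(base ^ flipped).count("1")
    return total / (BLOCK_BITS * BLOCK_BITS)

print(avalanche(toy_cipher, 0x0123456789ABCDEF))
```

Running the same loop on each round's intermediate output, rather than on the full cipher, yields the per-round values reported in Tables 4-6.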
The second stage of the study of the properties of the LBC algorithm was two types of cryptanalysis: differential and linear.
When conducting differential cryptanalysis, it was found that for an input difference of 0x1111 the most probable output difference is 0xA745. It is easy to calculate that the S-block occurs 13 times in the output expression for the second sub-block, i.e., the probability of the desired difference is (1/4)^13 = 1/2^26 and, as a result, a sufficient level of security is not provided. When analyzing the movement of each element and tracking the substitutions at each step, it was found that after the twelfth round the S-block will have been used 127, 49, 67, and 92 times in the respective sub-blocks. Repeatedly choosing differentials with maximum probabilities, it turns out that the probability of finding the key for twelve rounds of the cipher is 1/2^98. The results of linear cryptanalysis showed that after the seventh round the ciphertexts repeat the properties of random substitutions; an effective attack using linear cryptanalysis requires 2^64 plaintext/ciphertext pairs. Since 64-bit blocks are fed into the encryption algorithm, the maximum possible number of plaintext/ciphertext pairs is 2^64. Thus, there is no need to perform linear cryptanalysis for more rounds.
In practical use, the developed encryption algorithm does not require significant restrictions, but when implementing this algorithm in devices, it is necessary to know the main characteristics of this device. The results of the study, obtained during the assessment of reliability and speed, showed that the developed lightweight encryption algorithm fully complies with the basic requirements. It was noted that, taking into account modern technological capabilities, the length of the block and the master key can be increased.
Research work in this direction can be continued in terms of a deeper analysis of the security of the developed lightweight encryption algorithm. In particular, a number of other cryptographic attacks can be carried out to improve the algorithm.

Table 13. The results of research on the performance of ciphers

Conclusions
1. During the implementation of our task, a block diagram of the algorithm was developed that meets the basic requirements for cryptographic transformations. The scheme of the algorithm is built so as to increase performance through parallel computing, by dividing the block into sub-blocks. If necessary, the structure of the algorithm makes it possible to increase the size of the block and the master key by adding an additional sub-block, which in turn enhances the cryptographic strength.
Also, counting the number of conditional logic elements and comparing the performance of the ciphers with the Present algorithm showed the effectiveness of its hardware implementation. The GE count confirms that the algorithm falls into the "low-cost" implementation class, which requires no more than 2000 GE. LBC encryption is 20 times faster than Present, and LBC key generation is 1.5 times faster than that of Present.
2. Based on the tests and studies carried out, it was concluded that the LBC encryption algorithm is effective in providing an avalanche effect since it meets the requirements of the avalanche criterion after the seventh round.
Differential cryptanalysis of the algorithm showed that the obtained probability value exceeds the complexity of exhaustive search, i.e., after the eleventh round the encryption algorithm is resistant to differential cryptanalysis. Due to the extremely low probability of obtaining correct pairs of texts, an algorithm for finding such texts cannot be implemented with currently available means.
The obtained results of linear cryptanalysis indicate that in the LBC encryption algorithm, linear indicators after the seventh round repeat the properties of random substitutions. The probability of finding correct pairs is very small and tends to the complexity of the exhaustive search of the algorithm, which allows us to conclude that it is resistant to linear cryptanalysis.

Conflicts of interest
The authors declare that they have no conflicts of interest in relation to the current study, including financial, personal, authorship, or any other, that could affect the study and the results reported in this paper.

Funding
The research work was carried out within the framework of the project AP09259570 "Development and research of a domestic lightweight encryption algorithm with limited resources" at the Institute of Information and Computing Technologies of CS MES RK.

Data availability
Manuscript has no associated data.