Development of a Method for Compressing Images on the Basis of JPEG Algorithm

The problem of image optimization, that is, reducing the physical size of an image while losing as little quality as possible, is considered. The object of research is methods for processing and compressing images. Analysis of these methods revealed a central problem: the studied methods can achieve minimal quality loss when processing and compressing an image, but at the cost of a significantly reduced compression ratio. To overcome this problem, it was decided to develop a modification of the JPEG compression algorithm. The proposed modification consists in additional quantization of the spectrum after the discrete cosine transform; the resulting spectrum is then fed to a Huffman encoder, which makes the compression even more efficient. A method is obtained for solving the image optimization problem that yields an image of smaller size and larger compression ratio while maintaining acceptable quality. This is possible because the proposed method has several useful features: the original color image may have up to 24 bits per pixel, and, in particular, the compression ratio can be set by the user. Thanks to this, it is possible to obtain a signal-to-noise ratio of 54.2 dB at a quality factor of zero. Compared with the well-known LZW algorithm, the proposed method performs considerably better and produces a processed image with a much smaller physical size. Image quality is assessed depending on the parameters of the task. It is shown that for problems of small and medium dimensions the developed method provides minimal quality loss. The results of solving the problem for a specific example demonstrate the advantage of the developed method over existing ones. The results can be successfully applied to the problem of optimizing image size while maintaining maximum quality.


Introduction
Recently, image compression and processing methods have been widely used in many areas of information technology, for example, to speed up the loading of content on sites with a large number of images. However, the known methods do not achieve optimal image compression, because they rest on an imperfect balance between image quality and size. At high compression the size decreases significantly, but image quality is also lost significantly; conversely, if quality is preserved, the size is not reduced as much as required. Developing a modification of an image processing and compression method is therefore an urgent task [1]. The object of research is methods for processing and compressing images. The aim of research is to develop a method for processing and compressing images that minimizes image size without significantly affecting image quality.

Methods of research
Saving space in JPEG starts at the image representation stage. Instead of the usual RGB color space, it uses YCbCr. The image is divided into blocks of 8×8 pixels. Each block undergoes a discrete cosine transform (DCT), which converts the block from its spatial form to a spectral form. A spectrum can be compressed more efficiently than individual pixels. Additionally, the spectrum is quantized, which increases the number of zeros in the block. The quantized spectral coefficients are then compressed by run-length encoding, and the result is fed to the Huffman encoder [2], which makes the compression even more efficient. The entire process is repeated for each image block. It follows that the higher the degree of quantization, the smaller the output image.
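The first pipeline step above, the change of color space, can be sketched as follows. The coefficients are the standard JFIF (BT.601) conversion used by baseline JPEG; this is an illustrative sketch, not code from the paper:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one RGB pixel (values 0..255) to YCbCr as used by JFIF/JPEG.
    Y carries brightness; Cb and Cr carry chroma, centered on 128."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

# A neutral gray pixel maps to Y equal to the gray level and
# Cb = Cr = 128, i.e. no chroma.
print(rgb_to_ycbcr(128, 128, 128))
```

Separating brightness from chroma is what lets JPEG later treat the chroma channels more coarsely than the luma channel.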
The main stage of the method is the discrete cosine transform (DCT), a variant of the Fourier transform [3]. It allows one to move from the spatial representation of the image to its spectral representation and back. All the transformations usually applied to signals during digital processing reduce, one way or another, to the decomposition of a function into other, so-called basis functions [4].
Such a two-dimensional representation reveals an interesting feature: the horizontal coordinate of the position of a basis function characterizes the horizontal component of the changes in the image in the original square, and the vertical coordinate characterizes the vertical component. The larger, for example, the coefficient in front of a basis function located to the right of the origin, the more abrupt the horizontal transitions in the image are [5].
Applying the DCT to each working matrix yields a matrix in which the coefficients in the upper left corner correspond to the low-frequency component of the image and those in the lower right corner to the high-frequency component.
It is possible to create the DCT matrix using the following formula [6]:

DCT(i, j) = 1/√N for i = 0;
DCT(i, j) = √(2/N) · cos((2j + 1)iπ / 2N) for i > 0,

where N is the size of the matrix (here N = 8); i is the row of the matrix, 0 ≤ i ≤ 7; j is the column of the matrix, 0 ≤ j ≤ 7.
Let's carry out the matrix multiplication

P_DCT = DCT × P × DCT^T,

where P is an image block of size 8×8 elements; P_DCT is the block after passing the DCT; DCT is the cosine transform matrix; DCT^T is the corresponding transposed matrix [7]. The quantization process uses the fact that strong high-frequency components are almost never found in real images, and the eye is far less sensitive to them, so the high-frequency components can be encoded more coarsely. And here the two-dimensional presentation of the DCT results helps us once again [8].
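The two steps above, building the DCT matrix and applying it to a block, can be sketched in a few lines of pure Python. This is a minimal illustrative implementation of the standard formulas, not the authors' code:

```python
import math


def dct_matrix(n=8):
    """DCT transform matrix: row 0 is the constant (DC) basis vector,
    rows 1..n-1 are cosines of increasing frequency."""
    return [
        [1.0 / math.sqrt(n) if i == 0
         else math.sqrt(2.0 / n) * math.cos((2 * j + 1) * i * math.pi / (2 * n))
         for j in range(n)]
        for i in range(n)
    ]


def matmul(a, b):
    """Plain triple-loop matrix product; fast enough for 8x8 blocks."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]


def transpose(m):
    return [list(row) for row in zip(*m)]


def dct2(block):
    """P_DCT = DCT x P x DCT^T for one square image block."""
    t = dct_matrix(len(block))
    return matmul(matmul(t, block), transpose(t))


# For a flat block all the energy collapses into the top-left (DC)
# coefficient and every other (AC) coefficient is essentially zero.
flat = [[128.0] * 8 for _ in range(8)]
spectrum = dct2(flat)
```

Because the DCT matrix is orthonormal (rows have unit length and are mutually orthogonal), the inverse transform is simply multiplication by the transpose in the opposite order.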
Now each number in the matrix must be divided by the number at the corresponding position of the quantization matrix. The resulting fractional numbers must be properly rounded to integers. If the inverse transform is then carried out, the deviation from the reference values does not exceed 10 %. This is where the loss of information ends; all the remaining transformations are lossless. The significant advantages of the JPEG algorithm are that: a) the compression degree can be specified; b) the original color image may have 24 bits per pixel. The drawbacks of the algorithm are that: a) as the compression ratio increases, the image splits into individual 8×8 squares, because quantization causes large losses at low frequencies and the original data become impossible to restore; b) the Gibbs effect (halos) [9] appears along the boundaries of sharp color transitions.
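The divide-and-round step can be sketched as follows. The table used here is the example luminance quantization table from Annex K of the JPEG standard, used as a stand-in: the paper's own quality-factor-dependent quantization is not reproduced here.

```python
# Example luminance quantization table from the JPEG standard (Annex K).
Q_LUMA = [
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
]


def quantize(spectrum, table=Q_LUMA):
    """Divide each DCT coefficient by the matching table entry and round
    to the nearest integer. This is the only lossy step of the pipeline."""
    return [[round(spectrum[i][j] / table[i][j]) for j in range(8)]
            for i in range(8)]


def dequantize(quantized, table=Q_LUMA):
    """Approximate reconstruction: multiply back by the table entries."""
    return [[quantized[i][j] * table[i][j] for j in range(8)]
            for i in range(8)]
```

Small high-frequency coefficients round to zero, producing the long runs of zeros that make the subsequent run-length and Huffman stages effective.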

Research results and discussion
One of the problems of computer graphics is that no adequate criterion for assessing image quality loss has yet been found; the human eye remains the best judge of quality loss. The measure now used in practice is the peak signal-to-noise ratio (PSNR) [10]:

PSNR = 10 · log10(255² / MSE),

where MSE is the mean squared error between the original and the processed image. Let's study the signal-to-noise ratio for various values of the quality factor Q. As Q increases, the signal-to-noise ratio decreases. It should be noted that at Q = 0 the signal-to-noise ratio is 54.2 dB, since the image is still processed: in this case the losses occur not in the quantization process but at the rounding stages of the variables. Two regions should be noted: first a sharp decrease in the signal-to-noise ratio, then only a slight change as the quality factor increases significantly. This is important from a practical point of view, as it greatly facilitates the user's search for a compromise between image quality and compression ratio.

Fig. 1 shows a graph of the dependence of the compression ratio on the quality factor, that is, a characteristic of the degree of image packing depending on its quality. At maximum quality (Q = 0) the compression ratio is minimal and equal to 4. Reducing the requirements for image quality allows the compression ratio to be increased. The dependence saturates, which is due to the use of the adaptive Huffman method, whose compression ratio is itself limited.
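The PSNR measure above can be computed as in this short sketch for 8-bit images (peak value 255); the flattened pixel lists are illustrative:

```python
import math


def psnr(original, processed):
    """Peak signal-to-noise ratio in dB between two equal-length
    sequences of 8-bit pixel values. Higher means less distortion."""
    n = len(original)
    mse = sum((a - b) ** 2 for a, b in zip(original, processed)) / n
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * math.log10(255.0 ** 2 / mse)


# An image whose every pixel is off by exactly 1 gray level (MSE = 1):
print(round(psnr([10, 20, 30], [11, 21, 31]), 1))  # about 48.1 dB
```

The logarithmic scale is why the curve of PSNR versus quality factor first drops sharply and then flattens: each further doubling of the mean squared error costs only about 3 dB.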

Conclusions
A solution to the problem of image processing and compression is proposed. An image processing and compression method has been developed that consists in solving the multicriteria problem of optimizing the physical size of an image without significant quality loss. The developed method yields a processed image with a much smaller physical size, which makes it well suited to saving space on storage media and to speeding up the loading of web pages with a large number of images.
The calculation time is estimated depending on the parameters of the problem. It is established that for problems of small and medium dimensions the developed method provides an acceptable calculation time. The results of solving the problem for a specific example demonstrate the advantage of the developed method over existing ones. Thus, the JPEG algorithm is modified to compress the image and optimize its physical size. Applied to this problem, the modified compression method has shown high computational efficiency and provides an effective tool for its solution.
Thus, the obtained results allow one to conclude that the proposed method for processing and compressing images is an expedient and effective solution to the problem of optimizing the physical size of an image without significant loss of quality.