Determination of the influence of the choice of the pruning procedure parameters on the learning quality of a multilayer perceptron
DOI: https://doi.org/10.15587/1729-4061.2022.253103

Keywords: multilayer perceptron, neural network, pruning, learning curve, weight coefficients, image classification

Abstract
Pruning connections in a fully connected neural network makes it possible to remove redundancy from the network structure and thus reduce the computational complexity of its implementation while preserving the quality of classification of the images fed to its input. However, the choice of the parameters of the pruning procedure has not been studied sufficiently so far. This choice depends essentially on the configuration of the neural network; nevertheless, any neural network configuration contains one or more multilayer perceptrons, for which universal recommendations on choosing the pruning parameters can be developed. One of the most promising methods for practical implementation is considered: iterative pruning combined with preprocessing of the input signals, which regularizes the learning process of the neural network. For a specific multilayer perceptron configuration and the MNIST (Modified National Institute of Standards and Technology) dataset, a database of handwritten digit samples proposed by the US National Institute of Standards and Technology as a standard for comparing image recognition methods, the dependences of handwritten-digit classification accuracy and learning rate on the learning step, the pruning interval, and the number of connections removed at each pruning iteration were obtained. It is shown that, within the studied range, the best set of parameters of the learning procedure with pruning improves the classification quality by about 1 % compared with the worst set. The convex nature of these dependences allows a constructive approach to finding a neural network configuration that provides the highest classification accuracy at the minimum computational cost of implementation.
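To make the procedure described above concrete, a minimal sketch of iterative pruning for a multilayer perceptron on MNIST is given below. It assumes a TensorFlow/Keras implementation; the layer sizes, the parameter values (learning step, pruning interval, fraction of connections removed per pruning iteration), and the magnitude-based choice of connections to remove are illustrative assumptions, not the exact configuration or criterion studied in the paper. For simplicity, pruned weights are re-zeroed after each training epoch rather than being masked inside the gradient step.

```python
# Illustrative sketch of iterative pruning of an MLP on MNIST (not the authors' exact procedure).
import numpy as np
import tensorflow as tf

# Pruning-procedure parameters explored in the paper; the values below are placeholders.
LEARNING_RATE = 1e-3      # learning step
PRUNING_INTERVAL = 2      # epochs of training between consecutive pruning iterations
PRUNE_FRACTION = 0.05     # share of surviving connections removed at each pruning iteration
TOTAL_EPOCHS = 20

# Load MNIST and normalize pixel values (input preprocessing regularizes training).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# A small fully connected network; the layer sizes are assumptions for demonstration only.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Binary masks marking which connections are still present.
masks = [np.ones_like(w) for w in model.get_weights()]

def apply_masks():
    """Zero out pruned connections so they stay removed."""
    model.set_weights([w * m for w, m in zip(model.get_weights(), masks)])

def prune_step(fraction):
    """Remove the given fraction of the smallest-magnitude surviving weights in each Dense kernel."""
    weights = model.get_weights()
    for i, (w, m) in enumerate(zip(weights, masks)):
        if w.ndim != 2:          # skip bias vectors, prune only weight matrices
            continue
        alive = np.abs(w[m > 0])
        if alive.size == 0:
            continue
        threshold = np.quantile(alive, fraction)
        masks[i] = (np.abs(w) > threshold).astype(w.dtype) * m
    apply_masks()

for epoch in range(TOTAL_EPOCHS):
    model.fit(x_train, y_train, epochs=1, batch_size=128, verbose=0)
    apply_masks()                          # keep pruned connections at zero after the epoch
    if (epoch + 1) % PRUNING_INTERVAL == 0:
        prune_step(PRUNE_FRACTION)         # one pruning iteration
        loss, acc = model.evaluate(x_test, y_test, verbose=0)
        print(f"epoch {epoch + 1}: test accuracy {acc:.4f}")
```

Sweeping LEARNING_RATE, PRUNING_INTERVAL, and PRUNE_FRACTION over a grid and recording the resulting test accuracy and learning curves would yield dependences of the kind discussed in the abstract, from which the best combination of pruning parameters can be selected.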
License
Copyright (c) 2022 Oleg Galchonkov, Alexander Nevrev, Bohdan Shevchuk, Nikolay Baranov
This work is licensed under a Creative Commons Attribution 4.0 International License.