Using a neural network in the second stage of the ensemble classifier to improve the quality of classification of objects in images
DOI: https://doi.org/10.15587/1729-4061.2022.258187

Keywords: multilayer perceptron, neural network, ensemble classifier, weights, classification of images

Abstract
Object recognition in images is used in many practical areas, and progress in its application often depends on the trade-off between recognition quality and the required amount of computation. Recent advances in recognition are associated with neural network architectures that demand a very large amount of computation and are trained on large datasets for a very long time on state-of-the-art computers. In many practical applications, it is impossible to collect such large training datasets, and only machines with limited computing power are available. The search for solutions that meet these practical constraints is therefore relevant. This paper reports an ensemble classifier that uses stacking in the second stage. Combining significantly different classifiers in the first stage with a multilayer perceptron in the second stage makes it possible to substantially improve the trade-off between classification quality and the required amount of computation when training on small datasets. The current study showed that using a multilayer perceptron in the second stage reduces the error compared to using majority voting in the second stage: on the MNIST dataset the error was reduced by 29‒39 %, and on the CIFAR-10 dataset by 13‒17 %. A comparison of the proposed ensemble architecture with a transformer-type classifier demonstrated both a lower error and a smaller amount of computation. For the CIFAR-10 dataset, an 8 % error reduction was achieved with 22 times less computation; for the MNIST dataset, the error reduction was 62 % with 50 times less computation.
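The stacking scheme described in the abstract can be illustrated with a short sketch. This is a minimal, hypothetical example, not the authors' exact architecture: it assumes three deliberately different first-stage classifiers (a small CNN, an MLP, and a linear softmax model), a simple holdout split in place of whatever out-of-fold scheme the paper uses, and Keras as the framework. The second-stage MLP is trained on the concatenated class probabilities of the first stage, and the result is compared with plain majority voting:

```python
# Hypothetical stacking sketch (not the authors' exact architecture):
# several deliberately different first-stage classifiers feed their class
# probabilities into a second-stage multilayer perceptron; the result is
# compared with simple majority voting on MNIST.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 10
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

# Hold out part of the training set: base models are fitted on x_base,
# the meta-learner on their predictions for the unseen x_meta.
x_base, y_base = x_train[:50000], y_train[:50000]
x_meta, y_meta = x_train[50000:], y_train[50000:]

def make_cnn():
    return keras.Sequential([
        layers.Reshape((28, 28, 1), input_shape=(28, 28)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.Flatten(),
        layers.Dense(num_classes, activation="softmax"),
    ])

def make_mlp():
    return keras.Sequential([
        layers.Flatten(input_shape=(28, 28)),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

def make_linear():
    return keras.Sequential([
        layers.Flatten(input_shape=(28, 28)),
        layers.Dense(num_classes, activation="softmax"),
    ])

# First stage: three structurally different classifiers.
base_models = [make_cnn(), make_mlp(), make_linear()]
for model in base_models:
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_base, y_base, epochs=3, batch_size=128, verbose=0)

def stacked_features(x):
    # Concatenate the class-probability vectors of all base models.
    return np.concatenate([m.predict(x, verbose=0) for m in base_models], axis=1)

# Second stage: a small MLP trained on held-out first-stage predictions.
meta = keras.Sequential([
    layers.Input(shape=(num_classes * len(base_models),)),
    layers.Dense(64, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
meta.compile(optimizer="adam",
             loss="sparse_categorical_crossentropy",
             metrics=["accuracy"])
meta.fit(stacked_features(x_meta), y_meta, epochs=10, batch_size=128, verbose=0)

# Evaluate stacking vs. majority voting on the test set.
test_probs = [m.predict(x_test, verbose=0) for m in base_models]
stack_pred = np.argmax(meta.predict(np.concatenate(test_probs, axis=1),
                                    verbose=0), axis=1)
votes = np.stack([p.argmax(axis=1) for p in test_probs])  # shape (3, n_test)
vote_pred = np.apply_along_axis(
    lambda c: np.bincount(c, minlength=num_classes).argmax(), 0, votes)

print("stacking accuracy:      ", (stack_pred == y_test).mean())
print("majority-vote accuracy: ", (vote_pred == y_test).mean())
```

The key design point of stacking is that the meta-learner only ever sees first-stage predictions for samples the base models were not trained on; otherwise it would learn to trust overfitted outputs.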
License
Copyright (c) 2022 Oleg Galchonkov, Mykola Babych, Andrii Zasidko, Serhii Poberezhnyi
This work is licensed under a Creative Commons Attribution 4.0 International License.
The terms of the transfer of copyright (identification of authorship) are set out in the License Agreement. In particular, the authors retain authorship of their manuscript and grant the journal the right of first publication of the work under the terms of the Creative Commons CC BY license. They also remain free to enter into separate, additional agreements for the non-exclusive distribution of the work in the form in which it was published by this journal, provided that a link to the first publication of the article in this journal is preserved.
A License Agreement is a document in which the author warrants that he or she holds all copyright to the work (manuscript, article, etc.).
By signing the License Agreement with TECHNOLOGY CENTER PC, the authors retain all rights to the further use of their work, provided that they reference the edition in which the work was published.
Under the terms of the License Agreement, the publisher TECHNOLOGY CENTER PC does not take over the authors' copyright; it receives permission from the authors to use and disseminate the publication through the world's scientific resources (its own electronic resources, scientometric databases, repositories, libraries, etc.).
In the absence of a signed License Agreement, or if the agreement lacks the identifiers needed to establish the author's identity, the editors may not work with the manuscript.
Note that there is another type of agreement between authors and publishers, under which copyright is transferred from the authors to the publisher. In that case, the authors lose ownership of their work and may not use it in any way.