Comparative characteristics of the ability of convolutional neural networks to the concept of transfer learning

Authors

Vladimir Khotsyanovsky, National Aviation University

DOI:

https://doi.org/10.15587/2706-5448.2022.252695

Keywords:

neural networks, transfer learning, convolutional neural networks, computer resources

Abstract

The object of research is the ability to combine a pre-trained deep feed-forward neural network model with user data in problems of determining the class of a single object in an image. In other words, the processes of transfer learning in convolutional neural networks are considered for classification problems. The research is based on comparing the theoretical and practical results obtained when training convolutional neural networks. The main objective of this research is to conduct two different training processes. The first is traditional training, in which the values of all weights of every layer of the network are adjusted in each training epoch while the network is trained on a sample of data represented by images. The second is training with transfer learning methods: when a pre-trained network is initialized, the weights of all of its layers are «frozen» except for the last fully connected layer. This layer is replaced by a new one whose number of outputs equals the number of classes in the sample, its parameters are initialized with random values drawn from a normal distribution, and the resulting convolutional neural network is then trained on the given sample. After both training processes were completed, the results were compared. In conclusion, training convolutional neural networks with transfer learning techniques can be applied to a variety of classification tasks, ranging from digits to space objects (stars and quasars). The amount of computer resources spent on the research is also quite important, because not every convolutional neural network model can be fully trained without powerful computing systems and a large number of images in the training sample.
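
To illustrate the transfer learning setup described above, the sketch below freezes the weights of a pre-trained network and replaces its final fully connected layer with a new one whose number of outputs equals the number of classes, initializing its parameters from a normal distribution. This is a minimal sketch assuming PyTorch and a torchvision ResNet-18 backbone (ResNet is among the architectures cited in the references); the number of classes, the initialization parameters and the optimizer settings are illustrative assumptions rather than values taken from the paper.

# Minimal sketch of the transfer learning setup described in the abstract.
# Assumes PyTorch and a torchvision ResNet-18 backbone; the number of classes,
# initialization parameters and optimizer settings are illustrative only.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # placeholder: set to the number of classes in the sample

# Load a network pre-trained on ImageNet.
model = models.resnet18(pretrained=True)

# "Freeze" the weights of all layers so they are not updated during training.
for param in model.parameters():
    param.requires_grad = False

# Replace the last fully connected layer with a new one whose number of
# outputs equals the number of classes in the sample.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Initialize the parameters of the new layer with random values drawn
# from a normal distribution.
nn.init.normal_(model.fc.weight, mean=0.0, std=0.01)
nn.init.zeros_(model.fc.bias)

# Only the parameters of the new layer are passed to the optimizer,
# so only they are adjusted during training on the image sample.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.001, momentum=0.9)
criterion = nn.CrossEntropyLoss()

In this setup only the parameters of the new fully connected layer receive gradient updates, which is what allows transfer learning to run with far fewer computational resources and far less training data than full training of the network.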

Author Biography

Vladimir Khotsyanovsky, National Aviation University

Postgraduate Student

Department of Aviation Computing and Integration Complexes

References

  1. Zhu, Y., Chen, Y., Lu, Z., Pan, S. J., Xue, G.-R., Yu, Y., Yang, Q. (2011). Heterogeneous Transfer Learning for Image Classification. Twenty-Fifth AAAI Conference on Artificial Intelligence, 1304–1309.
  2. Raina, R., Battle, A., Lee, H., Packer, B., Ng, A. Y. (2007). Self-taught Learning: Transfer Learning from Unlabeled Data. Proceedings of the 24th International Conference on Machine Learning, 767–774. doi: http://doi.org/10.1145/1273496.1273592
  3. Govind, L., Kumar, D. (2017). Diabetic retinopathy detection using transfer learning. Journal for Advanced Research in Applied Science, 4, 463–471.
  4. Deep Learning: Transfer learning i tonkaia nastroika glubokikh svertochnykh neironnykh setei (2016). Khabrakhabr. Available at: https://habrahabr.ru/company/microsoft/blog/314934
  5. Joey, S. (2019). Creating AlexNet on Tensorflow from Scratch. Part 2: Creating AlexNet. Available at: https://joeyism.medium.com/creating-alexnet-on-tensorflow-from-scratch-part-2-creating-alexnet-e0cd948d7b04
  6. Lee, H., Grosse, R., Ranganath, R., Ng, A. Y. (2009). Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. Proceedings of the 26th Annual International Conference on Machine Learning, ICML 2009. Montreal. Available at: http://robotics.stanford.edu/~ang/papers/icml09-ConvolutionalDeepBeliefNetworks.pdf
  7. VGG16 – Convolutional Network for Classification and Detection (2018). Available at: https://neurohive.io/en/popular-networks/vgg16/
  8. PyTorch Documentation (2017). Torch Contributors. Available at: http://pytorch.org/docs/0.3.0/index.html
  9. He, K., Zhang, X., Ren, S., Sun, J. (2015). Deep Residual Learning for Image Recognition. Cornell University. Available at: https://arxiv.org/abs/1512.03385v1
  10. Huang, G., Liu, Z., van der Maaten, L., Weinberger, K. Q. (2018). Densely Connected Convolutional Networks. Cornell University. Available at: https://arxiv.org/abs/1608.06993v5
  11. Wan, L., Zeiler, M., Zhang, S., Le Cun, Y., Fergus, R. (2013). Regularization of Neural Networks using DropConnect. Proceedings of the 30th International Conference on Machine Learning, PMLR, 28 (3), 1058–1066.
  12. Ballan, L., Bertini, M., Del Bimbo, A., Serain, A. M., Serra, G., Zaccone, B. F. (2012). Combining generative and discriminative models for classifying social images from 101 object categories. Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), 1731–1734.

Downloads

Published

2022-02-11

How to Cite

Khotsyanovsky, V. (2022). Comparative characteristics of the ability of convolutional neural networks to the concept of transfer learning. Technology Audit and Production Reserves, 1(2(63)), 10–13. https://doi.org/10.15587/2706-5448.2022.252695

Issue

Section

Information Technologies: Reports on Research Projects