Impact of the compilation method on determining the accuracy of the error loss in neural network learning

Authors

Akerke Akanova, Mira Kaldarova

DOI:

https://doi.org/10.15587/2706-5448.2020.217613

Keywords:

assessment metric, learning quality, optimization algorithms, entropy error, neural network.

Abstract

In the field of natural language processing (NLP) research, neural networks have become an important tool and are widely used for the semantic analysis of texts in different languages. In connection with the growing need to process big data in the Kazakh language, a neural network for deep learning was built. The object of this study is the learning process of a deep neural network that evaluates the algorithm for constructing an LDA model. One of the most problematic points is determining the arguments which, when the model is compiled, give a correct estimate of the algorithm’s performance. The research used the compile() method from the Keras library, whose main arguments are the loss function, the optimizer, and the metrics. The neural network is implemented in the Python programming language. The aim of the work is the selection of arguments for the deep learning compiler that yield a correct evaluation of the algorithm of the constructed model. A corpus of Kazakh-language text of no more than 8000 words is used as the learning data. Using these methods, an experiment on the selection of arguments for the model compiler was carried out while learning the Kazakh text corpus. As a result, the SGD optimizer, the binary_crossentropy loss function, and the ‘cosine_proximity’ evaluation metric were chosen as the optimal arguments: during learning the loss tended toward 0, reaching loss (error) = 0.1984 with cosine_proximity (learning accuracy) = 0.2239, which are considered acceptable learning measures. The results indicate a correct choice of compilation arguments. These arguments can be applied in deep learning of a neural network where the sample data are “topic and keywords” pairs.
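As an illustration of the compilation step described above, a minimal Keras sketch follows. Only the compile() arguments (the SGD optimizer, the binary_crossentropy loss, and the cosine_proximity metric) come from the study; the network architecture, layer sizes, and the 8000-dimensional input are illustrative assumptions. In recent Keras versions the ‘cosine_proximity’ string was replaced by the CosineSimilarity metric class, which is used here under the article’s original name.

from tensorflow import keras
from tensorflow.keras import layers

# Illustrative architecture only: a small dense classifier whose
# 8000-dimensional input mirrors the size of the training corpus.
model = keras.Sequential([
    keras.Input(shape=(8000,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# The compile() arguments evaluated in the study: SGD optimizer,
# binary cross-entropy loss, and a cosine-similarity metric
# (named 'cosine_proximity' in the Keras version used by the authors).
model.compile(
    optimizer="sgd",
    loss="binary_crossentropy",
    metrics=[keras.metrics.CosineSimilarity(name="cosine_proximity")],
)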

Author Biographies

Akerke Akanova, S. Seifullin Kazakh Agro Technical University, 62, Zhenis ave., Nur-Sultan, Kazakhstan, 010011

Department of Computer Engineering and Software

Mira Kaldarova, S. Seifullin Kazakh Agro Technical University, 62, Zhenis ave., Nur-Sultan, Kazakhstan, 010011

Department of Computer Engineering and Software

References

  1. Trask, E. (2020). Glubokoe obuchenie [Deep Learning]. Saint Petersburg: Piter, 352.
  2. Sholle, F. (2018). Glubokoe obuchenie na Python [Deep Learning with Python]. Saint Petersburg: Piter, 400.
  3. Duchi, J., Hazan, E., Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12, 2121–2159.
  4. Kononiuk, A. E. (2011). Informatsiologiia. Obschaia teoriia informatsii. Kniga 3 [Informatiology. General Theory of Information. Book 3]. Kyiv: Osvita Ukrainy, 412.
  5. Dzhulli, A., Pal, S. (2018). Biblioteka Keras – instrument glubokogo obucheniia [The Keras Library: A Deep Learning Tool]. Moscow: DMK Press, 294.
  6. Koyuncu, H. (2020). Loss Function Selection in NN based Classifiers: Try-outs with a Novel Method. 2020 12th International Conference on Electronics, Computers and Artificial Intelligence (ECAI). doi: http://doi.org/10.1109/ecai50035.2020.9223208
  7. Hung, C.-C., Song, E., Lan, Y. (2019). Foundation of Deep Machine Learning in Neural Networks. Image Texture Analysis, 201–232. doi: http://doi.org/10.1007/978-3-030-13773-1_9
  8. Metriki [Metrics]. Available at: https://ru-keras.com/metric/
  9. Ketkar, N. (2017). Introduction to Keras. Deep Learning with Python. Berkeley: Apress. doi: http://doi.org/10.1007/978-1-4842-2766-4_7
  10. Chollet, F. (2017). Deep Learning with Python. Shelter Island: Manning Publications, 384. Available at: https://github.com/fchollet/deep-learning-with-python-notebooks

Published

2020-12-30

How to Cite

Akanova, A., & Kaldarova, M. (2020). Impact of the compilation method on determining the accuracy of the error loss in neural network learning. Technology Audit and Production Reserves, 6(2(56)), 34–37. https://doi.org/10.15587/2706-5448.2020.217613

Issue

Vol. 6 No. 2(56) (2020)

Section

Reports on research projects