HUMAN EMOTION RECOGNITION SYSTEM USING DEEP LEARNING ALGORITHMS

Authors

Kateryna Yuvchenko, Valentyn Yesilevskyi, Olena Sereda

DOI:

https://doi.org/10.30837/ITSSI.2022.21.060

Keywords:

object detection, object classification, supervised learning, emotion recognition

Abstract

The subject of research in this article is the software implementation of a neural image classifier. The work examines emotions as a special type of mental process that expresses a person's experience of their attitude to the surrounding world and to themselves. Emotions can be expressed in different ways: facial expressions, posture, motor reactions, voice. However, the human face is the most expressive. Companies use emotion recognition technologies to improve customer service, to support decisions when interviewing candidates, and to optimize the emotional impact of advertising. Therefore, the purpose of the work is to find and optimize the algorithm that classifies human emotions from facial images with the most satisfactory accuracy. The following tasks are solved: review and analysis of the current state of the emotion recognition problem; consideration of classification methods; selection of the best method for the given task; development of a software implementation of the emotion classifier; analysis of the classifier's performance and formulation of conclusions based on the obtained data. An image classification method based on a densely connected convolutional neural network is used. Results: the results of this work showed that the image classification method based on a densely connected convolutional neural network is well suited to emotion recognition problems, because it achieves fairly high accuracy. The quality of the classifier was evaluated with the following metrics: accuracy; confusion matrix; precision, recall and f1-score; ROC curve and AUC values. The accuracy is relatively high, 63%, given that the dataset has unbalanced classes. The AUC is also high, at 89%. Conclusions: the obtained model with its weights shows high human emotion recognition performance and can be successfully used for its purpose in the future.
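
The article does not reproduce the authors' code, but the approach described in the abstract, a densely connected convolutional network trained on facial images and evaluated with accuracy, confusion matrix, precision/recall/f1-score and ROC AUC, can be sketched as follows. This is a minimal illustration, not the published implementation: the use of tf.keras.applications.DenseNet121, the FER2013 input size of 48x48 pixels repeated to 3 channels, the placeholder data and all hyperparameters are assumptions.

```python
# Minimal sketch (assumed, not the authors' code) of an emotion classifier built on
# a densely connected CNN and evaluated with the metrics named in the abstract.
import numpy as np
import tensorflow as tf
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             classification_report, roc_auc_score)

NUM_CLASSES = 7            # FER2013 labels: angry, disgust, fear, happy, sad, surprise, neutral
INPUT_SHAPE = (48, 48, 3)  # FER2013 images are 48x48 grayscale, repeated to 3 channels here

def build_model():
    """Densely connected convolutional backbone with a softmax emotion head."""
    base = tf.keras.applications.DenseNet121(
        include_top=False, weights=None, input_shape=INPUT_SHAPE, pooling="avg")
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def evaluate(model, x_test, y_test):
    """Report accuracy, confusion matrix, precision/recall/f1 and one-vs-rest AUC."""
    probs = model.predict(x_test)               # per-class probabilities
    preds = probs.argmax(axis=1)                # hard class predictions
    print("accuracy:", accuracy_score(y_test, preds))
    print("confusion matrix:\n", confusion_matrix(y_test, preds))
    print(classification_report(y_test, preds))  # precision, recall, f1-score per class
    print("AUC (one-vs-rest):", roc_auc_score(y_test, probs, multi_class="ovr"))

if __name__ == "__main__":
    # Placeholder data only; loading the real FER2013 CSV is omitted.
    # x: float32 images scaled to [0, 1]; y: integer labels 0..6 covering all classes.
    x_train = np.random.rand(64, *INPUT_SHAPE).astype("float32")
    y_train = np.arange(64) % NUM_CLASSES
    model = build_model()
    model.fit(x_train, y_train, epochs=1, batch_size=16, verbose=0)
    evaluate(model, x_train, y_train)
```

Because the dataset classes are unbalanced, as the abstract notes, the per-class precision/recall from classification_report and the one-vs-rest AUC are more informative than overall accuracy alone.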

Author Biographies

Kateryna Yuvchenko, Kharkiv National University of Radio Electronics

Bachelor of Applied Mathematics

Valentyn Yesilevskyi, Kharkiv National University of Radio Electronics

PhD (Engineering Sciences), Associate Professor

Olena Sereda, Kharkiv National University of Radio Electronics

Senior Lecturer

References

Hess, U., Thibault, P. (2009), "Darwin and Emotion Expression", American Psychologist, Vol. 64, No. 2, P. 120–129. DOI: https://doi.org/10.1037/a0013386

Turabzadeh, S. (2018), "Facial expression emotion detection for real-time embedded systems", Technologies, No. 6 (1), 17 p. DOI: https://doi.org/10.3390/technologies6010017

Ekman, P. (1970), "Universal facial expressions of emotion", California Mental Health Research Digest, Vol. 8, No. 4, P. 151–158.

Viola, P., Jones, M. (2001), "Rapid object detection using a boosted cascade of simple features", Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Kauai, HI, USA, 8–14 December, P. 1–9.

Cootes, T., Edwards, G., Taylor, C. (2001), "Active Appearance Models", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23, No. 6, P. 681–685.

Cootes, T., Taylor, C. (2004), "Statistical models of appearance for computer vision", Technical Report, University of Manchester, Wolfson Image Analysis Group, Imaging Science and Biomedical Engineering, Manchester M13 9PT, United Kingdom, 125 p.

Bishop, C. M. (1995), Neural Networks for Pattern Recognition, Oxford: Oxford University Press, 482 p.

Yashina, E., Artiukh, R., Pan, N., Zelensky, A. (2019), "Information technology for recognition of road signs using a neural network", Innovative Technologies and Scientific Solutions for Industries, No. 2 (8). DOI: https://doi.org/10.30837/2522-9818.2019.8.130

Mehryar, M., Rostamizadeh, A., Talwalkar, A. (2018), Foundations of Machine Learning, Cambridge: The MIT Press, 505 p.

Krizhevsky, A., Sutskever, I., Hinton, G. E. (2017), "ImageNet classification with deep convolutional neural networks", Communications of the ACM, Vol. 60, No. 6, P. 84–90.

Huang, G., Sun, Y., Liu, Z. (2016), "Deep networks with stochastic depth", European Conf. on Computer Vision (ECCV), Amsterdam, Netherlands, October, P. 646–661.

Srivastava, R. K., Greff, K., Schmidhuber, J. (2015), "Training very deep networks", Advances in Neural Information Processing Systems 28 (NIPS 2015), Montreal, Canada, December, P. 2377–2385.

He, K., Zhang, X., Ren, S. (2016), "Deep residual learning for image recognition", The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June – 1 July, P. 1–9.

Larsson, G., Maire, M., Shakhnarovich, G. (2017), "FractalNet: ultra-deep neural networks without residuals", 5th International Conference on Learning Representations, Toulon, France, April, P. 1–11.

Huang, G., Liu, Z., van der Maaten, L., Weinberger, K. Q. (2017), "Densely Connected Convolutional Networks", The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July, P. 1–9.

FER2013 (Facial Expression Recognition 2013 Dataset), available at: https://paperswithcode.com/dataset/fer2013 (last accessed 12.05.2022).

Published

2022-11-18

How to Cite

Yuvchenko, K., Yesilevskyi, V., & Sereda, O. (2022). HUMAN EMOTION RECOGNITION SYSTEM USING DEEP LEARNING ALGORITHMS. INNOVATIVE TECHNOLOGIES AND SCIENTIFIC SOLUTIONS FOR INDUSTRIES, (3 (21)), 60–69. https://doi.org/10.30837/ITSSI.2022.21.060