Development of an advanced AI-based model for human psychoemotional state analysis
DOI: https://doi.org/10.15587/1729-4061.2023.293011

Keywords: speech emotion recognition, deep learning in SER, mel spectrogram, MFCC analysis, audio signal processing, emotional classification, acoustic features, machine learning, emotion detection, psycholinguistics

Abstract
The research focuses on developing a novel method for the automatic recognition of human psychoemotional states (PES) using deep learning technology. The method is centered on analyzing speech signals to classify distinct emotional states. The primary challenge addressed by this research is accurate multiclass classification of seven human psychoemotional states, namely joy, fear, anger, sadness, disgust, surprise, and a neutral state; traditional methods have struggled to distinguish these complex emotional nuances in speech. The study developed a model that extracts informative features from audio recordings, specifically mel spectrograms and mel-frequency cepstral coefficients (MFCCs). These features were then used to train two deep convolutional neural networks, resulting in a classifier model. The uniqueness of this research lies in its dual-feature approach and its use of deep convolutional neural networks for classification. The approach has demonstrated high accuracy in emotion recognition, reaching 0.93 on the validation subset. The high accuracy and effectiveness of the model can be attributed to the comprehensive and synergistic use of mel spectrograms and mel-frequency cepstral coefficients, which provide a more nuanced analysis of emotional expressions in speech. The method presented in this research has broad applicability in various domains, including human-machine interface interactions, aviation, healthcare, marketing, and other fields where understanding human emotions through speech is crucial.
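As an illustration of the dual-feature pipeline described in the abstract, the following is a minimal sketch of mel-spectrogram and MFCC extraction, assuming the Python librosa library. The file name, sample rate, and feature sizes (n_mels, n_mfcc) are illustrative assumptions rather than the authors' actual configuration; the two resulting feature maps are the kind of inputs that would feed the two convolutional networks.

```python
# Minimal sketch of dual-feature extraction (mel spectrogram + MFCC), assuming librosa.
# The file path, sample rate and feature sizes are illustrative assumptions only.
import numpy as np
import librosa

def extract_features(path, sr=16000, n_mels=128, n_mfcc=40):
    # Load the recording as a mono waveform at the target sample rate.
    y, sr = librosa.load(path, sr=sr, mono=True)
    # Mel spectrogram in decibels: a 2-D time-frequency map for the first CNN.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel, ref=np.max)
    # Mel-frequency cepstral coefficients: a second 2-D feature map for the other CNN.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mel_db, mfcc

if __name__ == "__main__":
    # "speech_sample.wav" is a hypothetical input file.
    mel_db, mfcc = extract_features("speech_sample.wav")
    print(mel_db.shape, mfcc.shape)
```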
Supporting Agency
- This research was funded by the Science Committee of the Ministry of Education and Science of the Republic of Kazakhstan (Grant No. AP09258659).
License
Copyright (c) 2023 Zharas Ainakulov, Kayrat Koshekov, Alexey Savostin, Raziyam Anayatova, Beken Seidakhmetov, Gulzhan Kurmankulova
This work is licensed under a Creative Commons Attribution 4.0 International License.
The terms of copyright transfer (identification of authorship) are set out in the License Agreement. In particular, the authors retain authorship of their manuscript and grant the journal the right of first publication of the work under the Creative Commons CC BY license. They may also independently conclude additional agreements for the non-exclusive distribution of the work in the form in which it was published by this journal, provided that a link to the first publication of the article in this journal is preserved.
The License Agreement is a document in which the authors warrant that they hold all copyright to the work (manuscript, article, etc.).
By signing the License Agreement with TECHNOLOGY CENTER PC, the authors retain all rights to further use of their work, provided that they reference the edition in which the work was first published.
Under the License Agreement, the publisher TECHNOLOGY CENTER PC does not acquire the authors' copyright; it receives permission from the authors to use and disseminate the publication through the world's scientific resources (its own electronic resources, scientometric databases, repositories, libraries, etc.).
Without a signed License Agreement, or if that agreement lacks identifiers that allow the author's identity to be established, the editors may not work with the manuscript.
Note that there is another type of agreement between authors and publishers, in which copyright is transferred from the authors to the publisher; in that case, the authors lose ownership of their work and may not use it in any way.