Development of an audio-visual speech recognition system
DOI: https://doi.org/10.15587/2313-8416.2017.118212
Keywords: audio-visual system, hidden Markov models, viseme, coupled hidden Markov models
Abstract
A model of an audio-visual speech recognition system based on hidden Markov models is proposed, which allows speech to be recognized in real time. The model provides a speech recognition toolkit that can be used in conditions where other means may be unavailable, for example, when the audio component is absent. The model's performance was investigated and verified on the example of digit recognition, and the expected results were obtained.
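As a purely illustrative sketch (not the coupled-HMM formulation described in the paper), the fragment below trains one Gaussian HMM per digit on early-fused audio-visual feature vectors using the third-party hmmlearn library; the feature extraction, the state count N_STATES, and the helper names fuse, train_models and recognize are assumptions introduced here for illustration only.

```python
# Illustrative sketch (assumption): one Gaussian HMM per digit word, trained on
# early-fused audio-visual observations. This is a simplified stand-in for the
# coupled-HMM approach of the article.
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn

N_STATES = 5  # assumed number of HMM states per digit word

def fuse(audio_feats, visual_feats):
    """Early fusion: concatenate per-frame audio (e.g. MFCC) and visual
    (e.g. lip-region) features; both arrays have shape (n_frames, dim)."""
    n = min(len(audio_feats), len(visual_feats))  # align stream lengths
    return np.hstack([audio_feats[:n], visual_feats[:n]])

def train_models(train_data):
    """train_data: dict mapping a digit label to a list of fused sequences."""
    models = {}
    for digit, seqs in train_data.items():
        X = np.vstack(seqs)               # stack all training frames
        lengths = [len(s) for s in seqs]  # sequence boundaries for hmmlearn
        m = hmm.GaussianHMM(n_components=N_STATES,
                            covariance_type="diag", n_iter=50)
        m.fit(X, lengths)                 # EM (Baum-Welch) training
        models[digit] = m
    return models

def recognize(models, fused_seq):
    """Return the digit whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda d: models[d].score(fused_seq))
```

In this sketch, recognition reduces to scoring one fused observation sequence against each word-level model; a coupled-HMM system would instead keep separate audio and visual state chains with cross-stream transition dependencies.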
License
Copyright (c) 2017 Alexandr Gornostal, Yaroslaw Dorogyy
This work is licensed under a Creative Commons Attribution 4.0 International License.
Our journal abides by the Creative Commons CC BY copyright terms and permissions for open access journals.
Authors who are published in this journal agree to the following conditions:
1. The authors retain the right of authorship of the work and grant the journal the right of first publication of this work under the terms of a Creative Commons CC BY license, which allows others to freely distribute the published research with the obligatory reference to the authors of the original work and its first publication in this journal.
2. The authors have the right to conclude separate supplementary agreements concerning the non-exclusive distribution of the work in the form in which it was published by the journal (for example, to upload the work to the journal's online repository or to publish it as part of a monograph), provided that the reference to the first publication of the work in this journal is included.