DEVELOPMENT OF A METHOD FOR THE INTERACTIVE CONSTRUCTION OF EXPLANATIONS IN INTELLIGENT INFORMATION SYSTEMS BASED ON THE PROBABILISTIC APPROACH

Authors

DOI:

https://doi.org/10.30837/ITSSI.2021.16.039

Keywords:

intelligent system, explanation, pattern, explainable artificial intelligence, regulations

Abstract

Subject: the use of the apparatus of temporal logic and probabilistic approaches to construct explanations of the results produced by an intelligent system, in order to increase the efficiency of using the obtained solutions and recommendations.

Purpose: to develop a method for constructing explanations in intelligent systems that can form and evaluate several alternative interpretations of the system's results.

Tasks: to justify the use of the black-box principle for the interactive construction of explanations; to develop an explanation-pattern model that provides for probabilistic estimation; to develop a method for the interactive construction of explanations based on the probabilistic approach.

Methods: methods of data analysis, methods of system analysis, methods of constructing explanations, models of knowledge representation.

Results: a model of the explanation pattern is proposed that contains temporal regulations reflecting the sequence of user interaction with an intelligent system; this allows explanations to be formed by comparing the actions of the current user with those of other known users. An interactive method for constructing explanations based on the probabilistic approach has been developed; the method uses patterns of user interaction with an intelligent system and comprises phases of constructing explanation patterns and forming explanations using the obtained patterns. The method orders the resulting explanations by the likelihood of their use, which makes it possible to form target and alternative explanations for the user.

Conclusions: the use of the black-box principle for developing a probabilistic approach to the construction of explanations in intelligent systems has been substantiated. A model of an explanation pattern based on temporal regulations is proposed. The model reflects the sequence of user interaction with the intelligent system when receiving decisions and recommendations; it contains an interaction pattern as a set of weighted temporal regulations and determines the probability of using that pattern. An interactive method for constructing explanations has been developed that takes into account the user's interaction with the intelligent system. The method includes phases and stages for forming regulations and user-interaction patterns, determining the probability of their implementation, and ordering the patterns by that probability. The method was implemented in constructing explanations for recommender systems.
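The ordering step described above (scoring explanation patterns built from weighted temporal regulations against the current user's interaction trace, then ranking them by probability of use) can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact formulation: all names, the data layout, and the weighted in-order matching rule used as a probability estimate are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class ExplanationPattern:
    explanation: str
    # Temporal regulations: an ordered sequence of (action, weight) pairs,
    # reflecting the expected order of user interactions with the system.
    regulations: list

def usage_probability(pattern, trace):
    """Estimate the probability of using a pattern as the share of its
    regulation weight matched, in temporal order, by the user's trace."""
    total = sum(weight for _, weight in pattern.regulations)
    matched, i = 0.0, 0
    for action, weight in pattern.regulations:
        # advance through the trace until this regulated action occurs
        while i < len(trace) and trace[i] != action:
            i += 1
        if i < len(trace):          # action found after the previous matches
            matched += weight
            i += 1
    return matched / total if total else 0.0

def rank_explanations(patterns, trace):
    """Order patterns by estimated probability of use:
    the first entry is the target explanation, the rest are alternatives."""
    return sorted(patterns, key=lambda p: usage_probability(p, trace),
                  reverse=True)

patterns = [
    ExplanationPattern("frequently bought together",
                       [("view", 1.0), ("add_to_cart", 2.0)]),
    ExplanationPattern("liked by similar users",
                       [("view", 1.0), ("rate", 2.0)]),
]
ranked = rank_explanations(patterns, ["view", "add_to_cart"])
```

Here the trace fully matches the first pattern's regulations (probability 1.0) but only the first regulation of the second (probability 1/3), so "frequently bought together" becomes the target explanation and "liked by similar users" an alternative.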

Author Biographies

Serhii Chalyi, Kharkiv National University of Radio Electronics

Doctor of Sciences (Engineering), Professor, Professor of the Department of Information Control Systems

Volodymyr Leshchynskyi, Kharkiv National University of Radio Electronics

PhD (Engineering Sciences), Associate Professor, Associate Professor of the Department of Software Engineering

References

Zhang, Y., Chen, X. (2020), "Explainable recommendation: A survey and new perspectives", Foundations and Trends in Information Retrieval, No. 14 (1), P. 1–101.

Miller, T. (2019), "Explanation in artificial intelligence: Insights from the social sciences", Artificial Intelligence, No. 267, P. 1–38.

Barredo Arrieta, A., Díaz-Rodríguez, N., et al. (2020), "Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI", Information Fusion, No. 58, P. 82–115.

Lipton, P. (1990), "Contrastive explanation", Royal Institute of Philosophy Supplement, No. 27, P. 247–266.

Lombrozo, T. (2012), "Explanation and abductive inference", Oxford handbook of thinking and reasoning, P. 260–276.

Phillips-Wren, G. (2017), "Intelligent systems to support human decision making", Artificial Intelligence: Concepts, Methodologies, Tools, and Applications, P. 3023–3036. DOI: https://doi.org/10.4018/978-1-5225-1759-7.ch125

Wang, T., Lin, Q. (2019), "Hybrid decision making: When interpretable models collaborate with black-box models", Journal of Machine Learning Research, No. 1, P. 1–48.

Ribeiro, M., Singh, S., Guestrin, C. (2016), "'Why should I trust you?': Explaining the predictions of any classifier", Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, P. 97–101.

Chalyi, S. F., Leshchynskyi, V. O. (2020), "Temporal patterns of user preferences in the tasks of forming explanations in the recommendation system" ["Temporalni paterny vpodoban korystuvachiv v zadachakh formuvannia poiasnen v rekomendatsiinii systemi"], Bionics of Intelligence, No. 2 (95), P. 21–27.

Gogate, V., Domingos, P. (2010), "Formula-Based Probabilistic Inference", Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence, P. 210–219.

Levykin, V., Chala, O. (2018), "Development of a method for the probabilistic inference of sequences of a business process activities to support the business process management", Eastern-European Journal of Enterprise Technologies, No. 5 (95), P. 16–24. DOI: https://doi.org/10.15587/1729-4061.2018.142664

Zeiler, M. D., Fergus, R. (2014), "Visualizing and Understanding Convolutional Networks", Computer Vision – ECCV 2014. Lecture Notes in Computer Science, No. 8689, P. 818–833. DOI: https://doi.org/10.1007/978-3-319-10590-1_53

Gan, C., et al. (2015), "DevNet: A Deep Event Network for multimedia event detection and evidence recounting", 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), P. 2568–2577. DOI: https://doi.org/10.1109/CVPR.2015.7298872

LeCun, Y., Bengio, Y., Hinton, G. (2015), "Deep learning", Nature, No. 521, P. 436–444. DOI: https://doi.org/10.1038/nature14539

Hendricks, L. A., et al. (2016), "Generating visual explanations", Computer Vision – ECCV 2016. Lecture Notes in Computer Science, No. 9908, P. 1–17. DOI: https://doi.org/10.1007/978-3-319-46493-0_1

Letham, B., Rudin, C., McCormick, T. H., Madigan, D. (2015), "Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model", Annals of Applied Statistics, No. 9 (3), P. 1350–1371. DOI: https://doi.org/10.1214/15-AOAS848

Lake, B. M., Salakhutdinov, R., Tenenbaum, J. B. (2015), "Human-level concept learning through probabilistic program induction", Science, No. 350 (6266), P. 1332–1338. DOI: https://doi.org/10.1126/science.aab3050

Maier, M., Taylor, B., Oktay, H., Jensen, D. (2010), "Learning causal models of relational domains", Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, P. 1–8.

Chalyi, S. F., Leshchynskyi, V. O., Leshchynska, I. O. (2020), "Declarative-temporal approach to the construction of explanations in intelligent information systems", Bulletin of the National Technical University "KhPI", No. 2 (4), P. 51–56. DOI: https://doi.org/10.20998/2079-0023.2020.02.09

Chala, O. (2018), "Method for detecting anomalous states of a control object in information systems based on the analysis of temporal data and knowledge", EUREKA: Physics and Engineering, No. 6, P. 28–35. DOI: https://doi.org/10.21303/2461-4262.2018.00787

Published

2021-07-06

How to Cite

Chalyi, S., & Leshchynskyi, V. (2021). DEVELOPMENT OF A METHOD FOR THE INTERACTIVE CONSTRUCTION OF EXPLANATIONS IN INTELLIGENT INFORMATION SYSTEMS BASED ON THE PROBABILISTIC APPROACH. INNOVATIVE TECHNOLOGIES AND SCIENTIFIC SOLUTIONS FOR INDUSTRIES, 2 (16), 39–45. https://doi.org/10.30837/ITSSI.2021.16.039