Explanation Detected Brain Tumours in MRI Images Using YOLOv8 with LIME-Based Interpretation

DOI:

https://doi.org/10.30837/2522-9818.2026.1.017

Keywords:

Explainable Artificial Intelligence (XAI); LIME; medical image analysis; brain tumour detection; MRI; YOLOv8

Abstract

Relevance. Precise identification of brain tumours in magnetic resonance imaging (MRI) is a critical task in medical image analysis. Although deep learning–based object detectors achieve high localisation accuracy, their limited transparency restricts trust and routine adoption in clinical practice, highlighting the need for explainable artificial intelligence (XAI) approaches. Object of research. The object of this research is the automated detection of brain tumours in MRI scans using convolutional neural network–based object detection models. Subject of research. The subject of the research is the integration of YOLOv8 object detection models with the Local Interpretable Model-Agnostic Explanations (LIME) method to interpret individual detection outputs in medical imaging. Purpose. The aim of this paper is to develop and evaluate an explainable framework for brain tumour detection in MRI images by integrating YOLOv8-based object detection with LIME-based interpretation and by quantitatively assessing the quality of the generated explanations. Methods. Two YOLOv8 variants (YOLOv8n and YOLOv8s) were trained and evaluated on a publicly available MRI dataset containing glioma, meningioma, and pituitary tumour classes. LIME was applied to generate superpixel-based, box-conditioned local explanations for individual detections. Detection performance was assessed using precision, recall, mAP@50, and mAP@50–95. Explanation quality was quantitatively evaluated using stability, sparsity, maximum superpixel weight, and entropy metrics. Results. Experimental results demonstrate that both YOLOv8 models achieve high detection performance, with YOLOv8s providing slightly higher accuracy. LIME successfully highlights the image regions that most influence model decisions, and the proposed quantitative metrics confirm that the generated explanations are stable, informative, and aligned with clinically relevant tumour regions. Conclusions. The proposed framework provides a practical approach for combining accurate tumour localisation with interpretable and quantitatively validated explanations, supporting reliability-oriented evaluation of AI-based clinical decision-support systems.
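The abstract names four explanation-quality metrics: stability, sparsity, maximum superpixel weight, and entropy. The sketch below shows one plausible way to compute them from a vector of LIME superpixel weights for a single detection. The exact definitions used in the paper are not given here, so these formulas (normalised absolute weights, top-k mass for sparsity, Shannon entropy, cosine similarity across runs for stability) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


def explanation_metrics(weights, top_k=5):
    """Illustrative quality metrics for one LIME explanation.

    `weights` is a 1-D array of superpixel importance weights produced
    by LIME for a single detection. Definitions are assumptions:
    - sparsity: share of total absolute weight carried by the top-k superpixels
    - max_weight: largest absolute superpixel weight
    - entropy: Shannon entropy of the normalised weights
      (low entropy = explanation concentrated on few superpixels)
    """
    w = np.abs(np.asarray(weights, dtype=float))
    p = w / w.sum()                               # normalise to a distribution
    sparsity = float(np.sort(p)[::-1][:top_k].sum())
    max_weight = float(w.max())
    entropy = float(-np.sum(p * np.log(p + 1e-12)))
    return {"sparsity": sparsity, "max_weight": max_weight, "entropy": entropy}


def stability(weights_a, weights_b):
    """Stability across two LIME runs on the same detection,
    measured here as cosine similarity of the weight vectors."""
    a = np.asarray(weights_a, dtype=float)
    b = np.asarray(weights_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

In such a pipeline, each detection from the YOLO model would yield one weight vector per LIME run; averaging `stability` over repeated runs and reporting `explanation_metrics` per detection gives the kind of quantitative summary the abstract describes.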

Author Biographies

Kristina Dostalova, University of Žilina

postgraduate student at the Department of Informatics, Faculty of Management Science and Informatics

Alexandra Cizmarova, University of Žilina

postgraduate student at the Department of Informatics, Faculty of Management Science and Informatics

Roman Yaroshevych, Kharkiv National University of Radio Electronics

PhD, senior lecturer at the Department of Electronic Computers, Faculty of Computer Engineering and Information Technology

References

Di Fazio, N., Zanza, C., Longhitano, Y. et al. (2026), "Artificial intelligence for early diagnosis in emergency department", J Anesth Analg Crit Care, Vol. 6, No. 7. DOI: https://doi.org/10.1186/s44158-025-00334-y

Zaitseva, E., Levashenko, V., Rabcan, J., Kvassay, M. (2023), "A New Fuzzy-Based Classification Method for Use in Smart/Precision Medicine", Bioengineering, Vol. 10(7), No. 838, DOI: https://doi.org/10.3390/bioengineering10070838

Francisco, K.K.Y., Apuhin, A.E.C., et al. (2026), "Personalized medicine and health equity: overcoming cost barriers and ethical challenges", Int J Equity Health, Vol. 25, No. 4. DOI: https://doi.org/10.1186/s12939-025-02710-0

Zaitseva, E., Levashenko, V., Rabcan, J., Krsak, E. (2020), "Application of the structure function in the evaluation of the human factor in healthcare", Symmetry, Vol.12(1), No. 93. DOI: https://doi.org/10.3390/SYM12010093

Zaitseva, E., Levashenko, V. (2026), "Reliability engineering in healthcare: Opportunities and challenges", Reliability Engineering and System Safety, Vol. 267, No. 111933. DOI: https://doi.org/10.1016/j.ress.2025.111933

Sharma, R.M. (2025), "Artificial intelligence in medical image analysis and molecular diagnostics: recent advances and applications", J Med Artif Intell., Vol. 8, No. 53. DOI: https://doi.org/10.21037/jmai-24-412

Menze, B.H., et al. (2015), "The Multimodal Brain Tumour Image Segmentation Benchmark (BRATS)", IEEE Trans. Med. Imaging, Vol. 34, No. 10, pp. 1993–2024. DOI: https://doi.org/10.1109/TMI.2014.2377694

Aleid, A., Alhussaini, K., Alanazi, R., Altwaimi, M., Altwijri, O., Saad, A.S., (2023), "Artificial Intelligence Approach for Early Detection of Brain Tumours Using MRI Images", Applied Science, Vol. 13, No. 3808. DOI: https://doi.org/10.3390/app13063808

Gökmen, N. (2025), "AI techniques for brain tumour segmentation in MRI: a review (2019–2024)", Netw Model Anal Health Inform Bioinforma, Vol. 14, No. 168. DOI: https://doi.org/10.1007/s13721-025-00650-x

Kolarik, M., Sarnovsky, M., Paralic, J., Babic, F. (2023), "Explainability of deep learning models in medical video analysis: a survey", PeerJ Comput. Sci., Vol. 9, No. 1253. DOI: https://doi.org/10.7717/peerj-cs.1253

Kondratenko, Y., Sidenko, I., Kondratenko, G., Petrovych, V., Taranov, M., Sova, I. (2021), "Artificial Neural Networks for Recognition of Brain Tumours on MRI Images". Information and Communication Technologies in Education, Research, and Industrial Applications. ICTERI 2020. Communications in Computer and Information Science, Vol 1308. Springer, Cham, DOI: https://doi.org/10.1007/978-3-030-77592-6_6

Ali, S., Abuhmed, T., El-Sappagh, Sh., et al. (2023), "Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence", Information Fusion, Vol. 99, No. 101805. DOI: https://doi.org/10.1016/j.inffus.2023.101805

Hassan, S.U., Abdulkadir, S.J., Zahid, M.S.M., Al-Selwi, S.M. (2025), "Local interpretable model-agnostic explanation approach for medical imaging analysis: A systematic literature review", Comput. Biol. Med., Vol. 185, No. 109569. DOI: https://doi.org/10.1016/j.compbiomed.2024.109569

Zaitseva, E., Levashenko, V. (2025), "Reliability Analysis Based on Aleatory and Epistemic Uncertainty Using Binary Decision Diagrams", International Journal of Intelligent Systems, Vol. 2025, No. 6471577. DOI: https://doi.org/10.1155/int/6471577

Zhou, Z., Hooker, G., Wang, F. (2021), "S-LIME: Stabilized-LIME for Model Explanation", in Proc. of the 27th ACM SIGKDD Conf. on Knowledge Discovery & Data Mining, pp. 2429-2438. DOI: https://doi.org/10.1145/3447548.3467274

Zaitseva, E., Rabcan, J., Levashenko, V., Kvassay, M. (2023), "Importance analysis of decision-making factors based on fuzzy decision trees", Applied Soft Computing, Vol. 134, No. 109988. DOI: https://doi.org/10.1016/j.asoc.2023.109988

"Medical Image DataSet: Brain Tumour Detection". available: https://www.kaggle.com/datasets/pkdarabi/medical-image-dataset-brain-tumour-detection (last accessed Feb. 04, 2026).

Alsufyani, A. (2025), "Performance comparison of deep learning models for MRI-based brain tumour detection", AIMS Bioeng., Vol. 12, No. 1, pp. 1-21. DOI: https://doi.org/10.3934/bioeng.2025001

Ultralytics, "Explore Ultralytics YOLOv8". available: https://docs.ultralytics.com/models/yolov8/ (last accessed Feb. 04, 2026).

"Explainable AI (XAI) Using LIME", GeeksforGeeks. available: https://www.geeksforgeeks.org/artificial-intelligence/introduction-to-explainable-aixai-using-lime (last accessed Feb. 04, 2026).

Ultralytics, "Performance Metrics Deep Dive". available: https://docs.ultralytics.com/guides/yolo-performance-metrics (last accessed Feb. 04, 2026).

Published

2026-03-30

How to Cite

Dostalova, K., Cizmarova, A., Klimo, M., & Yaroshevych, R. (2026). Explanation Detected Brain Tumours in MRI Images Using YOLOv8 with LIME-Based Interpretation. INNOVATIVE TECHNOLOGIES AND SCIENTIFIC SOLUTIONS FOR INDUSTRIES, 1(35), 17–27. https://doi.org/10.30837/2522-9818.2026.1.017