Full Length Article
Journal of Artificial Intelligence and Metaheuristics
Volume 4, Issue 1, PP: 24-33, 2023

Title

Interpreting the Incomprehensible: Benchmarking Visual Explanation Methods for Deep Convolutional Networks

Wei Hong Lim 1*, Marwa M. Eid 2

1  Faculty of Engineering, Technology and Built Environment, UCSI University, Kuala Lumpur 56000, Malaysia
    (limwh@ucsiuniverisity.edu.my)

2  Faculty of Artificial Intelligence, Delta University for Science and Technology, Mansoura 11152, Egypt
    (mmm@ieee.org)


DOI: https://doi.org/10.54216/JAIM.040103

Received: October 12, 2022; Revised: January 23, 2023; Accepted: June 11, 2023

Abstract:

Deep Convolutional Neural Networks (CNNs) have revolutionized various fields, including computer vision, but their decision-making process remains largely opaque. To address this interpretability challenge, numerous visual explanation methods have been proposed. However, a comprehensive evaluation and benchmarking of these methods is essential to understand their strengths, limitations, and comparative performance. In this paper, we present a systematic study that benchmarks and compares visual explanation techniques for deep CNNs. We propose a standardized evaluation framework built around benchmark explainability methods. Through extensive experiments, we analyze the effectiveness and interpretability of popular visual explanation methods, including gradient-based methods, activation maximization, and attention mechanisms. Our results reveal nuanced differences between the methods, highlighting their trade-offs and potential applications. We conduct a comprehensive evaluation of visual explanation methods across different deep CNN architectures; the results support informed selection and adoption of appropriate techniques for interpretability in real-world applications.
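To make the gradient-based family of explanation methods concrete, the sketch below computes a Grad-CAM heatmap for a single prediction in PyTorch. It is a minimal illustration, not the paper's benchmarking framework: the choice of ResNet-18, the torchvision pretrained weights, the hooked layer, and the dummy input are all assumptions made purely for demonstration.

# Minimal, illustrative Grad-CAM sketch (one of the gradient-based methods
# discussed above). Model, target layer, and input are assumed for demonstration.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
target_layer = model.layer4[-1]          # last conv block (assumed choice)

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    # Cache the feature maps produced by the target layer.
    activations["value"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    # Cache the gradient of the class score w.r.t. those feature maps.
    gradients["value"] = grad_output[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(image, class_idx=None):
    """Return a [0, 1] heatmap highlighting regions supporting class_idx."""
    logits = model(image)                          # forward pass, shape (1, 1000)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()    # explain the predicted class
    model.zero_grad()
    logits[0, class_idx].backward()                # gradients of the target score

    # Global-average-pool the gradients to get per-channel weights,
    # then form a ReLU-ed weighted sum of the activation maps.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
    cam = F.relu((weights * activations["value"]).sum(dim=1))     # (1, h, w)
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                        mode="bilinear", align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # normalize

# Example usage with a random tensor standing in for a preprocessed image:
dummy_image = torch.randn(1, 3, 224, 224)
heatmap = grad_cam(dummy_image)
print(heatmap.shape)   # torch.Size([224, 224])

Overlaying such a heatmap on the input image shows which regions most influenced the predicted class; outputs of this kind are what the benchmarked explanation methods are compared on.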

Keywords:

Convolutional Neural Networks (CNNs); Benchmarking; Interpretability; Class Activation Maps (CAM); Deep learning; Image classification; Explainable AI.

References:

[1] Ramaswamy H. G., Ablation-CAM: Visual explanations for deep convolutional network via gradient-free localization. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2020.

[2] Fei Z. et al., Deep convolution network-based emotion analysis towards mental health care. Neurocomputing, 388, 212-227, 2020.

[3] Hägele M. et al., Resolving challenges in deep learning-based analyses of histopathological images using explanation methods. Scientific Reports, 10(1), 1-12, 2020.

[4] Selvaraju R. R., Cogswell M., Das A., Vedantam R., Parikh D., Batra D., Grad-CAM: Visual explanations from deep networks via gradient-based localization. Proceedings of the IEEE International Conference on Computer Vision, 618-626, 2017.

[5] Nayak R., Pati U. C., Das S. K., A comprehensive review on deep learning-based methods for video anomaly detection. Image and Vision Computing, 106, 104078, 2021.

[6] Saber M., Efficient phase recovery system. IJEECS, 5(1), 2017.

[7] Zhong B., Pan X., Love P. E., Ding L., Fang W., Deep learning and network analysis: Classifying and visualizing accident narratives in construction. Automation in Construction, 113, 103089, 2020.

[8] Lee J. H., Han S. S., Kim Y. H., Lee C., Kim I., Application of a fully deep convolutional neural network to the automation of tooth segmentation on panoramic radiographs. Oral Surgery, Oral Medicine, Oral Pathology and Oral Radiology, 129(6), 635-642, 2020.

[9] Saber M., A novel design and implementation of FBMC transceiver for low power applications. IJEEI, 8(1), 83-93, 2020.

[10] Kitaguchi D., Takeshita N., Matsuzaki H., Takano H., Owada Y., Enomoto T., Ito M., Real-time automatic surgical phase recognition in laparoscopic sigmoidectomy using the convolutional neural network-based deep learning approach. Surgical endoscopy, 34, 4924-4931, 2020.

[11] Van der Velden B. H., Kuijf H. J., Gilhuijs K. G., Viergever M. A., Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Medical Image Analysis, 102470, 2022.

[12] Roy S., Menapace W., Oei S., Luijten B., Fini E., Saltori C., Demi L., Deep learning for classification and localization of COVID-19 markers in point-of-care lung ultrasound. IEEE transactions on medical imaging, 39(8), 2676-2687, 2020.

[13] Wang H., Wang Z., Du M., Yang F., Zhang Z., Ding S., Hu X., Score-CAM: Score-weighted visual explanations for convolutional neural networks. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 24-25, 2020.

[14] Naeem H., Ullah F., Naeem M. R., Khalid S., Vasan D., Jabbar S., Saeed S., Malware detection in industrial internet of things based on hybrid image visualization and deep learning model. Ad Hoc Networks, 105, 102154, 2020.

[15] Arshad H., Khan M. A., Sharif M. I., Yasmin M., Tavares J. M. R., Zhang Y. D., Satapathy S. C., A multilevel paradigm for deep convolutional neural network features selection with an application to human gait recognition. Expert Systems, 39(7), e12541, 2022.

[16] Abouelatta M. A., Ward S. A., Sayed A. M., Mahmoud K., Lehtonen M., Darwish M. M. F., Measurement and assessment of corona current density for HVDC bundle conductors by FDM integrated with full multigrid technique. Electric Power Systems Research, 199, 107370, 2021.

