Full Length Article
Fusion: Practice and Applications
Volume 3, Issue 1, PP: 54-69, 2021

Title :

Interpretable Machine Learning Fusion and Data Analytics Models for Anomaly Detection

Authors Names : Ahmed Abdelmonem 1*, Nehal N. Mostafa 2

1 Affiliation : Faculty of Computers and Informatics, Zagazig University, Zagazig, Sharqiyah, 44519, Egypt

Email : aabdelmounem@zu.edu.eg

2 Affiliation : Faculty of Computers and Informatics, Zagazig University, Zagazig, Sharqiyah, 44519, Egypt

Email : nihal.nabil@fci.zu.edu.eg



Doi : https://doi.org/10.54216/FPA.030104

Received: October 05, 2020; Revised: December 22, 2020; Accepted: March 21, 2021

Abstract :

Explainable artificial intelligence has received great research attention in the past few years, driven by the spread of black-box techniques into sensitive fields such as medical care and self-driving cars. Artificial intelligence needs explainable methods to uncover model biases; explainability brings fairness and transparency to a model. Making artificial intelligence models explainable and interpretable is challenging when black-box models are used. Because of the inherent limitations of collecting data in its raw form, data fusion has become a popular method for dealing with such data and acquiring more trustworthy, helpful, and precise insights. Compared with more traditional data fusion methods, machine learning's capacity to learn automatically from experience, without explicit programming, significantly improves fusion's computational and predictive power. This paper comprehensively studies the principal explainable artificial intelligence methods applied to anomaly detection. We propose criteria for model transparency against which data fusion analytics techniques can be measured, and we define the evaluation metrics used in explainable artificial intelligence. We present applications of explainable artificial intelligence and a case study of anomaly detection with machine learning fusion. Finally, we discuss the key challenges and future directions in explainable artificial intelligence.
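To make the idea of interpretable anomaly detection concrete, the sketch below (a minimal illustration of our own, not code from the paper) flags outliers with scikit-learn's IsolationForest on synthetic data and then explains each flagged point with a simple perturbation-based attribution: replacing one feature at a time with its training median and measuring how much the anomaly score recovers. The synthetic data, the median-substitution heuristic, and all variable names are assumptions made for this example; the paper's case study may rely on different models and explanation methods.

import numpy as np
from sklearn.ensemble import IsolationForest

# Fit on "normal" behaviour only, then score a test batch that ends
# with one obvious anomaly in feature 0. (All data here is synthetic.)
rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 4))
X_test = np.vstack([rng.normal(0.0, 1.0, size=(5, 4)),
                    [[8.0, 0.1, -0.2, 0.3]]])

model = IsolationForest(random_state=0).fit(X_train)
flagged = np.where(model.predict(X_test) == -1)[0]   # -1 marks outliers

median = np.median(X_train, axis=0)
for i in flagged:
    x = X_test[i]
    base = model.decision_function([x])[0]           # lower = more anomalous
    # Attribute the score feature by feature: how much does it recover
    # when feature j is replaced by a typical (median) value?
    recovery = []
    for j in range(x.shape[0]):
        x_pert = x.copy()
        x_pert[j] = median[j]
        recovery.append(model.decision_function([x_pert])[0] - base)
    top = int(np.argmax(recovery))
    print(f"point {i}: score {base:.3f}, feature {top} contributes most")

Tools such as SHAP or LIME offer more principled attributions; the median-substitution step above is only a minimal model-agnostic stand-in.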

Keywords :

artificial intelligence; black-box; machine learning; explainable artificial intelligence; information fusion; intelligent methods; data fusion; anomaly detection



Cite this Article as :
Ahmed Abdelmonem, Nehal N. Mostafa, Interpretable Machine Learning Fusion and Data Analytics Models for Anomaly Detection, Fusion: Practice and Applications, Vol. 3, No. 1, (2021): 54-69 (Doi: https://doi.org/10.54216/FPA.030104)