Full Length Article
Fusion: Practice and Applications
Volume 9, Issue 2, PP: 27-47, 2022

Title

Deep Learning Fusion for Attack Detection in Internet of Things Communications

Authors Names: Ossama Embarak 1*, Mhmed Algrnaodi 2

1 Affiliation: Higher Colleges of Technology (HCT), UAE

    Email: oembarak@hct.ac.ae


2 Affiliation: Electrical Engineering Department, École de technologie supérieure, Montreal, Canada

    Email: mhmed.algrnaodi.1@ens.etsmtl.ca



Doi: https://doi.org/10.54216/FPA.090203

Received: May 19, 2022 Accepted: September 11, 2022

Abstract :

Deep learning techniques are increasingly used in multimedia and network/IoT applications, where they solve many problems and improve performance. Securing deep learning models, multimedia, and network/IoT systems has become a major research area in recent years, and it remains especially challenging under generative adversarial attacks on multimedia or network/IoT data. Many studies attempt to provide intelligent forensics techniques to address these security issues. This paper introduces a holistic organization of intelligent multimedia forensics that combines deep learning fusion with multimedia and network/IoT forensics for attack detection. We highlight the importance of deep learning fusion techniques for achieving intelligent forensics and security over multimedia and network/IoT systems. Finally, we discuss key challenges and future directions in intelligent multimedia forensics using deep learning fusion techniques.

Keywords :

Deep Learning Fusion; IoT; Network; Multimedia; Attack Detection.
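To make the fusion idea concrete, the sketch below shows decision-level (late) fusion for IoT attack detection: two independent detectors score the same traffic flow, and their scores are combined by a weighted average. The "models" here are hypothetical toy heuristics standing in for trained deep networks, and the feature names, weights, and 0.5 threshold are assumptions for illustration, not the authors' method.

```python
# Minimal late-fusion sketch for IoT attack detection. The two scorers are
# hypothetical stand-ins for trained deep models; in practice each would be
# a neural network over its own modality.

def traffic_model_score(flow):
    # Hypothetical detector over flow-level features: high packet rates and
    # tiny average packet sizes (flooding-style traffic) raise the score.
    rate, size = flow["pkt_rate"], flow["avg_size"]
    return min(1.0, rate / 1000.0) * 0.7 + (1.0 if size < 64 else 0.0) * 0.3

def payload_model_score(flow):
    # Hypothetical detector over payload features: byte entropy near the
    # 8-bit maximum suggests encrypted/obfuscated malicious payloads.
    return min(1.0, flow["entropy"] / 8.0)

def fused_score(flow, w_traffic=0.5, w_payload=0.5):
    # Decision-level fusion: weighted average of the per-model scores.
    return w_traffic * traffic_model_score(flow) + w_payload * payload_model_score(flow)

benign = {"pkt_rate": 20.0, "avg_size": 512, "entropy": 3.2}
attack = {"pkt_rate": 1500.0, "avg_size": 40, "entropy": 7.9}

print(fused_score(benign))  # low score: flagged benign at a 0.5 threshold
print(fused_score(attack))  # high score: flagged as an attack
```

Late fusion is only one option; feature-level (early) fusion would instead concatenate the two modalities' feature vectors before a single classifier, trading modularity for the chance to learn cross-modal interactions.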

Cite this Article as :
Ossama Embarak, Mhmed Algrnaodi, Deep Learning Fusion for Attack Detection in Internet of Things Communications, Fusion: Practice and Applications, Vol. 9, No. 2, (2022): 27-47 (Doi: https://doi.org/10.54216/FPA.090203)