Full Length Article
Journal of Cognitive Human-Computer Interaction
Volume 7, Issue 1, PP: 41-47, 2024

Defense Against Adversarial AI

Bhavani G. 1*, Soundarya S. 2, Tejashwini V. 3, Sumitha S. 4

1  Panimalar Engineering College, Chennai, Tamil Nadu, India
    (bhavanigovar@gmail.com)

2  Panimalar Engineering College, Chennai, Tamil Nadu, India
    (soundisweetie.kolathur@gmail.com)

3  Panimalar Engineering College, Chennai, Tamil Nadu, India
    (tejasreedevi6@gmail.com)

4  Panimalar Engineering College, Chennai, Tamil Nadu, India
    (sumithasuresh07@gmail.com)


DOI: https://doi.org/10.54216/JCHCI.070105

Received: August 21, 2023; Revised: October 14, 2023; Accepted: January 27, 2024

Abstract:

The increasing prevalence of deep learning technology has ushered in a new era of AI-powered capabilities, promising revolutionary advances across societal domains such as healthcare and autonomous vehicles. Yet the formidable power of these AI systems, while offering potent solutions to complex problems, is accompanied by a susceptibility that malicious actors can exploit. Adversarial attacks, particularly those targeting deep learning models, craft altered inputs, often imperceptible changes to images, to deceive or undermine the functionality of an AI system. Within autonomous driving systems, such attacks pose a severe risk: a precisely crafted adversarial perturbation of a red traffic light could cause the AI system to misclassify it as an entirely unrelated object, perhaps a bird. The repercussions of such a misclassification are severe, risking traffic accidents and posing a notable safety threat. As AI continues to play a pivotal role in critical applications such as healthcare, finance, and autonomous systems, ensuring the resilience and security of AI technologies against adversarial threats is of utmost importance. It necessitates a holistic strategy that melds advanced research, meticulous testing, and the deployment of robust security measures. This comprehensive approach is essential for fostering trust and mitigating potential harm in an ever-growing, AI-driven world.
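
To make the attack and defense concrete, the sketch below shows a standard FGSM-style perturbation (an imperceptible, sign-of-gradient change to an image) and one adversarial-training step, a common hardening measure. This is a minimal, generic illustration in PyTorch under our own assumptions (a toy linear classifier, random data, epsilon = 0.03); it is not the specific method proposed or evaluated in this article.

# Illustrative FGSM attack and one adversarial-training step (PyTorch).
# Hypothetical sketch; model, data, and epsilon are placeholder assumptions.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft an adversarial example: x' = x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Small, sign-based perturbation: visually near-imperceptible,
    # yet often enough to flip the model's prediction.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One defense step: train on adversarial examples instead of clean ones."""
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Toy classifier and random "images" just to show the call pattern.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    x = torch.rand(8, 3, 32, 32)    # batch of 8 RGB 32x32 images in [0, 1]
    y = torch.randint(0, 10, (8,))  # random ground-truth labels
    print("adv-train loss:", adversarial_training_step(model, optimizer, x, y))

Repeating such steps over the training set is the essence of adversarial training: the model sees perturbed inputs with correct labels, so a traffic-light image nudged toward "bird" is pushed back toward its true class.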

Keywords:

Deep Learning; AI; Smart Systems; Complex Problems


