Journal of Intelligent Systems and Internet of Things

Journal DOI

https://doi.org/10.54216/JISIoT


ISSN (Online): 2690-6791 | ISSN (Print): 2769-786X

Volume 18, Issue 2, pp. 187-204, 2026 | Full Length Article

Deep Fake Image Detection Using Ensemble Approach

Vijay Madaan 1 , Raghad Tohmas Esfandiyar 2 , Shahad Hussein Jasim 3 , Oday Ali Hassen 4 * , Neha Sharma 5 , Ansam A. Abdulhussein 6

  • 1 Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, India - (Vijaymadaan1@gmail.com)
  • 2 Ministry of Higher Education and Scientific Research, Minister Office, Baghdad, Iraq - (eng.raghadlalawi@gmail.com)
  • 3 Ministry of Higher Education and Scientific Research, Minister Office, Baghdad, Iraq - (shahadhusseinjasim94@mohesr.edu.iq)
  • 4 Ministry of Education, Wasit Education Directorate, Baghdad, Iraq; Computer Department, College of Education for Pure Sciences, Wasit University, Iraq - (odayali@uowasit.edu.iq)
  • 5 Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, India - (nehasharma0110@gmail.com)
  • 6 College of Engineering, University of Information Technology and Communications, Baghdad, Iraq - (an8225124@gmail.com)
  • DOI: https://doi.org/10.54216/JISIoT.180214

    Received: March 04, 2025 | Revised: May 26, 2025 | Accepted: July 04, 2025
    Abstract

    This paper presents a comprehensive framework for classifying images as real or fake using three classifiers: a standard Convolutional Neural Network (CNN), a transfer-learning EfficientNetV2 model, and a re-trained GAN discriminator, addressing key challenges in deepfake detection. The CNN, built from four convolutional blocks with dropout regularization, offers computational efficiency (87.2% accuracy, 15 ms/image inference), while EfficientNetV2 leverages pre-trained ImageNet weights and hierarchical feature extraction to achieve state-of-the-art performance (94.7% accuracy, AUC: 0.98). The fine-tuned, adversarially pre-trained GAN discriminator shows a niche strength in detecting synthetic artifacts (91% recall on GAN-generated fakes). Training used augmented data (rotation, shifts, and shear) to improve generalization, with binary cross-entropy loss optimization and early stopping controlled through validation. Evaluation on a normalized test set confirmed EfficientNetV2's ability to balance recall (94%) with precision (95%), while the GAN discriminator led in adversarial resilience. Blending all three models, an ensemble achieved the highest accuracy (96.1%) by exploiting their complementary strengths. Computational baselines revealed trade-offs: EfficientNetV2's accuracy comes at a resource cost (2.5-hour training), the CNN is edge-compatible, and the GAN discriminator specializes in artifact sensitivity. The work advocates hybrid architectures and ensemble approaches to offset single-model vulnerabilities, offering a flexible toolkit against deepfakes while emphasizing the need for hardware-aware deployment strategies and ongoing adaptation to evolving synthesis techniques.
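    The abstract does not specify how the three models' outputs are fused. A minimal soft-voting sketch of one common choice, a weighted average of per-model "fake" probabilities followed by thresholding; the function name, weights, and example probabilities are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def ensemble_predict(p_cnn, p_effnet, p_gan_disc,
                     weights=(1/3, 1/3, 1/3), threshold=0.5):
    """Soft-voting ensemble: weighted average of each model's
    per-image probability that the image is fake, then threshold.
    Weights are assumed uniform here; they could instead be tuned
    on a validation set."""
    stacked = np.stack([p_cnn, p_effnet, p_gan_disc])   # shape (3, n_images)
    w = np.asarray(weights).reshape(-1, 1)              # shape (3, 1)
    avg = (stacked * w).sum(axis=0)                     # weighted mean per image
    return avg, (avg >= threshold).astype(int)          # probs, hard labels

# Hypothetical per-image "fake" probabilities from the three models
p_cnn = np.array([0.92, 0.30, 0.55])
p_eff = np.array([0.97, 0.10, 0.70])
p_gan = np.array([0.88, 0.20, 0.40])
avg, labels = ensemble_predict(p_cnn, p_eff, p_gan)
```

    In this toy run the third image is borderline for the individual models, and averaging lets the stronger EfficientNetV2 signal tip the decision, which is the complementarity the ensemble is meant to exploit.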

    Keywords:

    Deepfake Detection, Real vs. Fake Image Classification, Convolutional Neural Network, Transfer Learning (EfficientNetV2), Generative Adversarial Network

    Cite This Article As:
    Madaan, V., Tohmas, R., Hussein, S., Ali, O., Sharma, N., and A., A., "Deep Fake Image Detection Using Ensemble Approach," Journal of Intelligent Systems and Internet of Things, vol. 18, no. 2, pp. 187-204, 2026. DOI: https://doi.org/10.54216/JISIoT.180214