<?xml version="1.0" encoding="UTF-8"?>
<journal>
 <journal_metadata>
  <full_title>Journal of Intelligent Systems and Internet of Things</full_title>
  <abbrev_title>JISIoT</abbrev_title>
  <issn media_type="print">2690-6791</issn>
  <issn media_type="electronic">2769-786X</issn>
  <doi_data>
   <doi>10.54216/JISIoT</doi>
   <resource>https://www.americaspg.com/journals/show/4056</resource>
  </doi_data>
 </journal_metadata>
 <journal_issue>
  <publication_date media_type="print">
   <year>2026</year>
  </publication_date>
  <publication_date media_type="online">
   <year>2026</year>
  </publication_date>
 </journal_issue>
 <journal_article publication_type="full_text">
  <titles>
   <title>Integrating Artificial Intelligence Driven Computer Vision Framework for Enhanced Sign Language Recognition in Hearing and Speech-Impaired People</title>
  </titles>
  <contributors>
   <person_name sequence="first" contributor_role="author">
    <given_name>Naif</given_name>
    <surname>Naif</surname>
    <affiliation>Department of Computer Science and Engineering, Ajay Kumar Garg Engineering College, Ghaziabad, India</affiliation>
   </person_name>
   <person_name sequence="additional" contributor_role="author">
    <given_name>P.</given_name>
    <surname>Udayakumar</surname>
    <affiliation>Department of Computer Science and Engineering, Akshaya College of Engineering and Technology, Kinathukadavu, Coimbatore, India</affiliation>
   </person_name>
   <person_name sequence="additional" contributor_role="author">
    <given_name>B.</given_name>
    <surname>Arundhati</surname>
    <affiliation>Department of EEE, Vignan's Institute of Engineering for Women, Visakhapatnam, Andhra Pradesh, India</affiliation>
   </person_name>
   <person_name sequence="additional" contributor_role="author">
    <given_name>M. V.</given_name>
    <surname>Rajesh</surname>
    <affiliation>Department of Information Technology, Aditya University, Surampalem, India</affiliation>
   </person_name>
   <person_name sequence="additional" contributor_role="author">
    <given_name>Naif</given_name>
    <surname>Almakayeel</surname>
    <affiliation>Department of Industrial Engineering, College of Engineering, King Khalid University, Abha, Saudi Arabia</affiliation>
   </person_name>
   <person_name sequence="additional" contributor_role="author">
    <given_name>Elvir</given_name>
    <surname>Akhmetshin</surname>
    <affiliation>Department of Science, Innovations and Technology, Mamun University, 220900, Khiva, Uzbekistan</affiliation>
   </person_name>
  </contributors>
  <jats:abstract xmlns:jats="http://www.ncbi.nlm.nih.gov/JATS1" xml:lang="en">
   <jats:p>Sign language (SL) detection and classification for deaf persons is an essential application of machine learning (ML) and computer vision (CV) techniques. Such systems capture signs performed by users and convert them into auditory or textual output. Developing an accurate and robust SL detection approach is highly challenging owing to variations in hand shapes and motions, occlusions, and lighting conditions, so CV and ML models must be carefully trained and tested. Hand gesture detection has proven beneficial for hearing- and speech-impaired individuals, employing convolutional neural networks (CNNs) and human-computer interfaces (HCIs) to classify continuous SL signals. In this article, an Improved Fennec Fox Algorithm for Deep Learning-Based Sign Language Recognition in Hearing and Speaking Impaired People (IFFADL-SLRHSIP) technique is proposed. The main intention of the IFFADL-SLRHSIP technique is to enable effective communication between hearing- and speech-impaired persons and other people using CV and artificial intelligence techniques. In the IFFADL-SLRHSIP model, an enhanced SqueezeNet model captures the intricate patterns and nuances of SL gestures, and a recurrent neural network (RNN) performs the SL classification. To optimize model performance, the improved fennec fox algorithm (IFFA) is applied for parameter tuning, enhancing the model's precision and efficiency. The experimental results of the IFFADL-SLRHSIP algorithm are validated on an SL dataset, and the simulation outcomes demonstrate its superior performance across diverse measures.</jats:p>
  </jats:abstract>
  <publication_date media_type="print">
   <year>2026</year>
  </publication_date>
  <publication_date media_type="online">
   <year>2026</year>
  </publication_date>
  <pages>
   <first_page>304</first_page>
   <last_page>314</last_page>
  </pages>
  <doi_data>
   <doi>10.54216/JISIoT.180221</doi>
   <resource>https://www.americaspg.com/articleinfo/18/show/4056</resource>
  </doi_data>
 </journal_article>
</journal>
