<?xml version="1.0" encoding="UTF-8"?>
<journal>
 <journal_metadata>
  <full_title>Journal of Cybersecurity and Information Management</full_title>
  <abbrev_title>JCIM</abbrev_title>
  <issn media_type="print">2690-6775</issn>
  <issn media_type="electronic">2769-7851</issn>
  <doi_data>
   <doi>10.54216/JCIM</doi>
   <resource>https://www.americaspg.com/journals/show/3404</resource>
  </doi_data>
 </journal_metadata>
 <journal_issue>
  <publication_date media_type="print">
   <year>2019</year>
  </publication_date>
  <publication_date media_type="online">
   <year>2019</year>
  </publication_date>
 </journal_issue>
 <journal_article publication_type="full_text">
  <titles>
   <title>Text Categorization for Information Retrieval Using NLP Models</title>
  </titles>
  <contributors>
   <person_name sequence="first" contributor_role="author">
    <given_name>Oday</given_name>
    <surname>Oday</surname>
    <affiliation>Sulaimani Polytechnic University College of Health and Medical Technology Anaesthesia, Iraq</affiliation>
   </person_name>
   <person_name sequence="additional" contributor_role="author">
    <given_name>Vijay</given_name>
    <surname>Madaan</surname>
    <affiliation>Chitkara University Institute of Engineering &amp; Technology, Chitkara University, Rajpura, Punjab, India</affiliation>
   </person_name>
   <person_name sequence="additional" contributor_role="author">
    <given_name>Rajaa Daami</given_name>
    <surname>Resen</surname>
    <affiliation>University of Information Technology and Communication, Iraq</affiliation>
   </person_name>
   <person_name sequence="additional" contributor_role="author">
    <given_name>Neha</given_name>
    <surname>Sharma</surname>
    <affiliation>Chitkara University Institute of Engineering &amp; Technology, Chitkara University, Rajpura, Punjab, India</affiliation>
   </person_name>
   <person_name sequence="additional" contributor_role="author">
    <given_name>Oday Ali</given_name>
    <surname>Hassen</surname>
    <affiliation>Ministry of Education, Wasit Education Directorate, Iraq</affiliation>
   </person_name>
   <person_name sequence="additional" contributor_role="author">
    <given_name>Jamal</given_name>
    <surname>Kh-Madhloom</surname>
    <affiliation>Wasit University, College of Arts, Iraq</affiliation>
   </person_name>
  </contributors>
  <jats:abstract xmlns:jats="http://www.ncbi.nlm.nih.gov/JATS1" xml:lang="en">
   <jats:p>The paper presents state-of-the-art natural language processing (NLP) models and methods, such as BERT and DistilBERT, to evaluate textual data and extract noteworthy insights. The proposed approach comprises preprocessing of textual input, tokenization, and the implementation of deep learning architectures such as bidirectional LSTMs for classification tasks, with the goal of producing accurate prediction models with minimal validation loss. The manuscript focuses on NLP across multiple areas, including sentiment analysis, language understanding, and text classification. The results show that the proposed NLP models perform exceptionally well and that long short-term memory (LSTM) networks pair naturally with NLP tasks. These results demonstrate the value and relevance of the proposed NLP approach for mining unstructured text data: it can improve applications such as chatbots, virtual assistants, and information retrieval systems, and it yields insights that support better decision-making. The results also confirm the flexibility and generalizability of the models across a range of tasks and textual materials. Excellent validation results were obtained, with the experimental models often exceeding a 99.85% accuracy benchmark, while the average validation loss across all tests remained remarkably low at 0.0058.</jats:p>
  </jats:abstract>
  <publication_date media_type="print">
   <year>2025</year>
  </publication_date>
  <publication_date media_type="online">
   <year>2025</year>
  </publication_date>
  <pages>
   <first_page>305</first_page>
   <last_page>321</last_page>
  </pages>
  <doi_data>
   <doi>10.54216/JCIM.150223</doi>
   <resource>https://www.americaspg.com/articleinfo/2/show/3404</resource>
  </doi_data>
 </journal_article>
</journal>
