<?xml version="1.0"?>
<journal>
 <journal_metadata>
  <full_title>Fusion: Practice and Applications</full_title>
  <abbrev_title>FPA</abbrev_title>
  <issn media_type="print">2692-4048</issn>
  <issn media_type="electronic">2770-0070</issn>
  <doi_data>
   <doi>10.54216/FPA</doi>
   <resource>https://www.americaspg.com/journals/show/3294</resource>
  </doi_data>
 </journal_metadata>
 <journal_issue>
  <publication_date media_type="print">
   <year>2018</year>
  </publication_date>
  <publication_date media_type="online">
   <year>2018</year>
  </publication_date>
 </journal_issue>
 <journal_article publication_type="full_text">
  <titles>
   <title>Fusion Model of Quantum Wavelet Transform and Neural Network for Video Coding on the Internet of Things Environment</title>
  </titles>
  <contributors>
   <organization sequence="first" contributor_role="author">Department of Computer, College of Education for Pure Sciences Ibn Al-Haitham, University of Baghdad, Iraq</organization>
   <person_name sequence="first" contributor_role="author">
    <given_name>Oday</given_name>
    <surname>Oday</surname>
   </person_name>
   <organization sequence="first" contributor_role="author">Minister Office of Higher Education and Scientific Research, Iraq</organization>
   <person_name sequence="additional" contributor_role="author">
    <given_name>Ali Abdullah</given_name>
    <surname>Ali</surname>
   </person_name>
   <organization sequence="first" contributor_role="author">Ministry of Education, Wasit Education Directorate, Iraq</organization>
   <person_name sequence="additional" contributor_role="author">
    <given_name>Oday Ali</given_name>
    <surname>Hassen</surname>
   </person_name>
   <organization sequence="first" contributor_role="author">Department of Information Technology, Institute of Graduate Studies and Research, Alexandria University, Egypt</organization>
   <person_name sequence="additional" contributor_role="author">
    <given_name>Saad M.</given_name>
    <surname>Darwish</surname>
   </person_name>
    <organization sequence="first" contributor_role="author">Department of Information Technology, Universiti Teknikal Malaysia Melaka, Hang Tuah Jaya, 76100 Melaka, Malaysia</organization>
   <person_name sequence="additional" contributor_role="author">
    <given_name>Nur Azman</given_name>
    <surname>Abu</surname>
   </person_name>
  </contributors>
   <jats:abstract xmlns:jats="http://www.ncbi.nlm.nih.gov/JATS1" xml:lang="en">
    <jats:p>Solving the video compression problem requires a multi-faceted approach that balances quality, efficiency, and computational demands. By leveraging advances in technology and adapting to the evolving needs of video applications, it is possible to develop compression methods that meet the challenges of the present and future digital landscape. To address these objectives, machine learning and AI approaches can be utilized to predict and remove redundancies more effectively, optimizing compression algorithms dynamically based on content. Still, state-of-the-art neural network-based video compression models need large and diverse datasets to generalize well across different types of video content. Wavelets provide both time (spatial) and frequency localization, making them highly effective for video compression. This dual localization allows wavelet transforms to handle both rapid changes in video content and slow-moving scenes efficiently, leading to better compression ratios. Yet some wavelet coefficients may be more critical for maintaining visual quality than others, and inaccurate quantization can lead to noticeable degradation. For the first time, the suggested model combines the Quantum Wavelet Transform (QWT) and Neural Networks (NN) for video compression. This fusion model aims to achieve higher compression ratios, maintain video quality, and reduce computational complexity by utilizing QWT’s efficient data representation and the NN’s powerful pattern recognition and predictive capabilities. Quantum bits (qubits) can encode large amounts of information in their quantum states, enabling more efficient data representation, which is especially useful for encoding large video files. Furthermore, quantum entanglement allows for correlated data representation across qubits, which can be exploited to capture intricate details and redundancies in video data more effectively than classical methods. The experimental results reveal that QWT achieves a compression ratio almost twice that of the traditional WT for the same video while maintaining superior visual quality due to more efficient redundancy elimination.</jats:p>
  </jats:abstract>
  <publication_date media_type="print">
   <year>2025</year>
  </publication_date>
  <publication_date media_type="online">
   <year>2025</year>
  </publication_date>
  <pages>
   <first_page>249</first_page>
   <last_page>263</last_page>
  </pages>
  <doi_data>
   <doi>10.54216/FPA.170219</doi>
   <resource>https://www.americaspg.com/articleinfo/3/show/3294</resource>
  </doi_data>
 </journal_article>
</journal>
