<?xml version="1.0" encoding="UTF-8"?>
<journal>
 <journal_metadata>
  <full_title>Fusion: Practice and Applications</full_title>
  <abbrev_title>FPA</abbrev_title>
  <issn media_type="print">2692-4048</issn>
  <issn media_type="electronic">2770-0070</issn>
  <doi_data>
   <doi>10.54216/FPA</doi>
   <resource>https://www.americaspg.com/journals/show/4053</resource>
  </doi_data>
 </journal_metadata>
 <journal_issue>
  <publication_date media_type="print">
   <year>2026</year>
  </publication_date>
  <publication_date media_type="online">
   <year>2026</year>
  </publication_date>
 </journal_issue>
 <journal_article publication_type="full_text">
  <titles>
   <title>Enhancement Medical Image using U-Net Model in Three Dimensional Vitreoretinal Surgery</title>
  </titles>
  <contributors>
   <organization sequence="first" contributor_role="author">Department of Computer Networks Systems, College of Computer Science and Information Technology, University of Anbar, Ramadi, Iraq</organization>
   <person_name sequence="first" contributor_role="author">
    <given_name>Omar</given_name>
    <surname>Omar</surname>
   </person_name>
   <organization sequence="first" contributor_role="author">Al-Imam Al-Adham University College, Iraq</organization>
   <person_name sequence="additional" contributor_role="author">
    <given_name>Ahmed Abdullah</given_name>
    <surname>Mahmood</surname>
   </person_name>
   <organization sequence="first" contributor_role="author">Department of Computer Engineering Techniques, College of Technical Engineering, University of Al Maarif, Al Anbar, 31001, Iraq</organization>
   <person_name sequence="additional" contributor_role="author">
    <given_name>Omar Muthanna</given_name>
    <surname>Khudhur</surname>
   </person_name>
   <organization sequence="first" contributor_role="author">Department of Computer Science and Information Technology, College of Science, University of Hilla, 51001 Babil, Iraq</organization>
   <person_name sequence="additional" contributor_role="author">
    <given_name>Zaid Sami</given_name>
    <surname>Mohsen</surname>
   </person_name>
  </contributors>
  <jats:abstract xmlns:jats="http://www.ncbi.nlm.nih.gov/JATS1" xml:lang="en">
   <jats:p>Vitreoretinal surgery depends heavily on clear visualization of fragile retinal surfaces for accurate and safe operation. However, the image quality of current three-dimensional (3D) heads-up display systems is often suboptimal, with low contrast or inadequate sharpness that can reduce surgical accuracy and prolong operation time. Improving intraoperative image quality therefore remains a challenge for advancing surgical outcomes. In this paper, we propose a deep learning-based approach to guiding optimal imaging parameters for 3D heads-up vitreoretinal surgery, aiming to improve the visibility of vitreoretinal surfaces during the procedure. A hybrid model combining U-Net-based image enhancement with a Vision Transformer (ViT) for feature refinement was trained on 212 manually optimized still frames extracted from epiretinal membrane (ERM) surgical video. Performance was assessed quantitatively using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM), and qualitatively in terms of improvements in sharpness, brightness, and contrast. In addition, the intraoperative usability of the optimized images was investigated in a surgeon survey. For in-vitro validation, 121 anonymized high-resolution ERM fundus images were analyzed on a 3D display coupled with the algorithm. The model achieved a PSNR of 36.45±4.90 and an SSIM of 0.91±0.05, indicating considerable improvements in image sharpness, brightness, and contrast; in the in-vitro studies, visible ERM size and color contrast ratio were significantly enhanced in the optimized images. These results demonstrate that the proposed algorithm performs digital image enhancement effectively and holds promise for real-time application during 3D heads-up vitreoretinal surgery.</jats:p>
  </jats:abstract>
  <publication_date media_type="print">
   <year>2026</year>
  </publication_date>
  <publication_date media_type="online">
   <year>2026</year>
  </publication_date>
  <pages>
   <first_page>159</first_page>
   <last_page>169</last_page>
  </pages>
  <doi_data>
   <doi>10.54216/FPA.210210</doi>
   <resource>https://www.americaspg.com/articleinfo/3/show/4053</resource>
  </doi_data>
 </journal_article>
</journal>
