<?xml version="1.0"?>
<journal>
 <journal_metadata>
  <full_title>International Journal of Neutrosophic Science</full_title>
  <abbrev_title>IJNS</abbrev_title>
  <issn media_type="print">2690-6805</issn>
  <issn media_type="electronic">2692-6148</issn>
  <doi_data>
   <doi>10.54216/IJNS</doi>
   <resource>https://www.americaspg.com/journals/show/4269</resource>
  </doi_data>
 </journal_metadata>
 <journal_issue>
  <publication_date media_type="print">
   <year>2020</year>
  </publication_date>
  <publication_date media_type="online">
   <year>2020</year>
  </publication_date>
 </journal_issue>
 <journal_article publication_type="full_text">
  <titles>
   <title>A Unified Linear Algebra–Centric Framework for Integrating Query Processing and GPU-Accelerated Machine Learning</title>
  </titles>
  <contributors>
   <organization sequence="first" contributor_role="author">Department of Computer Science, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia</organization>
   <person_name sequence="first" contributor_role="author">
    <given_name>Adil</given_name>
    <surname>Adil</surname>
   </person_name>
   <organization sequence="additional" contributor_role="author">College of Public Health and Health Informatics, Hail University, Saudi Arabia</organization>
   <person_name sequence="additional" contributor_role="author">
    <given_name>Zahra I.</given_name>
    <surname>Mahmoud</surname>
   </person_name>
   <organization sequence="additional" contributor_role="author">Department of Mathematics, College of Science, Qassim University, Buraydah, 51452, Saudi Arabia</organization>
   <person_name sequence="additional" contributor_role="author">
    <given_name>Mawahib</given_name>
    <surname>Elamin</surname>
   </person_name>
   <organization sequence="additional" contributor_role="author">Department of Mathematics, College of Science, Qassim University, Buraydah, 51452, Saudi Arabia</organization>
   <person_name sequence="additional" contributor_role="author">
    <given_name>Amel H.</given_name>
    <surname>Abdalla</surname>
   </person_name>
   <organization sequence="additional" contributor_role="author">Department of Computer Science, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia</organization>
   <person_name sequence="additional" contributor_role="author">
    <given_name>Adil O. Y.</given_name>
    <surname>Mohamed</surname>
   </person_name>
  </contributors>
  <jats:abstract xmlns:jats="http://www.ncbi.nlm.nih.gov/JATS1" xml:lang="en">
   <jats:p>The increasing adoption of large-scale machine learning (ML) applications has exposed critical performance limitations in current data processing pipelines, particularly the separation between relational query execution and ML inference. This separation introduces redundant computation, excessive data materialization, and inefficient utilization of GPU resources [10]. In this paper, we present a unified execution framework that integrates relational query processing and ML prediction by representing both as linear algebra operations. Leveraging algebraic properties such as associativity and distributivity, we introduce an operator fusion strategy [8] that enables query operators and ML models to be executed jointly on GPU architectures. This approach reduces intermediate data movement and enables end-to-end pipeline execution within a single linear algebra runtime. We analyze the computational complexity of the proposed fusion strategy and discuss its applicability to the star-schema workloads common in analytical systems. Experimental insights from prior studies indicate that linear algebra–based query execution combined with operator fusion can yield substantial performance improvements over conventional GPU-accelerated pipelines while maintaining scalability and portability. The proposed framework provides a viable foundation for future data-intensive systems that aim to unify analytics and machine learning on heterogeneous computing platforms [1–3,14–16]. This work unifies relational query processing and ML inference within a single algebraic runtime on GPUs, rather than coupling independent GPU-accelerated stages, thereby enabling cross-stage optimization and eliminating redundant materialization. Unlike existing GPU-accelerated databases and tensor-based query processors, the proposed framework provides a system-level unification of relational analytics and ML inference rather than treating them as isolated or sequential stages. The framework is backend-agnostic and applicable to modern tensor runtimes and heterogeneous accelerator platforms, making it suitable for next-generation data-intensive systems.</jats:p>
  </jats:abstract>
  <publication_date media_type="print">
   <year>2026</year>
  </publication_date>
  <publication_date media_type="online">
   <year>2026</year>
  </publication_date>
  <pages>
   <first_page>497</first_page>
   <last_page>503</last_page>
  </pages>
  <doi_data>
   <doi>10.54216/IJNS.270240</doi>
   <resource>https://www.americaspg.com/articleinfo/21/show/4269</resource>
  </doi_data>
 </journal_article>
</journal>
