<?xml version="1.0"?>
<journal>
 <journal_metadata>
  <full_title>Fusion: Practice and Applications</full_title>
  <abbrev_title>FPA</abbrev_title>
  <issn media_type="print">2692-4048</issn>
  <issn media_type="electronic">2770-0070</issn>
  <doi_data>
   <doi>10.54216/FPA</doi>
   <resource>https://www.americaspg.com/journals/show/3658</resource>
  </doi_data>
 </journal_metadata>
 <journal_issue>
  <publication_date media_type="print">
   <year>2025</year>
  </publication_date>
  <publication_date media_type="online">
   <year>2025</year>
  </publication_date>
 </journal_issue>
 <journal_article publication_type="full_text">
  <titles>
   <title>CL-FusionBEV: A Cross-Attention Based Fusion Model for Camera and LiDAR in Bird’s Eye View Perception</title>
  </titles>
  <contributors>
   <person_name sequence="first" contributor_role="author">
    <given_name>S.</given_name>
    <surname>S.</surname>
    <affiliation>UG Scholar, Artificial Intelligence and Data Science, Panimalar Engineering College, Chennai, India</affiliation>
   </person_name>
   <person_name sequence="additional" contributor_role="author">
    <given_name>S.</given_name>
    <surname>Renuka</surname>
    <affiliation>UG Scholar, Artificial Intelligence and Data Science, Panimalar Engineering College, Chennai, India</affiliation>
   </person_name>
   <person_name sequence="additional" contributor_role="author">
    <given_name>R. Shakthi</given_name>
    <surname>Priyaa</surname>
    <affiliation>UG Scholar, Artificial Intelligence and Data Science, Panimalar Engineering College, Chennai, India</affiliation>
   </person_name>
   <person_name sequence="additional" contributor_role="author">
    <given_name>Angel Meriba D.</given_name>
    <surname>S.</surname>
    <affiliation>UG Scholar, Artificial Intelligence and Data Science, Panimalar Engineering College, Chennai, India</affiliation>
   </person_name>
   <person_name sequence="additional" contributor_role="author">
    <given_name>Maheshwari</given_name>
    <surname>M.</surname>
    <affiliation>UG Scholar, Artificial Intelligence and Data Science, Panimalar Engineering College, Chennai, India</affiliation>
   </person_name>
   <person_name sequence="additional" contributor_role="author">
    <given_name>Megavarshini</given_name>
    <surname>M.</surname>
    <affiliation>UG Scholar, Artificial Intelligence and Data Science, Panimalar Engineering College, Chennai, India</affiliation>
   </person_name>
   <person_name sequence="additional" contributor_role="author">
    <given_name>S.</given_name>
    <surname>Malathi</surname>
    <affiliation>Professor, Panimalar Engineering College, Chennai, India</affiliation>
   </person_name>
  </contributors>
  <jats:abstract xmlns:jats="http://www.ncbi.nlm.nih.gov/JATS1" xml:lang="en">
   <jats:p>In autonomous navigation, detecting 3D objects from a Bird’s-Eye View (BEV) perspective is essential, yet effectively combining LiDAR and camera data remains challenging. We propose CL-FusionBEV, a novel sensor-fusion framework that enhances 3D object detection in the BEV domain. The method structures LiDAR point clouds for improved spatial feature extraction and converts camera data into the BEV representation via an implicit learning technique. An implicit fusion network and a multi-modal cross-attention mechanism enable seamless sensor interaction and comprehensive feature integration, while a BEV self-attention mechanism supports broad-scale reasoning and improves the detection of occluded and distant objects. By efficiently synchronising data from multiple sensors, the proposed method improves feature consistency and resolves spatial misalignments, and it further leverages adaptive feature selection for robustness against sensor noise and varying conditions. On the nuScenes benchmark, CL-FusionBEV achieves 73.3% mAP and 75.5% NDS, with vehicle and pedestrian detection accuracies of 89% and 90.7%, respectively. The model demonstrates superior robustness in challenging conditions such as low visibility and dense urban environments, and it maintains real-time inference, making it suitable for deployment in autonomous systems. Extensive experiments show that our approach consistently outperforms state-of-the-art methods, particularly in detecting small and distant objects. By addressing key sensor-fusion challenges in the BEV domain, CL-FusionBEV offers a notable advance in 3D object detection, ensuring high accuracy, efficiency, and reliability in real-world driving scenarios.</jats:p>
  </jats:abstract>
  <publication_date media_type="print">
   <year>2025</year>
  </publication_date>
  <publication_date media_type="online">
   <year>2025</year>
  </publication_date>
  <pages>
   <first_page>15</first_page>
   <last_page>27</last_page>
  </pages>
  <doi_data>
   <doi>10.54216/FPA.190202</doi>
   <resource>https://www.americaspg.com/articleinfo/3/show/3658</resource>
  </doi_data>
 </journal_article>
</journal>
