Chatbots are playing an increasingly pivotal role in the healthcare sector, providing rapid and accessible assistance for early disease detection and medical guidance. This study presents a two-tier healthcare chatbot system that integrates deep learning for image-based skin disease classification with machine learning for symptom-driven disease prediction. The system, developed in Python, employs a hybrid U-Net and Improved MobileNet-V3 model to identify dermatological conditions from images, while a Decision Tree Classifier forecasts diseases from user-reported symptoms. By evaluating user inputs, the chatbot conducts interactive consultations that cover severity assessments, disease predictions, and preventive recommendations. Cross-validation of the symptom-based models, together with testing on a bespoke dataset of skin disease images, substantiates the efficacy of the proposed methodology and demonstrates strong predictive accuracy. By combining conversational artificial intelligence with the hybrid U-Net and Improved MobileNet-V3 image classifier and the Decision Tree symptom analyzer, the chatbot shows significant potential to enhance telemedicine and patient care.
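The symptom-driven tier can be illustrated with a tiny hand-rolled decision-rule sketch; this is a minimal illustration of the idea, not the authors' trained Decision Tree Classifier, and the symptom names, rules, and severity threshold are invented for the example:

```python
# Minimal decision-tree-style triage sketch: maps reported symptoms to a
# predicted condition and a severity level. Rules are purely illustrative.
RULES = [
    ({"fever", "cough", "fatigue"}, "flu-like illness"),
    ({"itching", "rash"}, "dermatological condition"),
    ({"headache", "nausea"}, "migraine"),
]

def predict(symptoms):
    """Return (diagnosis, severity) for a set of reported symptoms."""
    symptoms = set(symptoms)
    best, overlap = "unknown", 0
    for required, label in RULES:
        hit = len(required & symptoms)
        if hit > overlap:          # pick the rule with the most matches
            best, overlap = label, hit
    severity = "high" if len(symptoms) >= 3 else "low"
    return best, severity
```

A real deployment would replace the rule table with a classifier trained on labeled symptom vectors; the interface (symptoms in, diagnosis and severity out) stays the same.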
DOI: https://doi.org/10.54216/JISIoT.170101
Vol. 17 Issue. 1 PP. 01-15, (2025)
This study introduces a novel deep learning-driven multi-layer digital twin framework, underpinned by the Model-Integration-Optimization-Testing (MIOT) methodology, to advance precision oncology in cancer diagnosis. The innovation lies in integrating multi-layered data, including molecular, clinical, and imaging modalities, into a patient-specific digital twin ecosystem. By combining deep learning with the MIOT framework, the proposed approach enables dynamic and predictive modelling tailored to individual patient profiles, facilitating simulations of tumor progression, diagnostic insights, and personalized treatment optimization. Pre-processing pipelines standardize the heterogeneous data, while Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) extract high-level features from imaging and sequential data, respectively. The MIOT framework ensures a systematic design process: deep learning architectures such as U-Net, DenseNet, and transformers are employed for tasks such as tumor segmentation, classification, and survival prediction. Data integration pipelines connect the digital twin seamlessly with clinical diagnostic tools to ensure interoperability. Multi-objective optimization algorithms, including evolutionary strategies and reinforcement learning, guide the digital twin in recommending personalized diagnostic and therapeutic pathways. State-of-the-art performance is demonstrated by rigorous validation on benchmark datasets, which yielded 96.3% diagnosis accuracy, 94.8% sensitivity, and 95.6% specificity across multiple tumor subtypes.
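The multi-objective recommendation step can be sketched with the simplest scalarization: score each candidate pathway by a weighted sum of its objectives and rank. This is a toy illustration of the principle, not the paper's evolutionary or reinforcement-learning optimizer, and the pathway names, objective values, and weights are invented:

```python
# Weighted-sum scalarization sketch for ranking candidate pathways by
# multiple objectives: efficacy (maximize), toxicity and cost (minimize).
def rank_pathways(candidates, weights):
    """candidates: {name: (efficacy, toxicity, cost)}; higher score is better."""
    w_eff, w_tox, w_cost = weights
    scored = {
        name: w_eff * eff - w_tox * tox - w_cost * cost
        for name, (eff, tox, cost) in candidates.items()
    }
    return sorted(scored, key=scored.get, reverse=True)

paths = {"A": (0.9, 0.4, 0.5), "B": (0.7, 0.1, 0.2), "C": (0.5, 0.05, 0.1)}
order = rank_pathways(paths, weights=(1.0, 0.5, 0.3))  # best pathway first
```

True multi-objective methods return a Pareto front rather than a single ranking; weighted sums are only the entry point to that design space.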
DOI: https://doi.org/10.54216/JISIoT.170102
Vol. 17 Issue. 1 PP. 16-26, (2025)
In modern agricultural systems, crop pests cause major social, economic, and environmental problems worldwide. Each pest requires a different method of control, so precise detection has become a critical challenge in agriculture. Deep learning techniques have shown remarkable results in image identification, but standard pest detection frameworks can struggle with accuracy due to complicated algorithms and scarce data, producing incorrect detections that harm the crop environment. To address this, we developed a novel framework named Transformer and ResNet Improved Pest Classification and Identification Detection (TRIP-CID) for crop pest classification and identification. First, pest images are obtained from a benchmark dataset and pre-processed. The pre-processed images are then passed to an Improved ResNet (IR-Net) and a Pyramidal Vision Transformer (PVT), which extract multi-scale spatial, channel, and contextual feature maps in three stages. The feature maps from the two modules are combined to produce a superior feature map, and the refined feature maps are fed to three distinct Machine Learning (ML) classifiers to produce pest detection outcomes. For accurate results, we employ an ensemble-voting technique that outputs an effective pest detection result, which can further support pesticide suggestion. Finally, we applied the presented technique to detect and identify crop pests across 10 pest classes, for instance larvae of Laspeyresia pomonella, Euproctis pseudoconspersa Strand, Locusta migratoria, Acrida cinerea, Empoasca flavescens, Spodoptera exigua, Parasa lepida, Chrysochus chinensis, L. pomonella, and larvae of S. exigua. The suggested methodology has been shown to provide experts and farmers with quick, efficient assistance in identifying pests, saving money and preventing losses in agricultural output.
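The step of combining the two branches' feature maps into one "superior" map can be sketched as follows; this is a generic fusion illustration (element-wise average plus concatenation) on toy vectors, not the exact fusion operator used in TRIP-CID:

```python
# Sketch of fusing feature vectors from two branches (e.g. a ResNet branch
# and a transformer branch): element-wise average of the aligned features,
# concatenated with both raw branch outputs to preserve information.
def fuse(feat_a, feat_b):
    assert len(feat_a) == len(feat_b), "branches must be projected to equal size"
    averaged = [(a + b) / 2 for a, b in zip(feat_a, feat_b)]
    return averaged + feat_a + feat_b   # fused part + raw branch features

fused = fuse([1.0, 2.0], [3.0, 4.0])
```

The fused vector is what downstream classifiers would consume; richer schemes replace the average with learned attention weights.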
DOI: https://doi.org/10.54216/JISIoT.170103
Vol. 17 Issue. 1 PP. 27-38, (2025)
This work explores the innovative application of integrated pest management (IPM) strategies in the control of the Tea Looper Caterpillar and the Tea Leaf Hopper, utilizing the YOLO algorithm for real-time pest detection. IPM is essential for sustainable agriculture, aiming to reduce chemical pesticide usage through a combination of biological, cultural, and technological methods. The integration of artificial intelligence and machine learning into IPM practices has shown promising results, particularly in identifying and monitoring pest populations in tea plantations. This study reviews existing literature on the impact of various pests on tea crops and highlights the significance of using advanced algorithms for effective pest management. Notably, the implementation of the YOLO algorithm demonstrated an impressive accuracy rate of 97% in detecting these pests, underscoring its potential to enhance pest control efforts. By focusing on the tea green leafhopper and looper caterpillars, the research aims to provide insights into sustainable pest control methods that minimize environmental impact. The findings underscore the potential of AI-driven technologies in enhancing agricultural productivity while promoting ecological balance. This project ultimately contributes to the ongoing discourse on sustainable agricultural practices and the role of technology in addressing pest-related challenges in tea cultivation.
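A core post-processing step in YOLO-style detectors is non-maximum suppression (NMS), which collapses overlapping candidate boxes for the same pest into one detection. The sketch below is the standard textbook algorithm in pure Python, not code from this study:

```python
# Non-maximum suppression: keep the highest-scoring box, drop any box that
# overlaps a kept box above an IoU threshold.
def iou(a, b):
    """Intersection-over-union of boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep

# Two heavily overlapping boxes plus one distant box: NMS keeps indices 0 and 2.
kept = nms([(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)], [0.9, 0.8, 0.7])
```

Production detectors apply this per class and on the GPU, but the logic is exactly this greedy loop.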
DOI: https://doi.org/10.54216/JISIoT.170104
Vol. 17 Issue. 1 PP. 39-56, (2025)
Parkinson's disease (PD) is a degenerative neurological condition caused by the death of dopamine-producing neurons in the brain, which manifests as tremors, rigidity, bradykinesia, and postural instability. Early and accurate diagnosis of PD is crucial for timely initiation of appropriate treatment strategies, which can help alleviate symptoms, improve quality of life, and potentially slow disease progression. A promising route to PD diagnosis is the combination of fMRI and qEEG, which together provide rich neuroimaging data that improves accuracy and enables early detection. However, existing studies remain limited in achieving accurate PD diagnosis. To address this, we propose a graph neural network-based PD diagnosis model termed Park-Net. Data pre-treatment is performed first: the collected qEEG signals and fMRI images are denoised using the Discrete Wavelet Transform (DWT) and an Improved Kalman Filter (IKF), respectively. Next, the relevant fMRI regions are segmented by an adversarial network-based U-Net (AN-Net). The segmented regions are then fed into the proposed Park-Net model, where a modality encoder (ME) built on Long Short-Term Memory (LSTM) networks extracts features. We adapt a Multi-modal Fused Attentional Graph Convolutional Neural Network (MAGCN) to construct graphs based on feature correlation and then fuse them. Finally, we design a self-attention pooling layer with a softmax output to classify PD cases as normal or abnormal. We implemented the proposed Park-Net model and assessed its efficacy using a range of performance metrics, including accuracy, sensitivity, specificity, F1-score, and the ROC curve, highlighting its superior performance compared with existing PD diagnosis approaches.
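The DWT denoising stage can be sketched at its simplest: one level of the Haar wavelet transform, soft-thresholding of the detail coefficients, and inverse transform. This is a minimal 1-D illustration of wavelet denoising under invented parameters, not the multi-level pipeline the paper applies to qEEG:

```python
# Single-level Haar DWT denoising: transform, soft-threshold the detail
# coefficients (where high-frequency noise lives), inverse transform.
import math

def haar_dwt(x):
    approx = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)]
    return out

def denoise(x, thresh=0.5):
    a, d = haar_dwt(x)
    d = [math.copysign(max(abs(v) - thresh, 0.0), v) for v in d]  # soft threshold
    return haar_idwt(a, d)
```

Without thresholding, the transform pair reconstructs the signal exactly; the threshold trades a little signal fidelity for noise suppression.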
DOI: https://doi.org/10.54216/JISIoT.170105
Vol. 17 Issue. 1 PP. 57-74, (2025)
In vehicular ad hoc networks (VANETs), vehicles often need to perform complex computing tasks that may exceed their processing capabilities within the required time to provide enhanced services. A common approach to improving service performance is to offload tasks to roadside units (RSUs). However, RSUs might not always have sufficient resources to manage all task assignments effectively. Given the increasing processing power of modern vehicles, delegating tasks to other vehicles is a viable alternative to relying solely on RSUs. To achieve this, we first introduce a probabilistic approach that relaxes discrete actions, such as cloud server selection, into a continuous space. We then implement a Supportive Multi-Agent Deep Reinforcement Learning (SMADRL) technique that minimizes total system costs, including vehicle-device energy consumption and cloud server rental charges, using centralized training with distributed execution. In this framework, each vehicle device operates as an independent agent, learning efficient decentralized policies that reduce computing pressure on the devices. Experimental results show that the proposed SMADRL framework effectively learns dynamic offloading policies for each vehicle device and notably outperforms four state-of-the-art DRL-based agents and two heuristic frameworks, reducing overall system costs.
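The relaxation of a discrete choice into a continuous space is commonly done with a softmax over preference scores, giving a probability vector a gradient-based agent can learn through. The sketch below shows the softmax step only, with invented preference values; the paper's actual relaxation scheme may differ:

```python
# Softmax relaxation sketch: a discrete choice (which server gets the task)
# becomes a continuous, differentiable probability vector.
import math

def softmax(prefs, temperature=1.0):
    exps = [math.exp(p / temperature) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative preferences over {local RSU, neighbour vehicle, remote cloud}:
probs = softmax([2.0, 1.0, 0.1])
choice = max(range(len(probs)), key=probs.__getitem__)  # greedy at execution time
```

During training the agent works with `probs`; at execution it can act greedily or sample, which is how centralized training coexists with distributed execution.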
DOI: https://doi.org/10.54216/JISIoT.170106
Vol. 17 Issue. 1 PP. 75-88, (2025)
This work focuses on advanced stuttering detection and classification using artificial intelligence. Efficient classification of stuttering and its subclasses has many uses, such as helping speech therapists determine the degree of stuttering, providing early patient diagnosis, and facilitating communication with voice assistants. The first part of this work surveys the databases and features utilized, along with the deep learning and classical methods used for automated stuttering categorization. The Bayesian Bi-directional Long Short-Term Memory with Fully Convoluted Classifier model (BaBi-LSTM) is a deep learning model trained on an available stuttering dataset. The tests evaluate the impact of individual signal features on the classification outcomes, including pitch-determining variables, different 2D speech representations, and Mel-Frequency Cepstral Coefficients (MFCCs). The suggested technique proves the most successful, obtaining a 95% F1-measure across the full set of classes. Deep learning algorithms outperform classical methods when detecting stuttering disorders; however, the results differ among stuttering subtypes because of incomplete data and poor annotation quality. The study also examines how the number of dense layers, the size of the training dataset, and the split of data into training and evaluation sets affect stuttering event recognition, offering insights for future improvements.
DOI: https://doi.org/10.54216/JISIoT.170107
Vol. 17 Issue. 1 PP. 89-105, (2025)
Diabetes is a common chronic illness that requires ongoing patient monitoring to diagnose the condition in a timely manner. With the significant advancements of the Internet of Medical Things (IoMT) sector in recent years, it is now feasible to monitor patient information continuously. Many studies have used IoMT and machine learning (ML) techniques to diagnose diabetes, but so far diagnostic accuracy has remained below the required level. Therefore, this study proposes a common framework combining IoMT, cloud, and ML techniques to diagnose diabetes in real time. IoMT devices continuously collect vital information from diabetic patients, such as glucose and insulin levels. This data is transmitted using various communication technologies and stored in the cloud for diagnosis. Finally, to improve diagnostic accuracy, a voting ensemble strategy is proposed that combines predictions from three base ML techniques: Support Vector Machine (SVM), Decision Tree (DT), and Random Forest (RF). The proposed voting model achieved promising results, diagnosing diabetes with an accuracy of up to 98.0% and outperforming the base classifiers in this and previous studies.
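Hard voting itself is simple: each base model predicts a label and the majority wins. The sketch below uses stand-in rule functions in place of trained SVM/DT/RF models, with invented feature names and thresholds, to show only the voting mechanics:

```python
# Hard-voting ensemble sketch: three base "classifiers" vote 0 (healthy)
# or 1 (diabetic); the majority label is the ensemble prediction.
from collections import Counter

def svm_like(x):    return int(x["glucose"] > 140)
def tree_like(x):   return int(x["glucose"] > 130 and x["insulin"] < 50)
def forest_like(x): return int(x["glucose"] + x["insulin"] > 170)

def vote(x):
    labels = [svm_like(x), tree_like(x), forest_like(x)]
    return Counter(labels).most_common(1)[0][0]
```

With real models, `svm_like` et al. become `model.predict(x)` calls; soft voting would instead average predicted probabilities before thresholding.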
DOI: https://doi.org/10.54216/JISIoT.170108
Vol. 17 Issue. 1 PP. 106-117, (2025)
Diagnosing brain tumors from MRI scans is a vital concern in medical imaging and drives the need for fast, accurate deep learning models. This study proposes a Hybrid CNN-ViT Feature Extraction framework that combines the local spatial feature extraction capability of Convolutional Neural Networks (CNNs) with the long-range dependency capture of Vision Transformers (ViTs). The method starts with advanced preprocessing techniques, such as Contrast Limited Adaptive Histogram Equalization (CLAHE) and data augmentation based on Generative Adversarial Networks (GANs), to improve image quality and balance the dataset. In the hybrid model, an EfficientNet CNN backbone is first trained to obtain low- and mid-level spatial features. These feature maps are then converted into patches and fed to a Vision Transformer (ViT) encoder, where self-attention refines global feature representations. The proposed method uses concatenation and an attention-based mechanism for feature fusion, ensuring discriminative classification features from both the CNN and the ViT. Finally, a fully connected layer with a softmax classifier predicts the presence of a tumor and its type. Extensive experiments on benchmark brain MRI datasets show that the Hybrid CNN-ViT model significantly outperforms traditional CNN-based models, achieving higher accuracy, precision, recall, and F1-score. The study demonstrates the successful application of hybrid deep learning techniques for robust and generalizable brain tumor classification. The novelty of this research lies in integrating spatial information with contextual attention to enhance AI-based medical diagnostics.
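The "feature maps into patches" step before the ViT encoder is a deterministic rearrangement: split the map into non-overlapping P×P tiles and flatten each into a token. A minimal pure-Python sketch on a toy 4×4 map (real pipelines do this on tensors with a strided reshape):

```python
# Patch-embedding sketch: split a 2-D feature map into non-overlapping
# p x p patches and flatten each patch into one token vector.
def patchify(fmap, p):
    h, w = len(fmap), len(fmap[0])
    assert h % p == 0 and w % p == 0, "map must tile evenly into patches"
    tokens = []
    for r in range(0, h, p):
        for c in range(0, w, p):
            tokens.append([fmap[r + i][c + j] for i in range(p) for j in range(p)])
    return tokens

fmap = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
tokens = patchify(fmap, 2)   # four tokens, each of length four
```

Each token is then linearly projected and given a positional embedding before entering self-attention.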
DOI: https://doi.org/10.54216/JISIoT.170109
Vol. 17 Issue. 1 PP. 118-128, (2025)
Recent advancements in Remote Sensing (RS) have created challenges in data storage, retrieval, and privacy. Existing Content-Based Image Retrieval (CBIR) systems are useful but often face limitations related to the sensitivity of remote sensing data stored in the cloud, scalability, and security. This article presents SecureRS-CBIR, a privacy-preserving framework for remote sensing image retrieval that combines deep learning with multi-level encryption. The system uses three CNN models (VGG16, ResNet50, and DenseNet121) for feature extraction and implements encryption through image division, texture extraction, subblock shuffling, and color encryption. Experiments on the Aerial Image Dataset show VGG16 achieving 96% validation accuracy, with ResNet50 and DenseNet121 at 95% and 94%, respectively. DenseNet121 excelled at DenseResidential classification (41/42 correct), with minor confusion between the Beach and Desert categories. The framework successfully balances security with retrieval efficiency, maintaining privacy through robust encryption while enabling accurate content-based searches, and provides a scalable solution for secure image retrieval in cloud environments. This work offers a new approach for remote sensing image retrieval by enabling efficient searching in large-scale datasets while addressing privacy concerns in the cloud, thereby contributing to the relevant literature.
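The subblock-shuffling stage can be sketched as a key-seeded permutation of image blocks: the same key regenerates the same permutation, so an authorized party can invert it. This is a generic illustration of the idea on labeled toy blocks, not the paper's full multi-level cipher:

```python
# Sub-block shuffling sketch: permute blocks with a key-seeded PRNG.
# Knowing the key lets the receiver rebuild the permutation and invert it.
import random

def shuffle_blocks(blocks, key):
    perm = list(range(len(blocks)))
    random.Random(key).shuffle(perm)      # deterministic for a given key
    return [blocks[i] for i in perm], perm

def unshuffle_blocks(shuffled, perm):
    restored = [None] * len(shuffled)
    for pos, src in enumerate(perm):      # shuffled[pos] came from index src
        restored[src] = shuffled[pos]
    return restored

blocks = ["b0", "b1", "b2", "b3"]
enc, perm = shuffle_blocks(blocks, key=42)
dec = unshuffle_blocks(enc, perm)         # == blocks
```

On its own a permutation only scrambles layout; that is why the framework layers it with texture extraction and color encryption.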
DOI: https://doi.org/10.54216/JISIoT.170110
Vol. 17 Issue. 1 PP. 129-144, (2025)
As electric power systems develop, stable distribution of output power has become a key issue, and more and more power distribution strategies have been proposed. However, most are single distribution strategies with large errors and low credibility, making it difficult to maintain stable distribution of motor output power in practice. Therefore, exploiting the ability of adaptive virtual impedance to suppress small-signal disturbances in the circuit and the parallel power stability of the virtual synchronous generator control strategy, this research establishes a parallel power model of the virtual synchronous generator, selects changes in voltage and current as the system's measurement criteria, and sets up simulation experiments to determine whether adding adaptive virtual impedance yields a control strategy that distributes output power stably. Results show that the strategy keeps the ratio of active to reactive output power within 2:1, the voltage difference at the output terminal at 0, and the current at 0.8 A, which meets the circulating-current requirements. In summary, the virtual synchronous generator control strategy designed in this research has high accuracy and strong stability. Compared with previous control strategies, this parallel power distribution strategy can ensure stable output power in practice and has promising applications in motor power distribution.
DOI: https://doi.org/10.54216/JISIoT.170111
Vol. 17 Issue. 1 PP. 145-158, (2025)
Owing to emerging disruptive technologies, the Internet of Things (IoT) is essential in smart-living domains, such as elderly and disabled healthcare services, home security and safety monitoring, and automation control services. The IoT can improve the quality of life of inhabitants and of users of smart Ambient Assisted Living (AAL) environments. The sixth-generation (6G) network will enable a fully connected world with terrestrial wireless communications. Blockchain-based approaches offer decentralized privacy and security, yet they incur significant delay, computational, and energy overheads inappropriate for most resource-constrained IoT devices. Hence, this study proposes a Blockchain and IoT-based Assisted Living System (BIoT-ALS) using 6G communication. The nodes in the proposed paradigm use smart contracts to specify norms of interaction while working together to provide storage and computing resources. The approach encourages trust-free interaction and boosts user privacy through the blockchain. This paper also describes the sensor layer, a distributed signal-processing system for vast, physically connected, wirelessly networked, and energy-restricted networks of sensor devices. A comprehensive series of experimental tests shows each sensor type's accuracy and probable usage. The numerical results show that the suggested BIoT-ALS model achieves a performance ratio of 99.1%, accuracy of 98.8%, reliability of 94.8%, efficiency of 93.6%, and throughput of 97.6%, while reducing network delay by 19.2%, latency by 10.2%, and execution time by 20.4% compared with other popular models.
DOI: https://doi.org/10.54216/JISIoT.170112
Vol. 17 Issue. 1 PP. 159-176, (2025)
The proposed research implements a new 3D-block-based alpha-rooting enhancement method, which uses PCA classification for detecting glaucoma. The use of Euclidean distance in current image enhancement methods tends to lose important structural details, resulting in incorrect classification outcomes. The proposed method executes block-matching and grouping operations to locate equivalent 3D patterns before applying adaptive alpha-rooting adjustment, which automatically controls contrast throughout the optic disc and optic cup regions. Following enhancement, an additional polishing stage optimizes the results for classification. The enhanced images are classified using PCA and its wavelet variants to extract important retinal features. The proposed system utilizes both the ACRIMA dataset and real-world hospital images to demonstrate better classification performance than CLAHE-based enhancement, validating its effectiveness. The experimental outcomes demonstrate both high accuracy and reduced time consumption when using biorthogonal DWT with (2D)²-PCA for classification. The proposed method offers a time-effective, hardware-oriented solution for automatic glaucoma detection by combining conventional statistical techniques with deep learning-based classification approaches, and provides clinical facilities with a dependable standard for glaucoma identification and improved diagnosis. The proposed 3D-block-based adaptive alpha-rooting method achieves an overall accuracy of 95.1%; the U-Net model achieves 91.0% accuracy, CNN reaches 90.3%, and RF delivers 87.1%, while SVM provides 86.3%, PCA returns 85.2%, DWT reaches 84.2%, and KNN establishes 81.2%.
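Alpha-rooting itself is a frequency-domain operation: each Fourier magnitude |F(k)| is replaced by |F(k)|^α while the phase is kept, then the spectrum is inverted. The sketch below shows this on a tiny 1-D signal with a naive O(n²) DFT; it illustrates the transform only, not the paper's 3D block-matching or adaptive α selection:

```python
# Alpha-rooting sketch on a 1-D signal: scale Fourier magnitudes to
# |F(k)|**alpha, keep phase, invert. alpha = 1 is the identity.
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def alpha_root(x, alpha):
    X = dft(x)
    Xa = [abs(v) ** alpha * cmath.exp(1j * cmath.phase(v)) if abs(v) > 1e-12 else 0
          for v in X]
    return idft(Xa)
```

For images the same operation runs on the 2-D FFT of each block, with α < 1 typically boosting the relative contrast of weaker frequency components.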
DOI: https://doi.org/10.54216/JISIoT.170113
Vol. 17 Issue. 1 PP. 177-195, (2025)
The transmission of complex medical images in telemedicine applications poses significant challenges. An effective hybrid compressed-sensing and encryption framework is proposed to enable efficient MRI compression and secure transmission in telemedicine applications. First, a fuzzy-logic-based image enhancement is applied. Then an optimized chaotic sequence generation scheme is formulated based on image characteristics to achieve compression robustness and security of the compression process. In addition, the proposed framework uses a lightweight public-key encryption method to speed up encryption and decryption. Our experimental results demonstrate the effectiveness of the proposed system on various metrics, including PSNR, SSIM, correlation coefficient, and processing time. The system consistently achieved high SSIM scores (0.96 to 1.0) and maintained low processing time, validating its efficiency in high-quality reconstruction.
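A chaotic sequence for encryption is classically generated with the logistic map x_{n+1} = r·x_n·(1 − x_n), seeded by a secret key. The sketch below quantizes the sequence to bytes and XORs it with the data; it illustrates the general chaotic-keystream idea with toy parameters, not the paper's image-adaptive optimized scheme or its public-key layer:

```python
# Logistic-map keystream sketch: the key seeds a chaotic sequence, quantized
# to bytes and XORed with the data. XORing again with the same key decrypts.
def keystream(key, length, r=3.99):
    x = key                       # key is the initial condition, 0 < key < 1
    out = []
    for _ in range(length):
        x = r * x * (1 - x)       # logistic map iteration (chaotic for r near 4)
        out.append(int(x * 256) % 256)
    return out

def xor_bytes(data, key):
    ks = keystream(key, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

plain = b"MRI slice"
cipher = xor_bytes(plain, key=0.3141)
restored = xor_bytes(cipher, key=0.3141)   # same key recovers the plaintext
```

The extreme sensitivity of chaotic maps to the initial condition is what makes the keystream hard to reproduce without the exact key.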
DOI: https://doi.org/10.54216/JISIoT.170114
Vol. 17 Issue. 1 PP. 196-207, (2025)
The Internet of Vehicles (IoV) is the latest application of VANETs and the fusion of the Internet and the IoT. With advancing technology, people envision a traffic environment in which vehicles cooperate closely with their surroundings, including other vehicles. The IoV was created so that vehicles can communicate with each other in an infrastructure environment. The prerequisite is a safer trip in an IoV environment with the least delay and a high packet delivery rate, guaranteeing that all information is received with negligible delay to avoid accidents. This paper presents a new position-based routing algorithm called Position-Based Connectivity Aware Routing (PBCAR) for IoV that covers both sparse and dense regions of vehicles. It takes advantage of the Internet and the street layout to improve routing performance in IoV. The PBCAR algorithm uses a real-time GPS tracking system to gather traffic information and form position-based paths from the source node to the destination node. PBCAR has been simulated using SUMO and Network Simulator and compared with AODV and GPSR. The results show that PBCAR achieves excellent results across the several simulation parameters considered.
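The core of position-based forwarding is greedy next-hop selection: among current neighbours, forward to the one geographically closest to the destination. The sketch below shows that baseline (shared with GPSR); PBCAR additionally weighs street-level connectivity, which is omitted here:

```python
# Greedy position-based forwarding sketch: pick the neighbour closest to
# the destination, or report a local maximum when no neighbour improves.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def next_hop(current, neighbours, dest):
    """Return the neighbour that makes progress toward dest, else None."""
    best = min(neighbours, key=lambda n: dist(n, dest), default=None)
    if best is not None and dist(best, dest) < dist(current, dest):
        return best
    return None   # local maximum: recovery/connectivity-aware logic needed

hop = next_hop((0, 0), [(1, 1), (3, 0), (0, 2)], dest=(10, 0))  # -> (3, 0)
```

The `None` case is exactly where connectivity-aware schemes depart from pure greedy forwarding, routing around sparse regions instead of stalling.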
DOI: https://doi.org/10.54216/JISIoT.170115
Vol. 17 Issue. 1 PP. 208-218, (2025)
Intrusion Detection Systems (IDS) remain crucial for network security, yet high-dimensional data and class imbalance reduce their effectiveness. Machine-learning-based IDS models built on traditional approaches also struggle to explain their predictions. This study enhances an IDS framework with explainable AI (XAI) methods to improve the system's transparency. Data processing includes KNN imputation combined with K-Means SMOTE to handle missing values and class imbalance. For feature selection, the model uses a merged methodology combining Pearson correlation with mutual information, optimized by the Sequential Forward Floating Selection (SFFS) algorithm. A Light Gradient Boosting Machine (LGBM) serves as the classification model and produces higher accuracy than competing methods: 90.71% on UNSW-NB 15 and 96.98% on CICIDS-2017. Through SHAP-based explainability, the system provides global and local model interpretations that enable users to trust IDS predictions. The experimental findings validate that the proposed methodology simplifies the system while improving classification performance and delivering stronger interpretability, addressing weaknesses of current IDS technologies. The study presents important findings for the development of transparent, secure network protection technologies.
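The Pearson-correlation part of the feature-selection stage can be sketched as a redundancy filter: compute pairwise correlations between feature columns and drop one feature from any pair above a threshold. This shows only that first filter on toy columns, not the full merged pipeline with mutual information and SFFS:

```python
# Pearson-correlation redundancy filter sketch: keep a feature only if it
# is not highly correlated with any already-kept feature.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_filter(columns, thresh=0.95):
    keep = []
    for name, col in columns.items():
        if all(abs(pearson(col, columns[k])) < thresh for k in keep):
            keep.append(name)
    return keep

# f2 is an exact multiple of f1, so it is filtered out as redundant.
cols = {"f1": [1, 2, 3, 4], "f2": [2, 4, 6, 8], "f3": [4, 1, 3, 2]}
selected = correlation_filter(cols)
```

Mutual information would then catch nonlinear redundancy that Pearson misses, and SFFS searches over the surviving subset.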
DOI: https://doi.org/10.54216/JISIoT.170116
Vol. 17 Issue. 1 PP. 219-238, (2025)
Skin image segmentation is a vital task in medical image analysis, specifically in dermatology, since it enables the detection of skin diseases and the assessment of treatment effectiveness. Segmenting skin lesions from photographs is a crucial step towards this objective. Nevertheless, segmenting skin lesions is difficult due to the existence of both artificial and natural deviations, inherent characteristics (such as the shape of the lesion), and variations in the circumstances under which the images are obtained. In recent years, researchers have been investigating the feasibility of utilizing deep learning models for skin lesion segmentation. Deep learning methodologies have exhibited encouraging outcomes in various image segmentation tasks, presenting the possibility of automating and enhancing the precision of skin segmentation. This paper introduces a new hybrid method, named the CBi-BERT framework, aimed at improving the results and architectures of medical image segmentation and patch detection tasks. The architecture employs Convolutional Neural Networks (CNNs) for feature extraction, Bidirectional LSTM-based encoders to process sequence information, and BERT-based attention over the strongest features. Image normalization, resizing, and data augmentation are applied in the proposed method to deal with major challenges in medical imaging, such as variation in image quality, noise, and bias between benign and malignant features. We evaluate CBi-BERT against benchmark datasets and state-of-the-art models, showing that it outperforms them on all relevant metrics, including Intersection over Union (IoU), recall, mean average precision (mAP), and the Dice coefficient.
Specifically, for images sized 256×256 the model achieved IoU = 0.90, recall = 0.92, mAP = 0.89, and Dice coefficient = 0.91, outperforming advanced state-of-the-art models such as ResNet50, VGG16, UNet, and EfficientNet-B1. Our results show that the framework can detect and segment important structures in medical images with high precision, making it a compelling tool for AI-based healthcare solutions.
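The two overlap metrics reported above have standard definitions on binary masks: IoU = |P∩T| / |P∪T| and Dice = 2|P∩T| / (|P|+|T|). A minimal sketch on flat toy masks:

```python
# IoU and Dice on flat binary masks (1 = lesion pixel, 0 = background).
def iou(pred, true):
    inter = sum(p & t for p, t in zip(pred, true))
    union = sum(p | t for p, t in zip(pred, true))
    return inter / union if union else 1.0   # empty masks count as perfect

def dice(pred, true):
    inter = sum(p & t for p, t in zip(pred, true))
    total = sum(pred) + sum(true)
    return 2 * inter / total if total else 1.0

pred = [1, 1, 0, 1, 0]
true = [1, 0, 0, 1, 1]
# Here iou(pred, true) = 2/4 = 0.5 and dice(pred, true) = 4/6 ≈ 0.667.
```

Dice is always at least as large as IoU on the same masks, which is worth remembering when comparing papers that report only one of the two.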
DOI: https://doi.org/10.54216/JISIoT.170117
Vol. 17 Issue. 1 PP. 239-254, (2025)