In this paper, we present a novel methodology to improve the performance of data-collection operations in wireless sensor networks by applying software-defined networking technology on the SDN-WISE platform. The controller selects the grouped nodes by adjusting the edge weights used by Dijkstra's algorithm, and the group to which a node belongs is determined from the paths the algorithm chooses. The SDN-WISE platform supports reading the payload of a packet and not just its header, allows the handling of one packet to depend on another, and offers the flexibility to modify routing tables so as to install the rules required by the proposed aggregation algorithm. The results show a significant reduction in energy consumption after applying the proposed algorithm.
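The abstract does not give the controller's weight-adjustment rule, but the path-selection step it describes is standard Dijkstra over controller-assigned link costs. A minimal sketch, assuming a hypothetical toy topology and energy-aware weights (both illustrative, not from the paper):

```python
import heapq

def dijkstra(graph, source):
    """Shortest paths from `source` over weighted edges.

    graph: {node: [(neighbor, weight), ...]} -- the weights stand in for
    the controller-adjusted link costs (hypothetical values here).
    Returns (dist, parent); reading a node's upstream chain from `parent`
    yields the path, and hence the aggregation group, chosen for it.
    """
    dist = {source: 0.0}
    parent = {source: None}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, parent

# Toy topology: node 0 is the sink; lower weight = preferred link.
topo = {
    0: [(1, 1.0), (2, 4.0)],
    1: [(0, 1.0), (2, 1.5), (3, 3.0)],
    2: [(0, 4.0), (1, 1.5), (3, 1.0)],
    3: [(1, 3.0), (2, 1.0)],
}
dist, parent = dijkstra(topo, 0)
# Nodes sharing an upstream path toward the sink fall into one group.
```

Raising the weight of a link steers the algorithm away from it, so the controller can reshape the groups simply by editing `topo` and recomputing.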
DOI: https://doi.org/10.54216/JISIoT.120201
Vol. 12 Issue. 2 PP. 08-18, (2024)
This study leverages sophisticated machine learning methodologies, particularly XGBoost, to analyze cardiovascular diseases through cardiac datasets. The methodology encompasses meticulous data pre-processing, training of the XGBoost algorithm, and performance evaluation using metrics such as accuracy, precision, and ROC curves. This technique represents a notable advance in medical research, potentially leading to enhanced diagnostic precision and a deeper comprehension of cardiovascular ailments, thereby improving patient care and treatment modalities in cardiology. Furthermore, the research explores deep learning methodologies for the automated delineation of cardiac structures in MRI and mammography images, aiming to boost diagnostic precision and patient management. [24][3][5][6] In assessing the efficacy of machine learning algorithms in diagnosing cardiovascular diseases, this analysis underscores their pivotal role and their possible data inputs, and it investigates promising directions for future exploration, such as the application of reinforcement learning. A significant aspect of our investigation is the development and deployment of sophisticated deep learning models for segmenting right-ventricular images from cardiac MRI scans, aimed at heightened accuracy and dependability in diagnostics. Using advanced techniques such as the Fourier Convolutional Neural Network (FCNN) and improved versions of the Vanilla Convolutional Neural Network (Vanilla-CNN) and Residual Network (ResNet), we achieved a substantial improvement in accuracy and reliability. This enhancement allows more precise and quicker identification and diagnosis of cardiovascular diseases, which is of utmost importance in clinical practice. Evaluation of machine learning algorithms: we conducted a comprehensive evaluation of machine learning algorithms in the context of cardiovascular disease diagnosis.
This assessment emphasized the fundamental role of machine learning algorithms and their potential data sources. We also explored promising avenues for future research, such as reinforcement learning. Factors affecting predictive models: we highlighted the critical factors affecting the effectiveness of machine learning-based predictive models, including the heterogeneity, depth, and breadth of the data, the nature of the modeling task, and the choice of algorithms and feature-selection methods. Recognizing and addressing these factors is essential for building reliable models.
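The train-evaluate workflow described above (pre-processing, fitting a boosted-tree classifier, then scoring accuracy, precision, and ROC-AUC) can be sketched as follows. The data here is synthetic and the model is scikit-learn's gradient-boosted trees standing in for XGBoost, which exposes the same `fit`/`predict` interface via `xgboost.XGBClassifier`; none of this reproduces the paper's actual dataset or results.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, precision_score, roc_auc_score

# Synthetic stand-in for a cardiac dataset (features are hypothetical).
X, y = make_classification(n_samples=600, n_features=13, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

# Gradient-boosted trees; swap in xgboost.XGBClassifier for a true
# XGBoost model -- the evaluation code below is unchanged.
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]  # scores for the ROC curve

acc = accuracy_score(y_te, pred)
prec = precision_score(y_te, pred)
auc = roc_auc_score(y_te, proba)  # area under the ROC curve
```

Reporting all three metrics matters because accuracy alone can mask poor performance on the minority (diseased) class.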
DOI: https://doi.org/10.54216/JISIoT.120202
Vol. 12 Issue. 2 PP. 19-33, (2024)
Anemia, generally defined as a deficiency of hemoglobin or red blood cells in the blood, is a significant global health concern for populations in underdeveloped as well as developing nations, especially for children and young women in rural areas. This paper proposes a quantitative approach to anemia detection that uses regression analysis to predict the hemoglobin level in the blood. To achieve this, an image dataset of microscopic blood samples was collected from 70 individuals. Data collection requires a proper procedure, as it plays a vital part in system implementation. Statistical features, namely the mean pixel intensity values of the red, green, and blue color planes of the images, are given as input to the regression model. The proposed system employs a multiple-regression model, trained with a machine learning approach using both three and four regression coefficients, to relate the features obtained from the blood samples to the hemoglobin level and thereby detect anemia in an individual. Performance analysis shows promising results, with a coefficient of determination (R2) of 0.923 and a root mean square error (RMSE) of 1.682. Overall, this paper presents a valuable system for anemia detection based on hemoglobin estimation that can be deployed in areas with limited medical resources, offering another supportive technological solution for current healthcare problems.
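The four-coefficient variant described above (an intercept plus one weight per color plane) is ordinary least-squares multiple regression, and the R2/RMSE evaluation follows directly. A minimal sketch with fabricated, purely illustrative intensity and hemoglobin values, not the paper's 70-subject dataset:

```python
import numpy as np

# Hypothetical data: mean R, G, B pixel intensities per blood-smear
# image, and the measured hemoglobin level (g/dL). Values are made up.
X = np.array([[120.0, 80.0, 60.0],
              [150.0, 95.0, 70.0],
              [100.0, 70.0, 55.0],
              [135.0, 88.0, 66.0],
              [110.0, 75.0, 58.0],
              [160.0, 100.0, 74.0]])
y = np.array([10.2, 13.1, 8.9, 11.8, 9.6, 13.9])

# Four-coefficient model: hemoglobin ≈ b0 + b1*R + b2*G + b3*B,
# fitted by ordinary least squares (prepend a column of ones for b0).
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ coef
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                 # coefficient of determination
rmse = np.sqrt(np.mean((y - pred) ** 2))   # root mean square error
```

The three-coefficient variant simply drops the intercept column; in practice both metrics should be reported on held-out samples rather than the training fit shown here.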
DOI: https://doi.org/10.54216/JISIoT.120203
Vol. 12 Issue. 2 PP. 34-43, (2024)
With the development and advancement of ICST, data-driven technologies such as the Internet of Things (IoT) and smart technologies, including Smart Energy Management Systems (SEMS), have become a trend in many regions around the globe. There is no doubt that data quality, and data quality problems, are among the most vital topics to be addressed for a successful application of IoT-based SEMS. Poor data in such major yet delicate systems will affect the quality of life (QoL) of millions and can even cause disruption and destruction across a country. This paper aims to tackle this problem by searching for suitable outlier detection techniques among the many ML-based outlier detection methods that have been developed. Three methods are chosen and analyzed for their performance: K-Nearest Neighbour (KNN) + Mahalanobis Distance (MD), Minimum Covariance Determinant (MCD), and Local Outlier Factor (LOF). Three sensor-collected datasets related to SEMS, each with different data types, are used in this research; they are pre-processed and split into training and testing sets with manually injected outliers. The training sets are used to learn the patterns of the data, and the trained models are then evaluated on the testing sets, using the learned patterns to identify and label the outliers. All three models identify the outliers accurately, with average accuracies above 95%. However, the average execution time varies: the KNN+MD model is the slowest at 12.99 seconds, MCD takes 3.98 seconds, and the LOF model is the fastest of the three at 0.60 seconds.
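Two of the three detectors compared above, MCD and LOF, are available directly in scikit-learn (`EllipticEnvelope` fits an MCD covariance estimate; the KNN+Mahalanobis combination is omitted here). A miniature version of the paper's protocol, using synthetic sensor readings with manually injected outliers rather than the actual SEMS datasets:

```python
import time
import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
# Synthetic sensor stream: 300 normal readings plus 6 injected outliers.
inliers = rng.normal(loc=25.0, scale=1.0, size=(300, 2))
injected = rng.uniform(low=40.0, high=60.0, size=(6, 2))
X = np.vstack([inliers, injected])
truth = np.r_[np.ones(300), -np.ones(6)]  # 1 = inlier, -1 = outlier

results, timings = {}, {}
for name, model in [("MCD", EllipticEnvelope(contamination=0.02)),
                    ("LOF", LocalOutlierFactor(contamination=0.02))]:
    t0 = time.perf_counter()
    labels = model.fit_predict(X)  # -1 flags a detected outlier
    timings[name] = time.perf_counter() - t0
    results[name] = np.mean(labels == truth)  # labeling accuracy
```

Timing each `fit_predict` call the same way the paper times its models makes the accuracy/speed trade-off between detectors directly comparable.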
DOI: https://doi.org/10.54216/JISIoT.120204
Vol. 12 Issue. 2 PP. 44-64, (2024)
Waste management has been an issue in every country due to low public awareness, leading to major environmental contamination, tragic accidents, and unfavorable working conditions for landfill workers. The lack of precise and efficient object detection can be a barrier to the growth of computer-vision-based systems. As recent research articles suggest, pre-trained models can be used for real-time trash-bin detection and for recommending appropriate actions after detection. Using a unique validation dataset made up of expected trash items, two established classes of object-detection models, YOLO (You Only Look Once) and SSD (Single Shot Multibox Detector), are then contrasted. Based on several performance metrics computed using multiple open-source research projects, it is concluded that SSD performs noticeably better than YOLO in identifying trash objects. The model, pre-trained on Microsoft's COCO (Common Objects in Context) dataset, is then configured to recognize several trash object types. Our initiative aims to enhance sustainable waste management, make trash sorting simple, and guard against serious illnesses and accidents at landfill and garbage-disposal sites.
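Detection comparisons like the YOLO-versus-SSD study above normally rest on intersection-over-union (IoU) between predicted and ground-truth boxes, the quantity underlying precision/recall and mAP. A minimal sketch, assuming the common `[x1, y1, x2, y2]` corner format (the paper's exact metrics and box format are not stated):

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes in [x1, y1, x2, y2] form."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)  # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A predicted trash-bin box vs. its ground-truth annotation; a detection
# conventionally counts as a true positive when IoU >= 0.5.
score = iou([0, 0, 10, 10], [5, 5, 15, 15])
```

Sweeping the IoU threshold and the confidence threshold over a validation set is what produces the per-model precision/recall curves used to rank detectors such as YOLO and SSD.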
DOI: https://doi.org/10.54216/JISIoT.120205
Vol. 12 Issue. 2 PP. 65-74, (2024)
The seamless integration of computing technology into everyday items and environments is known as pervasive computing. Robust security mechanisms are necessary to protect it against cyber threats and vulnerabilities. Conventional security measures, including gateways and encryption, may not be sufficient to address the unique challenges encountered in ubiquitous computing systems, although these techniques remain vital. These challenges include the variety of devices, resource limitations, mobility needs, and the possibility of large-scale distributed attacks. Network virtualization, which abstracts and isolates network resources and functions, is a promising way to increase security in pervasive computing deployments. Wireless communication plays a significant part in the development of a digital infrastructure that is both resilient and trustworthy. Virtualization enables dynamic resource allocation, isolation, and management of network bandwidth, which leads to the proposal of the Secure Wireless Virtual Resource Allocation and Authentication Algorithm (SWVRA3) to abstract the network's physical resources into virtualized entities. Through network virtualization, pervasive computing applications and services can be secured with logically segregated virtual networks, and this separation reduces cross-contamination and security breaches. Furthermore, network virtualization allows flexible configuration, dynamic allocation of resources, and centralized virtual control, improving threat incident response, policy enforcement, and security surveillance.
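The abstract does not specify SWVRA3 itself, but the dynamic-allocation-with-isolation idea it builds on can be illustrated with a toy slice allocator: each application receives a logically separate bandwidth reservation from a shared physical pool, and requests that would overcommit the pool are rejected. Everything here (class name, slice names, capacities) is hypothetical:

```python
class SliceAllocator:
    """Toy bandwidth allocator for isolated virtual network slices.

    A hypothetical illustration of dynamic allocation with isolation;
    it does not implement the paper's SWVRA3 algorithm, whose details
    the abstract does not give.
    """

    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.slices = {}  # slice name -> reserved bandwidth (Mbps)

    def allocate(self, name, mbps):
        used = sum(self.slices.values())
        if mbps <= 0 or used + mbps > self.capacity:
            return False  # reject: would overcommit the physical pool
        self.slices[name] = self.slices.get(name, 0) + mbps
        return True

    def release(self, name):
        return self.slices.pop(name, 0)

net = SliceAllocator(capacity_mbps=100)
ok1 = net.allocate("healthcare-app", 60)
ok2 = net.allocate("sensor-telemetry", 30)
ok3 = net.allocate("guest", 20)  # only 10 Mbps remain -> rejected
```

Refusing the overcommitting request is what preserves the per-slice guarantees; a real system would add the authentication step that SWVRA3's name implies before any allocation is granted.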
DOI: https://doi.org/10.54216/JISIoT.120206
Vol. 12 Issue. 2 PP. 75-88, (2024)