Open Access February 06, 2026

Predictive Modeling of Public Sentiment Using Social Media Data and Natural Language Processing Techniques

Abstract
Social media platforms like X (formerly Twitter) generate vast volumes of user-generated content that provide real-time insights into public sentiment. Despite the widespread use of traditional machine learning methods, their limitations in capturing contextual nuances in noisy social media text remain a challenge. This study leverages the Sentiment140 dataset, comprising 1.6 million labeled tweets, and develops predictive models for binary sentiment classification using Naive Bayes, Logistic Regression, and the transformer-based BERT model. Experiments were conducted on a balanced subset of 12,000 tweets after comprehensive NLP preprocessing. Evaluation using accuracy, F1-score, and confusion matrices revealed that BERT significantly outperforms traditional models, achieving an accuracy of 89.5% and an F1-score of 0.89 by effectively modeling contextual and semantic nuances. In contrast, Naive Bayes and Logistic Regression demonstrated reasonable but consistently lower performance. To support practical deployment, we introduce SentiFeel, an interactive tool enabling real-time sentiment analysis. While resource constraints limited the dataset size and training epochs, future work will explore full corpus utilization and the inclusion of neutral sentiment classes. These findings underscore the potential of transformer models for enhanced public opinion monitoring, marketing analytics, and policy forecasting.
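To illustrate the kind of bag-of-words baseline the abstract compares against BERT, the sketch below trains a tiny Naive Bayes sentiment classifier with Laplace smoothing. The toy tweets and labels are invented for illustration and are not drawn from Sentiment140; a real run would also apply the NLP preprocessing the study describes.

```python
import math
from collections import Counter

def train_nb(docs):
    """Train a bag-of-words Naive Bayes classifier.
    docs: list of (text, label) pairs with label in {"pos", "neg"}."""
    word_counts = {"pos": Counter(), "neg": Counter()}
    class_counts = Counter()
    for text, label in docs:
        class_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = {w for c in word_counts.values() for w in c}
    return word_counts, class_counts, vocab

def predict_nb(model, text):
    """Pick the class with the highest log-posterior under Laplace smoothing."""
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label in class_counts:
        lp = math.log(class_counts[label] / total)          # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Invented toy training set standing in for labeled tweets.
train = [("love this great movie", "pos"),
         ("what a great day", "pos"),
         ("terrible awful service", "neg"),
         ("awful boring film", "neg")]
model = train_nb(train)
print(predict_nb(model, "great film"))   # classifies an unseen phrase
```

A transformer such as BERT replaces these independent word counts with contextual token embeddings, which is what the reported accuracy gap reflects.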
Article
Open Access November 29, 2022

The Application of Machine Learning in the Corona Era, With an Emphasis on Economic Concepts and Sustainable Development Goals

Abstract
The aim of this article is to examine the impacts of Coronavirus Disease 2019 (COVID-19) vaccines on economic conditions and the Sustainable Development Goals; in other words, we study economic conditions during the COVID-19 pandemic. We examine the economic costs of the pandemic; the benefits of vaccination in terms of gross domestic product (GDP), public finances, and employment; investment in vaccines around the world; vaccination progress; the overall economic impact of vaccines; and the influence of emerging markets (EM) on achieving the Sustainable Development Goals (SDGs), including no poverty, good health and well-being, zero hunger, and reduced inequality. The importance of emerging economies in reducing the harmful effects of the pandemic is also noted. As a case study, we forecast daily new death cases in Iran from February 2020 to August 2021 using an Artificial Neural Network (ANN) with the Beetle Antennae Search (BAS) algorithm, alongside econometric models and regression analysis. The findings show that COVID-19 has had devastating economic and health effects on the world and that vaccines can be very helpful in eliminating these effects, especially in the long term. We observed inequality in the distribution of vaccines between rich and poor countries, a gap that emerging markets can help narrow. The results show that both approaches (artificial intelligence and econometric models) yield broadly similar results, but AI-based optimization makes the model and its predictions more robust. The main contribution of this article is that we survey the impacts of vaccination from a socio-economic viewpoint rather than merely reporting facts: we examine the impacts of vaccines on the Sustainable Development Goals and the role of emerging markets in achieving them. In addition to the theoretical framework, we also present quantitative and empirical results that have rarely appeared in other articles.
Article
Open Access August 31, 2022

Extended Rule of Five and Prediction of Biological Activity of Peptidic HIV-1-PR Inhibitors

Abstract
In this research work, we applied Lipinski's Rule of Five (RO5) to a pharmacokinetics (PK) study and to predict the activity of peptidic HIV-1 protease inhibitors (HIV-1-PRIs). The inhibitors were taken from the literature together with their observed biological activities (OBAs) in terms of IC50; the logarithm of the inverse of IC50 (log 1/C) was used as the biological endpoint. For the calculation of physicochemical parameters, molecular modeling and geometry optimization of all the derivatives were carried out with CAChe Pro software using the semiempirical PM3 method. Prediction of the biological activity of the inhibitors showed that the best QSAR model is constructed from pharmacokinetic properties, molecular weight, and hydrogen-bond acceptor count, which also confirms that these properties play an important role in describing the PK behavior of the drugs. On the basis of the derived models, one can build a theoretical basis for assessing the biological activity of compounds of the same series.
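A minimal sketch of the QSAR step the abstract describes: fitting log(1/C) against molecular weight and hydrogen-bond acceptor count by ordinary least squares. The descriptor values and activities below are invented placeholders, not data from the paper.

```python
import numpy as np

# Hypothetical descriptor table: molecular weight (MW) and hydrogen-bond
# acceptor count (HBA) vs. activity log(1/C); all values are illustrative.
X = np.array([[520.0, 6], [548.1, 7], [601.3, 8], [575.2, 7], [630.4, 9]])
y = np.array([7.1, 7.4, 8.0, 7.6, 8.3])

# Fit log(1/C) = b0 + b1*MW + b2*HBA by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Goodness of fit on the training data.
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"coefficients: {coef}, r^2 = {r2:.3f}")
```

The study's actual models were built in CAChe's Project Leader program; this just shows the underlying regression.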
Article
Open Access August 21, 2021

Global Analysis of Potential COVID-19 Transmission and Enabling Factors

Abstract
Background: Coronavirus disease has caused global turmoil and has had a huge impact on human life all over the world. Current reports state that more than 3 million people have lost their lives and more than 160 million people are suspected to have been infected with SARS-CoV-2. Transmission and disease incidence rates are indicators of the seriousness of the COVID-19 pandemic, and studies that help to understand the contributing factors are vital to curbing the disease. Methods: The study performs correlation and multiple linear regression analysis on the variables population density, temperature, relative humidity, and active time of the virus to identify the parameters that predict the cases reported per million population in 83 countries. Results: The analysis indicates that the active time of the virus in days is strongly positively associated with COVID-19 cases across the countries (r = .604, p < .01). Active time of the virus shows a strong negative correlation with temperature (r = -.930, p < .01), indicating that a rise in temperature reduces virus activity in the population. Together, these variables account for 36.2% of the variance in cases per million population, with no single factor yielding a significant prediction. Conclusion: The study outcomes clearly show that population density alone is insufficient to estimate the extent of influence on COVID-19 cases, since the number of persons living per sq. km of land is a dynamic quantity that fluctuates over time and space due to migration. In conjunction with previous studies on the environmental and climatic factors influencing reported cases, population dynamics does not show much significance for disease spread and incidence.
Contribution: The rise in confirmed cases and the high incidence rates reported in many countries can be attributed to the active time of the virus, as a positive correlation was observed between reported COVID-19 cases and virus active time in the examined countries. Environmental and climatic factors also play a role in modulating infection and transmission rates, with population density having a less significant influence on COVID-19.
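The correlation analysis the abstract reports can be sketched with a plain Pearson coefficient. The toy values below (virus active time vs. mean temperature) are invented to mimic the strong negative association reported (r = -.930); they are not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation: covariance divided by the product of std devs."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented toy values: virus active time (days) vs. mean temperature (deg C).
active_days = [120, 95, 150, 80, 60, 140]
temperature = [8.0, 15.0, 5.0, 20.0, 27.0, 6.0]
print(round(pearson_r(active_days, temperature), 3))
```

The multiple linear regression step would then regress cases per million on all four variables jointly, yielding the 36.2% explained variance the abstract cites.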
Article
Open Access May 20, 2021

Bioconcentration Factor of Polychlorinated Biphenyls and Its Correlation with UV- and IR-Spectroscopic Data: A DFT-Based Study

Abstract
Polychlorinated biphenyls (PCBs) are an important class of persistent organic pollutants that were used as a component of paints (especially in printing), as plasticizers, as insulating materials in transformers and capacitors, as heat-transfer fluids, and as additives in the hydraulic fluids of vacuum and turbine pumps. There is an ongoing need for reliable procedures for predicting the bioconcentration potential of chemicals from knowledge of their molecular structure or from readily measurable properties. Hence, correlation and prediction of bioconcentration factors (BCFs) based on λmax and the vibrational frequencies of various bonds, viz. υ(C-H) and υ(C=C), of biphenyl and its fifty-seven derivatives have been carried out. Molecular modeling and geometry optimization of the PCBs were performed in the workspace program of Fujitsu's CAChe Pro 5.04 software using the DFT method. UV-visible spectra for each compound arise from electronic transitions between molecular orbitals as electromagnetic radiation in the visible and ultraviolet region is absorbed by the molecule; the energies of the excited electronic states were computed quantum mechanically. IR spectra arise from coordinated motions of the atoms as radiation in the infrared region is absorbed; the force necessary to distort the molecule from its equilibrium geometry was computed quantum mechanically, and the frequencies of the vibrational transitions were thereby predicted. The Project Leader program associated with CAChe was used for multiple linear regression (MLR) analysis, with the above spectroscopic data as independent variables and the BCFs of the PCBs as dependent variables. The reliability of the correlations and the predictive ability of the MLR models were judged by the R², R²adj, se, q²LOO, and F values.
This study clearly showed that UV and IR spectroscopic data can be used to predict the BCFs of a large number of related compounds within a limited time and without difficulty.
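The leave-one-out cross-validated q² the abstract uses as a model criterion can be sketched as follows. The descriptor values (λmax and a C-H stretch frequency) and log BCF values are invented placeholders, not the PCB data from the study.

```python
import numpy as np

def loo_q2(X, y):
    """Leave-one-out cross-validated q^2 for an OLS model with intercept:
    q^2 = 1 - PRESS / total sum of squares."""
    n = len(y)
    A = np.column_stack([np.ones(n), X])
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i                       # drop one observation
        coef, *_ = np.linalg.lstsq(A[mask], y[mask], rcond=None)
        press += (y[i] - A[i] @ coef) ** 2             # error on held-out point
    return 1 - press / np.sum((y - y.mean()) ** 2)

# Invented toy data: lambda_max (nm) and C-H stretch (cm^-1) vs. log BCF.
X = np.array([[250.0, 3050.0], [262.0, 3042.0], [271.0, 3047.0],
              [283.0, 3035.0], [290.0, 3044.0], [301.0, 3031.0]])
y = np.array([4.1, 4.5, 4.8, 5.2, 5.4, 5.8])
q2 = loo_q2(X, y)
print(f"q2_LOO = {q2:.3f}")
```

Unlike the training-set R², q² penalizes models that do not generalize, which is why QSAR work reports both.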
Editorial Article
Open Access September 28, 2025

Gut-Brain Axis in Autism Spectrum Disorder: A Bibliometric and Microbial-Metabolite-Neural Pathway Analysis

Abstract
The gut-brain axis (GBA) has emerged as a central focus in the study of neurodevelopmental disorders, particularly autism spectrum disorder (ASD). Research suggests that microbial composition and its metabolic byproducts influence neural development, synaptic plasticity, and behavior [1,2,3]. A structured bibliometric analysis of Scopus and Web of Science records was performed using Bibliometrix and VOSviewer to trace trends and thematic evolution of GBA–ASD literature [7,8]. In parallel, a data-driven pathway modeling approach maps microbial metabolites (e.g., short-chain fatty acids, tryptophan catabolites) to host signaling pathways including vagal stimulation, immune cytokine modulation, and blood–brain barrier (BBB) permeability [4,5]. Simulations implemented in Python’s NetworkX illustrate how perturbations in metabolite flux may influence CNS outcomes. The findings reveal growing emphasis on butyrate, serotonin, microglial priming, and maternal immune activation in ASD-related GBA studies, and highlight the need for rigorous empirical validation of computational predictions [9,10,11].
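The study's pathway simulations use Python's NetworkX; the dependency-free sketch below illustrates the same idea with a plain adjacency dict of signed, weighted edges. The nodes, edges, and weights are illustrative assumptions, not values from the paper.

```python
# Signed influence network: metabolite -> host pathway -> CNS outcome.
# Positive weights mean "promotes", negative mean "suppresses"; all invented.
edges = {
    "butyrate":          [("vagal_tone", +0.6), ("BBB_integrity", +0.4)],
    "LPS":               [("cytokines", +0.8), ("BBB_integrity", -0.5)],
    "cytokines":         [("microglial_priming", +0.7)],
    "vagal_tone":        [("CNS_outcome", +0.5)],
    "BBB_integrity":     [("CNS_outcome", +0.6)],
    "microglial_priming": [("CNS_outcome", -0.6)],
}

def propagate(seed):
    """Push a perturbation through the acyclic signed network, multiplying
    edge weights along each path and summing contributions at every node."""
    effect = dict(seed)
    frontier = list(seed.items())
    while frontier:
        node, value = frontier.pop()
        for nbr, w in edges.get(node, []):
            delta = value * w
            effect[nbr] = effect.get(nbr, 0.0) + delta
            frontier.append((nbr, delta))
    return effect

# A drop in butyrate flux (-1.0) propagates to a net negative CNS effect.
res = propagate({"butyrate": -1.0})
print(res["CNS_outcome"])
```

NetworkX would represent the same structure as a weighted `DiGraph` and allow richer queries (shortest paths, centrality) over the metabolite-pathway map.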
Brief Report
Open Access June 28, 2025

Development of a Hemodialysis Data Collection and Clinical Information System and Establishment of an Intradialytic Blood Pressure/Pulse Rate Predictive Model

Abstract
This research is a cross-disciplinary collaboration in the field of the Artificial Intelligence of Things (AIoT) within the medical informatics domain, involving a university team, a partnering corporation, and a hemodialysis clinic. The research has two objectives: (1) the development of an Internet of Things (IoT)-based information system customized for the hemodialysis machines at the clinic, including transmission bridges, a dedicated web/app for clinical personnel, and a backend server; the system has been deployed at the clinic and is now officially operational; (2) the use of de-identified, anonymous data (collected by the operational system) to train, evaluate, and compare deep-learning-based intradialytic blood pressure (BP)/pulse rate (PR) predictive models, with subsequent suggestions provided. Both objectives were executed under the supervision of the Institutional Review Board (IRB) at Mackay Memorial Hospital in Taiwan. The system completed for objective one introduced three significant services to the clinic: automated hemodialysis data collection, digitized data storage, and an information-rich human-machine interface with graphical data displays. It replaces traditional paper-based clinical administrative operations, thereby enhancing healthcare efficiency. The graphical data presented through the web and app interfaces aids real-time, intuitive comprehension of patients' conditions during hemodialysis. Moreover, the data stored in the backend database is available for physicians to conduct relevant analyses, unearth insights into medical practice, and provide precise care for individual patients.
The training and evaluation of the predictive models for objective two, along with related comparisons, analyses, and recommendations, suggest that in situations with limited computational resources and data, an Artificial Neural Network (ANN) model with six hidden layers, SELU activation function, and a focus on artery-related features can be employed for hourly intradialytic BP/PR prediction tasks. It is believed that this contributes to the collaborating clinic and relevant research communities.
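The recommended architecture (an ANN with six hidden layers and SELU activations) can be sketched as a forward pass in NumPy. The layer widths, the random weights, and the three-value output [systolic BP, diastolic BP, PR] are placeholders, not the clinic's trained model.

```python
import numpy as np

# Standard SELU constants (Klambauer et al.'s self-normalizing networks).
ALPHA, SCALE = 1.6732632423543772, 1.0507009873554805

def selu(x):
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1))

def forward(x, layers):
    """Apply each layer: SELU on hidden layers, linear on the output layer."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = selu(x)
    return x

rng = np.random.default_rng(0)
sizes = [12, 64, 64, 64, 32, 32, 16, 3]   # input, six hidden layers, output
layers = [(rng.normal(0, 1 / np.sqrt(m), (m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]
y_hat = forward(rng.normal(size=12), layers)
print(y_hat.shape)
```

In practice such a model would be built and trained in a deep-learning framework; this sketch only shows the shape of the computation the abstract names.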
Article
Open Access July 10, 2024

Achieving Maintainability, Readability & Understandability of Software Projects using Code Smell Prediction

Abstract
Maintenance of large-scale software is difficult due to the large size and high complexity of the code: 80% of software development effort goes into maintenance, and 60% of maintenance effort goes into trying to understand the code. The severity of code smells must be measured, along with the fairness of those measurements, because this helps developers, especially in large-scale source-code projects. A code smell is not a bug in the system, as it does not prevent the program from functioning, but it may increase the risk of software failure or performance slowdown. This paper therefore seeks to help developers with early prediction of code-smell severity and to test the level of fairness of those predictions, especially in large-scale source-code projects. Data, the collection of facts and observations about events, is continuously growing, getting denser and more varied by the minute across different disciplines and fields. Big Data has consequently emerged and is evolving rapidly. Much of this data resides in software, and the codebases of that software are themselves growing: the size of the modules, the functionalities, the size of the classes, and so on. Since data is growing so rapidly, codebases are growing as well. This paper therefore also discusses the 5 V's of big data in the context of software code and how to optimize and manage "big code". When we speak of "big code for big software", we are referring to the specific challenges and considerations involved in developing, managing, and maintaining code in large-scale software systems.
Technical Note
Open Access November 15, 2023

Predictive Failure Analytics in Critical Automotive Applications: Enhancing Reliability and Safety through Advanced AI Techniques

Abstract
Failure prediction can be achieved through prognostics, which provides timely warnings before failure. Failure prediction is crucial in an effective prognostic system, allowing preventive maintenance actions to avoid downtime. The prognostics problem involves estimating the remaining useful life (RUL) of a system or component at any given time, where the RUL is defined as the time from the current time to the time of failure. The goal is to make accurate predictions close to the failure time to provide early warnings. J. S. Grewal and J. Grewal provide a comprehensive definition of RUL in their paper "The Kalman Filter approach to RUL estimation": a process is a quadruple (X, U, f, P), where X is the state space, U is the control space, P is the set of possible paths, and f represents the transitions between states. The process involves applying control values to change the system's state over time.
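As a generic illustration of the RUL definition (not the Kalman-filter method cited above), the sketch below fits a linear trend to a degradation signal and extrapolates to a failure threshold; RUL is then the extrapolated failure time minus the current time. The sensor trace and threshold are invented.

```python
def estimate_rul(times, health, threshold):
    """Least-squares linear trend on a health index, extrapolated to the
    failure threshold; RUL = predicted failure time - current time."""
    n = len(times)
    mt, mh = sum(times) / n, sum(health) / n
    slope = (sum((t - mt) * (h - mh) for t, h in zip(times, health))
             / sum((t - mt) ** 2 for t in times))
    intercept = mh - slope * mt
    t_fail = (threshold - intercept) / slope   # time when health hits threshold
    return t_fail - times[-1]

# Invented trace: health index degrading from 1.0 toward a 0.2 failure threshold.
times = [0, 10, 20, 30, 40]
health = [1.00, 0.90, 0.81, 0.69, 0.60]
rul = estimate_rul(times, health, threshold=0.2)
print(round(rul, 1))
```

A Kalman-filter approach would instead track the degradation state recursively and propagate its uncertainty, but the quantity being estimated is the same.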
Article
Open Access February 15, 2024

Stock Closing Price and Trend Prediction with LSTM-RNN

Abstract
The stock market is very volatile and hard to predict accurately due to the uncertainties affecting stock prices. Investors and stock traders can benefit from predictive models by making informed decisions about buying, holding, or selling stocks, and financial institutions can use such models to manage risk and optimize their customers' investment portfolios. In this paper, we use a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) to predict the daily closing price of Amazon Inc. stock (ticker symbol: AMZN). We study the influence of various hyperparameters to see which factors affect the predictive power of the model. The root mean squared error (RMSE) on the training set was 2.51, with a mean absolute percentage error (MAPE) of 1.84%.
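The two metrics the abstract reports, RMSE and MAPE, are easy to state precisely in code. The closing prices and model outputs below are invented just to exercise the formulas; they are not AMZN data.

```python
import math

def rmse(actual, predicted):
    """Root mean squared error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs((a - p) / a)
                     for a, p in zip(actual, predicted)) / len(actual)

# Invented closing prices vs. model outputs.
actual = [134.2, 135.0, 133.8, 136.1]
predicted = [133.5, 135.6, 134.4, 135.2]
print(f"RMSE = {rmse(actual, predicted):.2f}, "
      f"MAPE = {mape(actual, predicted):.2f}%")
```

RMSE is in price units (dollars here), while MAPE is scale-free, which is why papers often report both.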
Article
Open Access December 14, 2022

Applying Artificial Intelligence (AI) for Mitigating the Climate Change Consequences of Natural Disasters

Abstract
Climate change and weather-related disasters have accelerated rapidly in recent decades, bringing insecurity, destruction of ecological systems, increasing poverty, human casualties, and economic losses everywhere on the planet. The innovative methods applied to mitigate the magnitude of natural disasters and to combat their negative impact effectively include continuous remote and ground-based monitoring, data collection, the creation of models for big-data extrapolation, prediction, and timely warnings for prevention, among others. Artificial intelligence (AI) is used to handle big data; to calculate, forecast, and predict natural disasters in the near future; to establish possibilities of escaping hazards or risky situations; to prepare people for adverse changes; and to lay out the different choices that assist in reaching the right decision. Many projects, programs, and frameworks have been adopted and carried out by governments and businesses in pursuit of the common goals of creating a friendly environment and reducing undesired climate alterations and cataclysms. The aim of this article is to review recent programs and innovations that apply AI to the mitigation of climate change.
Brief Review
Open Access November 10, 2022

Modeling and Forecasting Cryptocurrency Returns and Volatility: An Application of GARCH Models

Abstract
Cryptocurrencies, decentralized digital and virtual currencies secured by cryptography, are the future of e-money. They have become increasingly popular in recent years, attracting the attention of individuals, investors, the media, academia, and governments worldwide. This study aims to model and forecast the volatilities and returns of three top cryptocurrencies, namely Bitcoin, Ethereum, and Binance Coin, selected by market capitalization as of 31 December 2021, using data for the period from 9 November 2017 to 31 December 2021. Generalised Autoregressive Conditional Heteroscedasticity (GARCH)-type models with several error distributions were fitted to the three cryptocurrency datasets, with their performance assessed using model selection criteria. The results show that the means of all the return series are positive, indicating that the prices of these three cryptocurrencies increased over the study period. The ARCH-LM test shows no ARCH effect in the volatility of Bitcoin and Ethereum, but one is present in Binance Coin; GARCH models were therefore fitted to Binance Coin, and the AIC and log-likelihood show that CGARCH is the best model for it. Automatic forecasting was performed based on the selected ARIMA(2,0,1), ARIMA(0,1,2), and random-walk models, which had the lowest AIC for ETH-USD, BNB-USD, and BTC-USD respectively. These findings could aid investors in determining a cryptocurrency's unique risk-reward characteristics, contribute to a better deployment of investors' resources, and support prediction of the future prices of the three cryptocurrencies.
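The core of any GARCH-family model is the conditional variance recursion; a minimal GARCH(1,1) version is sketched below. The daily returns and the parameter values (omega, alpha, beta) are invented placeholders, not estimates from the study, which fitted and compared several GARCH variants.

```python
def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) model:
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2,
    initialized at the unconditional variance omega / (1 - alpha - beta)."""
    var = [omega / (1 - alpha - beta)]
    for r in returns[:-1]:
        var.append(omega + alpha * r ** 2 + beta * var[-1])
    return var

# Invented daily log returns and placeholder parameters.
returns = [0.012, -0.034, 0.021, -0.006, 0.048, -0.019]
sigma2 = garch11_variance(returns, omega=1e-5, alpha=0.1, beta=0.85)
print([round(v, 6) for v in sigma2])
```

In practice the parameters are estimated by maximum likelihood and variants such as CGARCH split the variance into permanent and transitory components, but the recursion above is the shared building block.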
Article
Open Access July 22, 2022

DFT-Based Prediction of Anti-Leishmanial Activity of Carboxylates and Their Antimony(III) Complexes Against Five Leishmanial Strains

Abstract
Carboxylates and their antimony(III) complexes were experimentally screened earlier for anti-leishmanial activity (IC50) against five leishmanial strains, viz. L. major, L. major (Pak), L. tropica, L. mex mex, and L. donovani. These activities have now been predicted theoretically by the DFT method together with a quantitative structure-activity relationship (QSAR) study. Molecular modeling and geometry optimization of all eight compounds were performed in the workspace program of Fujitsu's CAChe Pro software using the B88-PW91 (Becke '88; Perdew & Wang '91) GGA (generalized-gradient approximation) energy functional with the DZVP (double-zeta valence polarized) basis set in density functional theory (DFT). For the QSAR, multiple linear regression (MLR) analysis was performed with the Project Leader program associated with CAChe. The correlations between experimental and predicted activities are r2 = 0.826, r2CV = 0.426 (L. major); r2 = 0.905, r2CV = 0.507 (L. major (Pak)); r2 = 0.980, r2CV = 0.932 (L. tropica); r2 = 0.781, r2CV = 0.580 (L. mex mex); and r2 = 0.634, r2CV = 0.376 (L. donovani). A pictorial comparison of the experimental values with those obtained by theoretical calculation shows close resemblance.
Article
Open Access October 07, 2021

Estimation of Clear Sky Normal Irradiance over Northern Nigeria Atmosphere

Abstract
Energy from the sun is an ideal new energy source for power systems, in a context of sustainable development, enthusiasm for concentrated solar power technologies is developing. Accurate estimation of clear-sky radiation is needed in many engineering, architectural and agricultural applications in order to integrate solar energy into the power grid. An evaluation of the irradiance input to solar power systems is required in many applications. Clear-sky models represent the maximum input of solar power systems, which is especially useful for forecasting solar irradiance and numerical weather prediction. This work examined the application of Yang model to estimate the monthly mean clear sky normal irradiance for northern Nigeria using meteorological variables like temperature, relative humidity and solar radiation considering the shading effect of the complex topography of terrain in Norther region of Nigeria, also to know the variation of beam radiation and diffuse radiation among the selected stations and also to ascertain the significance of aerosols, water vapor, and other transmittances in the estimation of the beam and diffuse radiation in the northern atmosphere. The modeling was computed using monthly mean maximum temperature and relative humidity gotten from the Nigeria Meteorological Agency (NIMET) for the period of fourteen years (1983-1997. The beam and diffuse irradiance for the northern atmosphere is compared by estimating their mean and standard deviation. Also, detailed information about the trend of radiation in each of the selected states in the northern hemisphere of Nigeria was obtained using a graphical method of data analysis. Result reveals that the value of beam and diffused radiation getting to the earth's surface depends on the aerosols, water vapour, atmospheric Ozone, gas transmittance and Rayleigh scattering. 
From the result above, the maximum beam radiation and the minimum diffused radiation occur during the raining season and the minimum beam radiation and maximum diffuse radiation occur during the dry season. This is due to the variations of these atmospheric constituents (aerosols, water vapour, atmospheric Ozone, gas transmittance and Rayleigh scattering) in the northern atmosphere on these seasons.
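The transmittance-dependence described above can be sketched as a transmittance-product clear-sky model, the generic form used by Yang-type models; the transmittance values below are illustrative placeholders, not coefficients from the study.

```python
# Sketch of a transmittance-product clear-sky model: direct normal
# irradiance is the extraterrestrial irradiance attenuated by broadband
# transmittances. All tau values below are illustrative placeholders.
I0 = 1361.0  # approximate solar constant, W/m^2

def beam_irradiance(tau_aerosol, tau_water, tau_ozone, tau_gas, tau_rayleigh):
    """Clear-sky direct normal irradiance as a product of transmittances."""
    return I0 * tau_aerosol * tau_water * tau_ozone * tau_gas * tau_rayleigh

# A higher aerosol load (e.g. dry-season harmattan dust) lowers the beam
# component, consistent with the seasonal pattern reported above.
dusty = beam_irradiance(0.75, 0.90, 0.98, 0.99, 0.92)
clear = beam_irradiance(0.88, 0.85, 0.98, 0.99, 0.92)
print(dusty < clear)
```

Each transmittance multiplies the others, so a drop in any single constituent's transmittance (aerosol, water vapour, ozone, gas, Rayleigh) directly scales down the beam irradiance.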
Article
Open Access August 09, 2021

Optimization and Prediction of Biodiesel Yield from Moringa Seed Oil and Characterization

Abstract In this study, oil was extracted from Moringa seed using mechanical and solvent methods. To transesterify the oil into biodiesel, a 2⁴ factorial design of experiments was used to obtain different factor combinations at different levels of reaction temperature, catalyst amount, reaction time, and alcohol-to-oil ratio, giving rise to 48 experimental runs. The oil sample was transesterified [...] Read more.
In this study, oil was extracted from Moringa seed using mechanical and solvent methods. To transesterify the oil into biodiesel, a 2⁴ factorial design of experiments was used to obtain different factor combinations at different levels of reaction temperature, catalyst amount, reaction time, and alcohol-to-oil ratio, giving rise to 48 experimental runs. The oil sample was transesterified in 48 experimental runs, and in each case the biodiesel yield was recorded as a percentage. The biodiesel was then characterized according to ASTM test protocols. A factorial design model was developed using Design Expert 7.0; the model produced an R value of 0.987 and a Mean Square Error (MSE) of 5.0453 and was used to predict and optimize biodiesel yield. An Artificial Neural Network (ANN) model was developed in MATLAB R2016a using 4 input variables and 30 runs; the remaining 18 runs were used to test the ANN model by comparing its predicted biodiesel yield with the experimental yield, and the model produced an R value of 0.99687 and an MSE of 3.50804. It was found that the solvent method yielded more oil than the mechanical method and that the biodiesel has good thermo-physical properties. An optimum biodiesel yield of 91.45% was obtained at a 5:1 alcohol-to-oil molar ratio, 18.89 wt% catalyst, 45 minutes reaction time, and a reaction temperature of 45 °C. Experimental validation yielded 88.33% biodiesel. The ANN model adequately predicted the remaining 18 runs with an R² value of 0.99649 and an MSE of 4.914243. Both models proved adequate for predicting biodiesel yield, but the ANN model proved more accurate.
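The experimental layout above (a 2⁴ design replicated to 48 runs) can be enumerated programmatically; the factor level values below are illustrative placeholders, not the paper's actual settings.

```python
# Sketch of a 2^4 factorial design: four factors at two levels give 16
# combinations, and three replicates give 48 runs. Level values here are
# illustrative placeholders, not the study's settings.
from itertools import product

factors = {
    "reaction_temperature": (45, 60),
    "catalyst_amount": (1.0, 18.89),
    "reaction_time_min": (45, 90),
    "alcohol_to_oil_ratio": (5, 7),
}

combinations = list(product(*factors.values()))  # 2**4 = 16 combinations
runs = combinations * 3                          # replicated to 48 runs
print(len(combinations), len(runs))
```

Enumerating the full grid this way is what lets a factorial model estimate main effects and interactions between all four factors from the same 48 runs.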
Article
Open Access July 17, 2021

Nonlinear Whole Seismology, Topological Seismology, Magnitude-Period Formula of Earthquakes and Their Predictions

Abstract First, we propose nonlinear whole seismology and its three basic laws. Next, based on the nonlinear equations of fluid dynamics in Earth’s crust, we obtain a chaos equation in which chaos corresponds to the earthquake and reveals the complexity of seismology. By combining the Carlson-Langer model and the Gutenberg-Richter relation, a simplified nonlinear solution and a corresponding [...] Read more.
First, we propose nonlinear whole seismology and its three basic laws. Next, based on the nonlinear equations of fluid dynamics in Earth’s crust, we obtain a chaos equation in which chaos corresponds to the earthquake and reveals the complexity of seismology. By combining the Carlson-Langer model and the Gutenberg-Richter relation, a simplified nonlinear solution and a corresponding magnitude-period formula for earthquakes can be derived approximately. Further, we investigate topological seismology. From these theories, some predictions can be calculated quantitatively, and some have already been tested. Combining the Lorenz nonlinear model, we discuss earthquake migration back and forth. Finally, if various modern scientific instruments, different scientific theories, and some paranormal methods for earthquakes are combined with one another, the accuracy of multilevel prediction can be increased.
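The Gutenberg-Richter relation invoked above has the standard form log₁₀ N = a − b·M; a minimal sketch, with illustrative a and b values rather than region-specific fits:

```python
# The Gutenberg-Richter relation log10(N) = a - b*M, used above alongside
# the Carlson-Langer model; a and b here are illustrative, not fitted.
def gr_count(magnitude, a=5.0, b=1.0):
    """Expected number of events of magnitude >= M per unit time."""
    return 10 ** (a - b * magnitude)

# With b = 1, each unit increase in magnitude cuts the expected count tenfold.
print(gr_count(5.0), gr_count(6.0))
```

Inverting this relation (a magnitude implies an expected recurrence period) is the basic link that a magnitude-period formula exploits.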
Article
Open Access December 27, 2020

Exploring AI Algorithms for Cancer Classification and Prediction Using Electronic Health Records

Abstract Uncontrolled cell division leads to cancer, an incurable condition. Early diagnosis has the potential to lower death rates from breast cancer, the most frequent cancer in women worldwide. Imaging studies of the breast may help doctors find and diagnose the disease. This study explores the effectiveness of DL and ML models in the classification of mammography images for breast cancer [...] Read more.
Uncontrolled cell division leads to cancer, an incurable condition. Early diagnosis has the potential to lower death rates from breast cancer, the most frequent cancer in women worldwide. Imaging studies of the breast may help doctors find and diagnose the disease. This study explores the effectiveness of DL and ML models in the classification of mammography images for breast cancer detection, utilizing the publicly available CBIS-DDSM dataset, which comprises 5,000 images evenly divided between benign and malignant cases. To improve diagnostic accuracy, models such as Gaussian Naïve Bayes (GNB), CNNs, KNN, and MobileNetV2 were assessed using performance measures including F1-score, recall, accuracy, and precision. The methodology involved data preprocessing techniques, including transfer learning and feature extraction, followed by data splitting for robust model training and evaluation. Findings indicate that MobileNetV2 achieved the highest accuracy, 99.4%, significantly outperforming GNB (87.2%), CNN (96.7%), and KNN (91.2%). The investigation, which also made use of confusion matrices and ROC curves to evaluate model performance, demonstrated the outstanding capacity of MobileNetV2 to discriminate between benign and malignant instances.
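The four evaluation measures named above all follow directly from a binary confusion matrix; a small self-contained sketch with made-up counts:

```python
# Precision, recall, F1, and accuracy computed from a binary confusion
# matrix; the counts here are invented for illustration.
def metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

p, r, f1, acc = metrics(tp=480, fp=10, fn=12, tn=498)
print(round(p, 3), round(r, 3), round(f1, 3), round(acc, 3))
```

On a balanced dataset like CBIS-DDSM, accuracy is informative on its own; on imbalanced data, the F1-score (the harmonic mean of precision and recall) is the safer headline number.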
Review Article
Open Access October 15, 2022

Big Data and AI/ML in Threat Detection: A New Era of Cybersecurity

Abstract The unrelenting proliferation of data, entwined with the prevalence of mobile devices, has given birth to an unprecedented growth of information obscured by noise. With the Internet of Things and myriad endpoint devices generating vast volumes of sensitive and critical data, organizations are tasked with extracting actionable intelligence from this deluge. Governments and enterprises alike, even [...] Read more.
The unrelenting proliferation of data, entwined with the prevalence of mobile devices, has given birth to an unprecedented growth of information obscured by noise. With the Internet of Things and myriad endpoint devices generating vast volumes of sensitive and critical data, organizations are tasked with extracting actionable intelligence from this deluge. Governments and enterprises alike, even under pressure from regulatory boards, have strived to harness the power of data and leverage it to enhance safety and security, maximize performance, and mitigate risks. However, the adversaries themselves have capitalized on the unequal battle of big data and artificial intelligence to inflict widespread chaos. Therefore, the demand for big data analytics and AI/ML for high-fidelity intelligence, surveillance, and reconnaissance is at its highest. Today, in the cybersecurity realm, the detection of adverse incidents poses substantial challenges due to the sheer variety, volume, and velocity of deep packet inspection data. State-of-the-art detection techniques have fallen short of detecting the latest attacks after a big data breach incident. On the other hand, computational intelligence techniques such as machine learning have reignited the search for solutions for diverse monitoring problems. Recent advancements in AI/ML frameworks have the potential to analyze IoT/edge-generated big data in near real-time and assist risk assessment and mitigation through automated threat detection and modeling in the big data and AI/ML domain. Industry best practices and case studies are examined that endeavor to showcase how big data coupled with AI/ML unlocks new dimensions and capabilities in improved vigilance and monitoring, prediction of adverse incidents, intelligent modeling, and future uncertainty quantification by data resampling correction. 
All of these avenues lead to enhanced robustness, security, safety, and performance of industrial processes, computing, and infrastructures. A view of the future is presented, discussing the potential threats posed by the misuse of new technologies, from bandwidth to IoT/edge, blockchain, AI, quantum, and autonomous fields. Cybersecurity is again playing out at a pace set by adversaries with low entry barriers and debilitating tools. The need for innovative solutions to defend against the emerging threat landscape, harnessing the power of new technologies and collaboration, is emphasized.
Article
Open Access December 27, 2021

Leveraging AI in Urban Traffic Management: Addressing Congestion and Traffic Flow with Intelligent Systems

Abstract Traffic congestion across the globe is a multimodal problem, intertwining vehicular, pedestrian, and bicycle traffic. The relationship between the multimodal traffic flow is a key factor in understanding urban traffic dynamics. The impact of excessive congestion extends to the excessive cost spent on traffic maintenance, as well as the inherent transportation inefficiency and delayed travel times. [...] Read more.
Traffic congestion across the globe is a multimodal problem, intertwining vehicular, pedestrian, and bicycle traffic. The relationship between the multimodal traffic flow is a key factor in understanding urban traffic dynamics. The impact of excessive congestion extends to the excessive cost spent on traffic maintenance, as well as the inherent transportation inefficiency and delayed travel times. From an urban transportation standpoint, an immediate consideration on one hand is monitoring traffic conditions and demand cycles, while on the other hand inducing flow modifications that benefit the traffic network and mitigate congestion. Embedded and centralized control systems that characterize modern traffic management systems extract traffic conditions specific to their regions but lack communication between networks. Moreover, innovative methods are required to provide more accurate up-to-date traffic forecasts that characterize real-world traffic dynamics and facilitate optimal traffic management decisions. In this chapter, we briefly outline the main difficulties and complexities in modeling, managing, and forecasting traffic dynamics. We also compare various conventional and modern Intelligent Transportation Strategies in terms of accuracy and applicability, their performance, and potential opportunities for optimization of multimodal traffic flow and congestion reduction. This chapter introduces various proposed data-driven models and tools employed for traffic flow prediction and management, investigating specific strategies' strengths, weaknesses, and benefits in addressing various real-world traffic management problems. We describe that the design phase of dependable Intelligent Transportation Systems bears unique requirements in terms of the robustness, safety, and response times of their components and the encompassing system model. 
Furthermore, this architectural blueprint shares similarities with distributed coordinated search and collective adaptive systems. Town-size-independent models induce systemic performance improvements through reconfigurable embedded functionality. These AI techniques feature elaborate anytime planners that maintain near-optimal, unbiased performance as model complexity varies. Sustainable models minimize congestion during peaks, flooding, and emergencies while adhering to area-specific regulations. Security-aware and fail-safe traffic management systems must provide reasonable assurances of persistent operation under varied environmental settings, including in metropolises and at complex traffic junctions. The chapter concludes by outlining challenges, research questions, and future research paths in the field of transportation management.
Review Article
Open Access October 29, 2022

Neural Networks for Enhancing Rail Safety and Security: Real-Time Monitoring and Incident Prediction

Abstract The growth in demand for rail transportation systems within cities, together with high-speed and long-distance transportation running on a rail network, raises the issues of both rail safety and security. If an accident or an attack occurs, its consequences can be extremely severe. To mitigate the impact of these events, the real-time monitoring of a rail system is required. In that case, the [...] Read more.
The growth in demand for rail transportation within cities, together with high-speed and long-distance services running on a rail network, raises the issues of both rail safety and security. If an accident or an attack occurs, its consequences can be extremely severe. To mitigate the impact of these events, real-time monitoring of the rail system is required, and improvements in monitoring can be achieved using artificial intelligence algorithms such as neural networks. Neural networks have been used to achieve real-time incident identification in track-quality monitoring by classifying the graphical outputs of an ultrasonic system inspecting the rails and track bed, to predict incidents on the rail infrastructure due to blocked transmission channels, and to schedule preemptive and preventative maintenance. In forecasting incidents and accidents on board trains, neural networks have been used to model passenger behavior and optimize responses during a train station evacuation. In tackling incidents and accidents on rail transport, we contribute two methodologies to detect anomalies in real time and identify the level of security risk: one at the maintenance level, with personnel operating along the railways, and one on board passenger trains. These methodologies were evaluated on real-world datasets and shown to achieve high accuracy. The results from these case studies also reveal the potential for network-wide applications, which could enhance security and safety on railway networks by enabling better management of network disruptions and more rapid identification of security issues. The speed and coverage of the information generated through these methodologies have implications for using prediction in decision support and enhancing safety and security across the rail network.
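As a minimal stand-in for the real-time anomaly detection described above, the sketch below flags a sensor reading whose rolling z-score exceeds a threshold; the window, threshold, and readings are invented for illustration, and the abstract's methodologies use neural networks rather than this simple heuristic.

```python
# Toy rolling z-score anomaly flagger: keep a sliding window of recent
# readings and flag a new reading that deviates too far from the window
# mean. Parameters and data are illustrative only.
from collections import deque
import statistics

def make_detector(window=20, threshold=3.0, warmup=5):
    history = deque(maxlen=window)
    def observe(x):
        flagged = False
        if len(history) >= warmup:
            mu = statistics.fmean(history)
            sd = statistics.pstdev(history) or 1e-9
            flagged = abs(x - mu) / sd > threshold
        history.append(x)
        return flagged
    return observe

detect = make_detector()
readings = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.02, 9.0]
flags = [detect(x) for x in readings]
print(flags)
```

The design point this illustrates is the streaming constraint: each reading is judged using only past data, which is what "real-time" monitoring on a live rail network requires.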
Review Article
Open Access November 05, 2022

Application of Neural Networks in Optimizing Health Outcomes in Medicare Advantage and Supplement Plans

Abstract The growing complexity and variability in healthcare delivery and costs within Medicare Advantage (MA) and Medicare Supplement (Medigap) plans present significant challenges for improving health outcomes and managing expenditures. Neural networks, a subset of artificial intelligence (AI), have shown considerable promise in optimizing healthcare processes, particularly in predictive modeling, [...] Read more.
The growing complexity and variability in healthcare delivery and costs within Medicare Advantage (MA) and Medicare Supplement (Medigap) plans present significant challenges for improving health outcomes and managing expenditures. Neural networks, a subset of artificial intelligence (AI), have shown considerable promise in optimizing healthcare processes, particularly in predictive modeling, personalized treatment recommendations, and risk stratification. This paper explores the application of neural networks in enhancing health outcomes within the context of Medicare Advantage and Supplement plans. We review how deep learning models can be leveraged to predict patient risk, optimize resource allocation, and identify at-risk populations for preventive interventions. Additionally, we discuss the potential for neural networks to improve claims processing, reduce fraud, and streamline administrative burdens. By integrating various data sources, including medical records, claims data, and demographic information, neural networks enable more accurate and efficient decision-making processes. Ultimately, this approach can lead to better patient care, reduced healthcare costs, and improved satisfaction for beneficiaries of these programs. The paper concludes by highlighting the current limitations, ethical considerations, and future directions for AI adoption in the Medicare Advantage and Supplement sectors.
Review Article
Open Access November 16, 2022

AI-Driven Automation in Monitoring Post-Operative Complications Across Health Systems

Abstract Artificial intelligence systems have been previously used to predict post-operative complications in small studies and single institutions. Here we developed a robust artificial intelligence model that predicts the risk of having cardiac, pulmonary, thromboembolic, or septic complications after elective, non-cardiac, non-ambulatory surgery. We combined structured and unstructured electronic health [...] Read more.
Artificial intelligence systems have been previously used to predict post-operative complications in small studies and single institutions. Here we developed a robust artificial intelligence model that predicts the risk of having cardiac, pulmonary, thromboembolic, or septic complications after elective, non-cardiac, non-ambulatory surgery. We combined structured and unstructured electronic health record data from 3.5 million surgical encounters from 25 medical centers between 2009 and 2017. Our neural network model predicted postoperative comorbidities 15 to 80 times faster than classical models. As such, our model can be used to assess the risk of having a specific complication postoperatively in a fraction of a second. With our model, we believe clinicians will be able to identify high-risk surgical patients and use their good judgment to mitigate upcoming risks, ultimately improving patient outcomes [1].
Case Report
Open Access December 27, 2019

The Role of Neural Networks in Advancing Wearable Healthcare Technology Analytics

Abstract Neural networks are bringing a transformation in wearable healthcare technology analytics. These networks are able to analyze a vast amount of data to help in making decisions concerning patient care. Advancements in deep learning have brought neural networks to the forefront, making data analytics a straightforward process. This study will help in unveiling the use of ICT and AI in medical [...] Read more.
Neural networks are bringing a transformation in wearable healthcare technology analytics. These networks are able to analyze vast amounts of data to help in making decisions concerning patient care. Advancements in deep learning have brought neural networks to the forefront, making data analytics a straightforward process. This study helps unveil the use of ICT and AI in medical healthcare technology, surveying some industry giants along the way. Wearable healthcare technologies are becoming more popular every day. These technologies facilitate collecting, monitoring, and sharing every vital aspect of the human body necessary for diagnosing and treating an ailment. With the advent of global digitization, health data storage and systematic analysis are taking shape to ensure better diagnostic, preventive, and predictive healthcare. Healthcare analytics powered by neural networks can significantly improve health outcomes, maximizing individuals' potential and quality of life. The breadth and possibilities of connected devices are widening: from personal activity monitoring to quantifying every bit of health statistics, connected devices are making an impact in measurement, management, and manipulation. In healthcare, early diagnosis can be a lifesaver, and data analytics can help in a big way to make predictions that save lives. We are in another phase of the digitization era: neural network and wearable healthcare technology analytics. A neural network can be conceived as an adaptive system made up of a large number of neurons connected in multiple layers, processing data in a way loosely analogous to the human brain. Like many such systems, these models are composed of an 'input' layer and an 'output' layer along with intermediate layers of neurons.
Review Article
Open Access December 27, 2021

Advancing Healthcare Innovation in 2021: Integrating AI, Digital Health Technologies, and Precision Medicine for Improved Patient Outcomes

Abstract Advances of wearables, sensors, smart devices, and electronic health records have generated patient-oriented longitudinal data sources that are analyzed with advanced analytical tools to generate enormous opportunities to understand patient health conditions and needs, transforming healthcare significantly from conventional paradigms to more patient-specific and preventive approaches. Artificial [...] Read more.
Advances in wearables, sensors, smart devices, and electronic health records have generated patient-oriented longitudinal data sources that can be analyzed with advanced analytical tools, creating enormous opportunities to understand patient health conditions and needs and transforming healthcare significantly from conventional paradigms to more patient-specific and preventive approaches. Artificial intelligence (AI) with machine learning methodology is prominently considered, as it is uniquely suited to deriving predictions and recommendations from complex patient datasets. Recent studies have shown that precise data aggregation methods play an important role in the precision and reliability of clinical outcome models. There is an essential need to develop an effective and powerful multifunctional machine learning platform that enables healthcare professionals to comprehend challenging multifactorial biomedical datasets, understand patient-specific scenarios, and make better clinical decisions, potentially leading to optimal patient outcomes. There is a substantial drive to develop the networking and interoperability of clinical systems, the laboratory, and public health. These steps are delivered in concert with efforts to enable useful analytic tools and technologies for making sense of the eruption of patient information from various sources. However, the full potential of this technology can only be realized when the ethical, legal, and social challenges related to the privacy of healthcare information are successfully addressed. The public and the media need to be informed about the capabilities and limitations of the technologies, and the nascent public debate over healthcare data privacy needs to be carefully balanced. While this debate is ongoing, safeguards against abuses of patient data protection must progress so that the full potential of AI technology can be realized for the health system, with benefits for all stakeholders.
Any protection program should be based on fairness, transparency, and a full commitment to data privacy. Ongoing innovative systems that use AI to manage clinical data and analyses are proposed. These tools can be used by healthcare providers, especially in defining specific scenarios related to biomedical data management and analysis. These platforms ensure that the significant and potentially predictive parameters associated with the diagnosis, treatment, and progression of disease are recognized. With the systematic use of these solutions, this work can contribute to noticeable improvements in the provision of real-time, personalized, and efficient medicine at reduced cost [1].
Case Report
Open Access December 27, 2021

Revolutionizing Risk Assessment and Financial Ecosystems with Smart Automation, Secure Digital Solutions, and Advanced Analytical Frameworks

Abstract For years, risk assessment and financial calculations have been based on mathematical, statistical, and actuarial studies of existing and historical data. The manual process of building datasets, processing data, deriving trends, identifying periodicities, and analyzing diagnostics is extremely expensive and time-consuming. With the automation and evolution of data science technologies, [...] Read more.
For years, risk assessment and financial calculations have been based on mathematical, statistical, and actuarial studies of existing and historical data. The manual process of building datasets, processing data, deriving trends, identifying periodicities, and analyzing diagnostics is extremely expensive and time-consuming. With the automation and evolution of data science technologies, organizations are now bringing in niche data, such as unstructured data, which contain more disruptive and precise signals for decision-making, thereby making predictions and derivative valuations more robust. This discussion highlights how investment decision-making and financial ecosystem activities are set to be transformed by the power of technical automation, data, and artificial intelligence. A noted trend in the financial investment sector is that financial valuations are highly predictive and highly non-linear over long horizons. To understand these evolving signals and execute profitable strategies upon them, the investment management process needs to be dynamic, open, smart, and technically deep. However, with current manual processes, high-end asset prediction still seems like a shot in the dark. In parallel, open and democratically developed financial ecosystems seek relatively low-risk premium opportunities in high-finance valuation and perception. Evolving financial ecosystems, or the use of automated tools and data to move to new frontiers, could make high-yield profit opportunities far safer, approaching riskless. Financial economic theories and realistic approximation models support this.
Review Article
Open Access December 17, 2024

An Analysis of Performance and Comparison of Models for Cardiovascular Disease Prediction via Machine Learning Models in Healthcare

Abstract Over the past few decades, cardiovascular disease and related complications have surpassed all others as the leading causes of death on a global scale. At present, they are the leading cause of mortality worldwide, including in India. Detecting cardiovascular problems early is important so that patients receive better care and costs go down. This project utilizes the UCI Heart [...] Read more.
Over the past few decades, cardiovascular disease and related complications have surpassed all others as the leading causes of death on a global scale. At present, they are the leading cause of mortality worldwide, including in India. Detecting cardiovascular problems early is important so that patients receive better care and costs go down. This project utilizes the UCI Heart Disease Dataset to develop ML and DL models capable of detecting cardiac diseases. Heart illness was categorized using Convolutional Neural Network (CNN) models, which are able to detect intricate patterns in the supplied data. A confusion matrix, an F1-score, a ROC curve, accuracy, precision, and recall were among the measures used to grade the model. It did much better than the Neural Network, Deep Neural Network (DNN), and Gradient Boosted Trees (GBT) models, with 91.71% accuracy, 88.88% precision, 82.75% recall, and an 85.70% F1-score. Comparative study showed that the CNN was the most accurate model, while the other models struck different balances between accuracy and recall. The experimental results show that the proposed CNN model is a sound way to identify cardiovascular disease, meaning it could be used in healthcare systems to find diseases earlier and treat patients better.
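As a quick sanity check, the F1-score reported above is the harmonic mean of the reported precision and recall, so the three figures can be cross-checked directly:

```python
# Cross-check: F1 = harmonic mean of precision and recall, using the
# abstract's reported 88.88% precision and 82.75% recall.
precision, recall = 0.8888, 0.8275
f1 = 2 * precision * recall / (precision + recall)
print(round(f1 * 100, 2))  # close to the reported 85.70% F1-score
```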
Article
Open Access December 26, 2021

Deep Learning Applications for Computer Vision-Based Defect Detection in Car Body Paint Shops

Abstract Major automated plants have produced large volumes of high-quality products at low cost by introducing various technologies, including robotics and artificial intelligence. Defects on the surface of products carry economic loss and sometimes loss of functionality, even though defective products are only rarely found. Therefore, most items' production is managed based on prediction and carries an invisible fluctuation in quality. [...] Read more.
Major automated plants have produced large volumes of high-quality products at low cost by introducing various technologies, including robotics and artificial intelligence. Defects on the surface of products carry economic loss and sometimes loss of functionality, even though defective products are only rarely found. Therefore, most items' production is managed based on prediction and carries an invisible fluctuation in quality. The detection process for hidden defect images is costly and needs support for better progress and quality enhancement. Paint shop defects should be analyzed from color changes to detect defects effectively, withstanding the variability of product demand over time. It is not easy to take visible-light images without noise because the painted surfaces are glossy; some illumination artifacts and shadows remain in the images, even at larger sizes and higher resolutions. Computer vision models must also reflect both color and texture information across the various painted surfaces to classify defects precisely. Several automated detection systems have been applied to paint shop inspections using lasers, infrared, X-ray, electrical, magnetic, and acoustic sensors. The incidence of paint shop defects can be low, but even rare defects affect product functionality, which is why they are treated as "lessons learned." Lately, artificial intelligence has been introduced to the field of factory automation, and many defect detection efforts have found a footing in machine learning and deep learning. Recent attempts at deep learning-based defect detection propose simple techniques using specific neural network architectures with big data. However, such big data is still in its early stages, and significant challenges exist in normalizing and annotating it.
To obtain cost-efficient and timely solutions tailored to automotive paint shops, it may be better to combine deep learning with traditional computer vision and more elaborate machine learning methods.
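The "analyze defects from color changes" idea can be illustrated with a toy threshold check; the reference color, pixel values, and tolerance below are invented for illustration, and a production system would rely on the learned color/texture models discussed above.

```python
# Toy color-change defect check: flag pixels whose RGB distance from a
# reference paint color exceeds a tolerance. All values are invented.
def defect_mask(pixels, reference, tolerance=30.0):
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return [dist(p, reference) > tolerance for p in pixels]

reference = (200, 30, 30)  # nominal glossy red panel, illustrative
row = [(201, 31, 29), (198, 28, 33), (120, 110, 100)]
print(defect_mask(row, reference))
```

Fixed thresholds like this are exactly what glossy-surface glare and shadows defeat, which is why the abstract argues for combining learned models with such classical checks.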
Review Article
Open Access December 27, 2021

An Analysis of Crime Prediction and Classification Using Data Mining Techniques

Abstract Crime is a serious and widespread problem in society, and preventing it is an essential task. A significant number of crimes are committed every day. One tool for modeling and dealing with crime is data mining. Crimes are costly to society in many ways, and they are also a major source of frustration for its members. A major area of machine learning research is crime detection. This paper [...] Read more.
Crime is a serious and widespread problem in society, and preventing it is an essential task. A significant number of crimes are committed every day. One tool for modeling and dealing with crime is data mining. Crimes are costly to society in many ways, and they are also a major source of frustration for its members. A major area of machine learning research is crime detection. This paper analyzes crime prediction and classification using data mining techniques on a crime dataset spanning 2006 to 2016. The approach begins with cleaning the raw data and extracting features for data preparation. Then, machine learning and deep learning models, including RNN-LSTM, ARIMA, and Linear Regression, are applied. The performance of these models is evaluated using metrics such as Root Mean Squared Error (RMSE) and Mean Absolute Percentage Error (MAPE). The RNN-LSTM model achieved the lowest RMSE of 18.42, demonstrating superior predictive accuracy among the evaluated models. Data visualization techniques further unveiled crime patterns, offering actionable insights for crime prevention.
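The two error metrics named above have standard definitions, sketched below; the actual and predicted crime counts are made-up illustrative values, not data from the study.

```python
# RMSE and MAPE in their standard definitions; the actual/predicted
# crime counts below are invented for illustration.
import math

def rmse(actual, predicted):
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

def mape(actual, predicted):
    n = len(actual)
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n

actual = [120, 95, 130, 110]
predicted = [118, 100, 125, 115]
print(round(rmse(actual, predicted), 2), round(mape(actual, predicted), 2))
```

RMSE is in the same units as the target (crime counts), which is why a headline figure like 18.42 is interpretable, while MAPE gives a scale-free percentage for comparing across series.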
Article
Open Access December 27, 2022

Advance of AI-Based Predictive Models for Diagnosis of Alzheimer's Disease (AD) in Healthcare

Abstract Alzheimer's disease (AD), one of the most prevalent and chronic types of dementia, disproportionately affects the elderly. AD, a fatal illness that can harm brain structures and cells long before symptoms appear, is currently incurable. Using brain MRI images from a publicly accessible Kaggle dataset, this study proposes a prediction model based [...] Read more.
Alzheimer's disease (AD), one of the most prevalent and chronic types of dementia, disproportionately affects the elderly. AD, a fatal illness that can harm brain structures and cells long before symptoms appear, is currently incurable. Using brain MRI images from a publicly accessible Kaggle dataset, this study proposes a prediction model based on Convolutional Neural Networks (CNNs) to help with the early detection of Alzheimer's disease. The 6,400 images in the collection are labeled with four levels of dementia: non-demented, very mildly demented, mildly demented, and moderately demented. A thorough data-preparation workflow included pixel normalization, class balancing using data augmentation techniques, and image resizing to 128×128 pixels. A 3D convolutional neural network (CNN) architecture was used to improve the capture of spatial dependencies in volumetric MRI data. Important performance measures, including F1-score, recall, accuracy, precision, and log loss, were used to gauge the model's effectiveness. On the available data, the F1-score, accuracy, and precision were 99.0%, 99.0%, and 99.38%, respectively. The findings demonstrate the model's potential for practical use in early AD diagnosis and establish its robustness with the help of confusion matrix analysis and performance curves.
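The preprocessing steps the abstract describes, resizing to 128×128 and pixel normalization, can be sketched in plain numpy. Nearest-neighbour resampling and the slice dimensions below are assumptions; the authors do not state which interpolation they used.

```python
import numpy as np

def preprocess(img, size=128):
    """Resize a 2D image to size x size by nearest-neighbour sampling
    and scale 8-bit pixel intensities to [0, 1]."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row index for each output row
    cols = np.arange(size) * w // size   # source column index for each output column
    return img[rows][:, cols].astype(np.float32) / 255.0

# Hypothetical 176x208 grayscale MRI slice with 8-bit intensities
slice_ = np.random.default_rng(1).integers(0, 256, (176, 208)).astype(np.uint8)
x = preprocess(slice_)
```

Stacking such normalized slices along a depth axis yields the volumetric input a 3D CNN expects.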
Article
Open Access December 27, 2022

Big Data-Driven Time Series Forecasting for Financial Market Prediction: Deep Learning Models

Abstract As financial markets have become more and more complex, so has the number of data sources, making stock price prediction a difficult but important task. Traditional models tend to miss the time dependencies in stock price movements. In this work, a hybrid ARIMA-LSTM model is proposed to enhance the accuracy of stock price forecasts. Based on time series indicators such as the adjusted closing [...] Read more.
As financial markets have become more and more complex, so has the number of data sources, making stock price prediction a difficult but important task. Traditional models tend to miss the time dependencies in stock price movements. In this work, a hybrid ARIMA-LSTM model is proposed to enhance the accuracy of stock price forecasts. Based on time series indicators such as the adjusted closing prices of S&P 500 stocks over a decade (2010–2019), the ARIMA-LSTM model combines autoregressive time series forecasting with the strong sequence-learning capability of LSTM. Comprehensive data preprocessing, including missing-value interpolation, outlier detection, and Min-Max scaling, guarantees data quality. The model is trained on a 90/10 training/testing split and evaluated with the main performance metrics MAE, MSE, and RMSE. The results show that the proposed ARIMA-LSTM model achieves an MAE of 0.248, an MSE of 0.101, and an RMSE of 0.319, indicating high accuracy in stock price prediction. A comparative analysis against machine-learning reference models, including Artificial Neural Networks (ANN) and BP Neural Networks (BPNN), further illustrates the suitability and superiority of the ARIMA-LSTM approach, which attains the lowest MAE and strong predictive capability. This work demonstrates the efficiency of integrating classical time series models with deep learning methods for financial forecasting.
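The hybrid's division of labor, a linear time-series model plus a nonlinear residual learner, can be illustrated with a numpy-only sketch. The AR(2) least-squares fit below stands in for ARIMA, the synthetic series is invented, and the LSTM stage is represented only by the residual series it would be trained on; none of this reproduces the paper's actual implementation.

```python
import numpy as np

def fit_ar(series, p=2):
    """Least-squares AR(p) with intercept: the linear component (ARIMA stand-in)."""
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    y = series[p:]
    coef, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(y))]), y, rcond=None)
    return coef

def ar_predict(series, coef, p=2):
    """In-sample one-step-ahead predictions from the fitted AR coefficients."""
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    return np.column_stack([X, np.ones(len(X))]) @ coef

# Hybrid pattern: the AR stage captures the linear structure,
# and a second model (the LSTM in the paper) is trained on the residuals.
t = np.arange(300)
series = 0.05 * t + np.sin(t / 5.0)      # synthetic "price": trend + cycle
coef = fit_ar(series)
residuals = series[2:] - ar_predict(series, coef)   # the LSTM's training target
```

The residuals carry whatever nonlinear structure the linear stage missed, which is exactly the signal the LSTM stage is meant to absorb.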
Article
Open Access December 24, 2022

Web-Centric Cloud Framework for Real-Time Monitoring and Risk Prediction in Clinical Trials Using Machine Learning

Abstract Advances in web-centric cloud computing have facilitated the establishment of an integrated cloud environment connecting a wide variety of clinical trial stakeholders. A web-centric cloud framework is proposed for real-time monitoring and risk prediction during clinical trials. The framework focuses on identifying relevant datasets, developing a data-management interface, and implementing [...] Read more.
Advances in web-centric cloud computing have facilitated the establishment of an integrated cloud environment connecting a wide variety of clinical trial stakeholders. A web-centric cloud framework is proposed for real-time monitoring and risk prediction during clinical trials. The framework focuses on identifying relevant datasets, developing a data-management interface, and implementing machine-learning algorithms for data analysis. Detailed descriptions of the data-management interface and the machine-learning processes are provided, targeting active clinical trials with therapeutic uses in cancer. Demonstrations use publicly available clinical-trial data from the ClinicalTrials.gov repository. The real-time monitoring and risk prediction systems were assessed by developing five supervised classification models for trial-status prediction and six unsupervised models for patient-safety-profile assessment, each representing a different phase of the clinical-trial process. All supervised models yielded high accuracy and area-under-the-curve values at the testing stage, while the unsupervised models demonstrated practical applicability. The results underscore the advantages of using the trial-status algorithm, the patient-safety-profile model, and the proposed framework for real-time monitoring and risk prediction of clinical trials.
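The area-under-the-curve metric reported for the supervised models can be computed without a library via the rank (Mann-Whitney) formulation. This is a minimal sketch with hypothetical labels and scores; it does not handle tied scores.

```python
import numpy as np

def auc_score(y_true, scores):
    """ROC AUC via the Mann-Whitney rank statistic (assumes no tied scores)."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # rank 1 = lowest score
    n_pos = int((y_true == 1).sum())
    n_neg = int((y_true == 0).sum())
    # Sum of positive-class ranks, minus the minimum possible, over all pos/neg pairs
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical binary trial-status labels and model scores
auc = auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```

An AUC of 1.0 means the classifier ranks every positive above every negative; 0.5 is chance level.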
Review Article
Open Access December 22, 2020

Cloud Migration Strategies for High-Volume Financial Messaging Systems

Abstract Key business objectives for digital infrastructure cloud adoption are often framed in terms of reducing cost, improving fault tolerance and resilience, simplifying scale, and enabling innovation. Given the critical nature of the financial sector, however, where timeliness and price can significantly determine an outcome, cloud migration in delivery environments demands greater throughput on the [...] Read more.
Key business objectives for digital infrastructure cloud adoption are often framed in terms of reducing cost, improving fault tolerance and resilience, simplifying scale, and enabling innovation. Given the critical nature of the financial sector, however, where timeliness and price can significantly determine an outcome, cloud migration in delivery environments demands greater throughput on the critical path, and many enterprise-scale settings forgo the complexity of hybrid deployments and the risks of multi-cloud. Nevertheless, slack does exist in system designs; financial institutions enable market functionality (trading, clearing, best execution) despite potentially being able to meet such workloads with lower service levels than other verticals. A cloud multi-account structure for sensitive data, for example, naturally limits exposure when combined with observed risk. Meeting predicted elasticity demands during periods of high load usually requires support from one or more dedicated environments located nearer to operations. Components can consequently be allocated on a per-account basis or maintained as shared sink systems to which the dedicated streams write. Automation code can similarly be targeted at dedicated accounts, avoiding the resource constraints that beset such operations during industry events like emergency triage/contact desking.
Review Article
Open Access December 02, 2020

Predictive Modeling and Machine Learning Frameworks for Early Disease Detection in Healthcare Data Systems

Abstract Predictive modeling, supported by machine learning technology, aims to analyze data in order to guide decision-making towards actions generating desired values in the future. It encompasses the set of techniques used to build models that estimate the value of a certain variable predicting a forthcoming event from the past or current values of relevant attributes. In predictive healthcare modeling, [...] Read more.
Predictive modeling, supported by machine learning technology, aims to analyze data in order to guide decision-making towards actions that generate desired values in the future. It encompasses the set of techniques used to build models that estimate the value of a certain variable, predicting a forthcoming event from the past or current values of relevant attributes. In predictive healthcare modeling, the built models represent the relationships among data concerning customers, providers, production, and other aspects of the healthcare environment, in order to assist decision processes in the prevention of diseases and in the planning of preventive actions through the detection of high-risk patients. In contrast to trend analysis, whose goal is to describe past events, predictive models aim to provide useful indications regarding future events and changes. Predictive healthcare modeling supports actions that try to prevent the manifestation of diseases in healthy individuals or to diagnose the incidence of a disease in at-risk patients as early as possible. A sound predictive analysis encompasses not only the model-training task but also data quality, preprocessing, and fusion across the entire implementation lifecycle, to ensure appropriate input data preparation. The robustness of the predictive model and its results depends heavily on data quality. Because of the variety of data sources in healthcare environments, preprocessing is essential to remove noise and inconsistencies. The increasing number of endorsed data-exchange standards makes such exchanges achievable, but it demands the implementation of a data-governance program. In addition, the influence of the hospital-database architect on the architecture of an early-diagnosis model is important to guarantee appropriate input-formatting modularity.
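The noise-and-inconsistency removal the passage calls for can be sketched as a small robust cleaner for numeric readings. The median/MAD rule below is one common choice, an assumption of this sketch rather than a prescription from the text.

```python
import numpy as np

def clean_numeric(values, k=3.0):
    """Impute missing readings with the median, then clip outliers to
    median +/- k * 1.4826 * MAD (a robust z-score rule)."""
    arr = np.asarray(values, dtype=float)
    med = np.nanmedian(arr)
    arr = np.where(np.isnan(arr), med, arr)      # fill gaps with the median
    center = np.median(arr)
    mad = np.median(np.abs(arr - center))        # median absolute deviation
    half_width = k * 1.4826 * mad                # 1.4826 scales MAD to ~sigma
    return np.clip(arr, center - half_width, center + half_width)

# Hypothetical lab readings with a missing value and a spurious spike
cleaned = clean_numeric([5, np.nan, 7, 6, 6, 5, 7, 6, 5000])
```

Unlike a mean/standard-deviation rule, the MAD is not inflated by the outlier itself, so a single spike is still detected and clipped.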

Query parameters

Keyword:  Prediction
