Open Access February 06, 2026

Predictive Modeling of Public Sentiment Using Social Media Data and Natural Language Processing Techniques

Abstract
Social media platforms like X (formerly Twitter) generate vast volumes of user-generated content that provide real-time insights into public sentiment. Despite the widespread use of traditional machine learning methods, their limitations in capturing contextual nuances in noisy social media text remain a challenge. This study leverages the Sentiment140 dataset, comprising 1.6 million labeled tweets, and develops predictive models for binary sentiment classification using Naive Bayes, Logistic Regression, and the transformer-based BERT model. Experiments were conducted on a balanced subset of 12,000 tweets after comprehensive NLP preprocessing. Evaluation using accuracy, F1-score, and confusion matrices revealed that BERT significantly outperforms traditional models, achieving an accuracy of 89.5% and an F1-score of 0.89 by effectively modeling contextual and semantic nuances. In contrast, Naive Bayes and Logistic Regression demonstrated reasonable but consistently lower performance. To support practical deployment, we introduce SentiFeel, an interactive tool enabling real-time sentiment analysis. While resource constraints limited the dataset size and training epochs, future work will explore full corpus utilization and the inclusion of neutral sentiment classes. These findings underscore the potential of transformer models for enhanced public opinion monitoring, marketing analytics, and policy forecasting.
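As a hedged illustration of the Naive Bayes baseline described above (not the authors' code, and on toy data rather than Sentiment140), a minimal multinomial Naive Bayes classifier for binary sentiment can be sketched as:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Fit a multinomial Naive Bayes model on (tokens, label) pairs."""
    priors, word_counts, totals, vocab = Counter(), defaultdict(Counter), Counter(), set()
    for tokens, label in docs:
        priors[label] += 1
        for t in tokens:
            word_counts[label][t] += 1
            totals[label] += 1
            vocab.add(t)
    return priors, word_counts, totals, vocab

def predict_nb(model, tokens):
    """Return the label with the highest Laplace-smoothed log-posterior."""
    priors, word_counts, totals, vocab = model
    n_docs = sum(priors.values())
    best_label, best_lp = None, float("-inf")
    for label in priors:
        lp = math.log(priors[label] / n_docs)
        for t in tokens:
            # add-one smoothing over the shared vocabulary
            lp += math.log((word_counts[label][t] + 1) / (totals[label] + len(vocab)))
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

# Toy training data, standing in for preprocessed tweets
model = train_nb([
    (["love", "this", "movie"], "pos"), (["great", "film"], "pos"),
    (["hate", "this", "movie"], "neg"), (["terrible", "film"], "neg"),
])
```

A transformer such as BERT replaces the bag-of-words likelihoods here with contextual token representations, which is what lets it capture the nuances the abstract attributes to it.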
Article
Open Access November 29, 2022

The Application of Machine Learning in the Corona Era, With an Emphasis on Economic Concepts and Sustainable Development Goals

Abstract
The aim of this article is to examine the impacts of Coronavirus Disease 2019 (COVID-19) vaccines on economic conditions and the Sustainable Development Goals (SDGs); in other words, we study the economic situation during COVID-19. We examine the economic costs of the pandemic; the benefits in terms of gross domestic product (GDP), public finances, and employment; worldwide investment in vaccines and its progress; the overall economic impacts of vaccination; and the impact of emerging markets (EM) on achieving the SDGs, including no poverty, good health and well-being, zero hunger, and reduced inequality. The importance of emerging economies in reducing the harmful effects of the pandemic is also noted. As a case study, we forecast daily new death cases in Iran from February 2020 to August 2021 using an Artificial Neural Network (ANN) combined with the Beetle Antennae Search (BAS) algorithm, alongside econometric models and regression analysis. The findings show that COVID-19 has had devastating economic and health effects on the world, and that vaccination can be very helpful in eliminating these effects, especially in the long term. We observed inequality in the distribution of vaccines between rich and poor countries, a gap that emerging markets can help to narrow. The results show that both approaches (artificial intelligence (AI) and econometric models) yield broadly similar results, but AI optimization can make the model and its predictions more robust. The main contribution of this article is that we survey the impacts of vaccination from a socio-economic viewpoint rather than merely reporting facts; we examine the impacts of vaccines on the SDGs and the role of EM in achieving them. In addition to the theoretical framework, we also present quantitative and empirical results that have rarely appeared in other articles.
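The Beetle Antennae Search (BAS) algorithm mentioned above is gradient-free: a virtual beetle samples the objective at two "antennae" and steps toward the lower one. A minimal sketch, not the authors' implementation; the sphere objective in the usage is purely illustrative:

```python
import math
import random

def bas_minimize(f, x0, step=1.0, d=1.0, eta=0.95, iters=200, seed=0):
    """Simplified Beetle Antennae Search for gradient-free minimization."""
    rng = random.Random(seed)
    x = list(x0)
    best_x, best_f = x[:], f(x)
    for _ in range(iters):
        b = [rng.gauss(0.0, 1.0) for _ in x]          # random antenna direction
        norm = math.sqrt(sum(v * v for v in b)) or 1.0
        b = [v / norm for v in b]
        left = [xi + d * bi for xi, bi in zip(x, b)]   # two antenna probes
        right = [xi - d * bi for xi, bi in zip(x, b)]
        sign = 1.0 if f(left) > f(right) else -1.0     # step toward the lower probe
        x = [xi - step * bi * sign for xi, bi in zip(x, b)]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x[:], fx
        step *= eta                                    # shrink step and antenna length
        d = max(d * eta, 1e-6)
    return best_x, best_f

best_x, best_f = bas_minimize(lambda v: sum(t * t for t in v), [3.0, -2.0])
```

In the paper's setting the objective would score an ANN's forecast error rather than a sphere function.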
Article
Open Access August 31, 2022

Extended Rule of Five and Prediction of Biological Activity of Peptidic HIV-1-PR Inhibitors

Abstract
In this research work, we have applied Lipinski's Rule of Five (RO5) to the pharmacokinetic (PK) study and activity prediction of peptidic HIV-1 protease inhibitors. The peptidic HIV-1-PRIs were taken from the literature together with their observed biological activities (OBAs) in terms of IC50. The logarithm of the inverse of IC50 (log 1/C) was used as the biological end point in the study. For the calculation of physicochemical parameters, molecular modeling and geometry optimization of all the derivatives were carried out with CAChe Pro software using the semiempirical PM3 method. Prediction of the biological activity of the inhibitors showed that the best QSAR model is constructed from the pharmacokinetic properties molecular weight and hydrogen bond acceptor count, confirming that these properties play an important role in describing the PK behavior of the drugs. On the basis of the derived models, one can build a theoretical basis to assess the biological activity of compounds of the same series.
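The biological end point used above, log(1/C), is the negative base-10 logarithm of the IC50 expressed in mol/L; a one-line helper makes the transformation explicit:

```python
import math

def p_activity(ic50_molar):
    """QSAR biological end point log(1/C) for an IC50 given in mol/L
    (equivalently -log10(IC50)); larger values mean a more potent inhibitor."""
    return math.log10(1.0 / ic50_molar)
```

For example, a 1 micromolar inhibitor (IC50 = 1e-6 mol/L) maps to an end point of 6.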
Article
Open Access August 21, 2021

Global Analysis of Potential COVID 19 Transmission and Enabling Factors

Abstract
Background: Coronavirus disease has caused global turmoil, with a huge impact on human life all over the world. Current reports state that more than 3 million people have lost their lives and more than 160 million people are suspected to have been infected with SARS-CoV-2. Transmission and disease incidence rates are indicators of the seriousness of the COVID-19 pandemic, and studies of the factors driving them are vital to curbing the disease. Methods: The study performs correlation and multiple linear regression analysis on the variables population density, temperature, relative humidity, and active time of the virus to find the parameters that predict the cases reported per million population in 83 countries. Results: The analysis indicates that the active time of the virus in days is positively associated with COVID-19 cases across all the countries (r = .604, p < .01). Active time of the virus shows a strong negative correlation with temperature (r = -.930, p < .01), indicating that a rise in temperature reduces virus activity in the population. Together, these variables account for 36.2% of the variance in cases per million population, with no single factor yielding a significant prediction. Conclusion: The study outcomes show that population density alone is insufficient to estimate the extent of influence on COVID-19 cases, because the number of persons living per sq. km of land is a dynamic quantity that fluctuates over time and space due to migration. In line with previous studies of the environmental and climatic factors influencing reported cases, population dynamics does not show much significance for disease spread and incidence.
Contribution: The rise in confirmed cases and the high incidence rates reported in countries can be attributed to the active time of the virus, as a positive correlation was observed between reported COVID-19 cases and virus active time in the examined countries. Environmental and climatic factors also play a role in modulating infection and transmission rates, while population density has a less significant influence on COVID-19.
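The Pearson correlation coefficients reported above (e.g. r = .604 and r = -.930) can be computed directly from paired observations; a dependency-free sketch:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

In the study's setup, `xs` would be, say, the active time of the virus per country and `ys` the cases per million population.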
Article
Open Access May 20, 2021

Bioconcentration Factor of Polychlorinated Biphenyls and Its Correlation with UV- and IR-Spectroscopic data: A DFT based Study

Abstract
Polychlorinated biphenyls (PCBs) are an important class of persistent organic pollutants that were used as components of paints (especially in printing), as plasticizers in plastics, as insulating materials in transformers and capacitors, as heat transfer fluids, and as additives in hydraulic fluids in vacuum and turbine pumps. There is a continuing need for reliable procedures for predicting the bioconcentration potential of chemicals from knowledge of their molecular structure or from readily measurable properties of the substance. Hence, correlation and prediction of bioconcentration factors (BCFs) based on λmax and the vibration frequencies of various bonds, viz. υ(C-H) and υ(C=C), of biphenyl and fifty-seven of its derivatives have been carried out. For the study, molecular modeling and geometry optimization of the PCBs were performed in the workspace program of Fujitsu's CAChe Pro 5.04 software using the DFT method. UV-visible spectra for each compound were generated from electron transitions between molecular orbitals, as electromagnetic radiation in the visible and ultraviolet (UV-visible) region is absorbed by the molecule; the energies of the excited electronic states were computed quantum mechanically. IR spectra were generated from the coordinated motions of the atoms as electromagnetic radiation in the infrared region is absorbed by the molecule; the force necessary to distort the molecule from its equilibrium geometry was computed quantum mechanically, and thus the frequencies of the vibrational transitions were predicted. The Project Leader program associated with CAChe was used for multiple linear regression (MLR) analysis, with the above spectroscopic data as independent variables and the BCFs of the PCBs as dependent variables. The reliability of the correlations and the predictive ability of the MLR equations (models) were judged by R2, R2adj, se, q2LOO, and F values.
This study clearly showed that UV and IR spectroscopic data can be used to predict the BCFs of a large number of related compounds quickly and without difficulty.
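The R2 and adjusted R2 statistics used above to judge the MLR models follow the standard definitions; a minimal sketch for n observations and p predictors:

```python
def r2_scores(y_true, y_pred, n_predictors):
    """R^2 and adjusted R^2 for a fitted regression model."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    ss_res = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))   # residual sum of squares
    ss_tot = sum((yt - mean_y) ** 2 for yt in y_true)                # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    # penalize for the number of predictors relative to sample size
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1)
    return r2, r2_adj
```

Adjusted R2 matters here because with fifty-eight compounds and several spectroscopic descriptors, plain R2 alone would reward adding predictors indiscriminately.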
Editorial Article
Open Access September 28, 2025

Gut-Brain Axis in Autism Spectrum Disorder: A Bibliometric and Microbial-Metabolite-Neural Pathway Analysis

Abstract
The gut-brain axis (GBA) has emerged as a central focus in the study of neurodevelopmental disorders, particularly autism spectrum disorder (ASD). Research suggests that microbial composition and its metabolic byproducts influence neural development, synaptic plasticity, and behavior [1,2,3]. A structured bibliometric analysis of Scopus and Web of Science records was performed using Bibliometrix and VOSviewer to trace trends and thematic evolution of GBA–ASD literature [7,8]. In parallel, a data-driven pathway modeling approach maps microbial metabolites (e.g., short-chain fatty acids, tryptophan catabolites) to host signaling pathways including vagal stimulation, immune cytokine modulation, and blood–brain barrier (BBB) permeability [4,5]. Simulations implemented in Python’s NetworkX illustrate how perturbations in metabolite flux may influence CNS outcomes. The findings reveal growing emphasis on butyrate, serotonin, microglial priming, and maternal immune activation in ASD-related GBA studies, and highlight the need for rigorous empirical validation of computational predictions [9,10,11].
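The metabolite-to-pathway modeling described above can be illustrated with a toy perturbation propagation over a weighted directed graph. The node names and edge weights below are hypothetical, chosen only to mirror the pathways named in the abstract; the study itself uses NetworkX, which this dependency-free sketch merely imitates:

```python
# Hypothetical signed edge weights: how strongly a perturbation in one node
# passes to its downstream targets (negative = suppressive).
EDGES = {
    "butyrate": [("vagal_signaling", 0.6), ("BBB_permeability", -0.4)],
    "tryptophan": [("serotonin", 0.8)],
    "serotonin": [("vagal_signaling", 0.5)],
    "vagal_signaling": [("CNS_outcome", 0.7)],
    "BBB_permeability": [("CNS_outcome", 0.9)],
}

def propagate(perturbation, depth=3):
    """Push a metabolite-flux perturbation through the pathway graph,
    accumulating the weighted effect at each downstream node."""
    levels = dict(perturbation)
    frontier = dict(perturbation)
    for _ in range(depth):
        nxt = {}
        for node, value in frontier.items():
            for target, weight in EDGES.get(node, []):
                delta = value * weight
                levels[target] = levels.get(target, 0.0) + delta
                nxt[target] = nxt.get(target, 0.0) + delta
        frontier = nxt
    return levels

levels = propagate({"butyrate": 1.0})
```

With these illustrative weights, a unit increase in butyrate flux raises vagal signaling, lowers BBB permeability, and yields a small net CNS effect, which is the qualitative shape of prediction the abstract says requires empirical validation.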
Brief Report
Open Access June 28, 2025

Development of a Hemodialysis Data Collection and Clinical Information System and Establishment of an Intradialytic Blood Pressure/Pulse Rate Predictive Model

Abstract
This research is a collaboration involving a university team, a partnering corporation, and a hemodialysis clinic, which is a cross-disciplinary research initiative in the field of Artificial Intelligence of Things (AIoT) within the medical informatics domain. The research has two objectives: (1) The development of an Internet of Things (IoT)-based Information System customized for the hemodialysis machines at the clinic, including transmission bridges, clinical personnel dedicated web/app, and a backend server. The system has been deployed at the clinic and is now officially operational; (2) The research also utilized de-identified, anonymous data (collected by the officially operational system) to train, evaluate, and compare Deep Learning-based Intradialytic Blood Pressure (BP)/Pulse Rate (PR) Predictive Models, with subsequent suggestions provided. Both objectives were executed under the supervision of the Institutional Review Board (IRB) at Mackay Memorial Hospital in Taiwan. The system completed for objective one has introduced three significant services to the clinic, including automated hemodialysis data collection, digitized data storage, and an information-rich human-machine interface as well as graphical data displays, which replaces traditional paper-based clinical administrative operations, thereby enhancing healthcare efficiency. The graphical data presented through web and app interfaces aids in real-time, intuitive comprehension of the patients’ conditions during hemodialysis. Moreover, the data stored in the backend database is available for physicians to conduct relevant analyses, unearth insights into medical practices, and provide precise medical care for individual patients. 
The training and evaluation of the predictive models for objective two, along with related comparisons, analyses, and recommendations, suggest that in situations with limited computational resources and data, an Artificial Neural Network (ANN) model with six hidden layers, SELU activation function, and a focus on artery-related features can be employed for hourly intradialytic BP/PR prediction tasks. It is believed that this contributes to the collaborating clinic and relevant research communities.
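The SELU activation named above is defined by two fixed constants from Klambauer et al.'s self-normalizing networks; a minimal scalar implementation (the six-hidden-layer model itself is not reproduced here):

```python
import math

# Standard SELU constants (Klambauer et al., self-normalizing neural networks)
SELU_ALPHA = 1.6732632423543772
SELU_SCALE = 1.0507009873554805

def selu(x):
    """Scaled Exponential Linear Unit: scale * x for x > 0,
    scale * alpha * (exp(x) - 1) otherwise."""
    return SELU_SCALE * (x if x > 0 else SELU_ALPHA * (math.exp(x) - 1.0))
```

These constants are what give SELU its self-normalizing property, which plausibly helps a comparatively deep ANN train stably on the limited clinical data the study describes.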
Article
Open Access July 10, 2024

Achieving Maintainability, Readability & Understandability of Software Projects using Code Smell Prediction

Abstract
Maintenance of large-scale software is difficult due to the large size and high complexity of its code: an estimated 80% of software effort goes into maintenance, and much of that is spent simply trying to understand the code. The severity of code smells must be measured, and the fairness of those measurements assessed, because this helps developers, especially in large-scale source code projects. A code smell is not a bug: it does not prevent the program from functioning, but it may increase the risk of software failure or performance slowdown. This paper therefore seeks to help developers by predicting the severity of code smells early and testing the level of fairness of those predictions, especially in large-scale source code projects. Data, the collection of facts and observations about events, is continuously growing, becoming denser and more varied by the minute across disciplines and fields. Big Data has consequently emerged and is evolving rapidly; the volumes of data being processed are huge, and much of that data resides in software whose codebases keep growing, in the size of modules, functionalities, classes, and so on. Since data is growing so rapidly, the codebases of software are growing as well. This paper therefore also discusses the 5 V's of Big Data in the context of software code and how to optimize and manage "big code". When we speak of "Big Code for Big Software", we refer to the specific challenges and considerations involved in developing, managing, and maintaining code in large-scale software systems.
Technical Note
Open Access November 15, 2023

Predictive Failure Analytics in Critical Automotive Applications: Enhancing Reliability and Safety through Advanced AI Techniques

Abstract
Failure prediction can be achieved through prognostics, which provides timely warnings before failure. Failure prediction is crucial in an effective prognostic system, allowing preventive maintenance actions that avoid downtime. The prognostics problem involves estimating the remaining useful life (RUL) of a system or component at any given time; the RUL is defined as the time from the current time to the time of failure. The goal is to make accurate predictions close to the failure time so as to provide early warnings. J. S. Grewal and J. Grewal provide a comprehensive definition of RUL in their paper "The Kalman Filter approach to RUL estimation": a process is a quadruple (X, U, f, P), where X is the state space, U is the control space, P is the set of possible paths, and f represents the transition between states. The process involves applying control values to change the system's state over time.
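RUL as defined above, the time from the current moment until failure, can be illustrated with a much simpler estimator than the cited Kalman filter approach: fit a linear degradation trend by least squares and extrapolate to a failure threshold. This is an illustrative sketch under a linear-degradation assumption, not the Grewal method:

```python
def estimate_rul(times, readings, failure_threshold):
    """Fit a linear trend to a degradation signal and extrapolate it
    to the failure threshold; RUL = crossing time minus current time."""
    n = len(times)
    mt, mr = sum(times) / n, sum(readings) / n
    slope = (sum((t - mt) * (r - mr) for t, r in zip(times, readings))
             / sum((t - mt) ** 2 for t in times))
    intercept = mr - slope * mt
    t_fail = (failure_threshold - intercept) / slope  # where the trend hits the threshold
    return t_fail - times[-1]                         # remaining useful life from "now"
```

A real automotive prognostic system would replace the linear trend with a filtered state estimate, but the RUL definition being computed is the same.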
Article
Open Access February 15, 2024

Stock Closing Price and Trend Prediction with LSTM-RNN

Abstract
The stock market is very volatile and hard to predict accurately due to the uncertainties affecting stock prices. Nevertheless, investors and stock traders can benefit from predictive models by making informed decisions about buying, holding, or selling stocks, and financial institutions can use such models to manage risk and optimize their customers' investment portfolios. In this paper, we use a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) to predict the daily closing price of Amazon Inc. stock (ticker symbol: AMZN). We study the influence of various hyperparameters on the model to see which factors drive its predictive power. The root mean squared error (RMSE) on the training set was 2.51, with a mean absolute percentage error (MAPE) of 1.84%.
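The RMSE and MAPE metrics reported above are straightforward to compute; a dependency-free sketch:

```python
import math

def rmse(actual, predicted):
    """Root mean squared error, in the same units as the price series."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mape(actual, predicted):
    """Mean absolute percentage error (requires nonzero actual values)."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)
```

RMSE is scale-dependent (a 2.51 error means something different for a $10 stock than a $3000 one), which is why the paper also reports the scale-free MAPE.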
Article
Open Access December 14, 2022

Applying Artificial Intelligence (AI) to Mitigate the Climate Change Consequences of Natural Disasters

Abstract
Climate change and weather-related disasters have accelerated over recent decades, bringing insecurity, destruction of ecological systems, increasing poverty, human casualties, and economic losses everywhere on the planet. Innovative methods applied to mitigate the magnitude of natural disasters and to combat their negative impacts include remote and ground-based continuous monitoring, data collection, models for big-data extrapolation, prediction, and timely warnings for prevention, among others. Artificial intelligence (AI) is used to handle big data; to calculate, forecast, and predict natural disasters in the near future; to establish possibilities for escaping hazards or risky situations; to prepare people for adverse changes; and to lay out the different choices that support the right decision. Many projects, programs, and frameworks have been adopted and carried out by governments and businesses toward the common goals of forming a friendly environment and of reducing undesired climate alterations and cataclysms. The aim of this article is to review recent programs and innovations applying AI to the mitigation of climate change.
Brief Review
Open Access November 10, 2022

Modeling and Forecasting Cryptocurrency Returns and Volatility: An Application of GARCH Models

Abstract
Cryptocurrency, a decentralized digital and virtual currency secured by cryptography, is the future of e-money. It has become increasingly popular in recent years, attracting the attention of individuals, investors, the media, academia, and governments worldwide. This study aims to model and forecast the volatilities and returns of three top cryptocurrencies, namely Bitcoin, Ethereum, and Binance Coin. The data used in the study were selected by market capitalization as of 31 December 2021 and cover the period from 9 November 2017 to 31 December 2021. Generalised Autoregressive Conditional Heteroscedasticity (GARCH)-type models with several error distributions were fitted to the three cryptocurrency datasets, with their performance assessed using model selection criteria. The results show that the mean returns of all three series are positive, indicating that the prices of these three cryptocurrencies increased over the study period. The ARCH-LM test shows no ARCH effect in the volatility of Bitcoin and Ethereum, but an effect is present in Binance Coin. GARCH models were therefore fitted to Binance Coin; the AIC and log-likelihood show that CGARCH is the best model for it. Automatic forecasting was performed based on the selected ARIMA(2,0,1), ARIMA(0,1,2), and random walk models, which had the lowest AIC for ETH-USD, BNB-USD, and BTC-USD, respectively. These findings could aid investors in determining a cryptocurrency's unique risk-reward characteristics. The study contributes to a better deployment of investors' resources and to the prediction of the future prices of the three cryptocurrencies.
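The GARCH-type models fitted above share a simple variance recursion; for GARCH(1,1) the conditional variance is sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}. A minimal sketch of that recursion (the parameter values in the usage are illustrative, not the paper's fitted estimates):

```python
def garch11_variance(returns, omega, alpha, beta, var0):
    """Conditional variance path under GARCH(1,1):
    each variance combines a constant, the last squared return (ARCH term),
    and the last variance (GARCH term)."""
    variances = [var0]
    for r in returns:
        variances.append(omega + alpha * r ** 2 + beta * variances[-1])
    return variances

# Illustrative parameters: a large return shock raises next-period variance,
# which then decays geometrically at rate alpha + beta.
path = garch11_variance([1.0, 0.0], omega=0.1, alpha=0.1, beta=0.8, var0=1.0)
```

Variants such as the CGARCH model selected for Binance Coin extend this recursion with a separate long-run (permanent) variance component.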
Article
Open Access July 22, 2022

DFT-Based Prediction of Anti-Leishmanial Activity of Carboxylates and Their Antimony(III) Complexes Against Five Leishmanial Strains

Abstract
Carboxylates and their antimony(III) complexes were experimentally screened earlier for anti-leishmanial activity (IC50) against five leishmanial strains, viz. L. major, L. major (Pak), L. tropica, L. mex mex, and L. donovani. These activities have been predicted theoretically by the DFT method together with a quantitative structure-activity relationship (QSAR) study. Molecular modeling and geometry optimization of all eight compounds were performed in the workspace program of Fujitsu's CAChe Pro software using the B88-PW91 (Becke '88; Perdew & Wang '91) GGA (generalized-gradient approximation) energy functional with the DZVP (double-zeta valence polarized) basis set in DFT (Density Functional Theory). For the QSAR, multiple linear regression (MLR) analysis was performed with the Project Leader program associated with CAChe. The correlations between experimental and predicted activities are r2 = 0.826, r2CV = 0.426 (L. major); r2 = 0.905, r2CV = 0.507 (L. major (Pak)); r2 = 0.980, r2CV = 0.932 (L. tropica); r2 = 0.781, r2CV = 0.580 (L. mex mex); and r2 = 0.634, r2CV = 0.376 (L. donovani). A comparison of the experimental values with the theoretically calculated values is presented pictorially and shows close agreement.
Article
Open Access October 07, 2021

Estimation of Clear Sky Normal Irradiance over Northern Nigeria Atmosphere

Abstract
Energy from the sun is an ideal new energy source for power systems, in a context of sustainable development, enthusiasm for concentrated solar power technologies is developing. Accurate estimation of clear-sky radiation is needed in many engineering, architectural and agricultural applications in order to integrate solar energy into the power grid. An evaluation of the irradiance input to solar power systems is required in many applications. Clear-sky models represent the maximum input of solar power systems, which is especially useful for forecasting solar irradiance and numerical weather prediction. This work examined the application of Yang model to estimate the monthly mean clear sky normal irradiance for northern Nigeria using meteorological variables like temperature, relative humidity and solar radiation considering the shading effect of the complex topography of terrain in Norther region of Nigeria, also to know the variation of beam radiation and diffuse radiation among the selected stations and also to ascertain the significance of aerosols, water vapor, and other transmittances in the estimation of the beam and diffuse radiation in the northern atmosphere. The modeling was computed using monthly mean maximum temperature and relative humidity gotten from the Nigeria Meteorological Agency (NIMET) for the period of fourteen years (1983-1997. The beam and diffuse irradiance for the northern atmosphere is compared by estimating their mean and standard deviation. Also, detailed information about the trend of radiation in each of the selected states in the northern hemisphere of Nigeria was obtained using a graphical method of data analysis. Result reveals that the value of beam and diffused radiation getting to the earth's surface depends on the aerosols, water vapour, atmospheric Ozone, gas transmittance and Rayleigh scattering. 
From the results above, the maximum beam radiation and the minimum diffuse radiation occur during the rainy season, while the minimum beam radiation and the maximum diffuse radiation occur during the dry season. This is due to the variations of these atmospheric constituents (aerosols, water vapour, atmospheric ozone, gas transmittance, and Rayleigh scattering) in the northern atmosphere during these seasons.
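The multiplicative transmittance structure described in this abstract (aerosols, water vapour, ozone, gas transmittance, Rayleigh scattering) can be sketched numerically; the transmittance values below are illustrative placeholders, not results or parameters from the study:

```python
# Sketch of a transmittance-product clear-sky beam irradiance model.
# Transmittance values here are illustrative, not values from the study.

SOLAR_CONSTANT = 1361.0  # W/m^2, extraterrestrial normal irradiance

def beam_normal_irradiance(tau_aerosol, tau_water, tau_ozone, tau_gas, tau_rayleigh):
    """Clear-sky beam normal irradiance as a product of broadband transmittances."""
    return SOLAR_CONSTANT * tau_aerosol * tau_water * tau_ozone * tau_gas * tau_rayleigh

# Example: a relatively clear atmosphere; lowering tau_aerosol (e.g. harmattan
# dust in the dry season) directly reduces the beam component.
dni = beam_normal_irradiance(0.85, 0.90, 0.98, 0.99, 0.90)
print(round(dni, 1))
```

The multiplicative form makes each constituent's attenuating role explicit: a drop in any single transmittance scales the beam irradiance down proportionally.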
Article
Open Access August 09, 2021

Optimization and Prediction of Biodiesel Yield from Moringa Seed Oil and Characterization

Abstract In this study, oil was extracted from Moringa seed using mechanical and solvent methods. To transesterify the oil into biodiesel, a 2⁴ factorial design of experiments was used to obtain different combinations of factors at different levels of reaction temperature, catalyst amount, reaction time, and alcohol-to-oil ratio, giving rise to 48 experimental runs. The oil sample was transesterified [...] Read more.
In this study, oil was extracted from Moringa seed using mechanical and solvent methods. To transesterify the oil into biodiesel, a 2⁴ factorial design of experiments was used to obtain different combinations of factors at different levels of reaction temperature, catalyst amount, reaction time, and alcohol-to-oil ratio, giving rise to 48 experimental runs. The oil sample was transesterified in 48 experimental runs, and in each case the biodiesel yield was recorded as a percentage. The biodiesel was then characterized according to ASTM test protocols. A factorial design model was developed using Design Expert 7.0; the model gave an R of 0.987 and a Mean Square Error (MSE) of 5.0453 and was used to predict and optimize biodiesel yield. An Artificial Neural Network (ANN) model was developed in MATLAB R2016a using 4 input variables and 30 runs; the remaining 18 runs were tested with the ANN model to compare the predicted biodiesel yield with the experimental biodiesel yield, and the model gave an R value of 0.99687 and an MSE of 3.50804. It was found that the solvent method yielded more oil than the mechanical method and that the biodiesel has good thermo-physical properties. An optimum biodiesel yield of 91.45% was obtained at a 5:1 alcohol-to-oil molar ratio, 18.89 wt% catalyst amount, 45 minutes reaction time, and a reaction temperature of 45 °C. Experimental validation yielded 88.33% biodiesel. The ANN model adequately predicted the remaining 18 runs with an R² value of 0.99649 and an MSE of 4.914243. Both models proved adequate for predicting biodiesel yield, but the ANN model proved more accurate.
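A 2⁴ full factorial design enumerates every low/high combination of the four factors; with three replicates this yields the 48 runs described above. A minimal sketch, with hypothetical factor levels that are not the study's actual settings:

```python
from itertools import product

# Hypothetical low/high levels for the four transesterification factors;
# these are illustrative placeholders, not the levels used in the study.
factors = {
    "temperature_C": (45, 60),
    "catalyst_wt_pct": (1.0, 2.0),
    "time_min": (45, 90),
    "alcohol_oil_ratio": (5, 7),
}

# 2^4 = 16 unique factor-level combinations.
design = list(product(*factors.values()))

# Three replicates of the full design give 48 experimental runs.
runs = design * 3

print(len(design), len(runs))
```

Each tuple in `design` is one treatment combination; replication lets the fitted model separate factor effects from experimental noise.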
Article
Open Access July 17, 2021

Nonlinear Whole Seismology, Topological Seismology, Magnitude-Period Formula of Earthquakes and Their Predictions

Abstract First, we propose nonlinear whole seismology and its three basic laws. Next, based on the nonlinear equations of fluid dynamics in Earth’s crust, we obtain a chaos equation in which chaos corresponds to earthquakes and which shows complexity in seismology. However, by combining the Carlson-Langer model and the Gutenberg-Richter relation, a simplified nonlinear solution and a corresponding [...] Read more.
First, we propose nonlinear whole seismology and its three basic laws. Next, based on the nonlinear equations of fluid dynamics in Earth’s crust, we obtain a chaos equation in which chaos corresponds to earthquakes and which shows complexity in seismology. However, by combining the Carlson-Langer model and the Gutenberg-Richter relation, a simplified nonlinear solution and a corresponding magnitude-period formula of earthquakes may be derived approximately. Further, we investigate topological seismology. From these theories some predictions can be calculated quantitatively, and some have already been tested. Combining the Lorenz nonlinear model, we discuss back-and-forth earthquake migration. Finally, if various modern scientific instruments, different scientific theories, and some paranormal methods for earthquakes are combined with each other, the accuracy of multilevel prediction will be increased.
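The Gutenberg-Richter relation invoked above links the expected number of earthquakes N at or above magnitude M via log₁₀ N = a − b·M. A quick numerical sketch, where a and b are illustrative regional constants rather than values fitted in the paper:

```python
# Gutenberg-Richter frequency-magnitude relation: log10(N) = a - b*M.
# The constants a and b below are illustrative, not fitted values from the paper.

def expected_count(magnitude, a=5.0, b=1.0):
    """Expected number of events with magnitude >= `magnitude`."""
    return 10 ** (a - b * magnitude)

# With b = 1, each unit increase in magnitude cuts the event count tenfold.
print(expected_count(4.0), expected_count(5.0))
```

This power-law scaling is the empirical anchor that the paper's simplified nonlinear solution is combined with.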
Article
Open Access December 27, 2020

Exploring AI Algorithms for Cancer Classification and Prediction Using Electronic Health Records

Abstract Uncontrolled cell division leads to cancer, an incurable condition. Early diagnosis has the potential to lower death rates from breast cancer, the most frequent cancer in women worldwide. Imaging studies of the breast may help doctors find and diagnose the disease. This study explores the effectiveness of DL and ML models in the classification of mammography images for breast cancer [...] Read more.
Uncontrolled cell division leads to cancer, an incurable condition. Early diagnosis has the potential to lower death rates from breast cancer, the most frequent cancer in women worldwide. Imaging studies of the breast may help doctors find and diagnose the disease. This study explores the effectiveness of deep learning (DL) and machine learning (ML) models in the classification of mammography images for breast cancer detection, utilizing the publicly available CBIS-DDSM dataset, which comprises 5,000 images evenly divided between benign and malignant cases. To improve diagnostic accuracy, models such as Gaussian Naïve Bayes (GNB), convolutional neural networks (CNNs), K-Nearest Neighbors (KNN), and MobileNetV2 were assessed using performance measures including F1-score, recall, accuracy, and precision. The methodology involved data preprocessing techniques, including transfer learning and feature extraction, followed by data splitting for robust model training and evaluation. Findings indicate that MobileNetV2 achieved the highest accuracy at 99.4%, significantly outperforming GNB (87.2%), CNN (96.7%), and KNN (91.2%). The investigation, which also made use of confusion matrices and ROC curves to evaluate model performance, demonstrated the outstanding capacity of MobileNetV2 to discriminate between benign and malignant instances.
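The metrics named above all derive directly from binary confusion-matrix counts. A minimal sketch with made-up counts, not the study's results:

```python
# Accuracy, precision, recall, and F1 from binary confusion-matrix counts.
# The counts below are made up for illustration, not results from the study.

def classification_metrics(tp, fp, fn, tn):
    """Return (accuracy, precision, recall, f1) for a binary classifier."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # of predicted positives, fraction correct
    recall = tp / (tp + fn)             # of actual positives, fraction found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = classification_metrics(tp=90, fp=10, fn=5, tn=95)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
```

In a screening setting like this one, recall (sensitivity to malignant cases) is usually weighted more heavily than raw accuracy, which is why F1 and ROC curves accompany the accuracy figures.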
Review Article
Open Access October 15, 2022

Big Data and AI/ML in Threat Detection: A New Era of Cybersecurity

Abstract The unrelenting proliferation of data, entwined with the prevalence of mobile devices, has given birth to an unprecedented growth of information obscured by noise. With the Internet of Things and myriad endpoint devices generating vast volumes of sensitive and critical data, organizations are tasked with extracting actionable intelligence from this deluge. Governments and enterprises alike, even [...] Read more.
The unrelenting proliferation of data, entwined with the prevalence of mobile devices, has given birth to an unprecedented growth of information obscured by noise. With the Internet of Things and myriad endpoint devices generating vast volumes of sensitive and critical data, organizations are tasked with extracting actionable intelligence from this deluge. Governments and enterprises alike, even under pressure from regulatory boards, have strived to harness the power of data and leverage it to enhance safety and security, maximize performance, and mitigate risks. However, adversaries themselves have capitalized on the unequal battle of big data and artificial intelligence to inflict widespread chaos. Therefore, the demand for big data analytics and AI/ML for high-fidelity intelligence, surveillance, and reconnaissance is at its highest. Today, in the cybersecurity realm, the detection of adverse incidents poses substantial challenges due to the sheer variety, volume, and velocity of deep packet inspection data. State-of-the-art detection techniques have fallen short of detecting the latest attacks after a big data breach incident. On the other hand, computational intelligence techniques such as machine learning have reignited the search for solutions to diverse monitoring problems. Recent advancements in AI/ML frameworks have the potential to analyze IoT/edge-generated big data in near real-time and assist in risk assessment and mitigation through automated threat detection and modeling in the big data and AI/ML domain. Industry best practices and case studies are examined to showcase how big data coupled with AI/ML unlocks new dimensions and capabilities in improved vigilance and monitoring, prediction of adverse incidents, intelligent modeling, and future uncertainty quantification by data resampling correction.
All of these avenues lead to enhanced robustness, security, safety, and performance of industrial processes, computing, and infrastructures. A view of the future is discussed, including how potential threats may arise from the misuse of new technologies spanning bandwidth, IoT/edge, blockchain, AI, quantum, and autonomous fields. Cybersecurity is again playing out at a pace set by adversaries with low entry barriers and debilitating tools. The need for innovative solutions for defense against the emerging threat landscape, harnessing the power of new technologies and collaboration, is emphasized.
Article
Open Access December 27, 2021

Leveraging AI in Urban Traffic Management: Addressing Congestion and Traffic Flow with Intelligent Systems

Abstract Traffic congestion across the globe is a multimodal problem, intertwining vehicular, pedestrian, and bicycle traffic. The relationship between the multimodal traffic flow is a key factor in understanding urban traffic dynamics. The impact of excessive congestion extends to the excessive cost spent on traffic maintenance, as well as the inherent transportation inefficiency and delayed travel times. [...] Read more.
Traffic congestion across the globe is a multimodal problem, intertwining vehicular, pedestrian, and bicycle traffic. The relationship between multimodal traffic flows is a key factor in understanding urban traffic dynamics. The impact of excessive congestion extends to the excessive cost of traffic maintenance, as well as inherent transportation inefficiency and delayed travel times. From an urban transportation standpoint, the immediate considerations are, on the one hand, monitoring traffic conditions and demand cycles and, on the other, inducing flow modifications that benefit the traffic network and mitigate congestion. The embedded and centralized control systems that characterize modern traffic management systems extract traffic conditions specific to their regions but lack communication between networks. Moreover, innovative methods are required to provide more accurate, up-to-date traffic forecasts that characterize real-world traffic dynamics and facilitate optimal traffic management decisions. In this chapter, we briefly outline the main difficulties and complexities in modeling, managing, and forecasting traffic dynamics. We also compare various conventional and modern Intelligent Transportation Strategies in terms of accuracy, applicability, and performance, as well as potential opportunities for optimizing multimodal traffic flow and reducing congestion. This chapter introduces various proposed data-driven models and tools employed for traffic flow prediction and management, investigating specific strategies' strengths, weaknesses, and benefits in addressing various real-world traffic management problems. We show that the design phase of dependable Intelligent Transportation Systems bears unique requirements in terms of the robustness, safety, and response times of their components and the encompassing system model.
Furthermore, this architectural blueprint shares similarities with distributed coordinated search and collective adaptive systems. Town-size-independent models induce systemic performance improvements through reconfigurable embedded functionality. These AI techniques feature elaborate anytime planners that ensure near-optimal performance with unbiased behavior as model complexity varies. Sustainable models minimize congestion during peaks, flooding, and emergencies while adhering to area-specific regulations. Security-aware and fail-safe traffic management systems provide reasonable assurances of persistent operation under various environmental settings, accommodating metropolises and complex traffic junctions. The chapter concludes by outlining challenges, research questions, and future research paths in the field of transportation management.