Open Access February 06, 2026

Predictive Modeling of Public Sentiment Using Social Media Data and Natural Language Processing Techniques

Social media platforms like X (formerly Twitter) generate vast volumes of user-generated content that provide real-time insights into public sentiment. Despite the widespread use of traditional machine learning methods, their limitations in capturing contextual nuances in noisy social media text remain a challenge. This study leverages the Sentiment140 dataset, comprising 1.6 million labeled tweets, and develops predictive models for binary sentiment classification using Naive Bayes, Logistic Regression, and the transformer-based BERT model. Experiments were conducted on a balanced subset of 12,000 tweets after comprehensive NLP preprocessing. Evaluation using accuracy, F1-score, and confusion matrices revealed that BERT significantly outperforms traditional models, achieving an accuracy of 89.5% and an F1-score of 0.89 by effectively modeling contextual and semantic nuances. In contrast, Naive Bayes and Logistic Regression demonstrated reasonable but consistently lower performance. To support practical deployment, we introduce SentiFeel, an interactive tool enabling real-time sentiment analysis. While resource constraints limited the dataset size and training epochs, future work will explore full corpus utilization and the inclusion of neutral sentiment classes. These findings underscore the potential of transformer models for enhanced public opinion monitoring, marketing analytics, and policy forecasting.
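As a concrete illustration of the traditional baselines the abstract compares against BERT, here is a minimal scikit-learn sketch of TF-IDF Naive Bayes and Logistic Regression classifiers; the file name, column names, and preprocessing are illustrative assumptions, not the paper's exact pipeline.

```python
# Hedged sketch: Naive Bayes and Logistic Regression baselines for binary
# sentiment classification, in the spirit of the paper's traditional models.
# Assumes a CSV with 'text' and 'label' (0 = negative, 1 = positive) columns;
# the cleaning step is a simplification of the paper's NLP preprocessing.
import re
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, f1_score

def clean_tweet(text: str) -> str:
    """Lowercase and strip URLs, @mentions, and non-alphabetic characters."""
    text = text.lower()
    text = re.sub(r"http\S+|@\w+", " ", text)
    return re.sub(r"[^a-z\s]", " ", text)

df = pd.read_csv("sentiment140_subset.csv")  # hypothetical balanced subset
X_train, X_test, y_train, y_test = train_test_split(
    df["text"].map(clean_tweet), df["label"], test_size=0.2, random_state=42
)

vectorizer = TfidfVectorizer(max_features=20000, ngram_range=(1, 2))
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

for model in (MultinomialNB(), LogisticRegression(max_iter=1000)):
    model.fit(X_train_vec, y_train)
    pred = model.predict(X_test_vec)
    print(type(model).__name__,
          f"accuracy={accuracy_score(y_test, pred):.3f}",
          f"F1={f1_score(y_test, pred):.3f}")
```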
Article
Open Access January 10, 2025

Artificial Immune Systems: A Bio-Inspired Paradigm for Computational Intelligence

Artificial Immune Systems (AIS) are bio-inspired computational frameworks that emulate the adaptive mechanisms of the human immune system, such as self/non-self discrimination, clonal selection, and immune memory. These systems have demonstrated significant potential in addressing complex challenges across optimization, anomaly detection, and adaptive system control. This paper provides a comprehensive exploration of AIS applications in domains such as cybersecurity, resource allocation, and autonomous systems, highlighting the growing importance of hybrid AIS models. Recent advancements, including integrations with machine learning, quantum computing, and bioinformatics, are discussed as solutions to scalability, high-dimensional data processing, and efficiency challenges. Core algorithms, such as the Negative Selection Algorithm (NSA) and Clonal Selection Algorithm (CSA), are examined, along with limitations in interpretability and compatibility with emerging AI paradigms. The paper concludes by proposing future research directions, emphasizing scalable hybrid frameworks, quantum-inspired approaches, and real-time adaptive systems, underscoring AIS's transformative potential across diverse computational fields.
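To make the core mechanism concrete, below is a hedged toy sketch of the Negative Selection Algorithm mentioned in the abstract: randomly generated detectors that match "self" samples are censored, and the survivors flag non-self (anomalous) inputs. The real-valued representation, matching radius, and dimensions are illustrative choices, not taken from any specific AIS implementation.

```python
# Hedged toy sketch of the Negative Selection Algorithm (NSA).
import numpy as np

rng = np.random.default_rng(0)

def generate_detectors(self_set, n_detectors, radius, dim):
    detectors = []
    while len(detectors) < n_detectors:
        candidate = rng.uniform(0.0, 1.0, dim)
        # Censoring step: discard candidates that match (lie near) self samples.
        if np.min(np.linalg.norm(self_set - candidate, axis=1)) > radius:
            detectors.append(candidate)
    return np.array(detectors)

def is_anomalous(x, detectors, radius):
    # A point is flagged when any mature detector matches it.
    return bool(np.any(np.linalg.norm(detectors - x, axis=1) <= radius))

self_set = rng.normal(0.5, 0.05, size=(200, 2))   # "normal" behaviour samples
detectors = generate_detectors(self_set, 50, 0.1, 2)

print(is_anomalous(np.array([0.5, 0.5]), detectors, 0.1))  # likely False (self)
print(is_anomalous(np.array([0.9, 0.1]), detectors, 0.1))  # likely True (non-self)
```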
Article
Open Access February 17, 2024

An Overview of Short- and Long-Term Adverse Outcomes and Complications of Perinatal Depression on Mother and Offspring

According to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), an antenatal or postpartum major depressive episode (MDE) is defined as daily, sustained sad mood or loss of interest and pleasure for a minimum of two weeks, plus four associated manifestations (only three if both major symptoms are present), with onset during pregnancy or within the first 4 weeks postpartum, respectively: 1) unintentional, notable weight loss or gain; 2) hypersomnia or insomnia; 3) fatigue; 4) feelings of guilt or worthlessness; 5) diminished ability to concentrate; 6) recurrent suicidal thoughts; 7) psychomotor agitation or retardation. Perinatal depression carries serious adverse consequences for the mother's psychosocial life, her pregnancy and delivery outcomes, and her relationships, particularly with the newborn, whose overall health is poorer; it negatively affects the offspring from intrauterine life, through a complicated delivery and a hard, unstable childhood, into unhealthy adolescence and adulthood. These negative consequences demand close attention to prevention, screening, and prompt treatment of antenatal and postnatal depression to avert such disastrous effects.
Brief Review
Open Access June 28, 2025

Development of a Hemodialysis Data Collection and Clinical Information System and Establishment of an Intradialytic Blood Pressure/Pulse Rate Predictive Model

This research is a collaboration involving a university team, a partnering corporation, and a hemodialysis clinic: a cross-disciplinary research initiative in the field of Artificial Intelligence of Things (AIoT) within the medical informatics domain. The research has two objectives: (1) the development of an Internet of Things (IoT)-based information system customized for the hemodialysis machines at the clinic, including transmission bridges, a dedicated web/app for clinical personnel, and a backend server; the system has been deployed at the clinic and is now officially operational; (2) the use of de-identified, anonymous data (collected by the officially operational system) to train, evaluate, and compare Deep Learning-based intradialytic Blood Pressure (BP)/Pulse Rate (PR) predictive models, with subsequent suggestions provided. Both objectives were executed under the supervision of the Institutional Review Board (IRB) at Mackay Memorial Hospital in Taiwan. The system completed for objective one has introduced three significant services to the clinic: automated hemodialysis data collection, digitized data storage, and an information-rich human-machine interface with graphical data displays. These replace traditional paper-based clinical administrative operations, thereby enhancing healthcare efficiency. The graphical data presented through the web and app interfaces aids real-time, intuitive comprehension of patients' conditions during hemodialysis. Moreover, the data stored in the backend database is available for physicians to conduct relevant analyses, unearth insights into medical practices, and provide precise medical care for individual patients. The training and evaluation of the predictive models for objective two, along with related comparisons, analyses, and recommendations, suggest that in situations with limited computational resources and data, an Artificial Neural Network (ANN) model with six hidden layers, the SELU activation function, and a focus on artery-related features can be employed for hourly intradialytic BP/PR prediction tasks. We believe this contributes to the collaborating clinic and to relevant research communities.
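As a rough illustration of the recommended architecture (six hidden layers with the SELU activation), here is a minimal PyTorch sketch; the feature count, hidden width, and three-value output (systolic BP, diastolic BP, PR) are assumptions for demonstration, not the paper's exact configuration.

```python
# Hedged sketch of an ANN along the lines the paper recommends: six hidden
# layers with SELU activations for hourly intradialytic BP/PR regression.
# Input size and hidden widths are illustrative assumptions.
import torch
import torch.nn as nn

class BpPrPredictor(nn.Module):
    def __init__(self, n_features: int = 16, hidden: int = 64):
        super().__init__()
        layers, width = [], n_features
        for _ in range(6):                     # six hidden layers
            layers += [nn.Linear(width, hidden), nn.SELU()]
            width = hidden
        layers.append(nn.Linear(hidden, 3))    # systolic BP, diastolic BP, PR
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = BpPrPredictor()
x = torch.randn(8, 16)          # a batch of 8 hypothetical feature vectors
print(model(x).shape)           # torch.Size([8, 3])
```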
Article
Open Access March 22, 2025

Enhancing Scalability and Performance in Analytics Data Acquisition through Spark Parallelism

Data acquisition serves as a critical component of modern data architecture, with REST API integration emerging as one of the most common approaches for sourcing external data. This study evaluates the efficiency of various methodologies for collecting data via REST APIs and benchmarks their performance. It explores how leveraging the Spark distributed computing platform can optimize large-scale REST API calls, enabling enhanced scalability and improved processing speeds to meet the demands of high-volume data workflows.
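A minimal PySpark sketch of the pattern the study benchmarks, distributing REST API calls across executors, might look like the following; the endpoint URL, page count, and partition count are placeholders, not the paper's benchmark setup.

```python
# Hedged sketch of distributing REST API calls across Spark executors with
# mapPartitions, so each partition reuses one HTTP session.
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rest-acquisition").getOrCreate()

urls = [f"https://api.example.com/records?page={i}" for i in range(1000)]

def fetch_partition(url_iter):
    session = requests.Session()          # one session per partition
    for url in url_iter:
        resp = session.get(url, timeout=30)
        if resp.ok:
            yield resp.json()

# numSlices controls how widely the API calls fan out across the cluster.
results = spark.sparkContext.parallelize(urls, numSlices=64) \
                            .mapPartitions(fetch_partition) \
                            .collect()
print(len(results))
```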
Review Article
Open Access February 26, 2025

Innovations and Challenges in Pharmaceutical Supply Chain, Serialization and Regulatory Landscape

The pharmaceutical supply chain has become increasingly complex and vulnerable to various risks, including counterfeit drugs, diversion, and fraud. As these challenges threaten patient safety and the integrity of global healthcare systems, serialization has emerged as a pivotal innovation in pharmaceutical logistics and regulatory compliance. Serialization involves assigning unique identifiers to individual drug packages, enabling precise tracking and authentication at every stage of the supply chain. This process provides unprecedented transparency, enhances product security, and facilitates real-time monitoring of pharmaceutical products as they move from manufacturers to end consumers. Despite its potential to revolutionize pharmaceutical traceability, the integration of serialization technologies faces numerous obstacles. These include high implementation costs, regulatory inconsistencies across regions, and the technological challenges of managing vast amounts of data. Moreover, the complex, multi-tiered nature of the global supply chain introduces additional risks related to data integrity, cybersecurity, and interoperability between systems. As pharmaceutical companies seek to navigate these challenges, innovations in serialization technology—such as blockchain, artificial intelligence (AI), the Internet of Things (IoT), and radio frequency identification (RFID)—are providing promising solutions to enhance efficiency, reduce fraud, and increase visibility. This manuscript explores both the innovative advancements and the key challenges associated with the integration of serialization in the pharmaceutical supply chain. It delves into the evolving regulatory landscape, highlighting the need for global harmonization of serialization standards, and examines the impact of serialization on securing pharmaceutical distribution networks. Additionally, the paper emphasizes the importance of collaboration among manufacturers, technology providers, and regulatory bodies in overcoming implementation barriers and realizing the full potential of serialization. As the pharmaceutical industry moves towards a more interconnected and data-driven future, serialization promises to play a central role in shaping the next generation of drug safety and supply chain management. By addressing the hurdles to adoption and leveraging emerging technologies, the pharmaceutical sector can create a more secure, transparent, and efficient supply chain that better serves public health and fosters greater trust among consumers and healthcare professionals alike.
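To ground the idea of serialization in code, here is a hedged toy sketch in Python: each package receives a unique GS1-style identifier that a registry can later authenticate. The element-string format and the in-memory registry are deliberate simplifications of the real track-and-trace repositories the manuscript discusses.

```python
# Hedged sketch of package-level serialization: each unit gets a unique
# identifier (a GS1-style GTIN plus a random serial) that a registry can
# later authenticate. The registry is an in-memory stand-in only.
import secrets

REGISTRY: set[str] = set()

def serialize_package(gtin: str) -> str:
    """Assign a unique element string of the form (01)<GTIN>(21)<serial>."""
    while True:
        serial = secrets.token_hex(8)           # 16 hex chars, effectively unique
        identifier = f"(01){gtin}(21){serial}"
        if identifier not in REGISTRY:          # guard against collisions
            REGISTRY.add(identifier)
            return identifier

def authenticate(identifier: str) -> bool:
    """A package is treated as genuine only if the registry issued its identifier."""
    return identifier in REGISTRY

pkg = serialize_package("09506000134352")       # hypothetical GTIN
print(pkg, authenticate(pkg))                   # issued -> True
print(authenticate("(01)09506000134352(21)deadbeef"))  # unknown -> False
```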
Review Article
Open Access February 09, 2025

The Future of Longevity Medicine from the Lens of Digital Therapeutics

Digital therapeutics (DTx) are emerging as a pivotal tool in promoting longevity by addressing non-communicable diseases (NCDs) such as diabetes, cardiovascular diseases, and mental health disorders. These software-driven interventions offer personalized, evidence-based treatments that can be accessed via digital devices, making healthcare more accessible and scalable. One of the key advancements in DTx is the integration of artificial intelligence (AI) and machine learning (ML) to tailor interventions based on individual health data. This personalization enhances the effectiveness of treatments and supports preventive care by identifying risk factors early. The need for digital therapeutics is underscored by the rising prevalence of NCDs, which are responsible for a significant portion of global mortality and healthcare costs. Traditional healthcare systems often struggle to provide timely and personalized care, especially in low-resource settings. DTx can bridge this gap by offering cost-effective solutions that are easily scalable. Moreover, digital therapeutics can address health inequities by providing low-cost interventions to underserved populations, thereby reducing the burden of NCDs and improving overall health outcomes. As technology continues to evolve, the potential for DTx to enhance longevity and quality of life becomes increasingly promising. Recent advancements in longevity medicine and technology have focused on extending both lifespan and healthspan, ensuring that people not only live longer but also maintain good health throughout their extended years. This review article highlights these advancements that are contributing to this compelling subject of Longevity.
Review Article
Open Access January 22, 2025

Tech Transformations: Modern Solutions for Obstructive Sleep Apnea

Recent advancements in the screening, diagnosis, and management of obstructive sleep apnea (OSA) have significantly improved patient outcomes. For screening, the use of home sleep apnea testing (HSAT) has become more prevalent, offering a convenient and cost-effective alternative to traditional in-lab polysomnography. HSAT devices have shown good specificity and sensitivity, particularly in patients with a high pre-test probability of OSA. In terms of diagnosis, advancements in wearable technology and mobile health applications have enabled continuous monitoring of sleep patterns and respiratory parameters. These tools provide valuable data that can be used to identify OSA more accurately and promptly. Additionally, machine learning algorithms are being integrated into diagnostic processes to enhance the accuracy of OSA detection by analyzing large datasets and identifying patterns indicative of the condition. Management of OSA has also seen significant progress. Continuous positive airway pressure (CPAP) therapy remains the gold standard, but new developments include auto-adjusting CPAP devices that optimize pressure settings based on real-time feedback. Mandibular advancement devices and hypoglossal nerve stimulation are emerging as effective alternatives for patients who are CPAP-intolerant. Furthermore, lifestyle interventions such as weight management, positional therapy, and exercise have been shown to complement medical treatments, leading to better overall outcomes. This review article highlights these advancements that collectively contribute to improved patient adherence, reduced symptoms, and enhanced quality of life for individuals with OSA.
Review Article
Open Access November 16, 2024

Digital Therapeutics: A New Dimension to Diabetes Mellitus Management

Digital therapeutics (DTx) play a transformative role in diabetes management by leveraging technology to provide personalized, data-driven medical interventions. These tools enhance self-management by offering continuous monitoring and real-time feedback on glucose levels, diet, and physical activity. This personalized approach helps patients adhere to treatment plans and make informed lifestyle changes, leading to improved clinical outcomes such as reduced HbA1c levels and better overall diabetes control. The importance of DTx lies in their ability to make diabetes care more accessible and convenient. Mobile apps and telemedicine platforms enable patients to receive support and guidance from anywhere, reducing the need for frequent in-person visits. Additionally, DTx often include behavioral support features like reminders, educational content, and motivational tools, which are crucial for maintaining healthy habits and managing stress. Currently, the dynamics of DTx in diabetes are rapidly evolving, with increasing integration of artificial intelligence and machine learning to further personalize and optimize care. As the adoption of these technologies grows, they hold the potential to significantly improve patient outcomes and revolutionize diabetes management on a global scale. This article will focus on the benefits of novel digital therapeutics for prevention and management of type II diabetes that are currently available in the market.
Article
Open Access July 10, 2024

Achieving Maintainability, Readability & Understandability of Software Projects using Code Smell Prediction

Maintenance of large-scale software is difficult due to the large size and high complexity of code: roughly 80% of software development effort goes to maintenance, and about 60% of that effort is spent trying to understand the code. The severity of code smells must be measured, along with the fairness of those measurements, because this helps developers, especially in large-scale source code projects. A code smell is not a bug in the system, as it does not prevent the program from functioning, but it may increase the risk of software failure or performance slowdown. Therefore, this paper seeks to help developers by predicting the severity of code smells early and testing the level of fairness of those predictions, especially in large-scale source code projects. Data is the collection of facts and observations about events; it is continuously growing, getting denser and more varied by the minute across different disciplines and fields. Hence, Big Data emerged and is evolving rapidly. The various types of data being processed are huge, but little thought has been given to where this data resides: much of it resides in software, and the codebases of that software are growing ever larger, in module size, functionality, class size, and so on. Since data is growing so rapidly, the codebases of software are growing as well. Therefore, this paper also discusses the 5V's of big data in the context of software code and how to optimize and manage big code. When we talk of "Big Code for Big Software," we are referring to the specific challenges and considerations involved in developing, managing, and maintaining code in large-scale software systems.
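A minimal sketch of what early code smell severity prediction can look like in practice is shown below, framed as supervised classification over static code metrics with scikit-learn; the metric names, dataset file, and severity labels are illustrative assumptions, and the paper's fairness testing is not reproduced here.

```python
# Hedged sketch of code smell severity prediction as supervised classification
# over static code metrics. Feature names (LOC, WMC, CBO, LCOM) and the
# severity labels are illustrative, not the paper's actual feature set.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical table: one row per class, metrics plus a severity label.
df = pd.read_csv("code_metrics.csv")            # placeholder dataset
X = df[["loc", "wmc", "cbo", "lcom"]]
y = df["severity"]                              # e.g., none/minor/major/critical

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```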
Technical Note
Open Access June 28, 2024

Nigeria Exchange Rate Volatility: A Comparative Study of Recurrent Neural Network LSTM and Exponential Generalized Autoregressive Conditional Heteroskedasticity Models

Business merchants and investors in Nigeria are interested in the accuracy of foreign exchange volatility forecasts because they need information on how volatile the exchange rate will be in the future. In this paper, we compared an Exponential Generalized Autoregressive Conditional Heteroskedasticity model of order p = 1 and q = 1 (EGARCH(1,1)) with a Recurrent Neural Network (RNN) based on long short-term memory (LSTM), configured with p = 10 and q = 1 layers, to model the volatility of Nigerian exchange rates. Our goal is to determine the preferred model for predicting the volatility of Nigeria's Naira exchange rate against the Euro, Pound Sterling, and US Dollar. The dataset of monthly exchange rates of the Nigerian Naira to the US Dollar, Euro, and Pound Sterling for the period December 2001 to August 2023 was extracted from the Central Bank of Nigeria Statistical Bulletin. Model efficiency and performance were measured with the Mean Squared Error (MSE) criterion. The results indicated that Nigeria's exchange rate volatility is asymmetric, and leverage effects are evident in the results of the EGARCH(1,1) model. It was also observed that the Naira exchange rate against the Euro, Pound Sterling, and US Dollar rose steadily from 2016 to its highest peak in 2023. The comparative analysis indicated that EGARCH(1,1) performed better than the LSTM model, providing smaller MSE values of 224.7, 231.3, and 138.5 for the Euro, Pound Sterling, and US Dollar, respectively.
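For reference, the standard Nelson (1991) EGARCH(1,1) specification (presumably the form estimated here, though the abstract does not spell it out) models the log of the conditional variance of the return innovations as:

```latex
\ln\sigma_t^2 = \omega + \beta \ln\sigma_{t-1}^2
  + \alpha \left( \left| \frac{\varepsilon_{t-1}}{\sigma_{t-1}} \right| - \sqrt{\tfrac{2}{\pi}} \right)
  + \gamma \, \frac{\varepsilon_{t-1}}{\sigma_{t-1}}
```

The asymmetry coefficient γ is what lets the model capture the leverage effects the abstract reports: a nonzero γ means negative and positive shocks of equal size move volatility by different amounts, and modeling the log variance removes the need for positivity constraints on the parameters (the √(2/π) centering assumes Gaussian innovations).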
Article
Open Access November 15, 2023

Predictive Failure Analytics in Critical Automotive Applications: Enhancing Reliability and Safety through Advanced AI Techniques

Failure prediction can be achieved through prognostics, which provides timely warnings before failure. Failure prediction is crucial in an effective prognostic system, allowing preventive maintenance actions to avoid downtime. The prognostics problem involves estimating the remaining useful life (RUL) of a system or component at any given time. The RUL is defined as the time from the current time to the time of failure. The goal is to make accurate predictions close to the failure time to provide early warnings. J. S. Grewal and J. Grewal provide a comprehensive definition of RUL in their paper "The Kalman Filter approach to RUL estimation." A process is a quadruple (X, U, f, P), where X is the state space, U is the control space, P is the set of possible paths, and f represents the transition between states. The process involves applying control values to change the system's state over time.
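In symbols, writing T_F for the failure time, the definition quoted above is simply:

```latex
\mathrm{RUL}(t) = T_F - t, \qquad 0 \le t \le T_F
```

Since T_F is not known in advance, a prognostic system in practice estimates something like RUL(t) = E[T_F | data up to t] − t; this conditional-expectation framing is a standard formulation, not one taken verbatim from the cited paper.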
Article
Open Access April 11, 2024

5V’s of Big Data Shifted to Suite the Context of Software Code: Big Code for Big Software Projects

Data is the collection of facts and observations about events; it is continuously growing, getting denser and more varied by the minute across different disciplines and fields. Hence, Big Data emerged and is evolving rapidly. The various types of data being processed are huge, but little thought has been given to where this data resides: much of it resides in software, and the codebases of that software are growing ever larger, in module size, functionality, class size, and so on. Since data is growing so rapidly, the codebases of software are growing as well. Therefore, this paper seeks to discuss the 5V's of big data in the context of software code and how to optimize and manage big code. When we talk of "Big Code for Big Software," we are referring to the specific challenges and considerations involved in developing, managing, and maintaining code in large-scale software systems.
Article
Open Access March 06, 2024

The Advantages of Cloud ERP in the Global Business Landscape

Among the most significant systems that organizations of all stripes, whether public or private, use is the Enterprise Resource Planning (ERP) system. Due in large part to the rapid growth of Internet services and the growing reliance on the infrastructure of Cloud service providers, ERP design has advanced, and numerous types of Internet-service-dependent ERP systems have emerged. In addition to the traditional ERP system, the most significant ERP types are Web-based ERP and Cloud ERP. As a result, ERP system vendors and designers, including Oracle and SAP, are relying on cloud-based ERP system design and offering the ERP system as a service for a monthly or annual subscription, where the system is hosted externally and does not need to exist within the organization.
Review Article
Open Access March 06, 2024

Embedded Architecture of SAP S/4 HANA ERP Application

The SAP HANA platform is designed to handle transactionally consistent operational workloads while also supporting intricate business analytics operations. Technically speaking, the SAP HANA database is made up of several data processing engines that work together within a distributed query processing environment to provide the entire range of data processing capabilities. This includes graph and text processing for managing semi-structured and unstructured data within the same system, as well as classical relational data that supports both row- and column-oriented physical representations in a hybrid engine. SAP S/4HANA is the next-generation SAP Business Suite application designed specifically for the SAP HANA platform. The key features of SAP S/4HANA are an intuitive, contemporary user interface (SAP Fiori); planning and simulation options in many conventional transactions; simplification of business processes; significantly improved transaction efficiency; and faster analytics.
Review Article
Open Access February 19, 2024

The use of contemporary Enterprise Resource Planning (ERP) technologies for digital transformation

Our lives are becoming more and more digital, and this has an impact on how we work, study, communicate, and interact. Businesses are currently digitally altering their information systems, procedures, culture, and strategy. Existing businesses and economies are severely disrupted by the digital revolution. The Internet of Things, microservices, and mobile services are examples of IT systems with numerous, dispersed, and very small structures that are made possible by digitization. Utilizing the possibilities of cloud computing, mobile systems, big data and analytics, services computing, Internet of Things, collaborative networks, and decision support, numerous new business prospects have emerged throughout the years. The logical basis for robust and self-optimizing run-time environments for intelligent business services and adaptable distributed information systems with service-oriented enterprise architectures comes from biological metaphors of living, dynamic ecosystems. This has a significant effect on how digital services and products are designed from a value- and service-oriented perspective. The evolution of enterprise architectures and the shift from a closed-world modeling environment to a more flexible open-world composition establish the dynamic framework for highly distributed and adaptive systems, which are crucial for enabling the digital transformation. This study examines how enterprise architecture has changed over time, taking into account newly established, value-based relationships between digital business models, digital strategies, and enhanced enterprise architecture.
Review Article
Open Access February 18, 2024

An Appraisal of Challenges in Developing Information Literacy Skills in the Colleges of Education of Ghana

The purpose of this study was to examine the challenges faced by students of Colleges of Education (CoEs) in developing their Information Literacy (IL) skills. The study adopted the post-positivism paradigm, and a descriptive survey research design was used. The population for this study comprised all Level 200 students at Wiawso CoE, Enchi CoE, and Bia Lamplighter CoE in the Western North Region. Purposive, stratified, and convenience sampling techniques were used to select the colleges of education and the Level 200 students: the three (3) colleges of education were stratified and purposively selected, while 256 Level 200 students were stratified and conveniently sampled. The study employed questionnaires (open- and closed-ended questions) to collect data from the sampled students, focusing on the challenges they face in developing their IL skills. The quantitative data was captured, analysed, and presented using descriptive statistics such as percentages and frequency tables to address the objective of the study. To improve digital literacy and academic pursuits, it is recommended that college management improve access to desktop computers and the Internet in the library and computer centre. It is also recommended that the management and librarians of the Colleges of Education ensure that students have access to these devices at the library and can use them to develop their IL skills and manage their references more effectively.
Article
Open Access February 17, 2024

Universal Evaluation of SAP S/4 Hana ERP Cloud System

It is essential for every business, regardless of its traditional ERP systems, to acquire a universal advantage in the contemporary international market. When everything is considered, end users in these kinds of businesses have to deal with poorly designed interfaces and unusable technologies. Despite the claims of significant benefits from using S4 Hana cloud ERP software, its potential for maximum productivity is not fully realized. One of the causes of this reality is the underfunding of ergonomic measures and the newest technologies. Through the design of S4 Hana cloud ERP software applications, we demonstrate how important and highly recommended ergonomic research is for minimizing the financial and human costs that enterprises currently face.
Review Article
Open Access February 15, 2024

Stock Closing Price and Trend Prediction with LSTM-RNN

The stock market is very volatile and hard to predict accurately due to the uncertainties affecting stock prices. Nevertheless, accurate predictive models let investors and stock traders make informed decisions about buying, holding, or investing in stocks, and financial institutions can use such models to manage risk and optimize their customers' investment portfolios. In this paper, we use Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNNs) to predict the daily closing price of Amazon Inc. stock (ticker symbol: AMZN). We study the influence of various hyperparameters to see which factors affect the predictive power of the model. The root mean squared error (RMSE) on the training set was 2.51, with a mean absolute percentage error (MAPE) of 1.84%.
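A minimal Keras sketch of this kind of LSTM closing-price predictor is shown below; the 60-day window, single 50-unit LSTM layer, and synthetic price series are illustrative stand-ins for the paper's actual data and tuned hyperparameters.

```python
# Hedged sketch of an LSTM closing-price predictor of the kind the paper
# describes; hyperparameters are illustrative, and the prices are synthetic.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def make_windows(series, window=60):
    """Turn a 1-D price series into (samples, window, 1) inputs and next-day targets."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i : i + window])
        y.append(series[i + window])
    return np.array(X)[..., None], np.array(y)

prices = np.cumsum(np.random.randn(1000)) + 100.0   # synthetic stand-in for AMZN
X, y = make_windows(prices)

model = keras.Sequential([
    layers.Input(shape=(60, 1)),
    layers.LSTM(50),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

preds = model.predict(X, verbose=0).ravel()
rmse = float(np.sqrt(np.mean((preds - y) ** 2)))
mape = float(np.mean(np.abs((preds - y) / y)) * 100)
print(f"RMSE={rmse:.2f}  MAPE={mape:.2f}%")
```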