Article
Open Access February 06, 2026

Predictive Modeling of Public Sentiment Using Social Media Data and Natural Language Processing Techniques

Abstract
Social media platforms like X (formerly Twitter) generate vast volumes of user-generated content that provide real-time insights into public sentiment. Despite the widespread use of traditional machine learning methods, their limitations in capturing contextual nuances in noisy social media text remain a challenge. This study leverages the Sentiment140 dataset, comprising 1.6 million labeled tweets, and develops predictive models for binary sentiment classification using Naive Bayes, Logistic Regression, and the transformer-based BERT model. Experiments were conducted on a balanced subset of 12,000 tweets after comprehensive NLP preprocessing. Evaluation using accuracy, F1-score, and confusion matrices revealed that BERT significantly outperforms traditional models, achieving an accuracy of 89.5% and an F1-score of 0.89 by effectively modeling contextual and semantic nuances. In contrast, Naive Bayes and Logistic Regression demonstrated reasonable but consistently lower performance. To support practical deployment, we introduce SentiFeel, an interactive tool enabling real-time sentiment analysis. While resource constraints limited the dataset size and training epochs, future work will explore full corpus utilization and the inclusion of neutral sentiment classes. These findings underscore the potential of transformer models for enhanced public opinion monitoring, marketing analytics, and policy forecasting.
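As a concrete illustration of the traditional baseline the study compares against, here is a minimal sketch of a TF-IDF plus Logistic Regression classifier; the toy texts and all hyperparameters are illustrative assumptions, not the paper's setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Toy stand-ins for preprocessed Sentiment140 tweets (1 = positive, 0 = negative).
texts = ["I love this phone", "worst day ever", "great service", "so disappointing",
         "absolutely fantastic", "terrible support", "best purchase yet", "awful quality"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=42, stratify=labels)

vec = TfidfVectorizer(lowercase=True, ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X_train), y_train)

pred = clf.predict(vec.transform(X_test))
print("accuracy:", accuracy_score(y_test, pred), "F1:", f1_score(y_test, pred))
```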
Article
Open Access January 11, 2025

Exploring LiDAR Applications for Urban Feature Detection: Leveraging AI for Enhanced Feature Extraction from LiDAR Data

Abstract
The integration of LiDAR and Artificial Intelligence (AI) has revolutionized feature detection in urban environments. LiDAR systems, which utilize pulsed laser emissions and reflection measurements, produce detailed 3D maps of urban landscapes. When combined with AI, this data enables accurate identification of urban features such as buildings, green spaces, and infrastructure. This synergy is crucial for enhancing urban development, environmental monitoring, and advancing smart city governance. LiDAR, known for its high-resolution 3D data capture capabilities, paired with AI, particularly deep learning algorithms, facilitates advanced analysis and interpretation of urban areas. This combination supports precise mapping, real-time monitoring, and predictive modeling of urban growth and infrastructure. For instance, AI can process LiDAR data to identify patterns and anomalies, aiding in traffic management, environmental oversight, and infrastructure maintenance. These advancements not only improve urban living conditions but also contribute to sustainable development by optimizing resource use and reducing environmental impacts. Furthermore, AI-enhanced LiDAR is pivotal in advancing autonomous navigation and sophisticated spatial analysis, marking a significant step forward in urban management and evaluation. The reviewed paper highlights the geometric properties of LiDAR data, derived from spatial point positioning, and underscores the effectiveness of machine learning algorithms in object extraction from point clouds. The study also covers concepts related to LiDAR imaging, feature selection methods, and the identification of outliers in LiDAR point clouds. Findings demonstrate that AI algorithms, especially deep learning models, excel in analyzing high-resolution 3D LiDAR data for accurate urban feature identification and classification. These models leverage extensive datasets to detect patterns and anomalies, improving the detection of buildings, roads, vegetation, and other elements. Automating feature extraction with AI minimizes the need for manual analysis, thereby enhancing urban planning and management efficiency. Additionally, AI methods continually improve with more data, leading to increasingly precise feature detection. The results indicate that the pulse emitted by continuous wave LiDAR sensors changes when encountering obstacles, causing discrepancies in measured physical parameters.
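To make the point-cloud preprocessing the review discusses concrete, below is a hedged sketch of statistical outlier removal using k-nearest-neighbor distances; the synthetic points and thresholds are illustrative, not taken from the reviewed work.

```python
import numpy as np
from scipy.spatial import cKDTree

points = np.random.rand(1000, 3) * 100.0         # stand-in for real LiDAR returns
points = np.vstack([points, [[500, 500, 500]]])  # an obvious outlier

tree = cKDTree(points)
dists, _ = tree.query(points, k=9)       # distances to 8 nearest neighbors (col 0 is self)
mean_knn = dists[:, 1:].mean(axis=1)     # mean neighbor distance per point

mu, sigma = mean_knn.mean(), mean_knn.std()
inliers = points[mean_knn < mu + 2.0 * sigma]  # drop points far from their neighbors
print(f"kept {len(inliers)} of {len(points)} points")
```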
Article
Open Access January 10, 2025

Artificial Immune Systems: A Bio-Inspired Paradigm for Computational Intelligence

Abstract
Artificial Immune Systems (AIS) are bio-inspired computational frameworks that emulate the adaptive mechanisms of the human immune system, such as self/non-self discrimination, clonal selection, and immune memory. These systems have demonstrated significant potential in addressing complex challenges across optimization, anomaly detection, and adaptive system control. This paper provides a comprehensive exploration of AIS applications in domains such as cybersecurity, resource allocation, and autonomous systems, highlighting the growing importance of hybrid AIS models. Recent advancements, including integrations with machine learning, quantum computing, and bioinformatics, are discussed as solutions to scalability, high-dimensional data processing, and efficiency challenges. Core algorithms, such as the Negative Selection Algorithm (NSA) and Clonal Selection Algorithm (CSA), are examined, along with limitations in interpretability and compatibility with emerging AI paradigms. The paper concludes by proposing future research directions, emphasizing scalable hybrid frameworks, quantum-inspired approaches, and real-time adaptive systems, underscoring AIS's transformative potential across diverse computational fields.
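Since the abstract names the Negative Selection Algorithm, a minimal sketch of its core loop may help; the detector count, matching radius, and 2-D "self" set are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
self_set = rng.normal(0.5, 0.05, size=(200, 2))  # samples of normal ("self") behavior
SELF_RADIUS = 0.1                                # assumed matching radius

# Censoring phase: keep only random detectors that do NOT cover any self sample.
detectors = []
while len(detectors) < 50:
    d = rng.uniform(0, 1, size=2)
    if np.linalg.norm(self_set - d, axis=1).min() > SELF_RADIUS:
        detectors.append(d)
detectors = np.array(detectors)

def is_anomalous(x, radius=SELF_RADIUS):
    # Non-self if any detector lies within `radius` of the sample.
    return np.linalg.norm(detectors - x, axis=1).min() < radius

print(is_anomalous(np.array([0.5, 0.5])))  # near self -> likely False
print(is_anomalous(np.array([0.9, 0.1])))  # far from self -> likely True
```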
Article
Open Access September 13, 2023

A Comparative Study of Attention-Based Transformer Networks and Traditional Machine Learning Methods for Toxic Comments Classification

Abstract
With the rapid growth of online communication platforms, the identification and management of toxic comments have become crucial in maintaining a healthy online environment. Various machine learning approaches have been employed to tackle this problem, ranging from traditional models to more recent attention-based transformer networks. This paper aims to compare the performance of attention-based transformer networks with several traditional machine learning methods for toxic comments classification. We present an in-depth analysis and evaluation of these methods using a common benchmark dataset. The experimental results demonstrate the strengths and limitations of each approach, shedding light on the suitability and efficacy of attention-based transformers in this domain.
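As a hedged sketch of the transformer side of such a comparison, the Hugging Face transformers pipeline can apply a toxicity-finetuned model in a few lines; the checkpoint name below is an assumption, not necessarily the one used in the paper.

```python
from transformers import pipeline

# Assumed checkpoint: any toxicity-finetuned sequence-classification model slots in here.
clf = pipeline("text-classification", model="unitary/toxic-bert")
print(clf("You are a wonderful person."))
print(clf("You are an idiot."))
```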
Article
Open Access November 30, 2022

A Review of Application of LiDAR and Geospatial Modeling for Detection of Buildings Using Artificial Intelligence Approaches

Abstract
Today, the presentation of three-dimensional models of real-world features is very important and widely used, and it has attracted the attention of researchers in various fields, including surveying and spatial information systems, as well as those interested in the three-dimensional reconstruction of buildings. Buildings are the key part of the information in a three-dimensional city model, so extracting and modeling buildings from remote sensing data is an important step in building a digital model of a city. LiDAR technology, with its ability to map in one, two, and three dimensions, is a suitable solution for providing high-resolution, comprehensive images of buildings in an urban environment. In this review article, a comprehensive review of the methods used to identify buildings, from the past to the present, is presented, along with appropriate solutions for the future.
Review Article
Open Access November 29, 2022

The Application of Machine Learning in the Corona Era, With an Emphasis on Economic Concepts and Sustainable Development Goals

Abstract
The aim of this article is to examine the impacts of Coronavirus Disease 2019 (Covid-19) vaccines on economic conditions and sustainable development goals; in other words, to study economic conditions during Covid-19. We have studied the economic costs of the pandemic; the benefits in terms of gross domestic product (GDP), public finances, and employment; investment in vaccines around the world; vaccination progress; the overall economic impacts of vaccines; and the impacts of emerging markets (EM) on achieving the sustainable development goals (SDGs), including no poverty, good health and well-being, zero hunger, and reduced inequality. The importance of emerging economies in reducing the harmful effects of the pandemic has also been noted. As a case study, we present experimental results forecasting daily new death cases in Iran from February 2020 to August 2021 using an Artificial Neural Network (ANN) with the Beetle Antennae Search (BAS) algorithm, alongside econometric models and regression analysis. The findings show that Covid-19 has had devastating economic and health effects on the world, and that vaccines can be very helpful in eliminating these effects, especially in the long term. We observed that Covid-19 vaccines are distributed unequally between rich and poor countries, a gap that emerging markets can help narrow. The results show that the two approaches (artificial intelligence (AI) and econometric models) yield broadly similar results, but AI optimization can make the model and its predictions more robust. The main contribution of this article is that we survey the impacts of vaccination from a socio-economic viewpoint rather than simply reporting facts. We survey the impacts of vaccines on the sustainable development goals and the role of EM in achieving the SDGs. In addition to the theoretical framework, we also use quantitative and empirical results that have rarely been seen in other articles.
Article
Open Access April 10, 2025

Advancements in Pharmaceutical IT: Transforming the Industry with ERP Systems

Abstract
The pharmaceutical industry is undergoing a profound transformation driven by advancements in Information Technology (IT), with Enterprise Resource Planning (ERP) systems playing a pivotal role in reshaping operations. These systems offer integrated solutions that streamline key business processes, such as production, inventory management, supply chain optimization, regulatory compliance, and data integration, contributing significantly to operational efficiency and organizational agility. This paper explores the evolution and impact of ERP systems within the pharmaceutical sector, highlighting their contributions to overcoming the industry’s inherent challenges, including complex regulatory requirements, the need for accurate and real-time data, and the demand for supply chain resilience. The integration of cloud-based ERP solutions, the incorporation of emerging technologies like Artificial Intelligence (AI), Machine Learning (ML), and the Internet of Things (IoT), and enhanced data analytics capabilities have revolutionized pharmaceutical IT. These advancements not only reduce operational costs, improve forecasting accuracy, and enhance collaboration but also ensure compliance with stringent global regulations, such as Good Manufacturing Practices (GMP) and FDA guidelines. Moreover, ERP systems have been instrumental in managing the pharmaceutical supply chain, ensuring product traceability, and improving inventory control and order fulfillment processes. This manuscript examines how ERP systems enable pharmaceutical companies to maintain high standards of product quality, improve decision-making, and ensure the safety and efficacy of drugs through robust tracking and auditing mechanisms. A case study of a pharmaceutical company that implemented an ERP system demonstrates the tangible benefits, including increased operational efficiency, improved compliance rates, and enhanced customer satisfaction. However, despite the clear advantages, challenges such as customization complexities, data integration issues, and resistance to change remain. As the pharmaceutical industry continues to evolve, ERP systems will remain a cornerstone of digital transformation, facilitating smarter decision-making, better resource management, and enhanced collaboration across global operations. This paper also identifies future trends, including the potential of AI and blockchain technologies in further strengthening ERP systems and transforming the pharmaceutical landscape.
Review Article
Open Access February 09, 2025

The Future of Longevity Medicine from the Lens of Digital Therapeutics

Abstract
Digital therapeutics (DTx) are emerging as a pivotal tool in promoting longevity by addressing non-communicable diseases (NCDs) such as diabetes, cardiovascular diseases, and mental health disorders. These software-driven interventions offer personalized, evidence-based treatments that can be accessed via digital devices, making healthcare more accessible and scalable. One of the key advancements in DTx is the integration of artificial intelligence (AI) and machine learning (ML) to tailor interventions based on individual health data. This personalization enhances the effectiveness of treatments and supports preventive care by identifying risk factors early. The need for digital therapeutics is underscored by the rising prevalence of NCDs, which are responsible for a significant portion of global mortality and healthcare costs. Traditional healthcare systems often struggle to provide timely and personalized care, especially in low-resource settings. DTx can bridge this gap by offering cost-effective solutions that are easily scalable. Moreover, digital therapeutics can address health inequities by providing low-cost interventions to underserved populations, thereby reducing the burden of NCDs and improving overall health outcomes. As technology continues to evolve, the potential for DTx to enhance longevity and quality of life becomes increasingly promising. Recent advancements in longevity medicine and technology have focused on extending both lifespan and healthspan, ensuring that people not only live longer but also maintain good health throughout their extended years. This review article highlights these advancements that are contributing to this compelling subject of Longevity.
Review Article
Open Access January 22, 2025

Tech Transformations: Modern Solutions for Obstructive Sleep Apnea

Abstract
Recent advancements in the screening, diagnosis, and management of obstructive sleep apnea (OSA) have significantly improved patient outcomes. For screening, the use of home sleep apnea testing (HSAT) has become more prevalent, offering a convenient and cost-effective alternative to traditional in-lab polysomnography. HSAT devices have shown good specificity and sensitivity, particularly in patients with a high pre-test probability of OSA. In terms of diagnosis, advancements in wearable technology and mobile health applications have enabled continuous monitoring of sleep patterns and respiratory parameters. These tools provide valuable data that can be used to identify OSA more accurately and promptly. Additionally, machine learning algorithms are being integrated into diagnostic processes to enhance the accuracy of OSA detection by analyzing large datasets and identifying patterns indicative of the condition. Management of OSA has also seen significant progress. Continuous positive airway pressure (CPAP) therapy remains the gold standard, but new developments include auto-adjusting CPAP devices that optimize pressure settings based on real-time feedback. Mandibular advancement devices and hypoglossal nerve stimulation are emerging as effective alternatives for patients who are CPAP-intolerant. Furthermore, lifestyle interventions such as weight management, positional therapy, and exercise have been shown to complement medical treatments, leading to better overall outcomes. This review article highlights these advancements that collectively contribute to improved patient adherence, reduced symptoms, and enhanced quality of life for individuals with OSA.
Review Article
Open Access January 20, 2025

Deep Learning-Based Sentiment Analysis: Enhancing IMDb Review Classification with LSTM Models

Abstract
Sentiment analysis, a vital aspect of natural language processing, involves the application of machine learning models to discern the emotional tone conveyed in textual data. Typical use cases include businesses making informed decisions based on customer feedback, gauging employee sentiment to inform hiring and retention decisions, and classifying a text by topic (for example, whether it is about physics or chemistry), as is useful in search engines. The model leverages a sequential architecture, transforms words into dense vectors using an Embedding layer, and captures intricate sequential patterns with two Long Short-Term Memory (LSTM) layers. This model aims to effectively classify sentiments in text data using a 50-dimensional embedding and 20% dropout layers. The use of rectified linear unit (ReLU) activations enhances non-linearity, while the SoftMax activation in the output layer aligns with the multi-class nature of sentiment analysis. Both training and test accuracy were well over 80%.
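A minimal Keras reconstruction of the architecture as described (50-dimensional embedding, two LSTM layers, 20% dropout, a ReLU layer, and a softmax output) might look as follows; the vocabulary size, sequence length, and LSTM unit counts are not given in the abstract and are assumed here.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Embedding, LSTM, Dropout, Dense

VOCAB_SIZE, SEQ_LEN, NUM_CLASSES = 10_000, 200, 2   # assumed; not given in the abstract

model = Sequential([
    Input(shape=(SEQ_LEN,)),                  # integer-encoded review tokens
    Embedding(VOCAB_SIZE, 50),                # 50-dimensional word vectors
    LSTM(64, return_sequences=True),          # first LSTM layer feeds the second
    Dropout(0.2),                             # 20% dropout
    LSTM(32),
    Dropout(0.2),
    Dense(32, activation="relu"),             # ReLU non-linearity
    Dense(NUM_CLASSES, activation="softmax")  # softmax over sentiment classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```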
Article
Open Access January 09, 2025

Advances in the Synthesis and Optimization of Pharmaceutical APIs: Trends and Techniques

Abstract
The synthesis and optimization of Active Pharmaceutical Ingredients (APIs) is fundamental to pharmaceutical drug development, directly influencing drug efficacy, safety, and cost-effectiveness. Over recent years, significant advancements in synthetic methodologies and manufacturing technologies have transformed API production. This manuscript provides an overview of the latest innovations in API synthesis, focusing on key techniques such as green chemistry, continuous flow chemistry, biocatalysis, and automation. Green chemistry principles, including solvent substitution and catalytic reactions, have enhanced sustainability by reducing waste and energy consumption. Continuous flow chemistry offers improved reaction control, scalability, and safety, while biocatalysis provides an eco-friendly alternative for synthesizing complex and chiral APIs. Additionally, the integration of automation and advanced process control using machine learning and real-time monitoring has optimized production efficiency and consistency. The manuscript also discusses the challenges associated with regulatory compliance and quality assurance, highlighting the role of advanced analytical techniques such as HPLC, NMR, and mass spectrometry in ensuring API purity. Looking ahead, personalized medicine and smart manufacturing technologies, including blockchain for traceability, are expected to drive further innovation in API production. This review concludes by emphasizing the need for continued advancements in sustainability, efficiency, and scalability to meet the evolving demands of the pharmaceutical industry, ultimately enabling the development of safer, more effective, and environmentally responsible medicines.
Review Article
Open Access November 16, 2024

Digital Therapeutics: A New Dimension to Diabetes Mellitus Management

Abstract
Digital therapeutics (DTx) play a transformative role in diabetes management by leveraging technology to provide personalized, data-driven medical interventions. These tools enhance self-management by offering continuous monitoring and real-time feedback on glucose levels, diet, and physical activity. This personalized approach helps patients adhere to treatment plans and make informed lifestyle changes, leading to improved clinical outcomes such as reduced HbA1c levels and better overall diabetes control. The importance of DTx lies in their ability to make diabetes care more accessible and convenient. Mobile apps and telemedicine platforms enable patients to receive support and guidance from anywhere, reducing the need for frequent in-person visits. Additionally, DTx often include behavioral support features like reminders, educational content, and motivational tools, which are crucial for maintaining healthy habits and managing stress. Currently, the dynamics of DTx in diabetes are rapidly evolving, with increasing integration of artificial intelligence and machine learning to further personalize and optimize care. As the adoption of these technologies grows, they hold the potential to significantly improve patient outcomes and revolutionize diabetes management on a global scale. This article will focus on the benefits of novel digital therapeutics for prevention and management of type II diabetes that are currently available in the market.
Article
Open Access May 14, 2024

A Review of Reliability Techniques for the Evaluation of Programmable Logic Controllers

Abstract
Programmable logic controllers (PLCs) are essential parts of contemporary industrial automation systems, responsible for controlling and monitoring a variety of operations. PLC reliability is critical to maintaining the continuous and secure operation of industrial systems. A wide range of reliability strategies has been used to improve the reliability of PLCs, and this article methodically reviews them. After a thorough review of the literature, the evaluation classified PLC reliability techniques into Root Cause Analysis (RCA), Reliability Centered Maintenance (RCM), Hazard Analysis (HA), Reliability Block Diagram (RBD), Fault Tree Analysis (FTA), Physics of Failure (PoF), and FMEA/FMECA. Among the reviewed papers, RCA has been used the most to increase the reliability of PLCs, accounting for 20% of the publications, followed by HA, RCM, FMEA/FMECA, RBD, FTA, and PoF at 17%, 16%, 16%, 13%, 10%, and 8% of the articles reviewed, respectively. The paper also discusses new developments and trends in PLC reliability, such as the application of machine learning (ML) and artificial intelligence (AI) to fault detection and predictive maintenance.
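For readers unfamiliar with RBDs, one of the techniques classified above, the underlying arithmetic is simple; the sketch below computes system reliability for a hypothetical PLC with redundant I/O modules, with all component values assumed.

```python
import math

def series(reliabilities):
    # A series system works only if every block works.
    return math.prod(reliabilities)

def parallel(reliabilities):
    # A parallel (redundant) system fails only if every block fails.
    return 1 - math.prod(1 - r for r in reliabilities)

cpu, io_a, io_b, psu = 0.999, 0.98, 0.98, 0.995   # assumed PLC module reliabilities
plc = series([cpu, parallel([io_a, io_b]), psu])  # redundant I/O pair in series with CPU and PSU
print(f"system reliability: {plc:.5f}")
```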
Review Article
Open Access April 16, 2024

Revolutionizing Automotive Supply Chain: Enhancing Inventory Management with AI and Machine Learning

Abstract
Consumer behavior is evolving, demanding a wide range of products with fast shipping and reliable service. The automotive aftermarket industry, worth billions, requires efficient distribution systems to stay competitive. Manufacturers strive to balance growth with product and service excellence. Distributors and retailers face the challenge of maintaining competitive pricing while keeping inventory levels low. An adequate supply chain and accurate product data are crucial for product availability and reducing stock issues. This ultimately increases profits and customer satisfaction.
Article
Open Access November 15, 2023

Predictive Failure Analytics in Critical Automotive Applications: Enhancing Reliability and Safety through Advanced AI Techniques

Abstract
Failure prediction can be achieved through prognostics, which provides timely warnings before failure. Failure prediction is crucial in an effective prognostic system, allowing preventive maintenance actions to avoid downtime. The prognostics problem involves estimating the remaining useful life (RUL) of a system or component at any given time. The RUL is defined as the time from the current time to the time of failure. The goal is to make accurate predictions close to the failure time to provide early warnings. J S Grewal and J. Grewal provide a comprehensive definition of RUL in their paper "The Kalman Filter approach to RUL estimation." A process is a quadruple (X, U, f, P), where X is the state space, U is the control space, P is the set of possible paths, and f represents the transition between states. The process involves applying control values to change the system's state over time.
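A toy sketch of the RUL idea, extrapolating a fitted degradation trend to a failure threshold (a much simpler stand-in for the cited Kalman-filter approach); the signal, threshold, and units are illustrative.

```python
import numpy as np

t = np.arange(0, 50)                              # operating hours observed so far
health = 1.0 - 0.008 * t + np.random.default_rng(1).normal(0, 0.01, t.size)
FAILURE_THRESHOLD = 0.3                           # health level defined as failure

slope, intercept = np.polyfit(t, health, 1)       # fit a linear degradation trend
t_fail = (FAILURE_THRESHOLD - intercept) / slope  # time when the trend crosses the threshold
rul = t_fail - t[-1]                              # RUL = failure time - current time
print(f"estimated RUL: {rul:.1f} hours")
```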
Article
Open Access February 15, 2024

Stock Closing Price and Trend Prediction with LSTM-RNN

Abstract
The stock market is very volatile and hard to predict accurately due to the uncertainties affecting stock prices. Nevertheless, predictive models can help investors and stock traders make informed decisions about buying, holding, or selling stocks, and financial institutions can use such models to manage risk and optimize their customers' investment portfolios. In this paper, we use a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) to predict the daily closing price of Amazon Inc. stock (ticker symbol: AMZN). We study the influence of various hyperparameters in the model to see which factors affect its predictive power. The root mean squared error (RMSE) on the training set was 2.51, with a mean absolute percentage error (MAPE) of 1.84%.
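To make the setup concrete, here is a hedged sketch of the sliding-window framing and the two reported metrics (RMSE and MAPE), using synthetic prices and a naive last-value predictor in place of the trained LSTM; the window length is an illustrative hyperparameter.

```python
import numpy as np

prices = 100 + np.cumsum(np.random.default_rng(0).normal(0, 1, 300))  # synthetic closes
WINDOW = 60  # the past 60 closes predict the next close

X = np.lib.stride_tricks.sliding_window_view(prices[:-1], WINDOW)
y = prices[WINDOW:]

# Naive baseline standing in for the trained LSTM: predict the last seen close.
y_pred = X[:, -1]

rmse = np.sqrt(np.mean((y - y_pred) ** 2))
mape = np.mean(np.abs((y - y_pred) / y)) * 100
print(f"RMSE={rmse:.2f}  MAPE={mape:.2f}%")
```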
Article
Open Access December 03, 2023

Evolution of Enterprise Applications through Emerging Technologies

Abstract
The extensive globalization of services and rapid technological advancements driven by IT have heightened the competitiveness of organizations in introducing innovative products and services. Among the noteworthy innovations is enterprise resource planning (ERP). An integral field in computer science, known as artificial intelligence (AI), is undergoing a transformative integration into various industries. Grasping the concept of artificial intelligence and its application in diverse business applications is crucial, given its broad and intricate nature. The primary focus of this paper is to delve into the realm of artificial intelligence and its utilization within enterprise resource planning. The study not only explores artificial intelligence but also delves into related concepts such as machine learning, deep learning, and neural networks in greater detail. Drawing upon existing literature, this research examines various books and online resources discussing the intersection of artificial intelligence and ERP. The findings reveal that the impact of AI is evident as businesses attain heightened levels of analytical efficiency across different ERP domains, thanks to remarkable advancements in AI, machine learning, and deep learning. Artificial intelligence is extensively employed in numerous ERP areas, with a particular emphasis on customer support, predictive analysis, operational planning, and sales projections.
Review Article
Open Access June 21, 2022

Create a Book Recommendation System using Collaborative Filtering

Abstract
One of the most important applications of data science is the recommendation system. Every organization requires a good recommendation system to express a large range of items. This research focuses on the creation of a book recommender system using collaborative filtering.
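A minimal sketch of item-based collaborative filtering, the technique named here, on a toy user-book rating matrix; the ratings and book titles are illustrative.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# rows = users, columns = books; 0 means "not rated"
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 0, 0],
    [0, 0, 5, 4],
    [1, 0, 4, 5],
])
books = ["Dune", "Foundation", "Emma", "Persuasion"]

item_sim = cosine_similarity(ratings.T)  # book-to-book similarity
user = ratings[0]
scores = item_sim @ user                 # weight similarities by the user's ratings
scores[user > 0] = -np.inf               # don't re-recommend already-rated books
print("recommend:", books[int(np.argmax(scores))])
```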
Mini Review
Open Access May 06, 2022

Movie Recommendation System Modeling Using Machine Learning

Abstract
The task of recommending products to customers based on their interests is important in business, and it is possible to accomplish this with machine learning. To reduce human effort by proposing movies based on the user's interests efficiently and effectively, without wasting much time in pointless browsing, the movie recommendation system is designed to assist movie aficionados. This work focuses on developing a movie recommender system using a model that incorporates both cosine similarity and sentiment analysis. Cosine similarity is a standard measure used to determine how similar two items are to one another. An examination of the emotions expressed in a movie review can determine how positive or negative the review is and, consequently, the overall rating for a film. As a result, determining whether a review is favorable or adverse may be automated, because the machine learns by training on and evaluating the data. Comparing different systems based on content-based approaches will produce results that become increasingly explicit over time.
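The two ingredients named here can be sketched briefly: TF-IDF cosine similarity between movie descriptions, and a crude lexicon polarity score standing in for the trained sentiment model; all data and word lists are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

plots = {
    "Movie A": "a detective hunts a killer in a rainy city",
    "Movie B": "a detective solves crimes in a dark city",
    "Movie C": "a chef opens a restaurant in the countryside",
}
tfidf = TfidfVectorizer().fit_transform(plots.values())
sim = cosine_similarity(tfidf)
print("A vs B:", round(sim[0, 1], 2), "| A vs C:", round(sim[0, 2], 2))

# Toy polarity score; a real system would use a trained sentiment model.
POSITIVE, NEGATIVE = {"great", "excellent", "loved"}, {"boring", "bad", "awful"}

def review_polarity(review: str) -> int:
    words = set(review.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)  # >0 favorable, <0 adverse

print(review_polarity("loved it and great pacing"))  # positive review
```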
Article
Open Access April 28, 2022

Analysis of Network Modeling for Real-world Recommender Systems

Abstract
Nowadays, recommendation systems exist everywhere in the online world: people are presented with suggestions not just for physical products but also for songs, places, books, friends, movies, and many other needs. Most such systems are built on basic collaborative and hybrid filtering, where users are recommended items based on the preferences of similar people, applying machine intelligence strategies. In this research, the importance of network modeling in solving real-world recommendation problems is analyzed.
Article
Open Access December 27, 2021

A Comparative Study for Recommended Triage Accuracy of AI Based Triage System MayaMD with Indian HCPs

Abstract
Artificial intelligence (AI) based triage and diagnostic systems are increasingly being used in healthcare. Although these online tools can improve patient care, their reliability and accuracy remain variable. We hypothesized that an artificial intelligence (AI) powered triage and diagnostic system (MayaMD) would compare favorably with human doctors with respect to triage and diagnostic accuracy. We performed a prospective validation study of the accuracy and safety of an AI powered triage and diagnostic system. Identical cases were evaluated by the AI system and by individual Indian healthcare practitioners (HCPs) to compare accuracy and safety. The same cases were validated against the consensus of an expert panel of 3 doctors. These cases, in the form of clinical vignettes, were provided by an expert medical team. Overall, the study showed that the MayaMD AI based platform for virtual triage was able to recommend the most appropriate triage while ensuring patient safety. In fact, with the expert consensus used as the standard, the accuracy of triage recommendation by MayaMD was significantly better than that of individual HCPs (91.67% vs. 74%, p=0.04).
Article
Open Access December 27, 2021

Leveraging AI and ML for Enhanced Efficiency and Innovation in Manufacturing: A Comparative Analysis

Abstract
The manufacturing industry has embraced modern technologies such as big data, machine learning, and artificial intelligence. This paper examines AI and machine learning developments in the manufacturing industry, comparing current practices and data-driven projects. It aims to better understand these technologies and their potential benefits and challenges. The research identifies opportunities for innovative business solutions and explores industry practices and research results. The paper focuses on implementation rather than technical aspects, aiming to enhance knowledge in this area.
Review Article
Open Access August 20, 2022

Advancing Predictive Failure Analytics in Automotive Safety: AI-Driven Approaches for School Buses and Commercial Trucks

Abstract
The recent evidence on AI in automotive safety shows the potential to reduce crashes and improve efficiency. Studies used AI techniques like machine learning and predictive analytics models to develop predictive collision avoidance systems. The studies collected data from various sources, such as traffic collision data and shapefiles. They utilized deep learning neural networks and 3D visualization techniques to analyze the data. However, there needs to be more research on AI in school bus and commercial truck safety. This paper explores the importance of AI-driven predictive failure analytics in enhancing automotive safety for these vehicles. It will discuss challenges, required data, technologies involved in predictive failure analytics, and the potential benefits and implications for the future. The conclusion will summarize the findings and emphasize the significance of AI in improving driver safety. Overall, this paper contributes to the field of automotive safety and aims to attract more research in this area.
Review Article
Open Access August 29, 2022

From Deterministic to Data-Driven: AI and Machine Learning for Next-Generation Production Line Optimization

Abstract
The advancement of modern manufacturing is synonymous with the growth of automation. Automation replaces human operators, improves productivity and quality, and reduces costs. However, the initial financial cost and knowledge requirements can be barriers to embracing automation. Manufacturers are now seeking smart manufacturing, known as the fourth industrial revolution. Smart manufacturing goes beyond automation and utilizes IoT, AI, and big data for optimized production. In a smart factory, production can be linked and controlled innovatively, leading to increased performance, agility, and reduced costs.
Review Article
Open Access December 27, 2020

An Effective Predicting E-Commerce Sales & Management System Based on Machine Learning Methods

Abstract
Due to the influence of the Internet, the e-commerce sector has developed rapidly, and most online retailing businesses are seeking ways to predict demand for their products. Sales forecasting may help retailers develop a sales strategy that will enhance sales and attract more money and investment. The current research work puts forward a machine learning framework to forecast E-commerce sales for strategic management using a dataset of E-commerce transactions. With 70 percent of the data used for training and 30 percent for testing, three models were produced: Random Forest, Decision Tree, and XGBoost. To evaluate the models, performance measures including R-squared (R²) and Root Mean Squared Error (RMSE) were employed. The XGBoost model was the most accurate predictor of E-commerce sales, with an R² score of 96.3%. This demonstrates the superior capability of the XGBoost algorithm to forecast E-commerce monthly sales more accurately than the other models, and it can assist decision-makers in managing inventory and arriving at smart, quick decisions in this rapidly growing E-commerce market. The findings reiterate the importance of using advanced analytics to drive effectiveness and customer experience within the E-commerce sector.
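A hedged sketch of the reported setup, a 70/30 split with an XGBoost regressor scored by R² and RMSE; the features are synthetic stand-ins for the real transaction data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                    # e.g. price, promotion, seasonality features
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.5, 500)  # synthetic monthly sales

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
model = XGBRegressor(n_estimators=200, max_depth=4).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R2={r2_score(y_te, pred):.3f}  RMSE={np.sqrt(mean_squared_error(y_te, pred)):.3f}")
```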
Review Article
Open Access October 15, 2022

Big Data and AI/ML in Threat Detection: A New Era of Cybersecurity

Abstract
The unrelenting proliferation of data, entwined with the prevalence of mobile devices, has given birth to an unprecedented growth of information obscured by noise. With the Internet of Things and myriad endpoint devices generating vast volumes of sensitive and critical data, organizations are tasked with extracting actionable intelligence from this deluge. Governments and enterprises alike, even under pressure from regulatory boards, have strived to harness the power of data and leverage it to enhance safety and security, maximize performance, and mitigate risks. However, the adversaries themselves have capitalized on the unequal battle of big data and artificial intelligence to inflict widespread chaos. Therefore, the demand for big data analytics and AI/ML for high-fidelity intelligence, surveillance, and reconnaissance is at its highest. Today, in the cybersecurity realm, the detection of adverse incidents poses substantial challenges due to the sheer variety, volume, and velocity of deep packet inspection data. State-of-the-art detection techniques have fallen short of detecting the latest attacks after a big data breach incident. On the other hand, computational intelligence techniques such as machine learning have reignited the search for solutions to diverse monitoring problems. Recent advancements in AI/ML frameworks have the potential to analyze IoT/edge-generated big data in near real-time and assist risk assessment and mitigation through automated threat detection and modeling. Industry best practices and case studies are examined to showcase how big data coupled with AI/ML unlocks new dimensions and capabilities in improved vigilance and monitoring, prediction of adverse incidents, intelligent modeling, and future uncertainty quantification through data resampling correction. All of these avenues lead to enhanced robustness, security, safety, and performance of industrial processes, computing, and infrastructures. A view of the future is also offered, discussing the potential threats posed by the misuse of new technologies across bandwidth, IoT/edge, blockchain, AI, quantum, and autonomous fields. Cybersecurity is again playing out at a pace set by adversaries with low entry barriers and debilitating tools. The need for innovative solutions for defense against the emerging threat landscape, harnessing the power of new technologies and collaboration, is emphasized.
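One concrete instance of the ML-based monitoring pattern described here is unsupervised anomaly detection over network-flow features with an Isolation Forest; the feature layout and contamination rate below are illustrative assumptions, not from the article.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
normal = rng.normal(loc=[500, 60], scale=[50, 5], size=(1000, 2))  # bytes, duration
attacks = rng.normal(loc=[5000, 2], scale=[500, 1], size=(10, 2))  # bursty, short flows
flows = np.vstack([normal, attacks])

det = IsolationForest(contamination=0.01, random_state=0).fit(flows)
labels = det.predict(flows)  # -1 = anomalous flow, 1 = normal
print("flagged:", int((labels == -1).sum()), "of", len(flows))
```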
Article
Open Access October 29, 2022

Neural Networks for Enhancing Rail Safety and Security: Real-Time Monitoring and Incident Prediction

Abstract
The growth in demand for rail transportation systems within cities, together with high-speed and long-distance transportation running on a rail network, raises the issues of both rail safety and security. If an accident or an attack occurs, its consequences can be extremely severe. To mitigate the impact of these events, real-time monitoring of a rail system is required, and improvements in monitoring can be achieved using artificial intelligence algorithms such as neural networks. Neural networks have been used to achieve real-time incident identification in monitoring track quality (classifying the graphical outputs of an ultrasonic system inspecting the rails and track bed), to predict incidents on the rail infrastructure due to transmission channels becoming blocked, and to schedule preemptive and preventative maintenance. In terms of forecasting incidents and accidents on board trains, neural networks have been used to model passenger behavior and optimize responses during a train station evacuation. In tackling the incidents and accidents occurring in rail transport, we contribute two methodologies to detect anomalies in real-time and identify the level of security risk: one at the maintenance level, with personnel operating along the railways, and one on board passenger trains. These methodologies were evaluated on real-world datasets and shown to achieve high accuracy. The results generated from these case studies also reveal the potential for network-wide applications, which could enhance security and safety on railway networks by making it possible to better manage network disruptions and more rapidly identify security issues. The speed and coverage of the information generated through these methodologies have implications for using prediction for decision support and enhancing safety and security across the rail network.
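As a minimal illustration of real-time anomaly detection of the kind described (not the paper's actual methodology), here is a rolling z-score check over a streaming sensor value; the window size and threshold are assumed.

```python
from collections import deque
import math

class RollingZScore:
    def __init__(self, window=100, threshold=4.0):
        self.buf = deque(maxlen=window)
        self.threshold = threshold

    def update(self, x: float) -> bool:
        """Return True if `x` is anomalous relative to the recent window."""
        anomalous = False
        if len(self.buf) >= 10:  # wait for a minimal history before judging
            mu = sum(self.buf) / len(self.buf)
            var = sum((v - mu) ** 2 for v in self.buf) / len(self.buf)
            sd = math.sqrt(var) or 1e-9  # guard against a zero-variance window
            anomalous = abs(x - mu) / sd > self.threshold
        self.buf.append(x)
        return anomalous

mon = RollingZScore()
readings = [1.0] * 200 + [9.0]  # e.g. a track-side vibration level
print([i for i, r in enumerate(readings) if mon.update(r)])  # -> [200]
```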
Review Article
Open Access October 30, 2022

Towards Autonomous Analytics: The Evolution of Self-Service BI Platforms with Machine Learning Integration

Abstract
Self-service business intelligence (BI) platforms have become essential applications for exploring, analyzing, and visualizing business data in various domains. Here, we envisage that the business intelligence platform will perform automatic and autonomous data analytics with minimal to no user interaction. We aim to offer a data-driven, intelligent, and scalable infrastructure that amplifies the advantages of BI systems and discovers hidden and complex insights from very large business datasets, which a business analyst can miss during manual exploratory data analysis. Towards our future vision of autonomous analytics, we propose a collective machine learning model repository with an integration layer for user-defined analytical goals within the BI platform. The proposed architecture can effectively reduce the cognitive load on users for repetitive tasks, democratizing data science expertise across data workers and facilitating a less experienced user community to develop and use advanced machine learning and statistical algorithms.
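A small sketch of the proposed pattern: a repository that maps user-defined analytical goals to models, so the BI layer can run analytics without the user picking an algorithm. The goal names and models below are illustrative assumptions.

```python
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

class ModelRepository:
    def __init__(self):
        self._models = {}

    def register(self, goal: str, model):
        self._models[goal] = model

    def run(self, goal: str, X, y=None):
        model = self._models[goal]  # look up the model by analytical goal
        return model.fit(X, y).predict(X) if y is not None else model.fit_predict(X)

repo = ModelRepository()
repo.register("forecast_revenue", LinearRegression())
repo.register("segment_customers", KMeans(n_clusters=3, n_init=10))

X = [[1], [2], [3], [4], [5], [6]]
print(repo.run("forecast_revenue", X, y=[2, 4, 6, 8, 10, 12])[:3])
print(repo.run("segment_customers", X))
```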
Review Article
Open Access November 05, 2022

Application of Neural Networks in Optimizing Health Outcomes in Medicare Advantage and Supplement Plans

Abstract
The growing complexity and variability in healthcare delivery and costs within Medicare Advantage (MA) and Medicare Supplement (Medigap) plans present significant challenges for improving health outcomes and managing expenditures. Neural networks, a subset of artificial intelligence (AI), have shown considerable promise in optimizing healthcare processes, particularly in predictive modeling, personalized treatment recommendations, and risk stratification. This paper explores the application of neural networks in enhancing health outcomes within the context of Medicare Advantage and Supplement plans. We review how deep learning models can be leveraged to predict patient risk, optimize resource allocation, and identify at-risk populations for preventive interventions. Additionally, we discuss the potential for neural networks to improve claims processing, reduce fraud, and streamline administrative burdens. By integrating various data sources, including medical records, claims data, and demographic information, neural networks enable more accurate and efficient decision-making processes. Ultimately, this approach can lead to better patient care, reduced healthcare costs, and improved satisfaction for beneficiaries of these programs. The paper concludes by highlighting the current limitations, ethical considerations, and future directions for AI adoption in the Medicare Advantage and Supplement sectors.
Review Article
Open Access November 16, 2023

Zero Carbon Manufacturing in the Automotive Industry: Integrating Predictive Analytics to Achieve Sustainable Production

Abstract
This charge-ahead paper suggests that transitioning the automotive industry towards a zero-carbon ecosystem, from material to end-of-life, can be accomplished through disruptive zero-carbon manufacturing in the broad area of all-electric vehicle production technology. To accomplish zero carbon emission automotive manufacturing in the vehicle assembly domain, future paradigms must converge on decoupling carbon dioxide emissions from automobile manufacturing and use, through design, processing, and manufacturing conditions. The envisioned zero carbon emission vehicle manufacturing domain consists of two complementary components: (a) making more efficient use of energy and (b) reducing carbon in energy use. This paper presents the status of key scientific and technological advancements to bring the manufacturing model of today to a zero-carbon ecosystem for the entire automotive industry of tomorrow. This paper suggests the groundbreaking application of dynamic and distributed predictive scheduling algorithms and open sensing and visualization technology to meet the zero carbon emission vehicle manufacturing goals. Power-aware high-performance computing clusters have recently become a viable solution for sustainable production. Advances in scalable and self-adaptive monitoring, predictive analytics, timeline-based machine learning, and digital replicas of cyber-physical systems are also seen co-evolving in the zero carbon manufacturing future. These methods are inspired by initiatives to decouple gross domestic product growth from energy-related carbon dioxide emissions. Stakeholders could co-design and implement shared roadmaps to transition the automotive manufacturing sector with relevant societal and environmental benefits. The automated mobility sector offers an industry-leading example of transforming an automotive production facility to carbon neutrality status. The conclusions from this paper challenge automotive manufacturers to engage in industry offsetting and carbon tax programs to drive continuous improvement and circular vehicle flows via a multi-directional zero-carbon smart grid.
Review Article
Open Access December 27, 2020

Enhancing Pharmaceutical Supply Chain Efficiency with Deep Learning-Driven Insights

Abstract The growing complexity of the operating environment urges pharmaceutical innovation. This essay addresses the need for the integration of advanced technologies in the pharmaceutical supply chain. It justifies the value proposition and presents a concrete use case for the integration of deep learning insights to make data-driven decisions. The supply chain has always been a priority for the [...] Read more.
The growing complexity of the operating environment urges pharmaceutical innovation. This essay addresses the need for the integration of advanced technologies in the pharmaceutical supply chain. It justifies the value proposition and presents a concrete use case for the integration of deep learning insights to make data-driven decisions. The supply chain has always been a priority for the pharmaceutical industry; research and development reflects companies' increasing investment in big data strategies, with adoption of big data tools projected to grow at a substantial compound annual growth rate (CAGR). The work presented herein is preliminary and explorative in character, recovering and integrating evidence from partly overlooked practical experience and know-how. The practical relevance of the essay is directed toward practitioners in pharmaceutical production, supply chain management, logistics, and regulatory agencies. The literature has shown a long-term concern for enhanced performance in the pharmaceutical supply chain network. This essay demonstrates the application of deep learning-driven insights to reveal non-evident flow dependencies. The main aim is to present a comprehensive insight into deep learning-driven decision support. The supply chain is portrayed in a holistic manner, seeking end-to-end visibility. Implications for public policy are discussed, such as data equity: many countries are protecting their populations and economic growth by building resilience and efficiency to ensure the capacity to move goods across supply chains. The implementation strategy is covered. The combined reduction of variability and system noise (dampened through the inclusion of all stakeholders), together with improved efficiency and reliability (of stochastic flows and their understanding through deep learning and data), results in increased responsiveness of supply chains for pharmaceutical products. Future work involves the integration of external data, closing the loop between planning and its application in reality.
Review Article
Open Access December 27, 2021

Predictive Analytics and Deep Learning for Logistics Optimization in Supply Chain Management

Abstract Managing supply chains efficiently has become a major concern for organizations. One of the important factors to optimize in supply chain management is logistics. The advent of technology and the increase in data availability allow for the enhancement of the efficiency of logistics in a supply chain. This discussion focuses on the blending of analytics with innovation in logistics to improve the [...] Read more.
Managing supply chains efficiently has become a major concern for organizations. One of the important factors to optimize in supply chain management is logistics. The advent of technology and the increase in data availability allow for the enhancement of the efficiency of logistics in a supply chain. This discussion focuses on the blending of analytics with innovation in logistics to improve the operations of a supply chain. An approach is presented on how predictive analytics can be used to improve logistics operations. In order to analyze big data in logistics effectively, an artificial intelligence computational technique, specifically deep learning, is employed. Two case studies are illustrated to demonstrate the practical employability of the proposed technique. The discussion reveals the power and potential of predictive analytics in logistics to project future KPI values from current operational data; sheds light on the innovative technique of deep learning-based predictive analytics in logistics; and suggests incorporating techniques like deep learning with predictive analytics to develop accurate forecasting methods that optimize operations and prevent disruption in the supply chain. The network of supply chains has become more complex, necessitating the latest technological advancements. The sectors that have gained a fair amount of attention for the application of technology to optimize their operations are manufacturing, healthcare, aerospace, and the automotive industry. Comparatively little attention has been paid to the logistics sector, although many have described how analytics and artificial intelligence can be used there to achieve higher optimization. Currently, significant research has been done in optimizing logistics operations. Nevertheless, with the explosive volume of historical data being produced by the logistics operations of an organization, there is a great opportunity to learn valuable insights from the data accumulated over time for more long-term strategic planning. To develop the logistics operations in an organization, the use of historical data is essential to understand the trends in the operations. For example, regular maintenance planning and resource allocation based on trends are long-term activities that will not affect logistics operations immediately but can affect the business's strategic planning in the long run. A predictive analysis technique applied to historical logistics data can yield conclusions about future trends in logistics operations. Thus, the technique can be used to prevent the disruption of the supply chain.
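To make the KPI-projection idea concrete, here is a minimal sketch of forecasting a single logistics KPI from its own history; the file name, column names, and gradient-boosted model are illustrative assumptions rather than the abstract's own setup.

```python
# Minimal sketch: forecasting a logistics KPI from its own history.
# Assumes a hypothetical CSV with columns "month" and "on_time_delivery_rate";
# both names are illustrative, not from the paper.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv("logistics_kpis.csv", parse_dates=["month"]).sort_values("month")

# Build lagged features: the KPI over the previous three months.
for lag in (1, 2, 3):
    df[f"lag_{lag}"] = df["on_time_delivery_rate"].shift(lag)
df = df.dropna()

X, y = df[["lag_1", "lag_2", "lag_3"]], df["on_time_delivery_rate"]
split = int(len(df) * 0.8)  # chronological split; no shuffling for time series
model = GradientBoostingRegressor(random_state=0).fit(X[:split], y[:split])

# Project the KPI one step ahead from the most recent observations.
print("next-month forecast:", model.predict(X[split:][-1:])[0])
```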
Review Article
Open Access December 27, 2023

Leveraging Machine Learning Techniques for Predictive Analysis in Merger and Acquisition (M&A)

Abstract M&A is a strategic concept of business growth through consolidation, gaining market access, increasing strategic positions, and increasing operational efficiency. To understand the dynamics of M&A, this paper looks at aspects such as targeted firm identification, evaluation, bidding for the target firm, and post-acquisition integration. All forms of M&A, including horizontal, [...] Read more.
M&A is a strategic concept of business growth through consolidation, gaining market access, increasing strategic positions, and increasing operational efficiency. To understand the dynamics of M&A, this paper looks at aspects such as targeted firm identification, evaluation, bidding for the target firm, and post-acquisition integration. All forms of M&A, including horizontal, vertical, and conglomerate mergers as well as acquisitions, are discussed in terms of goals and values, including synergy, cost reduction, competitive advantages, and access to better technology. However, challenges such as cultural assimilation, regulatory adherence, and inaccurate valuation are also addressed. The paper then goes deeper to provide insight into how predictive analytics applies to M&A, using ML to improve decision-making with forecasting benefits. In industries including healthcare, education, and construction, the presented predictive models, built using regression analysis, neural networks, and ensemble techniques, help to support decision-making. Through time series and real-time data, predictive analytics enables sound M&A strategies, effective risk management, and smooth integration.
Review Article
Open Access December 27, 2023

Understanding the Fundamentals of Digital Transformation in Financial Services: Drivers and Strategic Insights

Abstract The financial services sector is undergoing considerable changes in its operations due to developments in technology and the embrace of digital platforms. This evolution is changing the established concepts of business, consumers, and service delivery channels. Financial services firms are changing the way they work through digital transformation due to developments in technology, [...] Read more.
The financial services sector is undergoing considerable changes in its operations due to developments in technology and the embrace of digital platforms. This evolution is changing the established concepts of business, consumers, and service delivery channels. Financial services firms are changing the way they work through digital transformation due to developments in technology, changes in customer needs, and an increased emphasis on sustainability. Understanding the opportunities, risks, and new trends in digital transformation is the focus of this paper. Opportunities include efficient real-time decision-making processes, increased transparency, and better process controls, which are balanced against the threats of change management, dubious organization-technology fit, and high implementation costs. The study also examines recent advancements, including the application of machine learning and artificial intelligence, developments in mobile and online banking, integration of blockchain, and an increasing focus on security and personalised banking. A literature review yields findings from different studies on rural financial services, the evolution of the blockchain, drivers of digital transformation, cloud-based learning approaches, and emerging sustainability practices. All of these results suggest the need for more strategic planning, stronger analytics, and greater focus on ensuring that organisational objectives are met through transformations. Hence, these research findings add to the existing literature in determining how innovative and digital technologies are likely to transform the financial services sector and advance sustainability.
Review Article
Open Access November 19, 2022

Analyzing Behavioral Trends in Credit Card Fraud Patterns: Leveraging Federated Learning and Privacy-Preserving Artificial Intelligence Frameworks

Abstract We investigate and analyze the trends and behaviors in credit card fraud attacks and transactions. First, we perform logical analysis to find hidden patterns and trends, then we leverage game-theoretical models to illustrate the potential strategies of both the attackers and defenders. Next, we demonstrate the strength of industry-scale, privacy-preserving artificial intelligence solutions by [...] Read more.
We investigate and analyze the trends and behaviors in credit card fraud attacks and transactions. First, we perform logical analysis to find hidden patterns and trends, then we leverage game-theoretical models to illustrate the potential strategies of both the attackers and defenders. Next, we demonstrate the strength of industry-scale, privacy-preserving artificial intelligence solutions by presenting the results from our recent exploratory study in this respect. Furthermore, we describe the intrinsic challenges in the context of developing reliable predictive models using more stringent protocols, and hence the need for sector-specific benchmark datasets, and provide potential solutions based on state-of-the-art privacy models. Finally, we conclude the paper by discussing future research lines on the topic, and also the possible real-life implications. The paper underscores the challenges in creating robust AI models for the banking sector. The results also showcase that privacy-preserving AI models can potentially augment sharing capabilities while mitigating liability issues of public-private sector partnerships [1].
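The privacy-preserving setup named in the title can be illustrated with a minimal federated-averaging (FedAvg) round, in which institutions share only model weights rather than raw transactions; the three synthetic "banks", the logistic-regression model, and all sizes below are hypothetical stand-ins, not the study's system.

```python
# Minimal federated averaging (FedAvg) sketch for fraud models.
# Each "bank" trains locally; only weight vectors are shared, never raw
# transactions, which is the privacy-preserving property at stake here.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few full-batch gradient steps of logistic regression on one institution's data."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted fraud probability
        w -= lr * X.T @ (p - y) / len(y)      # gradient of the log-loss
    return w

# Three hypothetical institutions with private (synthetic) transaction data.
clients = [(rng.normal(size=(200, 8)), rng.integers(0, 2, 200).astype(float))
           for _ in range(3)]

w_global = np.zeros(8)
for round_ in range(10):                      # communication rounds
    local_ws = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)      # server averages the updates

print("global weights after FedAvg:", np.round(w_global, 3))
```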
Review Article
Open Access December 27, 2019

Predictive Analytics in Biologics: Improving Production Outcomes Using Big Data

Abstract Biopharmaceuticals, or biologics, are a burgeoning sector in the pharmaceutical industry, predicted to reach $239.4 billion by 2025. This unparalleled growth is often attributed to the enhanced specificity offered by large molecules over small molecules. The large size of the constituent proteins necessitates the continuous implementation of big data predictive analytics to elucidate the most [...] Read more.
Biopharmaceuticals, or biologics, are a burgeoning sector in the pharmaceutical industry, predicted to reach $239.4 billion by 2025. This unparalleled growth is often attributed to the enhanced specificity offered by large molecules over small molecules. The large size of the constituent proteins necessitates the continuous implementation of big data predictive analytics to elucidate the most effective candidates in the lead optimization process. These same methodologies can be applied to the augmentation and optimization of the downstream production processes that comprise the majority of the development cost of any biologic, a task that is becoming increasingly facile with the advent of machine learning and automated predictive analytics. In this work, big data from cell line generation, product and process design, and large-scale lead validation studies have been used to compare the applicability of simple statistical models against these black-box approaches for the rapid acceleration of enzymes to the pilot plant stage. This research can be expanded upon to exploit the big datasets generated as part of the progression of biologics through the development pipeline to further optimize production outcomes. Over the coming months, data from the project will be used to probe which approaches are amenable to which processes and, as a result, which are more amenable to various economic simulations. The computed optimization objective for a hit candidate must include the cost of acquiring, storing, and analyzing data to construct these predictive models, alongside the expected commercial reward of choosing an optimally ranked candidate. In this vein, perspective must be maintained on the probable future price, capabilities, and ownership issues of increasingly sophisticated data analysis software as such superstructures become more frequent. It is frequently stated that decisions made to reduce production costs are data-driven, but such claims rarely rest on full economic or energetic accounting; to truly evaluate production steps, dynamic energy and economic models need to become more commonplace. Converting process quality approaches from large questionnaires, risk analyses, and expert opinion-driven methods to statistical, and thus more reliable, approaches is an area of future research for the analytics used herein.
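The comparison of simple statistical models against black-box approaches might look like the following sketch on synthetic descriptors; the features, target, and models are assumptions, since the project's cell-line and process datasets are not public.

```python
# Sketch: comparing a simple statistical model against a "black-box" learner
# for ranking biologic candidates. Data here is synthetic stand-ins for
# process/design descriptors and a titer-like target.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))                                       # descriptors
y = X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=300)   # target

for name, model in [("linear", LinearRegression()),
                    ("black-box", RandomForestRegressor(random_state=0))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:10s} mean cross-validated R^2: {r2:.3f}")
```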
Review Article
Open Access December 27, 2019

A Comprehensive Study of Proactive Cybersecurity Models in Cloud-Driven Retail Technology Architectures

Abstract This is a comprehensive, multi-year study designed to explore proactive security technologies implemented in cloud-driven retail technology architectures. Deploying cloud technologies in the retail environment creates a need for more comprehensive and proactive security technologies that protect both the psychological estate and fiscal estate. This work contributes to cloud-driven retail research [...] Read more.
This is a comprehensive, multi-year study designed to explore proactive security technologies implemented in cloud-driven retail technology architectures. Deploying cloud technologies in the retail environment creates a need for more comprehensive and proactive security technologies that protect both the psychological estate and fiscal estate. This work contributes to cloud-driven retail research by investigating anticipatory security technologies across numerous case studies. These case studies offer best practice models for elevating proactive cybersecurity in retail environments. The academic and professional communities currently lack security information and practices that apply to the retail environment. It is anticipated that the final results of this project will have value in shaping the next set of research in cybersecurity in retail environments. Many retail organizations are restricted to reactive security operations. Advanced security technologies operate on piloted activations that require the intervention of security analysts; in actuality, basic security products and security operations are now piloted by automation and machine learning. In one case study, a retail CTO shares a forensics example using a proactive security technology aimed at both the psychological estate and the fiscal estate. In another case study, direct discussions provide a retail university lecturer with insight into the use of data-driven intelligence for inventory management. Card technology is used as an example of a model that can be implemented as a security technology offered as a service to retail organizations.
Review Article
Open Access December 27, 2019

Data Engineering Frameworks for Optimizing Community Health Surveillance Systems

Abstract A Changing World Demands Optimized Health Surveillance Systems – and How Data Engineering Can Help. There is a growing urgency to manage public health and emergency response practices effectively today, in light of complex and emerging health threats. Fortunately, a host of new tools, including big and streaming data sources, methods such as machine learning, new types of infrastructure like [...] Read more.
A Changing World Demands Optimized Health Surveillance Systems – and How Data Engineering Can Help. There is a growing urgency to manage public health and emergency response practices effectively today, in light of complex and emerging health threats. Fortunately, a host of new tools, including big and streaming data sources, methods such as machine learning, new types of infrastructure like blockchain or secure enclaves, and new means of data storage and retrieval, have emerged. But with these innovations comes a grand challenge: how to blend them with, and adapt them to, traditional public health practices. The long-standing infrastructures and protocols that protect and ensure the welfare of communities are in need of change, or at least updating, to sustain their impact on the health outcomes and community wellbeing they were designed to fortify. It is in this vein that this essay is written. This essay investigates the characteristics and influences of the emerging abundance of new data engineering frameworks that are either being developed specifically for health surveillance and wellness or are available to be co-opted from devices and services already thriving in the current market and research milieu. Knowing what these frameworks may offer could well aid in molding their uptake and spread, ensuring their beneficial impacts on the communities who stand to gain the most. The essay is divided into several key segments. After this introduction, section two details the research methods. In the section that follows, the maximum health outcome potentials of these novel frameworks are reviewed. Part four of the essay takes a more critical approach, addressing how the success of these methods may be hindered and outlining future research avenues. Lastly, the conclusion suggests actions to best support the implementation of these frameworks and offers thoughts for further research following these inquiries [1].
Case Report
Open Access December 21, 2016

Advanced Natural Language Processing (NLP) Techniques for Text-Data Based Sentiment Analysis on Social Media

Abstract The field of sentiment analysis is a crucial aspect of natural language processing (NLP) and is essential in discovering the emotional undertones within text data and, hence, capturing public sentiments over a variety of issues. In this regard, this study suggests a deep learning technique for sentiment categorization on a Twitter dataset that is based on Long Short-Term Memory (LSTM) [...] Read more.
The field of sentiment analysis is a crucial aspect of natural language processing (NLP) and is essential in discovering the emotional undertones within text data and, hence, capturing public sentiments over a variety of issues. In this regard, this study suggests a deep learning technique for sentiment categorization on a Twitter dataset that is based on Long Short-Term Memory (LSTM) networks. Preprocessing is performed comprehensively, feature extraction is done through a bag-of-words method, and the data is split 80-20 into training and testing sets. The experimental findings demonstrate that the LSTM model outperforms conventional models, such as SVM and Naïve Bayes, with an F1-score of 99.46%, accuracy of 99.13%, precision of 99.45%, and recall of 99.25%. Additionally, AUC-ROC and PR curves validate the model's effectiveness. Although it performs well, the model consumes heavy computational resources and requires longer training time. In summary, the results show that deep learning performs well in sentiment analysis and can be applied to social media monitoring, customer feedback evaluation, market sentiment analysis, etc.
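A minimal sketch of an LSTM sentiment classifier with the 80-20 split is shown below; the vocabulary size, layer widths, and synthetic data are assumptions, and the token-sequence input used here is the conventional LSTM front end rather than the paper's bag-of-words features.

```python
# Minimal sketch of an LSTM sentiment classifier with an 80-20 split.
# Hyperparameters and the tokenized-sequence input are assumptions; the
# paper itself reports bag-of-words features, replaced here by the token
# sequences an LSTM conventionally consumes.
import numpy as np
from tensorflow.keras import layers, models
from sklearn.model_selection import train_test_split

vocab_size, seq_len = 10_000, 50
X = np.random.randint(1, vocab_size, size=(2_000, seq_len))  # stand-in token ids
y = np.random.randint(0, 2, size=2_000)                      # 0 = negative, 1 = positive

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = models.Sequential([
    layers.Embedding(vocab_size, 64),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),    # binary sentiment output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_tr, y_tr, epochs=2, batch_size=64, validation_data=(X_te, y_te))
```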
Review Article
Open Access December 27, 2021

Advancing Healthcare Innovation in 2021: Integrating AI, Digital Health Technologies, and Precision Medicine for Improved Patient Outcomes

Abstract Advances in wearables, sensors, smart devices, and electronic health records have generated patient-oriented longitudinal data sources that are analyzed with advanced analytical tools, creating enormous opportunities to understand patient health conditions and needs and transforming healthcare significantly from conventional paradigms to more patient-specific and preventive approaches. Artificial [...] Read more.
Advances in wearables, sensors, smart devices, and electronic health records have generated patient-oriented longitudinal data sources that are analyzed with advanced analytical tools, creating enormous opportunities to understand patient health conditions and needs and transforming healthcare significantly from conventional paradigms to more patient-specific and preventive approaches. Artificial intelligence (AI) with a machine learning methodology is prominently considered, as it is uniquely suitable to derive predictions and recommendations from complex patient datasets. Recent studies have shown that precise data aggregation methods play an important role in the precision and reliability of clinical outcome distribution models. There is an essential need to develop an effective and powerful multifunctional machine learning platform to enable healthcare professionals to comprehend challenging biomedical multifactorial datasets, to understand patient-specific scenarios, and to make better clinical decisions, potentially leading to optimal patient outcomes. There is a substantial drive to develop the networking and interoperability of clinical systems, the laboratory, and public health. These steps are delivered in concert with efforts to enable useful analytic tools and technologies for making sense of the eruption of patient information from various sources. However, the full potential of this technology can only be realized when ethical, legal, and social challenges related to the privacy of healthcare information are successfully addressed. The public and media must be informed about the capabilities and limitations of the technologies, and the still-young public debate on healthcare data privacy must be carefully balanced. While this debate is ongoing, measures have progressed to protect patient data from abuse and to realize the full potential of AI technology for the health system, with benefits for all stakeholders. Any protection program should be based on fairness, transparency, and a full commitment to data privacy. Ongoing innovative systems that use AI to manage clinical data and analyses are proposed. These tools can be used by healthcare providers, especially in defining specific scenarios related to biomedical data management and analysis. These platforms ensure that the significant and potentially predictive parameters associated with the diagnosis, treatment, and progression of disease have been recognized. With the systematic use of these solutions, this work can contribute to the realization of noticeable improvements in the provision of real-time, personalized, and efficient medicine at a reduced cost [1].
Case Report
Open Access December 27, 2021

Advancements in Smart Medical and Industrial Devices: Enhancing Efficiency and Connectivity with High-Speed Telecom Networks

Abstract Emerging smart medical instruments combined with advanced smart industrial equipment facilitate the collection of vast volumes of critical data. This data not only enables significantly more accurate and cost-effective diagnosis and maintenance but also enriches the datasets available for AI algorithms, leading to improved insights and outcomes. The integration of high-speed and ultra-reliable [...] Read more.
Emerging smart medical instruments combined with advanced smart industrial equipment facilitate the collection of vast volumes of critical data. This data not only enables significantly more accurate and cost-effective diagnosis and maintenance but also enriches the datasets available for AI algorithms, leading to improved insights and outcomes. The integration of high-speed and ultra-reliable telecommunications infrastructure is crucial, as it supports the cloud model. This model allows for off-device aggregation in the cloud, which effectively offloads infrastructure demands and provides an extended runway for future technological improvements before the deployment of the next generation of devices. However, in certain scenarios, latency and bandwidth limitations present significant challenges. These limitations require that a substantial amount of AI and machine learning processing is conducted directly on the transmitted data, which places rigorous demands on both the processing subsystems and the communications links themselves. The current project directly addresses the accelerator side of this multifaceted issue. It will carry out comprehensive end-to-end demonstrations leveraging pilot 5G networks and telemedicine facilities, collaborating closely with major industry participants to showcase the capabilities and potential of this innovative technology. This collaborative effort is essential to pushing the boundaries of what is possible in smart medical instruments and industrial applications [1].
Review Article
Open Access December 17, 2024

An Analysis of Performance and Comparison of Models for Cardiovascular Disease Prediction via Machine Learning Models in Healthcare

Abstract Over the past few decades, cardiovascular disease and related complications have surpassed all others as the leading causes of death worldwide. At the moment, they are the leading cause of mortality globally, including in India. Early detection of cardiovascular problems is important so that patients receive better care and costs go down. This project utilizes the UCI Heart [...] Read more.
Over the past few decades, cardiovascular disease and related complications have surpassed all others as the leading causes of death worldwide. At the moment, they are the leading cause of mortality globally, including in India. Early detection of cardiovascular problems is important so that patients receive better care and costs go down. This project utilizes the UCI Heart Disease Dataset to develop ML and DL models capable of detecting cardiac diseases. Heart disease was categorized using Convolutional Neural Network (CNN) models, which are able to detect intricate patterns in supplied data. A confusion matrix, F1-score, ROC curve, accuracy, precision, and recall were among the measures used to evaluate the model. The CNN did much better than the Neural Network, Deep Neural Network (DNN), and Gradient Boosted Trees (GBT) models, with 91.71% accuracy, 88.88% precision, 82.75% recall, and 85.70% F1-score. Comparative study showed that CNN was the most accurate model; other models struck different balances between precision and recall. The experimental results show that the proposed CNN model is a sound way to identify cardiovascular disease. This means that it could be used in healthcare systems to find diseases earlier and treat patients better.
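One plausible way to apply a CNN to the 13 tabular UCI attributes is to treat them as a short 1-D sequence, as in this sketch; the architecture and synthetic data are assumptions, not the paper's exact model.

```python
# Sketch: a small 1-D CNN over the 13 UCI Heart Disease attributes.
# Treating tabular features as a short sequence is one common way to apply
# CNNs here; the paper's exact architecture is not specified.
import numpy as np
from tensorflow.keras import layers, models

n_features = 13                                   # UCI heart dataset attributes
X = np.random.normal(size=(500, n_features, 1))   # stand-in, standardized features
y = np.random.randint(0, 2, size=500)             # 1 = heart disease present

model = models.Sequential([
    layers.Conv1D(32, kernel_size=3, activation="relu",
                  input_shape=(n_features, 1)),
    layers.MaxPooling1D(2),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, validation_split=0.2)
```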
Article
Open Access December 19, 2024

Intelligent Detection of Injection Attacks via SQL Based on Supervised Machine Learning Models for Enhancing Web Security

Abstract The most prevalent technique behind security data breaches is the SQL Injection Attack. Organizations and individuals suffer from sensitive information exposure and unauthorized entry when attackers take advantage of the severe risks of SQL injection (SQLi) attack vulnerabilities. Static and heuristic defense methods remain the conventional detection tools from previous SQL injection attack studies [...] Read more.
The most prevalent technique behind security data breaches is the SQL Injection Attack. Organizations and individuals suffer from sensitive information exposure and unauthorized entry when attackers take advantage of the severe risks of SQL injection (SQLi) attack vulnerabilities. Static and heuristic defense methods remain the conventional detection tools from previous SQL injection attack studies. This study's foundation is a detection system developed using the Gated Recurrent Unit (GRU) network, which attempts to efficiently identify SQL Injection attacks (SQLIAs). The suggested Gated Recurrent Unit model was trained using an 80:20 train-test split, and the results showed that SQL injection attacks could be accurately identified with a precision rate of 97%, an accuracy rate of 96.65%, a recall rate of 92.5%, and an F1-score of 94%. The experimental results, together with their corresponding confusion matrix analysis and learning curves, demonstrate resilience and outstanding generalization ability. The GRU model outperforms conventional machine learning (ML) models, including K-Nearest Neighbors (KNN) and Support Vector Machine (SVM), in terms of identifying sequential patterns in SQL query data. The recurrent neural architecture proves effective in the detection of SQLi attacks through its ability to provide secure protection for contemporary web applications.
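A minimal sketch of a GRU-based detector over character-encoded queries with an 80:20 split follows; the encoding scheme, layer sizes, and tiny stand-in corpus are assumptions.

```python
# Minimal sketch of a GRU-based SQL injection detector.
# Queries are encoded at the character level; the 80:20 split mirrors the
# paper, while the exact architecture and corpus are assumptions.
import numpy as np
from tensorflow.keras import layers, models
from sklearn.model_selection import train_test_split

queries = ["SELECT * FROM users WHERE id = 1",
           "' OR '1'='1' --"]                      # benign vs. injected examples
labels = [0, 1]

max_len = 80
def encode(q):                                     # char codes, zero-padded
    codes = [min(ord(c), 255) for c in q[:max_len]]
    return codes + [0] * (max_len - len(codes))

X = np.array([encode(q) for q in queries * 200])   # replicated stand-in corpus
y = np.array(labels * 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=7)

model = models.Sequential([
    layers.Embedding(256, 32),
    layers.GRU(64),
    layers.Dense(1, activation="sigmoid"),         # 1 = SQLi attack
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_tr, y_tr, epochs=2, validation_data=(X_te, y_te))
```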
Article
Open Access December 26, 2021

Deep Learning Applications for Computer Vision-Based Defect Detection in Car Body Paint Shops

Abstract The major automated plants have produced large volumes of high-quality products at low cost by introducing various technologies, including robotics and artificial intelligence. Surface defects on products carry economic loss and sometimes functionality loss, even though defective products are rarely found. Therefore, most production is done based on [...] Read more.
The major automated plants have produced large volumes of high-quality products at low cost by introducing various technologies, including robotics and artificial intelligence. Surface defects on products carry economic loss and sometimes functionality loss, even though defective products are rarely found. Therefore, most production is done based on prediction and is subject to invisible fluctuations. The detection process for hidden defect images incurs significant costs and needs support for better progress and quality enhancement. Paint shop defects should be analyzed from color changes to detect defects effectively, accounting for the variability of products over time. It is not easy to take visible light images without noise because the paint surfaces are glossy. Some illumination artifacts and shadows remain in images, even in larger and higher-resolution images. Computer vision models also need to reflect both the color and texture information of the various painted surfaces to classify defects precisely. Several automated detection systems have been applied to paint shop inspections using lasers, infrared, x-ray, electrical, magnetic, and acoustic sensors. The chance of a paint shop defect can be low, yet even rare defects impact functionality; hence, they are treated as "lessons learned." Lately, artificial intelligence has been introduced to the field of factory automation, and many defect detection efforts have found footing in machine learning and deep learning. Recent attempts at deep learning-based defect detection propose simple techniques using specific neural network architectures with big data. However, big data is still in its early stages, and significant challenges exist in normalizing and annotating such data. To get cost-efficient and timely solutions tailored to automotive paint shops, it might be a better consideration to combine deep learning solutions with traditional computer vision and more elaborate machine learning methods.
Review Article
Open Access December 27, 2021

An Analysis of Crime Prediction and Classification Using Data Mining Techniques

Abstract Crime is a serious and widespread problem in society; thus, preventing it is essential. A significant number of crimes are committed every day. One tool for modeling and dealing with crime is data mining. Crimes are costly to society in many ways, and they are also a major source of frustration for its members. A major area of machine learning research is crime detection. This paper [...] Read more.
Crime is a serious and widespread problem in society; thus, preventing it is essential. A significant number of crimes are committed every day. One tool for modeling and dealing with crime is data mining. Crimes are costly to society in many ways, and they are also a major source of frustration for its members. A major area of machine learning research is crime detection. This paper analyzes crime prediction and classification using data mining techniques on a crime dataset spanning 2006 to 2016. This approach begins with cleaning and extracting features from raw data for data preparation. Then, machine learning and deep learning models, including RNN-LSTM, ARIMA, and Linear Regression, are applied. The performance of these models is evaluated using metrics like Root Mean Squared Error (RMSE) and Mean Absolute Percentage Error (MAPE). The RNN-LSTM model achieved the lowest RMSE of 18.42, demonstrating superior predictive accuracy among the evaluated models. Data visualization techniques further unveiled crime patterns, offering actionable insights to prevent crime.
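As an illustration of the evaluation metrics, the sketch below fits one of the paper's models (ARIMA) to a synthetic monthly crime-count series and scores the hold-out forecast with RMSE and MAPE; the series and the ARIMA order are assumptions.

```python
# Sketch: fitting ARIMA to a monthly crime-count series and scoring the
# forecast with RMSE and MAPE, the metrics used in the paper. The series
# here is synthetic; the study's 2006-2016 dataset is not reproduced.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
counts = 200 + 10 * np.sin(np.arange(120) / 6) + rng.normal(0, 5, 120)

train, test = counts[:108], counts[108:]           # hold out the final year
fit = ARIMA(train, order=(2, 1, 2)).fit()
forecast = fit.forecast(steps=len(test))

rmse = np.sqrt(np.mean((test - forecast) ** 2))
mape = np.mean(np.abs((test - forecast) / test)) * 100
print(f"RMSE = {rmse:.2f}, MAPE = {mape:.2f}%")
```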
Article
Open Access December 27, 2020

Enhancing Regulatory Compliance in Finance through Big Data Analytics and AI Automation

Abstract This paper shows how Big Data Analytics (BDA) and Artificial Intelligence (AI) automation facilitate regulatory compliance in Finance. Regulatory compliance is essential in helping institutions to mitigate reputational, litigation, and financial risk. Existing literature reveals several preconditions for compliance. However, much of the literature has adopted an internal view of compliance without [...] Read more.
This paper shows how Big Data Analytics (BDA) and Artificial Intelligence (AI) automation facilitate regulatory compliance in Finance. Regulatory compliance is essential in helping institutions to mitigate reputational, litigation, and financial risk. Existing literature reveals several preconditions for compliance. However, much of the literature has adopted an internal view of compliance without considering external regulatory frameworks. This research draws on the cognitive model of regulation that looks at regulatory compliance as a social construct. It uses a triangulation research method comprising a literature review, interviews with trade compliance experts, and a questionnaire survey of compliance practitioners to understand how regulation affects compliance and what role ICTs play in implementing compliance. The findings of this study present a regulatory compliance framework comprising four cognitive stages and a conceptual regulatory compliance system that presents how BDA and AI automation are applied to mitigate regulatory complexity and enhance regulatory compliance. The conceptual regulatory compliance system shows how BDA and AI enable institutions to dynamically assess regulatory risk, automatically monitor compliance, and intelligently predict risk violations, mitigating regulatory complexity and preventing the production of unnecessary documents. It provides theoretical contributions to understanding regulatory evolution and compliance, and practical implications for understanding how regulation evolves to become more complicated and how elements of a regulatory compliance system mitigate proliferating regulations. Additionally, it provides avenues for future research into the relationship between competing regulatory mandates and how institutions cope with them. Regulations are important for ensuring compliance and governance in finance and for curbing systemic risk. Complying with regulations is difficult due to their growing volume, complexity, and fragmentation. Institutions use large-scale Information and Communication Technologies (ICTs), such as Big Data Analytics (BDA) and Artificial Intelligence (AI) automation, to monitor compliance and mitigate regulatory complexity. However, less is known about how firms comply with regulation. Most literature does not thoroughly investigate regulatory elements nor explicitly relate them to compliance.
Review Article
Open Access December 27, 2020

Designing Self-Learning Agentic Systems for Dynamic Retail Supply Networks

Abstract The evolution of supply chains (SC) from a linear to a network structure created an opportunity for new processes, product/service offerings, and provider-business models. Rising customer service expectations have led to the need for innovative SC designs to develop and sustain competitive performance globally. Firms are forced to respond and adapt accordingly, thereby leading to design, network, [...] Read more.
The evolution of supply chains (SC) from a linear to a network structure created an opportunity for new processes, product/service offerings, and provider-business models. Rising customer service expectations have led to the need for innovative SC designs to develop and sustain competitive performance globally. Firms are forced to respond and adapt accordingly, thereby leading to design, network, operational, and performance dynamics. Traditionally, SCs are treated as static structures, focusing solely on design and/or operational optimization. Such perspectives are not viable options for SC domains, as they address only a portion of the dynamic problem space, use a deterministic assumption of dominant design variables, capitalize on past data to predict future decisions, and offer pre-classified forecasting options complemented with a limited comprehension of systemic SC elasticity. Novel self-learning agentic systems are proposed that blend the science of SC decisions and dynamics. The designs guide firms seeking to build adaptive SCs using operational decision processes. The designs address the agentic nature of SCs, embedding computational interaction models of firm SC networks. The designs contrast stochastic action-taking, and thereby performance outcomes, discovering opportunities for adaptive operational designs of SC tasks. Fine-tuning and meta-learning are new design capabilities that adapt to evolving dynamic environments. Frameworks for behavioral customization and systematic exploration of the design space are provided as user guides. Exemplar designs are also provided to serve as a translation template for users to express operational models of their own contexts. To account for the dynamics of supply chains, agent-based models are increasingly adopted. Such models exhibit SC structure and/or formulation dynamics. Though existing efforts address only adjacent structural changes, dynamism with respect to tasks is crucial for SC design and operational strategy development. Proposed is a process modeling library and workflow for discovering intricate designs of adaptive agentic systems. The library revises dataflow and structure, encapsulating the sequencing and context designs of processes. Prompted specifications describe and enact the designs. Applications in SC formulation discovery are provided.
Review Article
Open Access December 27, 2022

Advance of AI-Based Predictive Models for Diagnosis of Alzheimer's Disease (AD) in Healthcare

Abstract Alzheimer's disease (AD), whose effects fall disproportionately on the elderly, is one of the most prevalent and chronic types of dementia. AD, a fatal illness that can harm brain structures and cells long before symptoms appear, is currently incurable. Using brain MRI pictures from a publicly accessible Kaggle dataset, this study suggests a prediction model based [...] Read more.
Alzheimer's disease (AD), whose effects fall disproportionately on the elderly, is one of the most prevalent and chronic types of dementia. AD, a fatal illness that can harm brain structures and cells long before symptoms appear, is currently incurable. Using brain MRI pictures from a publicly accessible Kaggle dataset, this study suggests a prediction model based on Convolutional Neural Networks (CNNs) to help with the early detection of Alzheimer's disease. Four levels of dementia have been applied to the 6,400 images in the collection: non-demented, very mildly demented, mildly demented, and moderately demented. Pixel normalization, class balancing utilizing data augmentation techniques, and image scaling to 128×128 pixels were all part of a thorough workflow for data preparation. To improve the capture of spatial dependencies in volumetric MRI data, a 3D convolutional neural network (CNN) architecture was used. We used important performance measures, including F1-score, recall, accuracy, precision, and log loss, to gauge the model's effectiveness. A review of the available data indicates that the overall F1-score, accuracy, and precision were 99.0%, 99.0%, and 99.38%, respectively. The findings demonstrate the model's potential for practical use in early AD diagnosis and establish its robustness with the help of confusion matrix analysis and performance curves.
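A compact sketch of the described pipeline, with pixel-normalized 128×128 inputs, augmentation, and a four-class CNN head, is given below; it uses 2-D slices and invented layer sizes as simplifying assumptions, whereas the paper reports a 3-D architecture.

```python
# Sketch: a small CNN for 128x128 MRI slices with the augmentation and
# normalization steps the paper describes. The four class labels follow the
# public Kaggle dataset; the layer sizes are assumptions.
import numpy as np
from tensorflow.keras import layers, models

classes = ["non_demented", "very_mild", "mild", "moderate"]
X = np.random.rand(320, 128, 128, 1)               # stand-in, pixel-normalized slices
y = np.random.randint(0, len(classes), size=320)

model = models.Sequential([
    layers.RandomFlip("horizontal", input_shape=(128, 128, 1)),  # augmentation
    layers.RandomRotation(0.05),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(len(classes), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=2, validation_split=0.2)
```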
Article
Open Access December 27, 2022

Big Data-Driven Time Series Forecasting for Financial Market Prediction: Deep Learning Models

Abstract Financial markets have become more and more complex, and so has the number of data sources. Stock price prediction has hence become a tough but important task. The time dependencies in stock price movements tend to escape traditional models. In this work, a hybrid ARIMA-LSTM model is suggested to enhance the accuracy of stock price forecasts. Based on time series indicators such as adjusted closing [...] Read more.
Financial markets have become more and more complex, and so has the number of data sources. Stock price prediction has hence become a tough but important task. The time dependencies in stock price movements tend to escape traditional models. In this work, a hybrid ARIMA-LSTM model is suggested to enhance the accuracy of stock price forecasts. Based on time series indicators such as adjusted closing prices of S&P 500 stocks over a decade (2010–2019), the ARIMA-LSTM model combines autoregressive time series forecasting with the substantial sequence-learning capability of LSTM. Comprehensive data preprocessing, including missing-value interpolation, outlier detection, and Min-Max scaling, guarantees data quality. The model is trained on a 90/10 training/testing split and evaluated with the main performance metrics: MAE, MSE, and RMSE. As indicated in the results, the proposed ARIMA-LSTM model gives MAE and MSE values of 0.248 and 0.101, respectively, and an RMSE of 0.319, indicating high accuracy in stock price prediction. A comparative analysis against machine learning reference models such as Artificial Neural Networks (ANN) and BP Neural Networks (BPNN) further illustrates the suitability and superiority of the ARIMA-LSTM approach, which achieves the lowest MAE and strong predictive capability. This work demonstrates the efficiency of integrating classical time series models with deep learning methods for financial forecasting.
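One common way to realize such a hybrid is to let ARIMA capture the linear component of the Min-Max-scaled series and train an LSTM on its residuals, as sketched below; this coupling scheme, the window size, and the synthetic series are assumptions, since the abstract does not detail the exact formulation.

```python
# Sketch of one common ARIMA-LSTM hybrid: ARIMA captures the linear part of
# the (Min-Max scaled) price series, and an LSTM learns the residuals.
# The coupling scheme and window size are assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from tensorflow.keras import layers, models

prices = np.cumsum(np.random.default_rng(5).normal(0, 1, 600)) + 100
scaled = (prices - prices.min()) / (prices.max() - prices.min())  # Min-Max

arima = ARIMA(scaled, order=(1, 1, 1)).fit()
residuals = scaled - arima.predict(start=0, end=len(scaled) - 1)

win = 10                                            # residual look-back window
Xr = np.array([residuals[i:i + win] for i in range(len(residuals) - win)])
yr = residuals[win:]

lstm = models.Sequential([layers.LSTM(32, input_shape=(win, 1)),
                          layers.Dense(1)])
lstm.compile(optimizer="adam", loss="mse")
lstm.fit(Xr[..., None], yr, epochs=3, verbose=0)

# Final forecast = ARIMA's linear forecast + LSTM's residual correction.
hybrid_next = arima.forecast(1)[0] + lstm.predict(Xr[-1:][..., None])[0, 0]
print("hybrid one-step forecast (scaled):", hybrid_next)
```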
Article
Open Access December 27, 2022

Towards the Efficient Management of Cloud Resource Allocation: A Framework Based on Machine Learning

Abstract In the constantly evolving world of cloud computing, appropriate resource allocation is essential for both keeping costs down and ensuring an ongoing flow of apps and services. Because of its adaptability to specific tasks and human behavior, machine learning (ML) is a desirable choice for fulfilling those needs. Efficient cloud resource allocation is critical for optimizing performance [...] Read more.
In the constantly evolving world of cloud computing, appropriate resource allocation is essential for both keeping costs down and ensuring an ongoing flow of apps and services. Because of its adaptability to specific tasks and human behavior, machine learning (ML) is a desirable choice for fulfilling those needs. Efficient cloud resource allocation is critical for optimizing performance and cost in cloud computing environments. In order to improve the precision of resource allocation, this study investigates the use of Long Short-Term Memory (LSTM). The LSTM model achieved 97% accuracy, 97.5% precision, 98% recall, and a 97.8% F1-score (the harmonic mean of precision and recall), according to experimental data. The confusion matrix demonstrates strong classification performance across several resource classes, while the accuracy and loss curves verify steady learning with minimal overfitting. The suggested LSTM model performs better than more conventional ML models like Gradient Boosting (GB) and Logistic Regression (LR), according to a comparative study. These findings underscore the LSTM model's robustness and suitability for dynamic cloud environments, enabling more accurate forecasting and efficient resource management.
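For reference, the reported precision, recall, and F1-score relate as in this small worked computation from a hypothetical confusion matrix (the counts are illustrative, not the study's):

```python
# Worked example: deriving precision, recall, and F1 (the harmonic mean of
# the first two, as the abstract notes) from a hypothetical binary
# confusion matrix.
tp, fp, fn, tn = 390, 10, 8, 392

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
accuracy = (tp + tn) / (tp + fp + fn + tn)

print(f"precision={precision:.3f} recall={recall:.3f} "
      f"F1={f1:.3f} accuracy={accuracy:.3f}")
```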
Article
Open Access November 24, 2022

Bridging Traditional ETL Pipelines with AI Enhanced Data Workflows: Foundations of Intelligent Automation in Data Engineering

Abstract Machine Learning (ML) and Artificial Intelligence (AI) are having an increasingly transformative impact on all industries and are already used in many mission-critical use cases in production, bringing considerable value. Data engineering, which combines ETL pipelines with other workflows managing data and machine learning operations, is also significantly impacted. The Intelligent Data [...] Read more.
Machine Learning (ML) and Artificial Intelligence (AI) are having an increasingly transformative impact on all industries and are already used in many mission-critical use cases in production, bringing considerable value. Data engineering, which combines ETL pipelines with other workflows managing data and machine learning operations, is also significantly impacted. The Intelligent Data Engineering and Automation framework offers the groundwork for intelligent automation processes. However, ML/AI are not the only disruptive forces; new Big Data technologies inspired by Web2.0 companies are also reshaping the Internet. Companies having the largest Big Data footprints not only provide applications with a Big Data operational model but also source their competitive advantage from data in the form of AI services and, consequently, impact the cost/performance equilibrium of ETL pipelines. All these technologies and reasons help explain why the traditional ETL pipeline design should adapt to current and emerging technologies and may be enhanced through artificial intelligence.
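A minimal sketch of what such an AI-enhanced ETL step could look like, a conventional pandas transform augmented with an IsolationForest screen that quarantines anomalous rows before the load phase, follows; the file, directory, and column names are hypothetical.

```python
# Sketch of an AI-enhanced ETL step: a standard pandas transform augmented
# with an IsolationForest screen that quarantines anomalous rows before the
# load phase. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Extract
df = pd.read_csv("daily_orders.csv")               # hypothetical source extract

# Transform: conventional cleaning plus an ML-based anomaly screen.
df["amount"] = df["amount"].fillna(df["amount"].median())
detector = IsolationForest(contamination=0.01, random_state=0)
df["anomaly"] = detector.fit_predict(df[["amount", "quantity"]])  # -1 = outlier

clean = df[df["anomaly"] == 1].drop(columns="anomaly")
quarantine = df[df["anomaly"] == -1]

# Load: clean rows to the warehouse staging area, outliers to review.
clean.to_parquet("staging/orders.parquet")
quarantine.to_csv("quarantine/orders_flagged.csv", index=False)
```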
Article
Open Access December 24, 2022

Web-Centric Cloud Framework for Real-Time Monitoring and Risk Prediction in Clinical Trials Using Machine Learning

Abstract Advances in web-centric cloud computing have facilitated the establishment of an integrated cloud environment connecting a wide variety of clinical trial stakeholders. A web-centric cloud framework is proposed for real-time monitoring and risk prediction during clinical trials. The framework focuses on identifying relevant datasets, developing a data-management interface, and implementing [...] Read more.
Advances in web-centric cloud computing have facilitated the establishment of an integrated cloud environment connecting a wide variety of clinical trial stakeholders. A web-centric cloud framework is proposed for real-time monitoring and risk prediction during clinical trials. The framework focuses on identifying relevant datasets, developing a data-management interface, and implementing machine-learning algorithms for data analysis. Detailed descriptions of the data-management interface and the machine-learning processes are provided, targeting active clinical trials with therapeutic uses in cancer. Demonstrations utilize publicly available clinical-trial data from the ClinicalTrials.gov repository. The real-time monitoring and risk prediction systems were assessed by developing five supervised-classification-machine-learning models for trial-status prediction and six unsupervised models for patient-safety-profile assessment, each representing a different phase of the clinical-trial process. All supervised models yielded high accuracy and area-under-the-curve values at the testing stage, while the unsupervised models demonstrated practical applicability. The results underscore the advantages of using the trial-status algorithm, the patient-safety-profile model, and the proposed framework for performing real-time monitoring and risk prediction of clinical trials.
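The two model families described, supervised classifiers for trial-status prediction and unsupervised models for safety-profile grouping, can be sketched as below on synthetic stand-ins for numeric encodings of ClinicalTrials.gov fields; the features and model choices are assumptions.

```python
# Sketch of the two model families described: a supervised classifier for
# trial-status prediction and an unsupervised model for grouping safety
# profiles. Features are hypothetical numeric encodings of ClinicalTrials.gov
# fields (enrollment, number of sites, adverse-event counts, etc.).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(11)
X = rng.normal(size=(400, 6))                      # stand-in trial features
y = (X[:, 0] + rng.normal(scale=0.5, size=400) > 0).astype(int)  # 1 = completed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("status-prediction AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

# Unsupervised safety profiling: cluster trials by adverse-event features.
profiles = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print("safety-profile cluster sizes:", np.bincount(profiles))
```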
Review Article
Open Access December 02, 2020

Predictive Modeling and Machine Learning Frameworks for Early Disease Detection in Healthcare Data Systems

Abstract Predictive modeling, supported by machine learning technology, aims to analyze data in order to guide decision-making towards actions generating desired values in the future. It encompasses the set of techniques used to build models that estimate the value of a certain variable predicting a forthcoming event from the past or current values of relevant attributes. In predictive healthcare modeling, [...] Read more.
Predictive modeling, supported by machine learning technology, aims to analyze data in order to guide decision-making towards actions generating desired values in the future. It encompasses the set of techniques used to build models that estimate the value of a certain variable predicting a forthcoming event from the past or current values of relevant attributes. In predictive healthcare modeling, the built models represent the relationship among the data concerning customer, provider, production, and other aspects of the healthcare environment in order to assist the decision processes in the prevention of diseases and in the planning of preventive actions by detection of high-risk patients. Contrary to trend analysis, whose goal is to describe past events, predictive models aim to provide useful indications regarding future events and changes. Predictive healthcare modeling supports actions that try to prevent the manifestation of diseases in healthy individuals or try to diagnose as early as possible the incidence of a disease in patients at risk. A sound predictive analysis encompasses not only the model-training task, but also the aspects of data quality, preprocessing, and fusion during its entire implementation lifecycle to ensure appropriate input data preparation. The robustness of the predictive model and its results depends highly on data quality. Due to the variety of data sources in healthcare environments, it becomes essential to use preprocessing in order to remove noise and inconsistencies. The increasing number of endorsable data exchange standards makes each data exchange achievable, but it demands the implementation of a data-governance program. In addition, the influence of the hospital-database architect on the architecture of an early-diagnosis model is important to guarantee appropriate input-formatting modularity.
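The lifecycle argument, that preprocessing must travel with the model, can be made concrete with a pipeline that binds imputation and scaling to the classifier so identical preparation runs at training and prediction time; the clinical features below are hypothetical.

```python
# Sketch: binding preprocessing to the predictive model so the same data
# preparation runs in training and deployment, as the lifecycle argument
# requires. The clinical features are hypothetical stand-ins.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 8))
X[rng.random(X.shape) < 0.1] = np.nan              # simulate missing lab values
y = rng.integers(0, 2, 500)                        # 1 = high-risk patient

model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # noise/inconsistency handling
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print("predicted risk for one new patient:", model.predict_proba(X[:1])[0, 1])
```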
Review Article
Open Access July 20, 2021

Quality of Experience (QoE) and Network Performance Modelling for Multimedia Traffic

Abstract This research explores the complex relationship between user-perceived Quality of Experience (QoE) and underlying network performance for multimedia traffic. As video streaming, online gaming, and interactive media dominate modern networks, ensuring consistent QoE has become a key challenge. The study develops a network performance model that integrates objective Quality of Service (QoS) [...] Read more.
This research explores the complex relationship between user-perceived Quality of Experience (QoE) and underlying network performance for multimedia traffic. As video streaming, online gaming, and interactive media dominate modern networks, ensuring consistent QoE has become a key challenge. The study develops a network performance model that integrates objective Quality of Service (QoS) parameters—such as delay, jitter, packet loss, and throughput—with subjective QoE metrics like Mean Opinion Score (MOS) and perceptual quality indices. Using simulation-based and analytical approaches, the paper evaluates how network conditions affect multimedia traffic behavior and user satisfaction. The results highlight critical thresholds for QoE degradation, enabling predictive modeling for adaptive multimedia delivery and real-time optimization. This work contributes to designing intelligent, user-centered network management systems capable of balancing resource efficiency and end-user satisfaction.
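One widely cited functional form for such thresholds is the exponential IQX-style relationship between a QoS impairment x and QoE, QoE(x) = a·e^(−b·x) + c; the sketch below fits it to illustrative sample points, since the paper's exact model is not given here.

```python
# Sketch: fitting an exponential IQX-style relationship between an
# impairment (here packet loss) and MOS, QoE(x) = a*exp(-b*x) + c. The
# sample points are illustrative, not measured data from the study.
import numpy as np
from scipy.optimize import curve_fit

loss_pct = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])   # packet loss (%)
mos = np.array([4.4, 4.1, 3.8, 3.2, 2.5, 1.8])        # mean opinion scores

def iqx(x, a, b, c):
    return a * np.exp(-b * x) + c

(a, b, c), _ = curve_fit(iqx, loss_pct, mos, p0=(3.5, 0.5, 1.0))
print(f"QoE(x) = {a:.2f}*exp(-{b:.2f}x) + {c:.2f}")

# e.g. locate the loss rate where QoE crosses a degradation threshold of 3.0
threshold = 3.0
x_crit = -np.log((threshold - c) / a) / b
print(f"loss rate at MOS {threshold}: {x_crit:.2f}%")
```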
Review Article
Open Access December 26, 2021

Scalable Data Warehouse Architecture for Population Health Management and Predictive Analytics

Abstract Scalable architecture principles for data warehousing are introduced to support population health management and predictive analytics. These principles are validated through the design of an accompanying Data Pipeline that allows the integration of non-traditional data sources, the use of real-time data for descriptive analytics dashboards, and support for the generation of supervised Machine [...] Read more.
Scalable architecture principles for data warehousing are introduced to support population health management and predictive analytics. These principles are validated through the design of an accompanying data pipeline that allows the integration of non-traditional data sources, the use of real-time data for descriptive analytics dashboards, and support for the generation of supervised machine learning models. Several analytical capabilities have been implemented to exemplify the practical application of the principles, including predictive models for risk stratification in health care. Optimal cost-effectiveness and performance considerations ensure the practical relevance of the architectural principles and associated data pipeline. In recent years, the availability of low-cost data storage services and the increasing popularity of streaming technologies opened new possibilities for the storage and processing of streaming data on a near-real-time basis. These technologies can help developing countries in tackling many relevant issues such as urban planning, environmental management, migration policies, etc. A multi-tier approach combining cloud-based storage with data warehousing and data mining technologies can offer an interesting architecture to exploit big data related to populations.
Review Article
Open Access December 27, 2023

MLOps Frameworks for Reliable Model Deployment in Cloud Data Platforms

Abstract Machine learning operations (MLOps) comprises the practices, methods, and tooling that facilitate the deployment of reliable ML models in production environments. While many aspects of cloud data platforms are designed to enable reliability, only some managed ML services support the MLOps goals of continuous integration, continuous delivery, data lineage tracking, associated reproducibility, [...] Read more.
Machine learning operations (MLOps) comprises the practices, methods, and tooling that facilitate the deployment of reliable ML models in production environments. While many aspects of cloud data platforms are designed to enable reliability, only some managed ML services support the MLOps goals of continuous integration, continuous delivery, data lineage tracking, associated reproducibility, governance, and security. Furthermore, reliability encompasses not only the fulfillment of service-level objectives but also systematic monitoring, alerting, and incident response automation. Architectural patterns are proposed to enable reliable deployment in cloud data platforms, focusing on the implementation of continuous integration and testing pipelines for ML models and the formulation of continuous delivery and rollout strategies. Continuous integration pipelines reduce the risk of regressions and ensure sufficient model performance at the time of deployment, while continuous delivery pipelines enable rapid updates to production models within acceptable risk profiles. The landscape of publicly available MLOps frameworks, tools, and services is also examined, emphasizing the pros and cons of established and emerging solutions in containerization, orchestration, model serving, and inference. Containerization and orchestration contribute to building reliable deployment pipelines in cloud data platforms, whether through general-purpose tools (e.g., Docker and Kubernetes) or solutions tailored for ML workloads. Containerized serving frameworks designed for high-throughput, low-latency inference can benefit a wide range of business applications, while auto-scaling and model versioning capabilities enhance the ease of use of cloud-native ML services.
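To make the continuous-integration gate concrete, the Python sketch below promotes a candidate model only if it clears an absolute quality bar and does not regress against the production model. The metric and both thresholds are illustrative assumptions rather than values from the review.

# Minimal sketch of a CI gate for model deployment: a candidate is
# promoted only if it meets an absolute quality bar and does not
# regress against production. Metric and thresholds are assumed.

MIN_ACCURACY = 0.90      # absolute service-level objective (assumed)
MAX_REGRESSION = 0.01    # tolerated drop vs. production (assumed)

def ci_gate(candidate_acc: float, production_acc: float) -> bool:
    """Return True if the candidate model may be rolled out."""
    if candidate_acc < MIN_ACCURACY:
        return False                     # fails the absolute quality bar
    if production_acc - candidate_acc > MAX_REGRESSION:
        return False                     # regression vs. production model
    return True

assert ci_gate(candidate_acc=0.93, production_acc=0.92)
assert not ci_gate(candidate_acc=0.89, production_acc=0.92)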
Review Article
Open Access December 18, 2023

Leveraging AI, ML, and Generative Neural Models to Bridge Gaps in Genetic Therapy Access and Real-Time Resource Allocation

Abstract
This paper leverages gene and cell therapy research across diverse disorders, ranging from monogenic and infectious diseases to cancer and emerging breakthroughs, where individual genes or a synthetic gene sequence designed around a molecular pattern shared by infected cells can be harnessed to better fight various disorders [1]. A pivotal task is to predict the performance of candidate gene therapies to guide clinical translational research, using methods such as retrospective bioinformatic analyses. Applying these methods to a large-scale gene therapy database shows that it is feasible to construct and apply well-performing, interpretable, supervised learning models [2]. Preliminary evidence of the statistical significance of machine learning approaches helps clinicians, biomedical researchers, market participants, and regulatory and economic experts derive relevant, practical applications, thereby advancing the deployment of gene therapy and genomics toward positive, long-term growth for humanity while alleviating the ongoing worldwide economic burden of prolonged and recurring diseases. Deploying machine learning techniques to accelerate gene and cell therapy drug development and trials can also mitigate limited patient access to emerging, transformative medical innovations such as gene therapy, whose skyrocketing prices often make gene therapy products the world's most expensive medicines [3]. Moreover, a multidimensional access gap, encompassing the availability, affordability, and quality or acceptability of these clinical treatments, commonly prevents patients from accessing effective, life-saving genetic medicines. This substantial gap has been documented repeatedly and stems mainly from differing institutional and socio-political choices around resource allocation at the international and domestic levels [4]. It is also driven by stringent licensure and regulatory approval processes, underpinned by insufficient evidence on the safety and clinical efficacy of genetic therapies across multiple micro-local diagnoses and subpopulations. We believe gene therapy adoption becomes more likely when the clinical evidence path adequately represents the most diverse and relevant patient populations [5].
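As a minimal sketch of the interpretable, supervised learning models referenced in [2], the Python example below fits a logistic regression on synthetic data and inspects its coefficients; the features and data are hypothetical illustrations, not drawn from the cited gene therapy database.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features for predicting candidate-therapy performance;
# the synthetic data below is for illustration only.
rng = np.random.default_rng(0)
feature_names = ["vector_titer", "target_expression", "prior_trial_success"]
X = rng.normal(size=(200, 3))
# Synthetic label: success loosely driven by the third feature
y = (X[:, 2] + 0.1 * rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Interpretability: per-feature coefficients indicate the direction and
# relative strength of each predictor.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")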