Journal of Artificial Intelligence and Big Data
Volume 2, Issue 1, 2022
Open Access · July 10, 2022 · 8 pages · 1693 views · 456 downloads

Digital Therapeutics in Oncology: A Better Outlook for Cancer Patients in the Future

Journal of Artificial Intelligence and Big Data 2022, 2(1), 347. DOI: 10.31586/jaibd.2022.347
Abstract
Digital therapeutics (DTx) is an evidence-based treatment that makes use of high-quality software. As many healthcare systems confront increasing expectations for quality results, the need for digital medications is steadily growing in the clinical arena. To ensure that patients are supported during chemotherapy and that needless hospital visits are avoided, digital therapeutics must be integrated into the cancer care pathway. Oncology patients are usually immunocompromised due to their disease and treatment, rendering them more susceptible to infection than the general population. As a result, visiting a hospital might endanger their health. In addition, when cancer patients and survivors return home after treatment, digital health interventions provide them with the tools they need to manage their illness and its side effects in the privacy of their own homes. Considering the increasing prevalence of cancer and the solutions that digital therapeutics offers in oncology, its future looks promising. This review article aims to summarize the existing companies in this domain, while evaluating the prospects as well.
Review Article
Open Access · August 20, 2022 · 12 pages · 402 views · 133 downloads

Advancing Predictive Failure Analytics in Automotive Safety: AI-Driven Approaches for School Buses and Commercial Trucks

Journal of Artificial Intelligence and Big Data 2022, 2(1), 944. DOI: 10.31586/jaibd.2022.944
Abstract
The recent evidence on AI in automotive safety shows the potential to reduce crashes and improve efficiency. Studies have used AI techniques such as machine learning and predictive analytics models to develop predictive collision avoidance systems. These studies collected data from various sources, such as traffic collision data and shapefiles, and utilized deep learning neural networks and 3D visualization techniques to analyze the data. However, more research is needed on AI in school bus and commercial truck safety. This paper explores the importance of AI-driven predictive failure analytics in enhancing automotive safety for these vehicles. It discusses the challenges, required data, and technologies involved in predictive failure analytics, along with the potential benefits and implications for the future. The conclusion summarizes the findings and emphasizes the significance of AI in improving driver safety. Overall, this paper contributes to the field of automotive safety and aims to attract more research in this area.
Review Article
Open Access · August 29, 2022 · 11 pages · 300 views · 189 downloads

From Deterministic to Data-Driven: AI and Machine Learning for Next-Generation Production Line Optimization

Journal of Artificial Intelligence and Big Data 2022, 2(1), 952. DOI: 10.31586/jaibd.2022.952
Abstract
The advancement of modern manufacturing is synonymous with the growth of automation. Automation replaces human operators, improves productivity and quality, and reduces costs. However, the initial financial cost and knowledge requirements can be barriers to embracing automation. Manufacturers are now seeking smart manufacturing, known as the fourth industrial revolution. Smart manufacturing goes beyond automation and utilizes IoT, AI, and big data for optimized production. In a smart factory, production can be linked and controlled innovatively, leading to increased performance, agility, and reduced costs.
Review Article
Open Access · October 15, 2022 · 17 pages · 401 views · 61 downloads

Big Data and AI/ML in Threat Detection: A New Era of Cybersecurity

Journal of Artificial Intelligence and Big Data 2022, 2(1), 1125. DOI: 10.31586/jaibd.2022.1125
Abstract
The unrelenting proliferation of data, entwined with the prevalence of mobile devices, has given birth to an unprecedented growth of information obscured by noise. With the Internet of Things and myriad endpoint devices generating vast volumes of sensitive and critical data, organizations are tasked with extracting actionable intelligence from this deluge. Governments and enterprises alike, even under pressure from regulatory boards, have strived to harness the power of data and leverage it to enhance safety and security, maximize performance, and mitigate risks. However, the adversaries themselves have capitalized on the unequal battle of big data and artificial intelligence to inflict widespread chaos. Therefore, the demand for big data analytics and AI/ML for high-fidelity intelligence, surveillance, and reconnaissance is at its highest. Today, in the cybersecurity realm, the detection of adverse incidents poses substantial challenges due to the sheer variety, volume, and velocity of deep packet inspection data. State-of-the-art detection techniques have fallen short of detecting the latest attacks after a big data breach incident. On the other hand, computational intelligence techniques such as machine learning have reignited the search for solutions for diverse monitoring problems. Recent advancements in AI/ML frameworks have the potential to analyze IoT/edge-generated big data in near real-time and assist risk assessment and mitigation through automated threat detection and modeling in the big data and AI/ML domain. Industry best practices and case studies are examined that endeavor to showcase how big data coupled with AI/ML unlocks new dimensions and capabilities in improved vigilance and monitoring, prediction of adverse incidents, intelligent modeling, and future uncertainty quantification by data resampling correction. 
All of these avenues lead to enhanced robustness, security, safety, and performance of industrial processes, computing, and infrastructures. A view of the future is presented, discussing the potential threats posed by the misuse of new technologies, from bandwidth and IoT/edge to blockchain, AI, quantum computing, and autonomous systems. Cybersecurity is again playing out at a pace set by adversaries who face low entry barriers and wield debilitating tools. The need for innovative solutions to defend against the emerging threat landscape, harnessing the power of new technologies and collaboration, is emphasized.
Article
Open Access · October 29, 2022 · 15 pages · 396 views · 55 downloads

Neural Networks for Enhancing Rail Safety and Security: Real-Time Monitoring and Incident Prediction

Journal of Artificial Intelligence and Big Data 2022, 2(1), 1155. DOI: 10.31586/jaibd.2022.1155
Abstract
The growth in demand for rail transportation systems within cities, together with high-speed and long-distance transportation running on a rail network, raises the issues of both rail safety and security. If an accident or an attack occurs, its consequences can be extremely severe. To mitigate the impact of these events, real-time monitoring of a rail system is required. Such improvements in monitoring can be achieved using artificial intelligence algorithms such as neural networks. Neural networks have been used to achieve real-time incident identification by classifying the graphical outputs of an ultrasonic system inspecting the rails and track bed, to predict incidents on the rail infrastructure caused by blocked transmission channels, and to schedule preemptive and preventative maintenance. In terms of forecasting incidents and accidents on board trains, neural networks have been used to model passenger behavior and optimize responses during a train station evacuation. To tackle the incidents and accidents occurring on rail transport, we contribute two methodologies to detect anomalies in real time and identify the level of security risk: one at the maintenance level, with personnel operating along the railways, and one on board passenger trains. These methodologies were evaluated on real-world datasets and shown to achieve high accuracy. The results generated from these case studies also reveal the potential for network-wide applications, which could enhance security and safety on railway networks by enabling better management of network disruptions and more rapid identification of security issues. The speed and coverage of the information generated through these methodologies have implications for utilizing prediction for decision support and for enhancing safety and security across the rail network.
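The abstract describes the real-time anomaly detection only at a high level. As an illustrative sketch, not the authors' actual methodology, a rolling z-score detector over a stream of track-sensor readings could look like the following (the window size and threshold are assumed values):

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(stream, window=20, threshold=3.0):
    """Flag the index of any reading more than `threshold` standard
    deviations away from the rolling mean of the previous `window`
    readings (window/threshold are illustrative assumptions)."""
    history = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(stream):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                flagged.append(i)
        history.append(x)
    return flagged

# A mildly noisy baseline with one injected spike at index 30.
readings = [1.0, 1.1, 0.9, 1.05, 0.95] * 6 + [9.0]
print(detect_anomalies(readings))  # [30]
```

A reading is flagged only when it deviates sharply from the recent rolling baseline, which keeps the check cheap enough to run in real time on board or trackside.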
Review Article
Open Access · November 16, 2023 · 20 pages · 1314 views · 240 downloads

Innovations in Agricultural Machinery: Assessing the Impact of Advanced Technologies on Farm Efficiency

Journal of Artificial Intelligence and Big Data 2023, 3(1), 1156. DOI: 10.31586/jaibd.2023.1156
Abstract
Progress in the development and adoption of technological innovations is instrumental in enhancing the efficiency of production systems across the globe. Through the introduction of cost-efficient and high-performing technologies, countries can both reduce the resource use intensity of their economies and boost the global supply of essential products. The focus of this study is to analyze the application of advanced machinery and mechanisms within the agricultural sector, a primary industry that acts as a major contributor to the gross domestic product (GDP) of many nations. Specifically, this paper provides an in-depth review of the latest impact assessments based on analytical and modeling tools conducted on agricultural machinery and production technologies. Our findings highlight the positive role played by scientific progress and innovation in driving the competitiveness, growth and improved sustainability of the agricultural sector. Over the years, advanced technologies have accelerated the development and modernization of machinery, equipment, and processes in farming. Typically, modern machinery and equipment have enabled large-scale production on farms, enhancing the cost-efficient use of both land and labor, as well as the capacity and timeliness in performing essential agricultural operations. The rapid diffusion of technical advancements has further contributed to resource savings, productivity growth, and the overall transformation of agricultural value chains. Accordingly, the implementation of appropriate enabling conditions is of vital importance in encouraging the widespread integration of technologies in agriculture, not only boosting productivity along the agri-food chain but also yielding widespread social, economic, and environmental benefits.
Review Article
Open Access · October 30, 2022 · 13 pages · 307 views · 49 downloads

Towards Autonomous Analytics: The Evolution of Self-Service BI Platforms with Machine Learning Integration

Journal of Artificial Intelligence and Big Data 2022, 2(1), 1157. DOI: 10.31586/jaibd.2022.1157
Abstract
Self-service business intelligence (BI) platforms have become essential applications for exploring, analyzing, and visualizing business data in various domains. Here, we envisage that the business intelligence platform will perform automatic and autonomous data analytics with minimal to no user interaction. We aim to offer a data-driven, intelligent, and scalable infrastructure that amplifies the advantages of BI systems and discovers hidden and complex insights from very large business datasets, which a business analyst can miss during manual exploratory data analysis. Towards our future vision of autonomous analytics, we propose a collective machine learning model repository with an integration layer for user-defined analytical goals within the BI platform. The proposed architecture can effectively reduce the cognitive load on users for repetitive tasks, democratizing data science expertise across data workers and facilitating a less experienced user community to develop and use advanced machine learning and statistical algorithms.
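The proposed collective model repository with an integration layer for user-defined analytical goals is not specified further here; a minimal sketch of the idea, with all names, the example goal, and the toy models invented for illustration, might look like:

```python
class ModelRepository:
    """Minimal sketch of a collective model repository: contributors
    register models under a user-defined analytical goal, and the BI
    integration layer looks them up and runs them by goal name."""

    def __init__(self):
        self._models = {}

    def register(self, goal, name, model_fn):
        """Register a callable model under an analytical goal."""
        self._models.setdefault(goal, {})[name] = model_fn

    def run(self, goal, data):
        """Run every model registered for a goal, collecting results."""
        return {name: fn(data) for name, fn in self._models.get(goal, {}).items()}

repo = ModelRepository()
repo.register("churn_risk", "mean_score", lambda xs: sum(xs) / len(xs))
repo.register("churn_risk", "max_score", max)
print(repo.run("churn_risk", [0.2, 0.8, 0.5]))  # {'mean_score': 0.5, 'max_score': 0.8}
```

A business analyst would only name the goal; the integration layer decides which registered models to apply, which is how the repository shifts repetitive model-selection work away from the user.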
Review Article
Open Access · November 05, 2022 · 15 pages · 439 views · 50 downloads

Application of Neural Networks in Optimizing Health Outcomes in Medicare Advantage and Supplement Plans

Journal of Artificial Intelligence and Big Data 2022, 2(1), 1178. DOI: 10.31586/jaibd.2022.1178
Abstract
The growing complexity and variability in healthcare delivery and costs within Medicare Advantage (MA) and Medicare Supplement (Medigap) plans present significant challenges for improving health outcomes and managing expenditures. Neural networks, a subset of artificial intelligence (AI), have shown considerable promise in optimizing healthcare processes, particularly in predictive modeling, personalized treatment recommendations, and risk stratification. This paper explores the application of neural networks in enhancing health outcomes within the context of Medicare Advantage and Supplement plans. We review how deep learning models can be leveraged to predict patient risk, optimize resource allocation, and identify at-risk populations for preventive interventions. Additionally, we discuss the potential for neural networks to improve claims processing, reduce fraud, and streamline administrative burdens. By integrating various data sources, including medical records, claims data, and demographic information, neural networks enable more accurate and efficient decision-making processes. Ultimately, this approach can lead to better patient care, reduced healthcare costs, and improved satisfaction for beneficiaries of these programs. The paper concludes by highlighting the current limitations, ethical considerations, and future directions for AI adoption in the Medicare Advantage and Supplement sectors.
Review Article
Open Access · December 27, 2022 · 15 pages · 436 views · 78 downloads

Advancing Pain Medicine with AI and Neural Networks: Predictive Analytics and Personalized Treatment Plans for Chronic and Acute Pain Management

Journal of Artificial Intelligence and Big Data 2022, 2(1), 1201. DOI: 10.31586/jaibd.2022.1201
Abstract
There is a growing body of evidence that the number of individuals suffering from chronic and acute pain is under-reported and that the burden on the veteran, aging, athletic, and working populations is rising. Current pain management is limited by our capacity to support individuals in continuing normal daily functions and self-administering pain treatments outside of traditional healthcare appointments and hospital settings. In this review, the current gap in clinical care for real-time feedback and guidance in pain management decision-making for chronic and post-operative pain treatment is defined. We examine recent and future applications of predictive analytics for opioid use after surgery, and the implementation of real-time neural networks for setting personalized pain management goals for individuals on the path to discharge and normal function. Integration of personalized neural networks with longitudinal data may enable the development of future treatment personalization paired with electrical stimulation.
Review Article
Open Access · December 27, 2022 · 14 pages · 195 views · 27 downloads

Building Scalable and Secure Cloud Architectures: Multi-Region Deployments, Auto Scaling, and Traffic Management in Azure and AWS for Microservices

Journal of Artificial Intelligence and Big Data 2022, 2(1), 1207. DOI: 10.31586/jaibd.2022.1207
Abstract
The last few years have seen an increased adoption of cloud infrastructure, which has in turn led to a growth in large-scale distributed architectures in data centers to accommodate cloud resource elasticity and resiliency better. Selecting the right approach to build secure, scalable, and reliable cloud infrastructure within a budget is always a challenge. This text focuses on offering practical solutions for designing and building a secure, scalable, and reliable cloud-based infrastructure where auto-scaling and multi-region deployments are the two key approaches to offer high availability. It covers designing secure and scalable microservices using cloud platforms. The content will provide an understanding of public cloud architecture, the design of microservices running on the cloud, and also the design patterns used in the cloud era. With real-world examples, you will learn how microservices can enable scalable distributed systems. Furthermore, you will be walked through multi-region deployments, auto-scaling, and traffic management in cloud environments, using a sample environment setup and useful tips and tricks for monitoring. Finally, you will see a mock implementation of cloud infrastructure on-premise for a private cloud or single-node cloud. By the end of this text, you will be able to build, manage, and deploy a highly scalable and reliable cloud-ready solution [1].
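As a concrete illustration of the auto-scaling approach discussed, here is a minimal sketch of target-tracking scaling logic of the kind Azure and AWS autoscalers implement: size the fleet so that average utilization moves back toward a target. The target utilization and fleet bounds are assumed values for illustration, not recommendations from the text.

```python
import math

def desired_replicas(current, cpu_utilization, target=0.6, min_r=2, max_r=20):
    """Target-tracking scaling sketch: if the fleet of `current`
    instances averages `cpu_utilization`, return the fleet size that
    would bring average utilization back near `target`, clamped to
    [min_r, max_r] (all parameter values are illustrative)."""
    if cpu_utilization <= 0:
        return min_r
    desired = math.ceil(current * cpu_utilization / target)
    return max(min_r, min(max_r, desired))

print(desired_replicas(4, 0.9))   # 6 -> overloaded fleet scales out
print(desired_replicas(4, 0.15))  # 2 -> idle fleet scales in to the floor
```

Multi-region deployments then run one such fleet per region, with traffic management routing users to the nearest healthy region.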
Review Article
Open Access · December 27, 2022 · 12 pages · 131 views · 12 downloads

Advance of AI-Based Predictive Models for Diagnosis of Alzheimer's Disease (AD) in Healthcare

Journal of Artificial Intelligence and Big Data 2022, 2(1), 1340. DOI: 10.31586/jaibd.2022.1340
Abstract
Alzheimer's disease (AD), one of the most prevalent chronic forms of dementia, disproportionately affects the elderly. AD, a fatal illness that can harm brain structures and cells long before symptoms appear, is currently incurable. Using brain MRI images from a publicly accessible Kaggle dataset, this study proposes a prediction model based on Convolutional Neural Networks (CNNs) to help with the early detection of Alzheimer's disease. The 6,400 images in the dataset are labeled with four levels of dementia: non-demented, very mildly demented, mildly demented, and moderately demented. A thorough data-preparation workflow included pixel normalization, class balancing using data augmentation techniques, and image resizing to 128×128 pixels. To better capture spatial dependencies in volumetric MRI data, a 3D convolutional neural network (CNN) architecture was used. We used key performance measures, including F1-score, recall, accuracy, precision, and log loss, to gauge the model's effectiveness. A review of the results indicates that the total F1-score, accuracy, recall, and precision were 99.0%, 99.0%, and 99.38%, respectively. The findings demonstrate the model's potential for practical use in early AD diagnosis and establish its robustness with the help of confusion matrix analysis and performance curves.
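The data-preparation workflow (pixel normalization, augmentation-based class balancing, resizing) is only summarized above; a minimal sketch of the first two steps, using naive random oversampling as a stand-in for the augmentation techniques the study actually used, might look like:

```python
import random

def normalize(image):
    """Scale 8-bit pixel values into [0, 1], as done before training."""
    return [[px / 255.0 for px in row] for row in image]

def oversample(dataset, seed=0):
    """Balance classes by re-drawing samples from minority classes
    until every class matches the largest one. This is a naive
    stand-in for the augmentation-based balancing in the abstract."""
    rng = random.Random(seed)
    by_class = {}
    for image, label in dataset:
        by_class.setdefault(label, []).append(image)
    target = max(len(v) for v in by_class.values())
    balanced = []
    for label, images in by_class.items():
        picks = images + [rng.choice(images) for _ in range(target - len(images))]
        balanced.extend((img, label) for img in picks)
    return balanced

print(normalize([[0, 255]]))  # [[0.0, 1.0]]

# Imbalanced toy dataset: 5 samples of one class, 2 of another.
data = [([[0, 255]], "non_demented")] * 5 + [([[128, 64]], "moderate")] * 2
counts = {}
for _, label in oversample(data):
    counts[label] = counts.get(label, 0) + 1
print(counts)  # both classes now have 5 samples
```

Real augmentation would generate transformed copies (flips, rotations, intensity shifts) rather than verbatim duplicates, but the class-balancing bookkeeping is the same.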
Article
Open Access · December 27, 2022 · 12 pages · 112 views · 12 downloads

Big Data-Driven Time Series Forecasting for Financial Market Prediction: Deep Learning Models

Journal of Artificial Intelligence and Big Data 2022, 2(1), 1341. DOI: 10.31586/jaibd.2022.1341
Abstract
Financial markets have become increasingly complex, and so has the number of data sources, making stock price prediction a difficult but important task. The time dependencies in stock price movements tend to escape traditional models. In this work, a hybrid ARIMA-LSTM model is proposed to enhance the accuracy of stock price forecasts. Based on time series indicators such as the adjusted closing prices of S&P 500 stocks over a decade (2010–2019), the ARIMA-LSTM model combines the strengths of autoregressive time series forecasting with the powerful sequence learning capability of LSTM. Data preprocessing in all aspects, including missing-value interpolation, outlier detection, and Min-Max data scaling, guarantees data quality. The model is trained on a 90/10 training/testing split and evaluated with the main performance metrics: MAE, MSE, and RMSE. As the results indicate, the proposed ARIMA-LSTM model yields an MAE of 0.248, an MSE of 0.101, and an RMSE of 0.319, demonstrating high accuracy in stock price prediction. A comparative analysis against machine learning reference models, namely Artificial Neural Networks (ANN) and BP Neural Networks (BPNN), further illustrates the suitability and superiority of the ARIMA-LSTM approach, which achieves the lowest MAE and strong predictive capability. This work demonstrates the efficiency of integrating classical time series models with deep learning methods for financial forecasting.
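The preprocessing and evaluation steps named in the abstract can be made concrete with a short sketch of Min-Max scaling and the three reported error metrics; the sample series below is invented, and the hybrid ARIMA-LSTM model itself is out of scope here.

```python
import math

def min_max_scale(series):
    """Rescale a series into [0, 1], as done before model training."""
    lo, hi = min(series), max(series)
    return [(x - lo) / (hi - lo) for x in series]

def forecast_errors(actual, predicted):
    """MAE, MSE, and RMSE, the three metrics the abstract reports."""
    n = len(actual)
    errors = [a - p for a, p in zip(actual, predicted)]
    mae = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / n
    return mae, mse, math.sqrt(mse)

prices = [100.0, 102.0, 101.0, 104.0]
print(min_max_scale(prices))  # [0.0, 0.5, 0.25, 1.0]

mae, mse, rmse = forecast_errors([1.0, 2.0, 3.0], [1.5, 2.0, 2.5])
print(mae, mse, rmse)
```

RMSE is simply the square root of MSE, which is why the abstract's 0.101 MSE and 0.319 RMSE are mutually consistent (0.319² ≈ 0.102).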
Article
Open Access · December 27, 2022 · 11 pages · 64 views · 51 downloads

Towards the Efficient Management of Cloud Resource Allocation: A Framework Based on Machine Learning

Journal of Artificial Intelligence and Big Data 2022, 2(1), 1344. DOI: 10.31586/jaibd.2022.1344
Abstract
In the constantly evolving world of cloud computing, appropriate resource allocation is essential for both keeping costs down and ensuring an ongoing flow of apps and services. Because of its adaptability to specific tasks and human behavior, machine learning (ML) is a desirable choice for fulfilling those needs. Efficient cloud resource allocation is critical for optimizing performance and cost in cloud computing environments, and in order to improve the precision of resource allocation, this study investigates the use of Long Short-Term Memory (LSTM) networks. The LSTM model achieved 97% accuracy, 97.5% precision, 98% recall, and a 97.8% F1-score (the harmonic mean of precision and recall), according to experimental data. The confusion matrix demonstrates strong classification performance across several resource classes, while the accuracy and loss curves verify steady learning with minimal overfitting. The proposed LSTM model performs better than more conventional ML models such as Gradient Boosting (GB) and Logistic Regression (LR), according to a comparative study. These findings underscore the LSTM model's robustness and suitability for dynamic cloud environments, enabling more accurate forecasting and efficient resource management.
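The reported metrics can be made concrete: precision, recall, and F1 (the harmonic mean of the first two) all derive from confusion-matrix counts. A minimal sketch, with hypothetical counts rather than the study's data:

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from confusion-matrix counts for one
    class: tp = true positives, fp = false positives, fn = false
    negatives. F1 is the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for a single resource class (not from the paper).
p, r, f = prf1(tp=95, fp=5, fn=3)
print(round(p, 3), round(r, 3), round(f, 3))
```

Because F1 is a harmonic mean, it sits between precision and recall but is pulled toward the smaller of the two, which is why a 97.8% F1 alongside 97.5% precision and 98% recall is internally consistent.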
Article
Open Access · December 27, 2022 · 11 pages · 34 views · 21 downloads

Survey of Automated Testing Frameworks and Tools for Software Quality Assurance: Challenges and Best Practices

Journal of Artificial Intelligence and Big Data 2022, 2(1), 1351. DOI: 10.31586/jaibd.2022.1351
Abstract
Automated testing and software quality assurance (SQA) practices are essential for ensuring the reliability, scalability, and maintainability of modern software systems. This paper presents a review of widely used automated testing frameworks, including Keyword-Driven, Data-Driven, Behavior-Driven Development (BDD), and Record/Playback approaches, outlining their methodologies, benefits, and limitations in different development contexts. In parallel, it examines established SQA techniques such as Test-Driven Development, static analysis, and white-box testing, which provide systematic methods for defect detection and quality improvement. The study further reviews the role of practical tools, such as Selenium, TestNG, and JUnit, in supporting test automation and validation activities. In addition to highlighting technical capabilities, the paper identifies common challenges faced in automation, including incomplete requirements, integration complexities, and maintaining evolving test suites. Recommended best practices are provided to address these issues, offering guidance for organizations seeking to strengthen their software testing processes through structured frameworks, adaptive techniques, and reliable automation tools.
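The Data-Driven approach surveyed above can be illustrated with a short, self-contained sketch: one test routine driven by a table of input/expected rows. The function under test and the case table are invented for illustration; in Java-based suites the same pattern maps to TestNG's @DataProvider or JUnit 5's @ParameterizedTest.

```python
def normalize_username(raw):
    """Toy function under test (invented for illustration):
    trim surrounding whitespace and lowercase the name."""
    return raw.strip().lower()

# Data-driven testing: the test logic is written once, and the
# (input, expected) rows below drive it. Adding coverage means
# adding a row, not writing a new test.
CASES = [
    ("  Alice ", "alice"),
    ("BOB", "bob"),
    ("carol", "carol"),
]

def run_data_driven_tests():
    """Run every row and return the list of failing rows."""
    failures = []
    for raw, expected in CASES:
        actual = normalize_username(raw)
        if actual != expected:
            failures.append((raw, expected, actual))
    return failures

print(run_data_driven_tests())  # [] -> every row passed
```

Keeping the case table separate from the test logic is exactly what makes data-driven suites cheap to maintain as requirements evolve, one of the challenges the paper highlights.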
Article
ISSN: 2771-2389
DOI prefix: 10.31586/jaibd
Journal metrics
Publication years: 2016-2026
Journal (home page) visits: 15,510
Published articles: 62
Article views: 40,687
Article downloads: 6,763
Downloads per article: 109
APC: 99.00