Open Access January 10, 2025

Artificial Immune Systems: A Bio-Inspired Paradigm for Computational Intelligence

Abstract: Artificial Immune Systems (AIS) are bio-inspired computational frameworks that emulate the adaptive mechanisms of the human immune system, such as self/non-self discrimination, clonal selection, and immune memory. These systems have demonstrated significant potential in addressing complex challenges across optimization, anomaly detection, and adaptive system control. This paper provides a comprehensive exploration of AIS applications in domains such as cybersecurity, resource allocation, and autonomous systems, highlighting the growing importance of hybrid AIS models. Recent advancements, including integrations with machine learning, quantum computing, and bioinformatics, are discussed as solutions to scalability, high-dimensional data processing, and efficiency challenges. Core algorithms, such as the Negative Selection Algorithm (NSA) and Clonal Selection Algorithm (CSA), are examined, along with limitations in interpretability and compatibility with emerging AI paradigms. The paper concludes by proposing future research directions, emphasizing scalable hybrid frameworks, quantum-inspired approaches, and real-time adaptive systems, underscoring AIS's transformative potential across diverse computational fields.
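The Negative Selection Algorithm named in the abstract admits a very small sketch. The binary-string representation, the Hamming-distance matching rule, and every threshold below are illustrative assumptions, not details taken from the paper:

```python
import random

def hamming(a, b):
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def generate_detectors(self_set, n_detectors, length, threshold, seed=0):
    """Negative selection: keep random candidates that do NOT match any
    'self' pattern (here, a match means Hamming distance below threshold)."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = "".join(rng.choice("01") for _ in range(length))
        if all(hamming(cand, s) >= threshold for s in self_set):
            detectors.append(cand)
    return detectors

def is_anomalous(sample, detectors, threshold):
    """A sample is flagged as non-self if any detector matches it."""
    return any(hamming(sample, d) < threshold for d in detectors)

# Hypothetical 'self' patterns (e.g., encodings of normal system states).
self_set = ["0000000000", "0000011111"]
detectors = generate_detectors(self_set, n_detectors=20, length=10, threshold=3)
# By construction, an exact self pattern is never flagged.
```

Because every detector is kept only if it lies at distance >= threshold from all self strings, a sample identical to a self pattern can never be matched, which is the algorithm's self/non-self discrimination guarantee.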
Article
Open Access March 12, 2025

Academic Aspirations of 12th Grade Students in the United States: Place-Based Diminished Returns of Parental Education in Rural Areas

Abstract: Background: The Motivational Theory of Life-Span Development suggests that individual aspirations are shaped by both internal and external resources. Parental education is a key determinant of educational aspirations, yet its effects may vary by geographic location, demonstrating spatial patterns of Minorities’ Diminished Returns (MDRs). Objectives: This study examines the association between parental education and aspirations for graduate or professional education among non-Latino White adolescents, with a specific focus on urban-suburban versus rural differences. Methods: Using data from the 12th-grade cohort of the Monitoring the Future (MTF) 2024 survey, we conducted multivariate analyses to assess the relationship between parental education and aspirations for graduate or professional education. We further examined whether this association was moderated by geographic location (urban-suburban vs. rural) to identify place-based MDRs. Results: Higher parental education was associated with greater aspirations for advanced education; however, this effect was weaker in rural areas compared to urban and suburban settings. These findings highlight that even among non-Latino White adolescents, rural residence diminishes the benefits of socioeconomic resources, providing evidence of place-based MDRs. Conclusion: Rural residents face a dual disadvantage, both lower socioeconomic status and weaker returns on those resources, necessitating targeted interventions beyond resource allocation. To address disparities in educational aspirations in rural areas, policymakers should focus on improving equitable access to educational opportunities and ensuring that these resources translate into comparable outcomes across different social and geographic contexts.
Article
Open Access December 27, 2021

Financial Implications of Predictive Analytics in Vehicle Manufacturing: Insights for Budget Optimization and Resource Allocation

Abstract: Factory owners and vehicle manufacturers increasingly opt for predictive analytics to inform their decisions. While predictive analytics have been proven to provide insights into the initiation of maintenance measures before a machine actually fails, the choice of models and features can have a significant impact on the budget spent and resources allocated. This means that financially oriented questions need to at least partially guide decisions in the planning phase of data science projects. Data-driven approaches will play an increasingly important role, yet only a few firms have confidently applied logistic regression models to predictive maintenance. Moreover, to the best of available knowledge, data-driven classification models connecting vehicle component failures to delays at the assembly line have not been published. This paper applies a real-world, data-driven approach using classification models in predictive analytics for vehicle manufacturers, thereby linking the financial implications of such data science projects to their results. We expand the existing literature on predictive maintenance using a unique dataset from a newly launched vehicle series, presented as-is. Our research context is of interest to researchers and practitioners in the automotive industry who manage and plan final vehicle assembly under just-in-time principles, factoring in the consequences of component failures on the assembly process. Key findings of this paper highlight that while minor tweaking of the models is possible, their potential contribution to decision-making processes for budget optimization is limited.
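As an illustration of the kind of classification model this abstract describes, here is a minimal logistic-regression failure classifier trained with plain gradient descent. The feature names and the synthetic data are invented stand-ins, not the paper's dataset or method:

```python
import math
import random

def train_logistic(X, y, lr=0.1, epochs=500):
    """Plain per-sample gradient descent for unregularized logistic regression."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted failure probability
            g = p - yi                        # gradient of the log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Hypothetical features: [vibration level, temperature deviation].
rng = random.Random(1)
X = [[rng.gauss(0.2, 0.1), rng.gauss(0.1, 0.1)] for _ in range(50)] \
  + [[rng.gauss(0.8, 0.1), rng.gauss(0.9, 0.1)] for _ in range(50)]
y = [0] * 50 + [1] * 50   # 0 = healthy component, 1 = imminent failure
w, b = train_logistic(X, y)
acc = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

On well-separated synthetic classes like these, the classifier reaches high training accuracy; real sensor data would of course be far noisier.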
Review Article
Open Access November 05, 2022

Application of Neural Networks in Optimizing Health Outcomes in Medicare Advantage and Supplement Plans

Abstract: The growing complexity and variability in healthcare delivery and costs within Medicare Advantage (MA) and Medicare Supplement (Medigap) plans present significant challenges for improving health outcomes and managing expenditures. Neural networks, a subset of artificial intelligence (AI), have shown considerable promise in optimizing healthcare processes, particularly in predictive modeling, personalized treatment recommendations, and risk stratification. This paper explores the application of neural networks in enhancing health outcomes within the context of Medicare Advantage and Supplement plans. We review how deep learning models can be leveraged to predict patient risk, optimize resource allocation, and identify at-risk populations for preventive interventions. Additionally, we discuss the potential for neural networks to improve claims processing, reduce fraud, and streamline administrative burdens. By integrating various data sources, including medical records, claims data, and demographic information, neural networks enable more accurate and efficient decision-making processes. Ultimately, this approach can lead to better patient care, reduced healthcare costs, and improved satisfaction for beneficiaries of these programs. The paper concludes by highlighting the current limitations, ethical considerations, and future directions for AI adoption in the Medicare Advantage and Supplement sectors.
Review Article
Open Access December 27, 2021

Predictive Analytics and Deep Learning for Logistics Optimization in Supply Chain Management

Abstract: Managing supply chains efficiently has become a major concern for organizations. One of the important factors to optimize in supply chain management is logistics. The advent of technology and the increase in data availability allow for the enhancement of the efficiency of logistics in a supply chain. This discussion focuses on blending analytics with innovation in logistics to improve the operations of a supply chain. An approach is presented on how predictive analytics can be used to improve logistics operations. To analyze big data in logistics effectively, an artificial intelligence computational technique, specifically deep learning, is employed. Two case studies are illustrated to demonstrate the practical employability of the proposed technique. The discussion reveals the power and potential of using predictive analytics in logistics to project future KPI values from contemporary operational data; sheds light on the innovative technique of deep learning-based predictive analytics in logistics; and suggests incorporating innovative techniques such as deep learning with predictive analytics to develop an accurate forecasting technique, optimize operations, and prevent disruption in the supply chain. The network of supply chains has become more complex, necessitating the latest technological advancements. The sectors that have gained a fair amount of attention for the application of technology to optimize their operations are manufacturing, healthcare, aerospace, and the automotive industry. Comparatively little attention has been devoted to the logistics sector, although many studies describe how analytics and artificial intelligence could be used there to achieve greater optimization. Currently, significant research has been done on optimizing logistics operations.

Nevertheless, with the explosive volume of historical data produced by the logistics operations of an organization, there is a great opportunity to learn valuable insights from the data accumulated over time for long-term strategic planning. To develop the logistics operations of an organization, the use of historical data is essential to understand trends in the operations. For example, regular maintenance planning and resource allocation based on trends are long-term activities that will not affect logistics operations immediately but can affect the business’s strategic planning in the long run. A predictive analysis technique employed on historical logistics data can draw conclusions about future trends in logistics operations. Thus, the technique can be used to prevent disruption of the supply chain.
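The idea of projecting KPI values from historical logistics data can be shown with a deliberately simple stand-in for the deep-learning forecasters the abstract discusses: an ordinary-least-squares trend extrapolation. The interface (history in, projected KPI out) is the same; the KPI series below is hypothetical:

```python
def linear_trend_forecast(series, steps_ahead):
    """Fit y = a*t + b by ordinary least squares over the observed
    history, then extrapolate the fitted trend steps_ahead periods."""
    n = len(series)
    ts = list(range(n))
    mean_t = sum(ts) / n
    mean_y = sum(series) / n
    cov = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, series))
    var = sum((t - mean_t) ** 2 for t in ts)
    a = cov / var                 # slope: KPI change per period
    b = mean_y - a * mean_t       # intercept
    return [a * (n + k) + b for k in range(steps_ahead)]

# Hypothetical KPI: weekly on-time delivery rate drifting downward.
history = [0.97, 0.96, 0.96, 0.95, 0.94, 0.94, 0.93]
projection = linear_trend_forecast(history, 3)
```

A deep-learning forecaster replaces the linear fit with a learned nonlinear mapping, but the strategic use, anticipating a degrading KPI before it disrupts operations, is identical.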
Review Article
Open Access January 10, 2022

Composable Infrastructure: Towards Dynamic Resource Allocation in Multi-Cloud Environments

Abstract: To ensure maximum flexibility, service providers offer a variety of computing options with regard to CPU, memory capacity, and network bandwidth. At the same time, the efficient operation of current cloud applications requires an infrastructure that can adjust its configuration continuously across multiple dimensions, which are generally not statically predefined. Our research shows that these requirements are rarely met by today's typical public cloud offerings and management approaches. To provide such a highly dynamic and flexible execution environment, we propose the application-driven autonomic management of data center resources as the core vision for the development of a future cloud infrastructure. As part of this vision and the required gradual progress toward it, we present the concept of composable infrastructure and its impact on resource allocation in multi-cloud environments. We introduce relevant techniques for optimizing resource allocation strategies and indicate future research opportunities [1]. Many cloud service providers offer computing instances that can be configured with arbitrary capacity, depending on the availability of certain hardware resources. This level of configurability provides customers with the desired flexibility for executing their applications. Because of the large number of such instances with often varying characteristics, service consumers must invest considerable effort to set up or reconfigure elaborate resource provisioning systems. Most importantly, they must differentiate the loads to be distributed between jobs that need to be executed and placeholder jobs, i.e., jobs that trigger the automatic elasticity functionality responsible for resource allocator reconfiguration. Operations research reveals that optimizing resource allocator reconfiguration strategies is a fundamentally difficult problem due to its NP-hardness.

Despite these challenges, dynamic resource allocation in multi-clouds is becoming increasingly important, since modern Internet-based service settings are dispersed across multiple providers [2].
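The NP-hardness noted in the abstract is the same difficulty that underlies bin packing, which is why practical allocators fall back on greedy heuristics rather than exact optimization. A minimal first-fit-decreasing sketch, assuming scalar resource demands and fixed-capacity instances (a simplifying assumption, not the paper's model):

```python
def first_fit_decreasing(demands, capacity):
    """Greedily pack job resource demands into fixed-capacity instances:
    sort demands largest-first, place each into the first instance with
    enough remaining capacity, and open a new instance when none fits.
    Returns the number of instances opened. FFD is a classic bin-packing
    heuristic guaranteed to use at most roughly 11/9 of the optimal
    number of bins (asymptotically)."""
    bins = []  # remaining capacity of each opened instance
    for d in sorted(demands, reverse=True):
        for i, free in enumerate(bins):
            if d <= free:
                bins[i] -= d
                break
        else:
            bins.append(capacity - d)
    return len(bins)

# Hypothetical demands (e.g., vCPU counts) against instances of capacity 5.
n_instances = first_fit_decreasing([5, 4, 3, 2, 1], capacity=5)
```

Real multi-cloud allocators extend this idea to multiple resource dimensions (CPU, memory, bandwidth) and to reconfiguration over time, but the exact versions of those problems remain NP-hard.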
Review Article
Open Access December 27, 2022

Towards the Efficient Management of Cloud Resource Allocation: A Framework Based on Machine Learning

Abstract: In the constantly evolving world of cloud computing, appropriate resource allocation is essential both for keeping costs down and for ensuring an uninterrupted flow of apps and services. Because of its adaptability to specific tasks and human behavior, machine learning (ML) is a desirable choice for fulfilling those needs. Efficient cloud resource allocation is critical for optimizing performance and cost in cloud computing environments. To improve the precision of resource allocation, this study investigates the use of Long Short-Term Memory (LSTM) networks. The LSTM model achieved 97% accuracy, 97.5% precision, 98% recall, and a 97.8% F1-score (the harmonic mean of precision and recall), according to experimental data. The confusion matrix demonstrates strong classification performance across several resource classes, while the accuracy and loss curves verify steady learning with minimal overfitting. According to a comparative study, the proposed LSTM model outperforms more conventional ML models such as Gradient Boosting (GB) and Logistic Regression (LR). These findings underscore the LSTM model’s robustness and suitability for dynamic cloud environments, enabling more accurate forecasting and efficient resource management.
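The reported metrics all follow directly from confusion-matrix counts; a small helper makes the definitions concrete. The counts in the usage line are illustrative, not the study's data:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from binary confusion-matrix
    counts (true/false positives, false/true negatives). F1 is the
    harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return accuracy, precision, recall, f1

# Illustrative counts only: 8 correct positives, 2 false alarms,
# no missed positives, 10 correct negatives.
acc, prec, rec, f1 = classification_metrics(tp=8, fp=2, fn=0, tn=10)
```

For multi-class resource prediction, these quantities are typically computed per class from the full confusion matrix and then averaged.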
Article
Open Access June 28, 2016

Scalable Task Scheduling in Cloud Computing Environments Using Swarm Intelligence-Based Optimization Algorithms

Abstract: Effective task scheduling in cloud computing is crucial for optimizing system performance and resource utilization. Traditional scheduling methods often struggle to adapt to the dynamic and complex nature of cloud environments, where workloads, resource availability, and task requirements constantly change. Swarm intelligence-based optimization algorithms, such as Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Artificial Bee Colony (ABC), offer a promising solution by mimicking natural processes to explore large search spaces efficiently. These algorithms are effective in balancing multiple objectives, including minimizing execution time, reducing energy consumption, and ensuring fairness in resource allocation. They also enhance system scalability, which is vital for modern cloud infrastructures. However, challenges remain, including slow convergence speeds, complex parameter tuning, and integration with existing cloud frameworks. Addressing these issues will be essential for the practical implementation of swarm intelligence in cloud task scheduling, helping to improve resource management and overall system performance.
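The search dynamics behind PSO-based schedulers can be illustrated with a minimal continuous PSO. The quadratic objective below is a toy stand-in for a real makespan evaluation of task-to-VM mappings, and all coefficient choices are conventional defaults, not values from the abstract:

```python
import random

def pso(objective, dim, n_particles=30, iters=200, seed=0):
    """Minimal Particle Swarm Optimization: each particle's velocity is a
    blend of inertia, attraction to its personal best, and attraction to
    the swarm's global best."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social coefficients
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective standing in for a makespan surrogate: squared distance
# from an ideal load split. A real scheduler would instead evaluate
# candidate task-to-VM assignments.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=2)
```

The parameter-tuning challenge the abstract mentions is visible even here: convergence speed depends directly on the inertia and attraction coefficients.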
Review Article
Open Access December 18, 2023

Leveraging AI, ML, and Generative Neural Models to Bridge Gaps in Genetic Therapy Access and Real-Time Resource Allocation

Abstract: This paper leverages gene and cell therapy research in diverse disorders, ranging from monogenic to infectious diseases to cancer and emerging breakthroughs, where one can harness individual genes or a synthetic gene sequence designed based on a shared molecular pattern in infected cells to better fight various disorders [1]. A pivotal task is to predict the performance of candidate gene therapies to guide clinical translational research using methods such as retrospective bioinformatic analyses. Applying them to a large-scale gene therapy database reveals that it is feasible to construct and apply well-performing, interpretable, supervised learning models [2]. Preliminary evidence of the statistical significance of machine learning approaches helps clinicians, biomedical researchers, market participants, and regulatory and economic experts derive relevant, practical applications. This enhances the deployment of gene therapy and genomics to achieve positive, long-term growth for humanity while alleviating the ongoing worldwide economic burden precipitated by prolonged and recurring diseases. Deploying machine learning techniques to accelerate gene and cell therapy drug development and trials should also mitigate the existing obstacle of limited patient access to emerging, transformative medical innovations such as gene therapy due to skyrocketing prices, which often herald gene therapy products as the world's most expensive medicines [3]. Moreover, a multidimensional access gap, encompassing the availability, affordability, and quality or acceptability of these clinical treatments, commonly prevents patients from accessing effective, life-saving genetic medicines. This substantial gap has repeatedly been documented and mainly emanates from differential institutional and socio-political choices around resource allocation at international and domestic levels [4].

In particular, it also stems from stringent licensure and regulatory approval processes, underpinned by insufficient evidence for the safety and clinical efficacy profiles of genetic therapies across multiple micro-local diagnoses and subpopulations. We believe that a higher likelihood of gene therapy adoption should result when the clinical evidence path contains adequate representation from the most diverse and relevant patient populations [5].

Query parameters

Keyword:  Resource Allocation
