Open Access January 10, 2025

Artificial Immune Systems: A Bio-Inspired Paradigm for Computational Intelligence

Abstract
Artificial Immune Systems (AIS) are bio-inspired computational frameworks that emulate the adaptive mechanisms of the human immune system, such as self/non-self discrimination, clonal selection, and immune memory. These systems have demonstrated significant potential in addressing complex challenges across optimization, anomaly detection, and adaptive system control. This paper provides a comprehensive exploration of AIS applications in domains such as cybersecurity, resource allocation, and autonomous systems, highlighting the growing importance of hybrid AIS models. Recent advancements, including integrations with machine learning, quantum computing, and bioinformatics, are discussed as solutions to scalability, high-dimensional data processing, and efficiency challenges. Core algorithms, such as the Negative Selection Algorithm (NSA) and Clonal Selection Algorithm (CSA), are examined, along with limitations in interpretability and compatibility with emerging AI paradigms. The paper concludes by proposing future research directions, emphasizing scalable hybrid frameworks, quantum-inspired approaches, and real-time adaptive systems, underscoring AIS's transformative potential across diverse computational fields.
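The negative selection mechanism described above can be illustrated with a short sketch: random detectors are generated and any that match a "self" sample are censored, so the surviving detectors cover only non-self space. The data, dimensionality, and matching threshold below are hypothetical choices for illustration, not taken from the paper.

```python
import random

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train_detectors(self_set, n_detectors, dim, threshold, rng):
    """Censoring phase of the Negative Selection Algorithm: keep only
    random candidates that do NOT match any 'self' sample."""
    detectors = []
    while len(detectors) < n_detectors:
        candidate = [rng.random() for _ in range(dim)]
        if all(euclidean(candidate, s) > threshold for s in self_set):
            detectors.append(candidate)
    return detectors

def is_anomalous(sample, detectors, threshold):
    """Detection phase: a sample is flagged non-self if any detector matches."""
    return any(euclidean(sample, d) <= threshold for d in detectors)

rng = random.Random(42)
# Hypothetical 'self' region: normal data clustered near the origin.
self_set = [[rng.uniform(0.0, 0.3), rng.uniform(0.0, 0.3)] for _ in range(50)]
detectors = train_detectors(self_set, n_detectors=30, dim=2, threshold=0.15, rng=rng)

print(is_anomalous([0.9, 0.9], detectors, 0.15))   # far from the self region
print(is_anomalous(self_set[0], detectors, 0.15))  # False: detectors never cover self
```

By construction every detector lies more than one threshold away from every self sample, so known-normal points are never flagged; detection quality on genuinely novel points depends on how densely the detectors cover non-self space.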
Article
Open Access March 22, 2025

Enhancing Scalability and Performance in Analytics Data Acquisition through Spark Parallelism

Abstract
Data acquisition serves as a critical component of modern data architecture, with REST API integration emerging as one of the most common approaches for sourcing external data. This study evaluates the efficiency of various methodologies for collecting data via REST APIs and benchmarks their performance. It explores how leveraging the Spark distributed computing platform can optimize large-scale REST API calls, enabling enhanced scalability and improved processing speeds to meet the demands of high-volume data workflows.
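Spark typically parallelizes such acquisition by spreading page ranges across executors (for example, issuing HTTP calls inside `rdd.mapPartitions` with one session per partition). As a cluster-free sketch of the same fan-out pattern, the example below uses a Python thread pool with a stubbed fetch function; no real endpoint or Spark cluster is involved, and the pagination scheme is hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_page(page):
    """Stand-in for an HTTP GET against a paginated REST endpoint.
    In a Spark job, each partition would issue calls like this with
    its own HTTP session, so requests run concurrently across executors."""
    return {"page": page, "items": [page * 10 + i for i in range(10)]}

def acquire(pages, max_workers=8):
    # Fan the page requests out across a worker pool; Spark achieves the
    # same effect by distributing partitions across the cluster.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        responses = list(pool.map(fetch_page, pages))
    # Flatten the paged payloads into one record list for downstream use.
    return [item for r in responses for item in r["items"]]

records = acquire(range(4))
print(len(records))  # 40
```

`pool.map` preserves input order, so the flattened output is deterministic even though the fetches overlap in time.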
Review Article
Open Access March 08, 2025

Advancing Preference Learning in AI: Beyond Pairwise Comparisons

Abstract
Preference learning plays a crucial role in AI applications, particularly in recommender systems and personalized services. Traditional pairwise comparisons, while foundational, present scalability challenges in large-scale systems. This study explores alternative elicitation methods such as ranking, numerical ratings, and natural language feedback, alongside a novel hybrid framework that dynamically integrates these approaches. The proposed methods demonstrate improved efficiency, reduced cognitive load, and enhanced accuracy. Results from simulated user studies reveal that hybrid approaches outperform traditional methods, achieving a 40% reduction in user effort while maintaining high predictive accuracy. These findings open pathways for deploying user-centric, scalable preference learning systems in dynamic environments.
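One way to picture the hybrid idea is to blend two elicitation signals, pairwise win counts and numeric ratings, into a single score per item. The 50/50 weighting, the items, and the toy data below are illustrative assumptions, not the paper's actual framework.

```python
def rank_items(items, pairwise, ratings, w=0.5):
    """Blend two elicitation signals into one score per item:
    normalized pairwise win counts plus normalized numeric ratings.
    The weighting w is an illustrative choice."""
    wins = {i: 0 for i in items}
    for winner, _loser in pairwise:
        wins[winner] += 1
    max_wins = max(wins.values()) or 1
    max_rating = max(ratings.values()) or 1
    score = {i: w * wins[i] / max_wins + (1 - w) * ratings[i] / max_rating
             for i in items}
    return sorted(items, key=score.get, reverse=True)

items = ["a", "b", "c"]
pairwise = [("a", "b"), ("a", "c"), ("b", "c")]  # a beats b and c; b beats c
ratings = {"a": 4, "b": 5, "c": 2}               # ratings mildly disagree
print(rank_items(items, pairwise, ratings))      # ['a', 'b', 'c']
```

Here the pairwise signal outweighs b's higher rating, showing how a blended score can arbitrate when elicitation methods conflict.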
Review Article
Open Access December 22, 2023

Cloud Based Payment Processing and Merchant Services: A Scalable and Secure Framework for Digital Transactions in a Globalized Economy

Abstract
In today’s world of a globalized economy and ubiquitous digital transactions, businesses are hungry for ways to increase transaction efficiency and security. In the real economy, solutions that scale to fit transaction volume or velocity are equally valuable. This is true for clearing and settlement and for the day-to-day needs of buyers and sellers alike. Clever observers of both cash and digital transactions can spot cases where technology that supports transaction security or safety can strengthen consumer-borrower ties, mitigate default risks, and reduce recidivism. In general, a cloud solution for payment processing and merchant services solves two major barriers to optimum business technology: lack of scalability and lack of security [1]. The extension of current practice has its advantages, but new solutions unlock significant opportunities for both consumers and financial institutions [2]. The focus of this work is on the provisioning of cloud-based payment processing and merchant services to financial institutions and established global organizations, although the options available with these services mean they are potentially applicable to a wide range of group entities, including non-trading organizations, pension administrators, and group treasurers. With the increased attention to cybersecurity, a mass of data is available to assist the IT departments of the major payment processors, merchants, and acquirers to get cybersecurity on the radar of C-level executives [3]. The case is put forward for the increased targeting of and reporting to the Board’s Audit, Risk, and Liability Committees of publicly held payment processors and merchants to reduce fraud losses and mitigate the reputation and class action lawsuit risk due to data breaches. 
The progress of technology in the payment sector requires a collective approach from all stakeholders to mitigate fraud and cybersecurity-related risks in new products and services, thereby enhancing consumer confidence and increasing the proportion of retail cashless transactions [4].
Review Article
Open Access January 09, 2025

Advances in the Synthesis and Optimization of Pharmaceutical APIs: Trends and Techniques

Abstract
The synthesis and optimization of Active Pharmaceutical Ingredients (APIs) is fundamental to pharmaceutical drug development, directly influencing drug efficacy, safety, and cost-effectiveness. Over recent years, significant advancements in synthetic methodologies and manufacturing technologies have transformed API production. This manuscript provides an overview of the latest innovations in API synthesis, focusing on key techniques such as green chemistry, continuous flow chemistry, biocatalysis, and automation. Green chemistry principles, including solvent substitution and catalytic reactions, have enhanced sustainability by reducing waste and energy consumption. Continuous flow chemistry offers improved reaction control, scalability, and safety, while biocatalysis provides an eco-friendly alternative for synthesizing complex and chiral APIs. Additionally, the integration of automation and advanced process control using machine learning and real-time monitoring has optimized production efficiency and consistency. The manuscript also discusses the challenges associated with regulatory compliance and quality assurance, highlighting the role of advanced analytical techniques such as HPLC, NMR, and mass spectrometry in ensuring API purity. Looking ahead, personalized medicine and smart manufacturing technologies, including blockchain for traceability, are expected to drive further innovation in API production. This review concludes by emphasizing the need for continued advancements in sustainability, efficiency, and scalability to meet the evolving demands of the pharmaceutical industry, ultimately enabling the development of safer, more effective, and environmentally responsible medicines.
Review Article
Open Access April 11, 2024

5V’s of Big Data Shifted to Suit the Context of Software Code: Big Code for Big Software Projects

Abstract
Data is the collection of facts and observations about events; it is continuously growing, becoming denser and more varied by the minute across different disciplines and fields. Big Data has therefore emerged and is evolving rapidly. The various types of data being processed are huge, yet little attention has been paid to where this data resides: much of it lives in software, and software codebases are growing correspondingly, in the size of modules, functionalities, classes, and so on. Since data is growing so rapidly, the codebases of software are growing as well. This paper therefore discusses the 5V's of Big Data in the context of software code and how to optimize and manage big code. "Big Code for Big Software" refers to the specific challenges and considerations involved in developing, managing, and maintaining code in large-scale software systems.
Article
Open Access December 06, 2023

Success Factors of Adopting Cloud Enterprise Resource Planning

Abstract
The technologies for cloud ERP (Enterprise Resource Planning) have revolutionized the field of information technology. Any kind of business can benefit from their flexibility, affordability, scalability, adaptability, availability, and customizable data. An advancement of classic ERP, cloud enterprise resource planning (C-ERP) provides the benefits of cloud computing (CC), including resource elasticity and ease of use. The rise of cloud computing affects on-premise ERP systems in terms of architecture and cost. Cloud-based ERP systems claim to be well suited to digital corporate settings. System quality, security, vendor lock-in, and data accessibility are recognized as key technological issues. Industry 4.0 refers to the re-engineering and revitalization of modern factories through the integration of cloud-based operations, industrial internet connectivity, additive manufacturing, and cybersecurity platforms. One of the four main pillars of Industry 4.0, cloud-based Enterprise Resource Planning (Cloud ERP) is a component of cloud operations that aids in achieving higher standards of sustainable performance.
Review Article
Open Access August 16, 2023

Pharmaceutical Drug Traceability by Blockchain and IoT in Enterprise Systems

Abstract
Pharmaceutical drug traceability is a regulatory compliance requirement adopted by most nations in the world. A comprehensive analysis was carried out to explain the benefits of adopting enterprise systems for pharmaceutical drug traceability. Counterfeit drugs are fake medicines produced with incorrect potency or incorrect ingredients. Identifying the most effective and innovative technologies for protecting people's health is therefore essential to solving the drug counterfeiting problem. Drug serialization is an essential concept for drug traceability in the pharmaceutical supply chain (PSC). Blockchain is a stringent emerging technology that makes drug distribution more secure in the supply chain. Blockchain-based drug traceability is a distributed shared data platform that shares information that is irreversible, reliable, accountable, and transparent across the PSC. Blockchain offers two powerful frameworks, Hyperledger Fabric and Besu, that satisfy important criteria for medication traceability, such as privacy, trust, transparency, security, authorization and authentication, and scalability. Researchers in health informatics can use blockchain designs as a useful road map to develop and implement end-to-end pharmaceutical drug traceability in the supply chain to prevent drug counterfeiting. Industrial IoT is also a key component for the pharmaceutical industry: IoT systems in pharmaceutical drug traceability can be beneficial as they are based on automation and computational methodologies.
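The tamper-evidence property that makes a blockchain ledger attractive for serialization can be shown with a minimal hash chain: each supply-chain event's hash covers the previous entry, so rewriting history invalidates everything after it. This is a drastically simplified stand-in for a platform like Hyperledger Fabric, and the serial numbers and steps are made up.

```python
import hashlib
import json

def add_event(chain, event):
    """Append a supply-chain event whose hash covers the previous entry,
    making silent tampering detectable on verification."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    h = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"event": event, "prev": prev_hash, "hash": h})
    return chain

def verify(chain):
    """Recompute every link; any edited or reordered event breaks the chain."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        payload = json.dumps(block["event"], sort_keys=True)
        if block["prev"] != prev or \
           block["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
            return False
    return True

chain = []
add_event(chain, {"serial": "LOT-001", "step": "manufactured"})
add_event(chain, {"serial": "LOT-001", "step": "shipped"})
print(verify(chain))                        # True
chain[0]["event"]["step"] = "repackaged"    # tamper with history
print(verify(chain))                        # False
```

A real deployment adds distributed consensus and access control on top of this hashing idea; the sketch captures only the integrity mechanism.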
Review Article
Open Access December 27, 2023

Ensuring High Availability and Resiliency in Global Deployments: Leveraging Multi-Region Architectures, Auto Scaling, and Traffic Management in Azure and AWS

Abstract
Modern organizations leverage highly distributed, global deployments to provide high availability and resiliency for cloud-first applications. By hosting these applications across multiple geographic locations and relying on highly available services, organizations can prevent disruption to their business and reduce complexity by employing the scale of infrastructure offered by major cloud providers. Global cloud deployments are built on well-known models such as failover, load balancing, and scalability. However, traditional methods used to recover from regional failure, while effective, can be complex, and typical multi-region recovery and high-availability architectures carry latency and cost risks that should be weighed alongside other constraints such as cloud deployment models. This document describes the traffic management techniques that can be applied to multi-region strategies, focusing on trade-offs and costs. New traffic management techniques applied to traditional global architectures now allow organizations to adopt cloud services more efficiently; traffic management is straightforward in some environments, while other organizations have started to leverage routing through a dedicated traffic management platform. In multi-region deployments, active-active and active-passive are the most common architectural models, allowing organizations to seamlessly handle failover, scalability, and global distribution based on business goals and requirements. Traffic management for these infrastructures is critical to ensure fair data distribution and efficiency, keeping costs under control and rerouting workloads when necessary. Adopting the new traffic management techniques allows organizations to evolve system architectures easily as business requirements change, taking advantage of cost benefits across multiple infrastructures.
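A routing policy of the kind the abstract describes can be sketched in a few lines: prefer a healthy endpoint in the caller's region (active-active locality), and fail over to the lowest-latency healthy region otherwise. The region names, health flags, and latencies below are illustrative, not tied to any Azure or AWS service.

```python
def route(request_region, endpoints):
    """Pick an endpoint for a request: local-and-healthy first,
    otherwise the lowest-latency healthy region (failover)."""
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy endpoints")
    local = [e for e in healthy if e["region"] == request_region]
    if local:
        return local[0]
    return min(healthy, key=lambda e: e["latency_ms"])

endpoints = [
    {"region": "us-east",  "healthy": True,  "latency_ms": 20},
    {"region": "eu-west",  "healthy": True,  "latency_ms": 90},
    {"region": "ap-south", "healthy": False, "latency_ms": 140},
]
print(route("eu-west", endpoints)["region"])   # eu-west (local and healthy)
endpoints[1]["healthy"] = False                # simulate a regional outage
print(route("eu-west", endpoints)["region"])   # us-east (automatic failover)
```

Managed traffic services (for example DNS- or anycast-based routing tiers) implement the same decision with health probes instead of an in-process flag, which is what keeps failover transparent to clients.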
In these scenarios, traffic management becomes a crucial backbone of success to ensure that traffic is being efficiently and intelligently distributed [1].
Review Article
Open Access December 24, 2022

Cloud Native ETL Pipelines for Real Time Claims Processing in Large Scale Insurers

Abstract
Cloud native ETL pipelines support the extract and transform phases of real time claims processing in large scale insurers. The cloud native approach offers dramatic improvements in scalability, reliability, resiliency and agility as well as seamless integration with the diverse set of data sources, destinations and technologies characteristic of large scale insurers. The ETL process extracts data from source systems such as core transaction, fraud, customer and accounting processes, transforms the data to create a usable format for analytics and other applications, and loads the resulting tables into business intelligence or data lake systems for subsequent storage and analysis. By addressing these two phases of the overall ETL process, cloud native ETL pipelines can provide timely, reliable and consistent data to data scientists, actuaries, underwriters and other analysts. Real time processing represents a key priority within the overall claims process: faster, more accurate claim approvals reduce insurer costs, improve customer service and enhance premium pricing. As a result, a variety of claims related use cases are moving from batch to real time.
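The extract and transform phases the abstract focuses on can be sketched as a pair of generator stages feeding a load step. The source systems, field names, and cleansing rule below are hypothetical stand-ins, not an insurer's actual schema.

```python
def extract(sources):
    """Extract: pull raw claim records from several source systems
    (core transaction, fraud, etc.), stubbed here as in-memory lists."""
    for name, rows in sources.items():
        for row in rows:
            yield {"source": name, **row}

def transform(records):
    """Transform: normalize amounts to integer cents and drop records
    missing a claim id, a typical cleansing step before loading."""
    for r in records:
        if r.get("claim_id") is None:
            continue
        yield {"source": r["source"], "claim_id": r["claim_id"],
               "amount_cents": round(r["amount"] * 100)}

sources = {
    "core":  [{"claim_id": 1, "amount": 120.5}, {"claim_id": None, "amount": 3.0}],
    "fraud": [{"claim_id": 2, "amount": 7.25}],
}
# "Load" here is just materializing the stream; a real pipeline would
# write to a warehouse or data lake instead.
loaded = list(transform(extract(sources)))
print(loaded)
```

Because both stages are generators, records stream through one at a time, which is the property that lets a cloud-native pipeline move from batch to near real time without changing the stage logic.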
Review Article
Open Access December 22, 2020

Cloud Migration Strategies for High-Volume Financial Messaging Systems

Abstract
Key business objectives for cloud adoption of digital infrastructure are often framed in terms of reducing cost, improving fault tolerance and resilience, simplifying scaling, and enabling innovation. Given the critical nature of the financial sector, however, where timeliness and price can significantly determine an outcome, cloud migration in delivery environments demands greater throughput on the critical path and, in many enterprise-scale settings, avoids the complexity of hybrid deployments and the risks of multi-cloud. Nevertheless, slack does exist in system designs; financial institutions enable market functionality (trading, clearing, best execution) even though some of these workloads could tolerate lower service levels than other verticals require. A multi-account cloud structure for sensitive data, for example, naturally limits exposure when combined with observed risk. Delivering the promised elasticity during periods of high demand usually requires support from one or more dedicated environments located nearer to operations. Components can consequently be allocated on a per-account basis or maintained as shared sink systems to which the dedicated streams write. Automation code can similarly be targeted at dedicated accounts, avoiding the resource constraints that beset such operations during industry events like emergency triage/contact desking.
Review Article
Open Access December 27, 2022

Survey of Automated Testing Frameworks and Tools for Software Quality Assurance: Challenges and Best Practices

Abstract
Automated testing and software quality assurance (SQA) practices are essential for ensuring the reliability, scalability, and maintainability of modern software systems. This paper presents a review of widely used automated testing frameworks, including Keyword-Driven, Data-Driven, Behavior-Driven Development (BDD), and Record/Playback approaches, outlining their methodologies, benefits, and limitations in different development contexts. In parallel, it examines established SQA techniques such as Test-Driven Development, static analysis, and white-box testing, which provide systematic methods for defect detection and quality improvement. The study further examines the role of practical tools, such as Selenium, TestNG, and JUnit, in supporting test automation and validation activities. In addition to highlighting technical capabilities, the paper identifies common challenges faced in automation, including incomplete requirements, integration complexities, and maintaining evolving test suites. Recommended best practices are provided to address these issues, offering guidance for organizations seeking to strengthen their software testing processes through structured frameworks, adaptive techniques, and reliable automation tools.
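The data-driven approach the survey covers separates test data from test logic: cases live in a table, and one test routine runs them all. A minimal sketch using Python's standard `unittest` (rather than TestNG or JUnit, which the paper discusses) follows; the function under test is a made-up example.

```python
import unittest

def normalize_email(s):
    """Hypothetical function under test: trim whitespace, lowercase."""
    return s.strip().lower()

# Data-driven style: cases are a table separate from the test logic,
# so new scenarios are added without changing any code paths.
CASES = [
    ("  Alice@Example.COM ", "alice@example.com"),
    ("bob@example.com",      "bob@example.com"),
    ("\tCAROL@X.ORG\n",      "carol@x.org"),
]

class TestNormalizeEmail(unittest.TestCase):
    def test_cases(self):
        for raw, expected in CASES:
            # subTest reports each row's failure independently instead
            # of stopping at the first mismatch.
            with self.subTest(raw=raw):
                self.assertEqual(normalize_email(raw), expected)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestNormalizeEmail))
print(result.wasSuccessful())  # True
```

JUnit's `@ParameterizedTest` and TestNG's `@DataProvider` express the same pattern in Java.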
Article
Open Access December 26, 2021

Designing Scalable Healthcare Data Pipelines for Multi-Hospital Networks

Abstract
Healthcare is increasingly recognized as a data-intensive industry. Multi-hospital networks, among other organizations, face mounting operational and governance challenges because of rigid data-integration pipelines that support all data sources and destinations in the network. These pipelines have become difficult to modify, causing them to lag behind the changing needs of the clinical operation. Scalable data-pipeline architectures better support clinical decision making, optimize hospital operations, ease data quality and compliance concerns, and contribute to improved patient outcomes. Meeting scalability goals requires breaking up monolithic data-integration pipelines into smaller decoupled components and aligning service-level agreements of pipeline components and source systems. Parallelization and adoption of distributed data-warehouse technology mitigate the burden of ingesting data into a multi-hospital network. However, latency requirements still warrant the construction of separate pipelines for data ingress from clinical devices, electronic health records, and external laboratory-information systems. Healthcare associations recommend near real-time data availability for a growing list of clinical and operational applications. Mishandling the real-time ingestion of data from clinical devices, in particular, compromises availability and performance. Scalable architectural patterns for real-time streaming ingestion from heterogeneous data sources, transport processes, and back-end processing structures are detailed.
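The decoupling idea, breaking a monolithic pipeline into producer and consumer components joined by a buffer, can be sketched with a simple in-process queue. The device readings and the enrichment rule below are invented for illustration; a production pipeline would use a durable message broker rather than `queue.Queue`.

```python
from queue import Queue

def device_ingest(readings, out_q):
    """Ingress component: push raw device readings onto a buffer queue,
    decoupling device-facing code from downstream processing."""
    for r in readings:
        out_q.put(r)
    out_q.put(None)  # sentinel: stream finished

def enrich(in_q, out):
    """Processing component: consumes independently of the producer,
    so each side can be scaled or replaced on its own."""
    while True:
        r = in_q.get()
        if r is None:
            break
        # Hypothetical clinical rule: flag heart rates above 100 bpm.
        out.append({**r, "flag": r["hr"] > 100})

q = Queue()
readings = [{"patient": "p1", "hr": 72}, {"patient": "p2", "hr": 118}]
device_ingest(readings, q)
results = []
enrich(q, results)
print([r["flag"] for r in results])  # [False, True]
```

Because the two components share only the queue's contract, the ingress side for devices, EHRs, and lab systems can each run as a separate pipeline, which is exactly the latency-driven separation the abstract recommends.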
Review Article
Open Access June 28, 2016

Scalable Task Scheduling in Cloud Computing Environments Using Swarm Intelligence-Based Optimization Algorithms

Abstract
Effective task scheduling in cloud computing is crucial for optimizing system performance and resource utilization. Traditional scheduling methods often struggle to adapt to the dynamic and complex nature of cloud environments, where workloads, resource availability, and task requirements constantly change. Swarm intelligence-based optimization algorithms, such as Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Artificial Bee Colony (ABC), offer a promising solution by mimicking natural processes to explore large search spaces efficiently. These algorithms are effective in balancing multiple objectives, including minimizing execution time, reducing energy consumption, and ensuring fairness in resource allocation. They also enhance system scalability, which is vital for modern cloud infrastructures. However, challenges remain, including slow convergence speeds, complex parameter tuning, and integration with existing cloud frameworks. Addressing these issues will be essential for the practical implementation of swarm intelligence in cloud task scheduling, helping to improve resource management and overall system performance.
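A toy version of PSO applied to task scheduling can make the idea concrete: each particle is a continuous position that decodes to a task-to-VM assignment, and the swarm minimizes the makespan (busiest VM's load). Task lengths, VM count, and PSO parameters below are illustrative choices, not from the surveyed literature.

```python
import random

TASKS = [4, 8, 2, 6, 3, 7]   # task lengths (illustrative units of work)
N_VMS = 3                    # identical VMs, for simplicity

def makespan(position):
    """Decode a continuous position into a task->VM assignment and
    return the busiest VM's load, the quantity PSO minimizes."""
    load = [0.0] * N_VMS
    for length, x in zip(TASKS, position):
        load[min(int(x), N_VMS - 1)] += length
    return max(load)

def pso(n_particles=20, iters=60, w=0.7, c1=1.4, c2=1.4, seed=1):
    rng = random.Random(seed)
    dim = len(TASKS)
    xs = [[rng.uniform(0, N_VMS) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                 # each particle's best position
    gbest = min(pbest, key=makespan)           # swarm-wide best position
    for _ in range(iters):
        for p in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Standard velocity update: inertia + cognitive + social terms.
                vs[p][d] = (w * vs[p][d]
                            + c1 * r1 * (pbest[p][d] - xs[p][d])
                            + c2 * r2 * (gbest[d] - xs[p][d]))
                xs[p][d] = min(max(xs[p][d] + vs[p][d], 0.0), N_VMS - 1e-9)
            if makespan(xs[p]) < makespan(pbest[p]):
                pbest[p] = xs[p][:]
        gbest = min(pbest, key=makespan)
    return gbest, makespan(gbest)

best, span = pso()
print(span)  # total work is 30, so a perfect 3-VM split would give 10
```

Real cloud schedulers add heterogeneous VM speeds, energy terms, and fairness constraints to the fitness function; the swarm mechanics stay the same.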

Keyword: Scalability