Open Access December 27, 2022

Building Scalable and Secure Cloud Architectures: Multi-Region Deployments, Auto Scaling, and Traffic Management in Azure and AWS for Microservices

Abstract
The last few years have seen increased adoption of cloud infrastructure, which has in turn driven the growth of large-scale distributed architectures in data centers to better accommodate cloud resource elasticity and resiliency. Selecting the right approach to building secure, scalable, and reliable cloud infrastructure within a budget is always a challenge. This text offers practical solutions for designing and building a secure, scalable, and reliable cloud-based infrastructure in which auto-scaling and multi-region deployments are the two key approaches to achieving high availability. It covers designing secure and scalable microservices on cloud platforms, providing an understanding of public cloud architecture, the design of microservices running in the cloud, and the design patterns of the cloud era. Through real-world examples, you will learn how microservices enable scalable distributed systems. You will then be walked through multi-region deployments, auto-scaling, and traffic management in cloud environments, using a sample environment setup along with useful tips for monitoring. Finally, you will see a mock implementation of cloud infrastructure on premises, for a private cloud or single-node cloud. By the end of this text, you will be able to build, manage, and deploy a highly scalable and reliable cloud-ready solution [1].
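The auto-scaling approach this abstract highlights can be illustrated with the target-tracking rule that both AWS Auto Scaling and Azure VM Scale Sets implement: capacity is adjusted in proportion to the ratio of an observed metric to its target. The following is a minimal sketch of that arithmetic only; the function name, metric, and capacity bounds are illustrative, not taken from either platform's API.

```python
import math

def desired_capacity(current_capacity: int, metric_value: float,
                     target_value: float, min_cap: int = 1,
                     max_cap: int = 20) -> int:
    """Target-tracking rule: scale the fleet proportionally to the
    ratio of the observed metric (e.g. average CPU %) to its target."""
    if current_capacity <= 0:
        raise ValueError("capacity must be positive")
    raw = current_capacity * (metric_value / target_value)
    # Round up so the fleet never undershoots the target after scaling,
    # then clamp to the configured min/max bounds.
    return max(min_cap, min(max_cap, math.ceil(raw)))

# A fleet of 4 instances averaging 90% CPU against a 50% target
# grows to ceil(4 * 90 / 50) = 8 instances.
print(desired_capacity(4, 90.0, 50.0))  # -> 8
```

Rounding up rather than to the nearest integer is the conservative choice: scaling in too aggressively risks breaching the target again immediately.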
Review Article
Open Access January 10, 2022

Composable Infrastructure: Towards Dynamic Resource Allocation in Multi-Cloud Environments

Abstract
To ensure maximum flexibility, service providers offer a variety of computing options with regard to CPU, memory capacity, and network bandwidth. At the same time, the efficient operation of current cloud applications requires an infrastructure that can continuously adjust its configuration across multiple dimensions that are generally not statically predefined. Our research shows that these requirements are hardly met by today's typical public cloud and management approaches. To provide such a highly dynamic and flexible execution environment, we propose application-driven autonomic management of data center resources as the core vision for the development of a future cloud infrastructure. As part of this vision, and of the gradual progress required toward it, we present the concept of composable infrastructure and its impact on resource allocation in multi-cloud environments. We introduce relevant techniques for optimizing resource allocation strategies and indicate future research opportunities [1].

Many cloud service providers offer computing instances that can be configured with arbitrary capacity, depending on the availability of certain hardware resources. This level of configurability provides customers with the desired flexibility for executing their applications. Because of the large number of such instances with often varying characteristics, service consumers must invest considerable effort to set up or reconfigure elaborate resource provisioning systems. Most importantly, they must distinguish between jobs that need to be executed and placeholder jobs, i.e., jobs that trigger the automatic elasticity functionality responsible for resource allocator reconfiguration. Operations research shows that optimizing resource allocator reconfiguration strategies is a fundamentally difficult problem due to its NP-hardness. Despite these challenges, dynamic resource allocation in multi-clouds is becoming increasingly important, since modern Internet-based service settings are dispersed across multiple providers [2].
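Because allocator reconfiguration is NP-hard, practical systems fall back on approximation heuristics. A standard one is first-fit-decreasing bin packing, sketched below for the simplified case of a single resource dimension (CPU demand) and uniform node capacity; the paper itself does not prescribe this heuristic, so treat it as one illustrative baseline.

```python
def first_fit_decreasing(jobs, capacity):
    """Place jobs (name -> CPU demand) onto as few fixed-capacity
    nodes as the heuristic manages, largest jobs first."""
    nodes = []       # remaining free capacity per allocated node
    placement = {}   # job name -> node index
    for job, demand in sorted(jobs.items(), key=lambda kv: -kv[1]):
        if demand > capacity:
            raise ValueError(f"job {job!r} exceeds node capacity")
        for i, free in enumerate(nodes):
            if demand <= free:          # first node with enough room
                nodes[i] -= demand
                placement[job] = i
                break
        else:                           # no node fits: open a new one
            nodes.append(capacity - demand)
            placement[job] = len(nodes) - 1
    return placement, len(nodes)

jobs = {"a": 7, "b": 5, "c": 4, "d": 3, "e": 1}
placement, node_count = first_fit_decreasing(jobs, capacity=10)
print(node_count)  # -> 2 nodes for 20 units of demand
```

First-fit-decreasing is attractive here because it is fast (sort plus linear scan) and carries a known worst-case bound relative to the optimal packing, whereas exact reconfiguration would be intractable at data-center scale.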
Review Article
Open Access June 28, 2016

Scalable Task Scheduling in Cloud Computing Environments Using Swarm Intelligence-Based Optimization Algorithms

Abstract
Effective task scheduling in cloud computing is crucial for optimizing system performance and resource utilization. Traditional scheduling methods often struggle to adapt to the dynamic and complex nature of cloud environments, where workloads, resource availability, and task requirements constantly change. Swarm intelligence-based optimization algorithms, such as Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Artificial Bee Colony (ABC), offer a promising solution by mimicking natural processes to explore large search spaces efficiently. These algorithms are effective in balancing multiple objectives, including minimizing execution time, reducing energy consumption, and ensuring fairness in resource allocation. They also enhance system scalability, which is vital for modern cloud infrastructures. However, challenges remain, including slow convergence speeds, complex parameter tuning, and integration with existing cloud frameworks. Addressing these issues will be essential for the practical implementation of swarm intelligence in cloud task scheduling, helping to improve resource management and overall system performance.
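To make the PSO approach concrete, here is a compact sketch of a particle swarm that assigns tasks to VMs so as to minimize makespan (the finish time of the most loaded VM). The encoding (one continuous dimension per task, rounded to a VM index) and all parameter values are illustrative choices, not taken from the surveyed algorithms.

```python
import random

def pso_schedule(task_times, vm_count, particles=20, iters=60, seed=42):
    """Minimize makespan of a task-to-VM assignment with a basic PSO loop."""
    rng = random.Random(seed)
    n = len(task_times)

    def makespan(assign):
        loads = [0.0] * vm_count
        for t, vm in enumerate(assign):
            loads[vm] += task_times[t]
        return max(loads)

    def decode(pos):  # continuous position -> clamped VM indices
        return [min(vm_count - 1, max(0, round(x))) for x in pos]

    # Random initial positions, zero velocities.
    pos = [[rng.uniform(0, vm_count - 1) for _ in range(n)]
           for _ in range(particles)]
    vel = [[0.0] * n for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [makespan(decode(p)) for p in pos]
    g = min(range(particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and attraction coefficients
    for _ in range(iters):
        for i in range(particles):
            for d in range(n):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = makespan(decode(pos[i]))
            if val < pbest_val[i]:          # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:         # and global best
                    gbest, gbest_val = pos[i][:], val
    return decode(gbest), gbest_val

assignment, span = pso_schedule([4, 2, 7, 3, 5, 1], vm_count=2)
print(assignment, span)
```

The slow-convergence and parameter-tuning challenges the abstract mentions are visible even here: the quality of the result depends directly on `w`, `c1`, `c2`, the swarm size, and the iteration budget.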

Keyword: Cloud Infrastructure