Review Article Open Access December 29, 2019

Explainable Analytics in Multi-Cloud Environments: A Framework for Transparent Decision-Making

1 Cloud AI ML Engineer, Equinix, Dallas, USA
2 Solution Architect, Denver RTD, Parker, CO-80134, USA
Page(s): 1-12
Received: October 19, 2019
Revised: November 28, 2019
Accepted: December 22, 2019
Published: December 29, 2019
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.
Copyright © The Author(s), 2021. Published by Scientific Publications

Abstract

The multitude of services and resources available in multi-cloud environments has increased the importance of analytics applications in cloud brokering. These applications can orchestrate services and resources that reside in different domains and require inputs that a single cloud provider could not easily acquire. Yet, despite their distinct characteristics, multi-cloud analytics users have no voice in the ranking of services in brokerage marketplaces. In this article, we introduce the concept and propose an implementation of explainable analytics to increase transparency and user satisfaction in multi-cloud environments. The criteria that we have identified and measured, summarized into explainable results, allow cloud users to acquire an understanding of the ranking rules, a crucial requirement in trustful decision-making. Our proposal accounts for a set of regulations for intelligent systems and targets their specific adaptation and use in multi-cloud environments.

1. Introduction

Analytical tools used in companies produce increasingly complex models, notably those based on artificial intelligence methods, which can obscure how a particular decision or result was obtained. This opacity arises not only because machine learning models are becoming harder to interpret, but also because more and more companies run their analytics on hybrid and multi-cloud architectures. The simplest way to make models interpretable for a business user is to adopt a white-box method, but this often sacrifices predictive accuracy and expressiveness, which is a clear loss in many real-world applications. Interestingly, unsupervised visualization techniques can enhance model interpretability without this loss in performance, yet they are seldom used for the purpose. This research addresses two critical issues: the black-box nature of cloud-based analytics platforms, which can hinder decision-making and model evaluation, and the technical barriers preventing non-specialist analysts from fully engaging with machine learning models. The proposed contribution is a novel strategy and architecture that enables visual analysis of model decisions in large-scale, complex environments, facilitating more accessible, understandable, and effective decision-making for business users.

1.1. Background and Significance

The rapid advancements in cloud computing are transforming the hosting and operation of large-scale analytics, as well as the foundations, infrastructure, and operational models of enterprises. This shift to cloud analytics significantly impacts the management decision-making process, altering the transparency of decision-support tools. Business stakeholders are left without a full line of sight into how these tools operate, while transparency is crucial in decisions related to analytics, especially in the context of risk characterization, information management, predictive and prescriptive modeling, and performance measurement and management. Such decisions often require explaining the tactics, logic, and results of the analytics employed by the firm. Two major risks to a firm's ethical standing with its stakeholders are the shift to black-box analytics and the opacity of cloud operations and management [1].

Responsible business decision-makers aim to mitigate these risks through a series of initiatives. They might seek to focus on cloud analytics that exploit the three potentials of cloud computing, including elasticity, scale of analytics and data, and the creation of value through business models. They aim to have a strong understanding of and control over the hosted environment and its tools. They focus on achieving trust in data and tools that reside and operate in the cloud, as well as trust in the providers of such services. This is generally mirrored by trust in the government’s legal structures that are meant to use the cloud and the infrastructure in a manner that is auditable, transparent, and accountable to the stakeholders and the public. They try various methods to improve the ethics of analytics being executed, including improved reliability, security, and privacy of the data and process.

Equation 1: Data Aggregation from Multiple Clouds

D = \bigcup_{i=1}^{n} D_i

where: D is the aggregated data set; D_i is the data from the i-th cloud provider.
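Equation 1 can be illustrated with a minimal sketch that unions records pulled from several providers into one data set. The provider names, the record layout, and the fetch_records stub are illustrative assumptions, not a real cloud SDK:

```python
def fetch_records(provider):
    # Stand-in for a provider-specific export or API call;
    # the sample records below are purely illustrative.
    sample = {
        "aws":   [{"provider": "aws", "cpu": 0.61}],
        "azure": [{"provider": "azure", "cpu": 0.47}],
        "gcp":   [{"provider": "gcp", "cpu": 0.52}],
    }
    return sample[provider]

def aggregate(providers):
    """D = union of D_i over all providers (Equation 1)."""
    dataset = []
    for p in providers:
        dataset.extend(fetch_records(p))
    return dataset

data = aggregate(["aws", "azure", "gcp"])
```

In practice each D_i would arrive through a provider-specific connector, but the aggregation step itself remains this simple union.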

1.2. Research Objectives

The research objectives of this article are as multifaceted as the problem. Our immediate objective is to describe and demonstrate an explainable analytics framework for multi-cloud environments that supports transparent decision-making in cloud operations and management. Explainable analytics (EA) encompasses a wide range of analysis techniques; we focus on the principle of selecting analyses whose rationale is comprehensible to a human user. Many target users will be non-technical and should not be required to understand the underlying analyses, but they do need to judge the defensibility of the rationales. Thus, another important goal is to describe the kind of information that analysts ought to provide before cloud management information can be considered transparent. We also provide a high-level map of specific frameworks and methods, similar in nature to pure 'decision transparency' research in the field of machine learning explainability. For instance, cloud users will expect the results of optimization problems to be well justified by user-provided objectives and constraints, but they do not need to understand how the optimization solver works.

1.3. Structure of the Paper

The remainder of this paper is structured as follows. To position explainable analytics in the broader context of the multi-cloud research domain, Section 1 introduces several related concepts and establishes relevant linkages, including a brief review of the Big Data and Cloud landscape and of Big Data in the context of the Cloud. Although important, these topics have been extensively discussed elsewhere. Most Big Data resource guides devote dedicated sections to articulating how Cloud Computing has become the de facto infrastructure for storing, processing, and sharing various data sources. Similarly, academic and industry works argue how Big Data technologies, methodologies, and support can be efficiently shared among entities through public, private, or hybrid clouds.

2. Understanding Multi-Cloud Environments

Optimizing performance, mitigating risk, managing cost, and fostering vendor independence are the core requirements motivating organizations to operate in multi-cloud environments. Rather than goods and services alone, the market increasingly seeks business process and capacity agility. Software Design as a Service is an enterprise design-space model that combines cloud computing and design thinking to leverage business agility promptly. The low barriers of the cloud-native approach enable startups to enter the market swiftly despite its higher risk. The non-IT organizational requirements of multi-cloud adoption have also been recognized, although addressing them remains challenging.

A parallel cloud/sidecar system with careful traffic differentiation and a robust API gateway is a successful strategy for building multi-cloud models, but building and operating such a system is non-trivial. One approach is to maintain a dedicated marketplace for analytics and simulation, orchestrated from a single cloud, and to closely match workloads or route overflow to a backup cloud. A universal progress structure offers a reproducible, repeatable framework for entering such marketplaces. The majority of multi-cloud programs migrate data into analytics packages, and the models deployed across the multi-cloud can use this data [2].

2.1. Definition and Characteristics

As described in the previous section, transparency often comes with the necessity of being able to explain an AI-based decision. This explanation can be more or less detailed. To address this requirement, the information that an AI system should be able to provide in the case of a decision or a proposed action is summarized in the following definition.

Definition 2.1 (Explainable Analytics). The concept of explainable analytics can be defined as the ability to effectively explain, inform, and warn about complex actions, decisions, behavior, and unexpected observations within systems. An AI system producing insights does not work in a perfect decision environment and often creates uncertainty through the decisions made or insights provided. The ability to explain actions or decisions to users is one means by which this uncertainty can be decreased. Such transparency should be conveyed in an understandable manner, employing natural language or intuitive visual objects.

2.2. Benefits and Challenges

Many literature sources advocate and document the benefits cloud computing can bring to an organization. Internally, the use of transformative technologies provided by the cloud can enable organizations to become more agile, speed up innovation, accelerate product development, extend accessibility, and enhance the user experience, among other gains. Externally, the cloud can aid organizations in enhancing their commercial capabilities, addressing customer concerns more efficiently, facilitating geographical expansion, and focusing on optimizing the support of their core business.

Despite these many benefits, cloud adoption and actual use may not always deliver the ambitious outcomes that industry studies and advertisers promise. Adoption can instead present a range of challenges that hinder realization of the cloud's full potential. These barriers relate to risk, cost, governance, compliance, transparency, integrity, safety, security, control and visibility, lock-in, networking, privacy and confidentiality, vendor dependence, integration and interoperability, and the multiplicity of regulations, standards, and certifications, among others. Organizations may struggle to ensure data protection and meet diverse regulatory requirements; vendor lock-in, where a company becomes dependent on a single provider, can limit flexibility and increase costs in the long run; integration and interoperability with existing systems can be complex, particularly with legacy infrastructure; and the lack of transparency and control over cloud operations can raise governance and data-integrity concerns that complicate decision-making. As a result, organizations must carefully weigh risks against benefits before committing to cloud adoption and ensure they have strategies in place to address these barriers effectively [3].

3. Explainable Analytics: Concepts and Importance

Explainable analytics borrows several concepts from the field of explainable AI (XAI). Not only deep learning methods but even reasonably simple models are capable of drawing potentially undesirable conclusions. These conclusions often arise from historical social and educational biases present in the data on which the algorithms are developed. In the context of machine learning models, XAI implies building models that maintain a balance between accuracy and interpretability, closely integrating the model and the examiner to provide information about the performance, decisions, and related outputs. XAI can be thought of as a higher level of explainable model attributes.

Transparency and explainability of data used for analytics are paramount, not just for investigatory or adjudicatory reasons, but to ensure that the decision-maker can understand whether results are dependable for the goals of the analytics pursued, what the strengths and weaknesses of the model are, and in what context the model or applications excel or fail. Using an unexplainable model is like using a black box and encourages unjustified confidence in the quality of results. The literature includes related required conditions for explainability, such as accountability, freedom from ethics issues, and point-and-click transparency.

3.1. Definition and Scope

Our key objective is to elaborate a definition of multi-cloud explainability and to provide a comprehensive understanding of what it means for a practical multi-cloud AI system to be explainable. We adopt a broad outlook on explanation, considering explanations that elucidate how the AI system arrives at a decision as well as what factors influence it. In the context of multi-cloud explainability, we identify several facets specific to the reasons and purposes that lead cloud consumer organizations to demand explanation support for complex prediction models. Our in-depth examination considers two perspectives: (1) cloud consumers, as the ML model's owners and operators, who increasingly demand that complex model predictions be supported; and (2) third-party analysts, such as model regulators or external auditors, who are in a position to question the model owners about the extent, accuracy, and reliability of model behavior. We augment this review with multi-cloud configuration scenarios covering both control and design-time analytic tasks, in which we assess possible forensic and flow-disclosure-based support on various implementation-specific performance measures.

We adopt a theory on the circulation of data through complex ML and AI models and lay out a framework that extends these core concerns to a specific consumer context and the issues connected to the deployment of complex models in multi-cloud environments. We begin by articulating a simplified problem and also propose a categorization of multi-cloud deployment configurations along the primary AI-related purpose. We then demonstrate, via a set of realistic control and design-time predictive technologies, such as flow and analytics on different applications of cloud data, artifacts, and operational instances, under which conditions these questions are easier for the cloud consumer to answer.

3.2. Importance in Decision-Making

Transparency is a fundamental aspect of data analytics-based decisions that plays a significant role in the overall data lifecycle, especially in multi-cloud environments. It offers unique challenges in multi-cloud environments, particularly due to the presence of physically decentralized and virtualized resources. A continuum from full transparency to partial transparency to no transparency in analytics methods is observed. As the amount of data collection and usage increases, the data analytic techniques become more opaque to users. The majority of problems in applying analytics to businesses arise not from the accuracy of the model, but from the lack of understanding and taming the model's complexity. While it is necessary to understand the behavior of complex multi-cloud data analytics models for informed decision-making, none of the existing publications explore these challenges.

Transparency is a significant requirement from several perspectives, such as ethics, trust, and security. It results in accountable actions and makes it easier to verify whether a model or decision is conformant. Decisions that are explainable, interpretable, transparent, and respectful of the user's need for understanding are imperative for mission-critical and essential safety, health, or financial applications. Such decisions must be defensible, traceable, explainable, and communicated to the affected entity for successful implementation. Building and training models that are as little of a black box as possible remains an open issue. On the other hand, a model that is too explainable may violate privacy constraints and carries the risk of revealing sensitive information.

3.3. Current Trends and Technologies

In contrast to BI tools and analytics applications, which typically use only one provider, modern environments may present multi-cloud scenarios in which data and models are shared between application instances distributed over different cloud providers. Those scenarios are becoming more and more frequent, especially because of their potential to avoid vendor lock-in or to mitigate the risks related to cloud outages. In this sense, there is a need for tools that enable the combination not only of data from different instances but also of models created and used in different providers.
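Under the assumption of a shared feature space, combining models created in different providers could be sketched as a simple prediction-averaging ensemble. The linear models and provider names below are purely illustrative assumptions, not any specific vendor's API:

```python
# Hypothetical per-provider linear models, keyed by cloud provider.
MODELS = {
    "aws":   {"w": [0.5, 0.1], "b": 0.0},
    "azure": {"w": [0.4, 0.2], "b": 0.1},
}

def predict_one(model, x):
    # Single-model prediction: w^T x + b.
    return sum(wi * xi for wi, xi in zip(model["w"], x)) + model["b"]

def ensemble_predict(x):
    """Average per-provider predictions into one cross-cloud result."""
    preds = {name: predict_one(m, x) for name, m in MODELS.items()}
    return sum(preds.values()) / len(preds), preds

combined, per_provider = ensemble_predict([1.0, 2.0])
```

Keeping the per-provider predictions alongside the combined result preserves traceability: a user can see which cloud's model drove the final value.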

Equation 2: Model Building

y = w^{T} x + b

where: x is the feature vector; w is the weight vector; b is the bias term.
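As a minimal illustration of Equation 2, the following sketch computes y = w^T x + b in plain Python; the weight, feature, and bias values are illustrative. Because the model is linear, each term w_i x_i is an additive, human-readable contribution, which is the white-box property discussed earlier:

```python
def predict(w, x, b):
    """Return w^T x + b for plain Python lists."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

w = [0.4, -0.2, 0.1]   # one weight per feature (illustrative)
x = [1.0, 2.0, 3.0]    # feature vector (illustrative)
b = 0.5                # bias term

y = predict(w, x, b)

# Each product w_i * x_i is an inspectable per-feature contribution:
contributions = [wi * xi for wi, xi in zip(w, x)]
```

This inspectability is exactly what is lost when such a model is replaced by an opaque black-box predictor.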

4. Frameworks for Transparent Decision-Making in Multi-Cloud Environments

Integrating verifiable decision support systems in multi-cloud environments, to guide end users and developers in understanding such complex architectures and making suitable service selections, is an important demand. A literature review shows that across research areas dealing with transparent decision making, a general decision support system is the common research objective. More specifically, analytical models for decision making address multilateral issues rather than bilateral trade-offs, and machine learning explainability techniques must deal with constraints that deviate from the explanatory power originally designed for binary classifiers. Novel research exists in business intelligence but focuses on scores of business interest rather than those of customers or competitors, extending decision making based either on anticipated predictive performance or on retrospectives that rely upon rational choice. A structured review categorizes real-time systems and the relevant explainability trade-offs that address both 'do no harm' and 'right to know' effects, proposing model-agnostic explainable analytics methods, relevant metrics, and aspects of different interpretation techniques.

To solve unique problems in fast-evolving decision support domains, tailored trade-offs between inferred data usage and predictive performance come into effect. Considering the specifics of our use case, we introduce novel evaluation metrics that value different aspects of these techniques and facilitate explainability through comparative and classical approaches. In addition, a majority voting methodology is applied across different facets of an audit and comparison process, making transparent models more accessible for problems lacking commercial automated solutions. The systematic evaluation of a common problem provides important benefits over approaches that create cloud-centric models in isolation from their inception. Together, these contributions advance decision support actability, in which predictable, transparent, and faithful models create business intelligence that informs conscious public policy and reflects lessons learned from cloud techniques applied in real-world decision-making scenarios.

4.1. Overview of Decision-Making in Multi-Cloud Environments

Optimization of multi-cloud consumption has been studied extensively because of the diverse cost-performance trade-offs available. The optimization opportunity magnifies further when run-time options are considered, such as optimizing purchase timing or actively managing the buying price, its standard deviation, and the coverage of the cloud implementation. Overall lifetime consumption, however, depends heavily on user demand, especially when unanticipated demand arrives at the service level of a multi-cloud commitment or carries a public cost advantage.

4.2. Key Components of Transparent Decision-Making Frameworks

A typical framework for transparent decision-making comprises a preprocess layer and three additional mechanisms. Preprocessing mechanisms are responsible for data preparation, feature selection, and model construction to ensure proper systematic attribution of decisions to the underlying real-world data generating processes. Feature selection is of particular interest to prevent unjust prejudices against some groups of agents. This is of special concern to the use of data- and AI-driven decision management in complex business ecosystems, such as multi-cloud or cloud-based IoT analytics. In these complex environments, transparency and the resulting avoidability of unjust discriminations also help motivate stakeholders to engage in and contribute to competent computational analytics initiatives. This framework should have as its main objective to support trustworthy transparent decision-making through systematic, efficient, and just use of computational insights.

In what follows, we sketch the key components of such a framework, proposed to ensure that business demands determine transparent actions in multi-cloud ecosystems, called the FAiR ML framework: preprocess, post-process, explain, contribute. FAiR ML is a pseudo-algorithm with the following stages: receive user input, standardize data, build data model, explain decision, segment agents, and take action. Upon completion, the FAiR ML framework transparently justifies multi-cloud decisions by leveraging data-driven insights to promote sustainable development. It is transparent because users and agents individually and jointly contribute to single predictions, which are individually justified against ecosystem objectives.
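A hedged sketch of the six FAiR ML stages might look as follows; the stage names follow the text, while the min-max standardization, the cost threshold, and the provider records are all assumptions made purely for illustration:

```python
def fair_ml_pipeline(records):
    """Run the six FAiR ML stages on a list of provider records."""
    # Stage 1: receive user input -- `records` is the input.
    # Stage 2: standardize data (min-max scaling of a cost field).
    costs = [r["cost"] for r in records]
    lo, hi = min(costs), max(costs)
    for r in records:
        r["cost_norm"] = (r["cost"] - lo) / (hi - lo) if hi > lo else 0.0
    # Stage 3: build data model -- a simple threshold rule (assumption).
    threshold = 0.5
    # Stage 4: explain decision -- attach a human-readable rationale.
    for r in records:
        ok = r["cost_norm"] <= threshold
        r["decision"] = "recommend" if ok else "defer"
        r["explanation"] = (
            f"normalized cost {r['cost_norm']:.2f} "
            f"{'<=' if ok else '>'} threshold {threshold}"
        )
    # Stage 5: segment agents -- group providers by decision.
    segments = {}
    for r in records:
        segments.setdefault(r["decision"], []).append(r["provider"])
    # Stage 6: take action -- return the justified segmentation.
    return segments

providers = [
    {"provider": "aws", "cost": 120.0},
    {"provider": "azure", "cost": 95.0},
    {"provider": "gcp", "cost": 150.0},
]
result = fair_ml_pipeline(providers)
```

Every decision the pipeline emits carries its own explanation string, which is the transparency property the framework is meant to guarantee.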

5. Case Studies and Applications

In this section, we describe specific use case scenarios and applications of our transparent multi-cloud analytics approach. Our aim is to illustrate the different contexts and decision tasks in which the main components of our data-analytics-driven framework can be useful. These use cases can be considered at different stages of their development cycle. For each, we describe the outcome and the lessons learned.

Energy Efficiency of Cloud Services. Main stakeholders: Cloud administrators and energy efficiency managers. Use case description: This is an example of a non-competitive approach to identifying strategic cloud providers that are energy efficient. The cost of running cloud-based applications can be strongly influenced by a cloud service's energy-saving features. The focus of the study is especially strategic, as the rise of data concentration has caused many popular cities and areas to struggle to provide the required local energy, prompting single- and multi-cloud operators to relocate portions of their data centers to regions with good access to renewable energy resources. Our approach can highlight the best provider in terms of energy usage for offered services, contributing to the decision-making process for deploying new medium- or large-scale applications. Outcome: Energy-related parameters drawn from generator and operator agreements can be taken into account when selecting cloud-based services, alongside parameters that reflect works addressed over the total asset lifetime. Unlike previous cost trade-off analyses confined to the design stage, our analysis considers a medium- or large-scale cloud-based deployment. With our approach, companies can opt for services with an acceptable energy efficiency/cost ratio, shaped by potential franchise conditions, taxes, and other incentives. Finally, the model can be adjusted to include companies with direct access to renewable energy generated by photovoltaic or wind plants on their own premises; proximity to the plants is a strategic characteristic, as it allows them to benefit from energy storage and accelerates full asset depreciation.

Software and Data Security-Aware Multi-Cloud Decision Making. Main stakeholders: Cloud architects and security managers, cloud-related disaster management, and security researchers. Use case description: Given appropriate instrumentation, behavioral information is collected as software executes in a cloud environment. The available metrics can include those typically referring to individual and combined VM-related utilizations or, for example, the physical parameters of resource racks. We present a case study of a strategic decision-making framework and illustrate an example of its results, allowing a company architect to automate the relocation of data and software across multiple cloud operators if a security compromise has already been discovered or if security vulnerability metrics and/or VM-based metrics fall into a shifting susceptibility range. We sketch the architecture of the software engineering artifacts, the decision support artifacts that embed security and vulnerability metrics, and the workflows that allow decision-making to be accomplished. Outcome: The company security manager can predict and spot possible compromise indicators and sensitive workload regions by using anomaly detection tools applied to the underlying secret-sharing schemes. This is done in real time and is of extreme importance, as service metadata and customer login credentials can otherwise be collected by malicious users and disrupt quality-of-service assurance.

5.1. Real-World Examples of Transparent Decision-Making in Multi-Cloud Environments

We consider the following four scenarios. 1) Optimizing VM upgrades in multi-cloud environments: A company has multiple virtual machines in great need of an upgrade. How can they be upgraded quickly while ensuring sufficient security protection? 2) Securing tagging-based access control policies in multi-cloud environments: A company discovers a series of security vulnerabilities related to its tagging-based access control policies, with the tags accessible through a public dictionary on the latest cloud platform. How can the tagging-based access control policies be secured quickly? 3) Continuous compliance validation in multi-cloud environments: A company needs to prevent attackers from finding ways to disable the compliance validation engine in multiple data centers. How can these engines be monitored in advance? 4) Multi-user application isolation in multi-cloud environments: Multi-dimensional constraint rules must be met so that various users in the organization can share and deploy applications without mutual interference.

We describe how to cope with these challenges in the following. 1) Taking security into account for VM upgrade recommendations in multi-cloud environments. Security is the bridge for communication: when protection is equivalent, parties are more willing to trust and rely on each other. The main idea is that before providing VM upgrade recommendations, we first determine whether the security rules of the corresponding security group are completely covered and whether the egress rules allow communication. If all VMs ignored egress rules and inter-VM dependencies, the cardinality of the candidate set S would align directly with the number of allowed rejoins in the virtual results. We formalize the problem and prove that the update mechanism is guaranteed if there are enough hosts in the multi-cloud computing domain.
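The pre-upgrade check described above can be sketched as follows, assuming a simplified rule representation; the required rule set, the (protocol, port) encoding, and the VM structures are all illustrative assumptions:

```python
# Rules a security group must cover before a VM is cleared for upgrade
# (illustrative: HTTPS and SSH ingress).
REQUIRED_RULES = {("tcp", 443), ("tcp", 22)}

def upgrade_allowed(vm):
    """Clear a VM for upgrade only if its security group covers all
    required ingress rules AND at least one egress rule permits
    communication, per the two checks described in the text."""
    covered = REQUIRED_RULES <= set(vm["ingress_rules"])
    egress_ok = any(rule.get("allow") for rule in vm["egress_rules"])
    return covered and egress_ok

vm_ok = {
    "ingress_rules": [("tcp", 443), ("tcp", 22), ("udp", 53)],
    "egress_rules": [{"allow": True, "port": 443}],
}
vm_blocked = {
    "ingress_rules": [("tcp", 443)],  # required SSH rule missing
    "egress_rules": [{"allow": True, "port": 443}],
}
```

Only VMs passing both checks would then enter the upgrade recommendation set.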

Equation 3: Transparency in Decision-Making

\phi_i = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|! \, (|N| - |S| - 1)!}{|N|!} \left[ f(S \cup \{i\}) - f(S) \right]

where: N is the set of all features; S is a subset of features excluding i; f(S) is the model's output with features in set S; f(S \cup \{i\}) is the model's output with features in S \cup \{i\}.
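Equation 3 (the Shapley value) can be computed exactly for a small feature set by brute-force enumeration of subsets. The toy additive model f below is an assumption for illustration, while the subset weighting follows the equation term by term; for an additive model, each phi_i recovers the feature's own contribution:

```python
from itertools import combinations
from math import factorial

def shapley(f, features):
    """Exact Shapley values per Equation 3, by enumerating all S."""
    n = len(features)
    phi = {}
    for i in features:
        rest = [j for j in features if j != i]
        total = 0.0
        for k in range(len(rest) + 1):
            for S in combinations(rest, k):
                # weight = |S|! (|N| - |S| - 1)! / |N|!
                weight = (factorial(len(S)) * factorial(n - len(S) - 1)
                          / factorial(n))
                # marginal contribution f(S u {i}) - f(S)
                total += weight * (f(set(S) | {i}) - f(set(S)))
        phi[i] = total
    return phi

# Toy additive model: each present feature adds a fixed amount
# (feature names and values are illustrative assumptions).
VALUES = {"cpu": 2.0, "mem": 1.0, "net": 0.5}

def f(S):
    return sum(VALUES[j] for j in S)

phi = shapley(f, list(VALUES))
```

Brute force is exponential in the number of features, so production explainers approximate this sum by sampling; the exact form shown here is useful for validating such approximations on small inputs.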

6. Conclusion and Future Directions

Big data analytics systems in multi-cloud environments enable organizations to leverage large volumes of heterogeneous data to derive valuable insights that contribute to data-driven decision making. The shift towards multi-cloud environments offers several benefits. However, multi-cloud environments also bring numerous challenges, particularly for analytics systems. This research applies concepts from explainable AI to big data and business intelligence processes, creating a novel combination that contributes to existing knowledge on the increasing importance of transparency and interpretability in analytics systems.

First, by comprehensively examining the challenges that organizations and analytics users currently face, the paper presents a unique conceptual framework for guiding data-driven decision-making processes in multi-cloud environments. The framework relies on four dimensions: (1) explainability; (2) visual exploratory analytics; (3) the use of multidimensional information to 'explain' insights; and (4) the combined recognition of analytics and business intelligence factors. Applying the framework makes the relevance of these concepts to the execution of analytics in multi-cloud environments evident. Second, the research presents tools, approaches, and techniques that can support the implementation of the proposed approach. Third, the conceptual framework can guide the development and use of big data and business intelligence value implementations, as illustrated by a lessons-learned analysis of eight recent cases.

6.1. Summary of Key Findings

The ability to explain shadow decisions is crucial, as we may want to assure stakeholders that a decision meets certain standards. Even with the best of intentions, we may put in place automated shadow decisions that are biased, making one group of stakeholders disproportionately better or worse off. Additionally, businesses that cannot provide decision transparency to decision makers and end-users, due to current challenges such as fragmented data and privacy issues, may find themselves in violation of government regulations or prohibitive policies, leading to severe legal and monetary consequences. The goal of this chapter was to introduce the challenges of decision transparency in the domain of multi-cloud services and to describe how we can develop our understanding of the shadow decisions contained within these services. With this understanding, stakeholders may be better placed to recognize biased or discriminatory decisions and, in the future, to develop their own transparent models for flexible use throughout the organization, reducing operational costs and audit risks, ensuring compliance, and improving overall decision quality.

6.2. Future Trends

The contributions and potential benefits of the proposed architecture point to a few future trends. There are various ways of merging or extending the explanation methods generated by the proposal. Many companies now seek the benefits of big data and computation across large virtual datasets without having to migrate data. Future work focuses on the efficient use of cloud assessment criteria across these methodologies to preserve a transparent decision process while maximizing the benefits that multi-cloud environments offer. With cloud offerings, costs have shifted from CAPEX to OPEX. The decision process regarding the choice of the appropriate multi-cloud environment (trade-offs among high performance, low cost, reliability, and service quality) requires careful and fair treatment of service users.

Cite This Article

APA Style
Vankayalapati, R. K. , & Nampalli, R. C. R. (2021). Explainable Analytics in Multi-Cloud Environments: A Framework for Transparent Decision-Making. Journal of Artificial Intelligence and Big Data, 1(1), 1-12. https://doi.org/10.31586/jaibd.2019.1228