CloudTrustLens: An Explainable AI Framework for Transparent Service Evaluation and Selection in Multi-Provider Cloud Markets
DOI: https://doi.org/10.63575/CIA.2024.20203

Keywords: Cloud Service Selection, Explainable AI, Fuzzy Trust Evaluation, Multi-agent Systems

Abstract
Cloud service marketplaces face significant information asymmetry, making transparent and trustworthy service selection difficult for users. This paper presents CloudTrustLens, a novel explainable AI framework that addresses transparency issues in cloud service evaluation and selection across multi-provider environments. The framework integrates a fuzzy logic-based trust evaluation system with a multi-agent architecture to provide both accurate service rankings and comprehensible explanations of evaluation outcomes. CloudTrustLens implements a multi-dimensional QoS assessment that incorporates both objective performance metrics and subjective user feedback across five key dimensions: availability, reliability, performance, security, and cost-efficiency. The system processes QoS data through a pipeline that ensures data quality and consistency, while the evaluation mechanism combines fuzzy inference with constraint satisfaction techniques to generate trust scores. Experimental validation across three case studies involving 18 to 42 cloud service providers shows that CloudTrustLens achieves a 20.3% improvement in decision correctness over traditional AHP-based methods while reducing decision time by 47.4%. The framework's explainability mechanisms (feature importance visualization, counterfactual explanations, and rule activation transparency) significantly enhance user comprehension and decision confidence, narrowing the trust gap in cloud service selection. The results confirm that transparent evaluation models can effectively mitigate information asymmetry in multi-provider cloud marketplaces, enabling more informed service selection decisions.
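To make the fuzzy trust evaluation concrete, the sketch below shows one minimal way a trust score could be aggregated over the five QoS dimensions named in the abstract. The triangular membership functions, per-dimension weights, and singleton defuzzification levels are illustrative assumptions for exposition only, not the paper's actual model or parameters.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify(value):
    """Map a normalized metric in [0, 1] to low/medium/high memberships."""
    return {
        "low": tri(value, -0.5, 0.0, 0.5),
        "medium": tri(value, 0.0, 0.5, 1.0),
        "high": tri(value, 0.5, 1.0, 1.5),
    }

# Hypothetical per-dimension weights (sum to 1); the framework's real
# weighting scheme is not specified in the abstract.
WEIGHTS = {
    "availability": 0.25,
    "reliability": 0.25,
    "performance": 0.20,
    "security": 0.20,
    "cost_efficiency": 0.10,
}

# Assumed crisp output levels used for singleton defuzzification.
LEVELS = {"low": 0.2, "medium": 0.5, "high": 0.9}

def trust_score(qos):
    """Weighted fuzzy trust score in [0, 1] from normalized QoS metrics."""
    score = 0.0
    for dim, weight in WEIGHTS.items():
        mu = fuzzify(qos[dim])
        total = sum(mu.values())
        # Defuzzify each dimension as the membership-weighted average
        # of the crisp output levels.
        crisp = sum(mu[k] * LEVELS[k] for k in mu) / total if total else 0.0
        score += weight * crisp
    return score

provider = {
    "availability": 0.95,
    "reliability": 0.90,
    "performance": 0.80,
    "security": 0.85,
    "cost_efficiency": 0.60,
}
print(round(trust_score(provider), 3))
```

A ranking over candidate providers then follows by sorting on `trust_score`; the explanation layer described in the abstract would expose which memberships and rules drove each score rather than reporting only the final number.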