Explainable AI Frameworks For Transparent Cloud Database Optimization
DOI: https://doi.org/10.63278/jicrcr.vi.3533

Abstract
Cloud database systems have increasingly adopted Artificial Intelligence and machine learning techniques for performance tuning, resource allocation, and anomaly detection. Yet these systems often operate as opaque black boxes, undermining user trust and hindering regulatory compliance. This paper introduces the Explainable AI Framework for Transparent Cloud Database Optimization (XAIDBO), which integrates interpretable learning models, causal inference mechanisms, and human-in-the-loop validation to provide transparency in AI-driven database tuning. The framework combines reinforcement learning for dynamic policy optimization with gradient boosting models that serve as interpretable surrogates, while employing SHAP analysis for feature attribution, counterfactual reasoning for alternative scenario exploration, and natural language generation to produce comprehensible justifications for optimization decisions. Experimental evaluation on PostgreSQL and MySQL deployments across cloud environments demonstrated that XAIDBO achieved a 27% improvement in interpretability scores, a 19% reduction in bias, and a 32% increase in administrator trust while maintaining 98% of baseline optimization accuracy.
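To make the feature-attribution idea concrete, the following is a minimal, illustrative sketch (not taken from the paper) of SHAP-style attribution: exact Shapley values computed over a toy "performance gain" model for three hypothetical database tuning knobs. The knob names and the cost model are invented for illustration; a real deployment would attribute over a learned surrogate model instead.

```python
from itertools import combinations
from math import factorial

# Hypothetical tuning knobs (illustrative only, not from the paper).
FEATURES = ["buffer_pool_gb", "max_connections", "work_mem_mb"]

def gain_model(active):
    """Toy model: performance gain (%) from the set of knobs tuned."""
    gain = 0.0
    if "buffer_pool_gb" in active:
        gain += 12.0
    if "work_mem_mb" in active:
        gain += 5.0
    # Interaction term: connection tuning helps only once the buffer pool is sized.
    if "max_connections" in active and "buffer_pool_gb" in active:
        gain += 3.0
    return gain

def shapley_values(features, value_fn):
    """Exact Shapley values: weighted average marginal contribution of each
    feature over all coalitions of the remaining features."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = set(coalition)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi

attributions = shapley_values(FEATURES, gain_model)
for name, phi in sorted(attributions.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {phi:+.2f}")
```

By the efficiency property of Shapley values, the attributions sum exactly to the model's total gain, which is what lets an administrator read them as a complete breakdown of why the optimizer made a decision. Production tools such as the `shap` library approximate this computation efficiently for tree ensembles rather than enumerating coalitions.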