Explainable AI Framework for Decision Support Systems in Enterprise Applications
DOI: https://doi.org/10.63278/jicrcr.vi.3725

Abstract
Artificial Intelligence (AI) is increasingly embedded in enterprise decision support systems (DSS) to enhance operational efficiency, predictive analytics, and strategic planning. However, many AI models deployed in enterprises, particularly deep learning and complex ensemble algorithms, operate as black boxes whose decision-making processes are difficult to comprehend. This lack of transparency undermines trust, accountability, and regulatory compliance in critical enterprise domains such as finance, healthcare, and supply chain management. Explainable Artificial Intelligence (XAI) has emerged as a promising means of making AI-based decision systems interpretable and transparent. This paper proposes an Explainable AI framework tailored to enterprise decision support systems. The framework combines interpretable machine learning models, post-hoc explanation techniques, and user-friendly visualization layers to improve decision transparency. The workflow comprises data preprocessing, model development, explanation generation using the SHAP and LIME techniques, and interactive visualization of results for enterprise stakeholders. The proposed framework helps managers and analysts interpret model outputs, justify predictions, and make informed decisions.
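The explanation-generation step can be illustrated with a minimal LIME-style local surrogate. This is a sketch only: the framework itself relies on the SHAP and LIME libraries, and the dataset and model below are synthetic stand-ins, not the enterprise data used in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Synthetic stand-in for an enterprise dataset and a black-box model.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def lime_style_explanation(model, instance, X_train,
                           n_samples=1000, kernel_width=0.75):
    """Minimal LIME-style explanation: perturb the instance, weight the
    perturbations by proximity, and fit a weighted linear surrogate whose
    coefficients serve as local feature importances."""
    rng = np.random.default_rng(0)
    scale = X_train.std(axis=0)
    perturbed = instance + rng.normal(0.0, scale,
                                      size=(n_samples, instance.shape[0]))
    preds = model.predict_proba(perturbed)[:, 1]      # black-box outputs
    dists = np.linalg.norm((perturbed - instance) / scale, axis=1)
    weights = np.exp(-dists**2 / kernel_width**2)     # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_                            # per-feature local importance

local_importance = lime_style_explanation(model, X[0], X)
```

The surrogate's coefficients indicate which features drive the black-box prediction near the chosen instance, which is the kind of local evidence the framework surfaces to managers and analysts through its visualization layer.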
Experimental results demonstrate that integrating XAI methods with DSS improves model interpretability without a significant loss of prediction accuracy. Moreover, explanation capabilities strengthen user confidence and trust in enterprise decisions. Practical constraints remain, however, including computational overhead, the complexity of explanation methods, and the difficulty of scaling explanations to large enterprise datasets. Future work should address automated optimization of explanations, real-time explainability, and integration with enterprise governance and regulatory frameworks.
