Trust-Aware Generative Conversational AI: Mitigating Hallucinations In LLM-Powered Chatbots

Authors

  • Raghu Chukkala

Abstract

Large language models (LLMs) generate impressively fluent, human-like conversational responses, but they frequently produce incorrect or fabricated information, commonly known as hallucinations. This erodes user trust and restricts the deployment of LLM-based chatbots in high-stakes domains such as healthcare, finance, and customer care. This paper presents a Trust-Aware Generative Conversational AI framework that mitigates hallucinations in LLM-powered chatbots. The proposed architecture combines knowledge-infused language modeling (KILM), contextual validation mechanisms, and a trust-scoring system that evaluates the accuracy of generated answers. Specifically, the system integrates structured knowledge from curated knowledge bases into the LLM, cross-checks generated outputs against multiple sources, and assigns a trust score to each response, guiding the chatbot toward factually correct and contextually appropriate answers. We evaluated the framework on benchmark datasets, including ConvAI2 and a domain-specific factual-knowledge corpus, measuring quantitative metrics such as factual accuracy, hallucination rate, and user trust scores. In our experiments, the proposed trust-aware system reduced hallucination incidence by 42 percent relative to baseline LLM chatbots and improved user-perceived reliability by 37 percent. Qualitative analysis further confirms contextual consistency and factual correctness across diverse conversational scenarios. This study shows that combining knowledge infusion with response verification substantially improves the trustworthiness of generative conversational AI without sacrificing dialogue naturalness. The results provide a foundation for building credible, high-stakes chatbot applications and underscore the importance of trust-aware design in next-generation AI communication systems.
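The abstract gives no implementation details, but the pipeline it describes (knowledge-grounded generation, cross-source validation, and trust scoring with a response gate) can be sketched in outline. The following is a minimal illustrative sketch, not the paper's actual method: every function name, weight, and threshold below is an assumption made for demonstration.

```python
# Minimal sketch of a trust-aware response pipeline under the assumptions
# stated above. The weighting scheme, threshold, and overlap heuristic are
# all placeholders, not the paper's implementation.

from dataclasses import dataclass

@dataclass
class ScoredResponse:
    text: str
    trust_score: float  # 0.0 (untrusted) .. 1.0 (fully verified)

def token_overlap(a: str, b: str) -> float:
    """Crude lexical-agreement proxy; a real system would likely use an
    entailment model or retrieval-based fact checking (assumption)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta), 1)

def cross_check(response: str, sources: list[str]) -> float:
    """Fraction of independent sources that agree with the response."""
    if not sources:
        return 0.0
    return sum(token_overlap(response, s) > 0.5 for s in sources) / len(sources)

def trust_score(kb_support: float, source_agreement: float,
                w_kb: float = 0.6, w_src: float = 0.4) -> float:
    """Weighted combination of knowledge-base support and cross-source
    agreement; the weights here are illustrative placeholders."""
    return w_kb * kb_support + w_src * source_agreement

def answer(query: str, generate, knowledge_base: dict[str, str],
           sources: list[str], threshold: float = 0.7) -> ScoredResponse:
    """Generate a knowledge-grounded response, score it, and abstain
    when the trust score falls below the threshold."""
    # KILM-style grounding: prepend retrieved facts so the model answers
    # from curated knowledge rather than parametric memory.
    facts = [fact for key, fact in knowledge_base.items()
             if key in query.lower()]
    prompt = "Facts:\n" + "\n".join(facts) + f"\n\nQuestion: {query}\nAnswer:"
    response = generate(prompt)  # any LLM call; injected for testability
    kb_support = max((token_overlap(response, f) for f in facts), default=0.0)
    score = trust_score(kb_support, cross_check(response, sources))
    if score < threshold:
        return ScoredResponse("I am not confident enough to answer that.", score)
    return ScoredResponse(response, score)
```

Gating on the trust score, with abstention as the fallback, mirrors the abstract's idea of using the score to steer the chatbot toward answers it can verify; a production system would replace the lexical-overlap proxy with entailment-based or retrieval-based verification.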

Published

2026-02-10

How to Cite

Chukkala, R. (2026). Trust-Aware Generative Conversational AI: Mitigating Hallucinations In LLM-Powered Chatbots. Journal of International Crisis and Risk Communication Research, 125–135. Retrieved from https://jicrcr.com/index.php/jicrcr/article/view/3681

Section

Articles