Privacy-Preserving LLM Infrastructure With Multi-Agent Orchestration And RAG-Driven Retrieval
DOI: https://doi.org/10.63278/jicrcr.vi.3596

Abstract
The rapid integration of large language models (LLMs) into data-intensive and regulated environments has intensified concerns about privacy, governance, and reliable use of knowledge. This study proposes a privacy-preserving LLM infrastructure that combines multi-agent orchestration with retrieval-augmented generation (RAG) to address these challenges systematically. The architecture decomposes system intelligence into specialized agents responsible for retrieval, reasoning, privacy enforcement, validation, and orchestration, while dynamically grounding model outputs through policy-aware retrieval from secure knowledge bases. Experimental results demonstrate that the proposed approach significantly improves task accuracy, contextual relevance, and system robustness over single-agent and non-RAG baselines, while substantially reducing hallucination rates, data exposure incidents, and access policy violations. The findings further highlight improved auditability and governance as direct outcomes of role-based agent isolation and controlled inter-agent communication. Overall, the study establishes that privacy-by-design, when embedded at the architectural level, enables scalable and trustworthy LLM deployments for sensitive, enterprise-grade applications.
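
To make the architectural idea concrete, the following is a minimal Python sketch, not the authors' implementation, of how role-separated agents and policy-aware retrieval could be composed. Every class name, method, and the simple redaction pattern are illustrative assumptions rather than details taken from the paper.

    # Hypothetical sketch of role-separated agents with policy-aware retrieval.
    # All identifiers are illustrative, not from the paper.
    import re
    from dataclasses import dataclass, field

    @dataclass
    class Document:
        text: str
        allowed_roles: set = field(default_factory=set)  # access-control labels

    class RetrievalAgent:
        """Returns only documents the requesting role is permitted to see."""
        def __init__(self, corpus):
            self.corpus = corpus
        def retrieve(self, query, role):
            permitted = [d for d in self.corpus if role in d.allowed_roles]
            # Naive keyword matching stands in for a vector-based retriever.
            terms = query.lower().split()
            return [d for d in permitted if any(t in d.text.lower() for t in terms)]

    class PrivacyAgent:
        """Redacts simple PII patterns before context reaches the reasoning agent."""
        EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
        def enforce(self, docs):
            return [self.EMAIL.sub("[REDACTED]", d.text) for d in docs]

    class ReasoningAgent:
        """Stands in for the LLM call; answers only from policy-cleared context."""
        def answer(self, query, context):
            return f"Answer to '{query}' grounded in {len(context)} cleared passages."

    class ValidationAgent:
        """Rejects answers that are not grounded in any retrieved context."""
        def validate(self, answer, context):
            return bool(context)

    class Orchestrator:
        """Coordinates the agents; each sees only what its role requires."""
        def __init__(self, retriever, privacy, reasoner, validator):
            self.retriever, self.privacy = retriever, privacy
            self.reasoner, self.validator = reasoner, validator
        def run(self, query, role):
            docs = self.retriever.retrieve(query, role)
            context = self.privacy.enforce(docs)
            answer = self.reasoner.answer(query, context)
            return answer if self.validator.validate(answer, context) else "Insufficient grounded context."

    if __name__ == "__main__":
        corpus = [Document("Billing policy update sent to jane@example.org.", {"analyst"})]
        pipeline = Orchestrator(RetrievalAgent(corpus), PrivacyAgent(),
                                ReasoningAgent(), ValidationAgent())
        print(pipeline.run("billing policy", role="analyst"))

In this sketch only the orchestrator sees the full pipeline, while retrieval, redaction, reasoning, and validation remain isolated behind narrow interfaces, which is one plausible way to realize the role-based agent isolation and controlled inter-agent communication described in the abstract.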




