Cloud-Native Distributed LLM Platforms For Multi-Agent Conversational AI And Enterprise Architecture
DOI: https://doi.org/10.63278/jicrcr.vi.3535

Abstract
The rapid adoption of conversational artificial intelligence in enterprise environments has intensified the need for scalable, reliable, and governable large language model (LLM) infrastructures. This study investigates cloud-native distributed LLM platforms designed to support multi-agent conversational AI within enterprise architectures. Using a design science and experimental evaluation approach, the research analyzes how agent specialization, cloud-native orchestration, and governance mechanisms influence system performance, conversational quality, and operational resilience. The results show that multi-agent configurations significantly reduce response latency, increase throughput, and improve task completion accuracy compared to single-agent deployments. Autoscaling and container orchestration enable stable performance under increasing user load, while integrated governance controls enhance policy compliance with acceptable performance trade-offs. The findings demonstrate that cloud-native multi-agent LLM platforms effectively balance scalability, quality, and governance requirements, offering a practical architectural model for enterprise-grade conversational AI deployment.