Scalable Distributed Computing for Large-Scale SVM: A Symmetric ADMM Approach
DOI: https://doi.org/10.63278/jicrcr.vi.2547

Abstract
Recently, AI and machine learning have become popular for solving problems across many domains in real time with high accuracy. The Support Vector Machine (SVM) is a widely used classification algorithm known for its generalization properties in machine learning. In this paper, we propose Symmetric ADMM-based SVM algorithms for big data and demonstrate the algorithm's efficiency gains on large-scale problems in terms of scalability, training time, accuracy, and convergence. The main contribution of this paper is the distributed optimization of the SVM algorithm through the Alternating Direction Method of Multipliers (ADMM). The original problem is decomposed into sub-problems, and each sub-problem is handled by a computational node in the cluster. Each node solves its sub-problem independently, and the local solutions are coordinated through a global variable update. The local solutions and the global variable are updated iteratively until convergence. Implementation results show that the Symmetric ADMM-based SVM model achieves reduced training time and better scalability without compromising accuracy on various real-world big-data classification problems, running 3x faster than the conventional parallel distributed algorithm.
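The decompose/solve/coordinate scheme described above is the standard consensus-ADMM pattern. The following is a minimal sketch of that pattern for a linear SVM with hinge loss, not the paper's Symmetric ADMM implementation (a symmetric variant would add a second dual update between the local and global steps). The serial loop over partitions, the subgradient local solver, and all parameter values (`rho`, `lam`, step sizes) are illustrative assumptions; in a real cluster each partition's update would run on a separate node.

```python
import numpy as np

def local_w_update(X, y, z, u, rho, steps=50, lr=0.01):
    # Approximately solve the local sub-problem:
    #   argmin_w  sum_j hinge(y_j, x_j . w) + (rho/2)||w - z + u||^2
    # using a few subgradient steps (a simplified stand-in for an exact solver).
    w = z.copy()
    for _ in range(steps):
        margins = y * (X @ w)
        active = margins < 1                      # points violating the margin
        grad = -(X[active] * y[active, None]).sum(axis=0)
        grad += rho * (w - z + u)                 # proximal pull toward consensus
        w -= lr * grad
    return w

def consensus_admm_svm(parts, dim, rho=1.0, lam=0.1, iters=50):
    N = len(parts)
    ws = [np.zeros(dim) for _ in range(N)]        # local solutions
    us = [np.zeros(dim) for _ in range(N)]        # scaled dual variables
    z = np.zeros(dim)                             # global (consensus) variable
    for _ in range(iters):
        # Each node solves its sub-problem independently.
        ws = [local_w_update(X, y, z, u, rho)
              for (X, y), u in zip(parts, us)]
        # Global variable update (closed form with L2 regularization on z).
        z = rho * sum(w + u for w, u in zip(ws, us)) / (lam + N * rho)
        # Dual updates coordinate the local solutions with the consensus.
        us = [u + w - z for u, w in zip(us, ws)]
    return z

# Synthetic linearly separable data split across three simulated nodes.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(300, 3))
y = np.sign(X @ w_true)
parts = [(X[i::3], y[i::3]) for i in range(3)]
z = consensus_admm_svm(parts, dim=3)
acc = (np.sign(X @ z) == y).mean()
print(acc)
```

Because the w-, z-, and u-updates only exchange the small vectors `w_i`, `u_i`, and `z`, the per-iteration communication cost is independent of the number of training samples, which is what makes this decomposition attractive for large-scale data.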