Scalable Distributed Computing for Large-Scale SVM: A Symmetric ADMM Approach

Authors

  • Vijayakumar H. Bhajantri, Shashikumar G. Totad, Geeta R. Bharamagoudar

DOI:

https://doi.org/10.63278/jicrcr.vi.2547

Abstract

Recently, AI and machine learning have become popular for solving problems across many domains in real time with high accuracy. The Support Vector Machine (SVM) is a widely used classification algorithm known for its generalization properties. In this paper, we propose Symmetric ADMM-based SVM algorithms for big data and demonstrate the algorithm's efficiency gains on large-scale problems in terms of scalability, training time, accuracy, and convergence. The major contribution of this paper is the distributed optimization of the SVM algorithm through the Alternating Direction Method of Multipliers (ADMM). The original problem is decomposed into sub-problems, and each sub-problem is handled by a computational node in the cluster. Each node solves its sub-problem independently, and the sub-problem solutions are coordinated through a global variable update. The local solutions and the global variable are updated iteratively until convergence. Experimental results show that the Symmetric ADMM-based SVM model reduces training time and scales better without compromising accuracy on various real-world big-data classification problems; it is 3x faster than the conventional parallel distributed algorithm.
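The decompose-solve-coordinate loop described in the abstract can be sketched as consensus ADMM for a linear SVM. This is a minimal illustration, not the authors' implementation: it assumes a squared-hinge loss solved approximately by gradient steps at each "node" (here, a data block), with the regularized global variable `z` updated in closed form and dual variables `u` coordinating the blocks.

```python
# Minimal consensus-ADMM sketch for a linear SVM (illustrative only).
# Assumptions not taken from the paper: squared-hinge loss, gradient-step
# local solver, L2-regularized global variable with closed-form update.
import numpy as np

def local_update(X, y, z, u, rho, lr=0.1, steps=100):
    # Approximately solve the block sub-problem:
    #   min_x  mean_i max(0, 1 - y_i * x^T a_i)^2 + (rho/2) * ||x - z + u||^2
    x = z.copy()
    n = len(y)
    for _ in range(steps):
        margin = 1.0 - y * (X @ x)
        active = margin > 0  # only violated/near-margin points contribute
        grad = (-2.0 / n) * (X[active].T @ (y[active] * margin[active])) \
               + rho * (x - z + u)
        x -= lr * grad
    return x

def admm_svm(X, y, n_blocks=4, rho=1.0, lam=0.01, iters=30):
    d = X.shape[1]
    Xs = np.array_split(X, n_blocks)   # partition data across "nodes"
    ys = np.array_split(y, n_blocks)
    xs = [np.zeros(d) for _ in range(n_blocks)]
    us = [np.zeros(d) for _ in range(n_blocks)]
    z = np.zeros(d)
    for _ in range(iters):
        # Each node solves its sub-problem independently (parallelizable).
        xs = [local_update(Xi, yi, z, ui, rho)
              for Xi, yi, ui in zip(Xs, ys, us)]
        # Global consensus update: minimizer of
        #   (lam/2)||z||^2 + sum_i (rho/2)||x_i - z + u_i||^2.
        z = rho * sum(x + u for x, u in zip(xs, us)) / (lam + n_blocks * rho)
        # Dual updates keep local solutions pulled toward consensus.
        us = [u + x - z for x, u in zip(xs, us)]
    return z
```

Each ADMM iteration thus alternates one embarrassingly parallel local step, one cheap averaging step, and one dual step, which is what makes the scheme attractive on a cluster.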

Published

2024-12-05

How to Cite

Vijayakumar H. Bhajantri, Shashikumar G. Totad, Geeta R. Bharamagoudar. (2024). Scalable Distributed Computing for Large-Scale SVM: A Symmetric ADMM Approach. Journal of International Crisis and Risk Communication Research, 1167–1178. https://doi.org/10.63278/jicrcr.vi.2547

Section

Articles