A Dual Coordinate Descent Method for Large-scale Linear SVM

author: Kai-Wei Chang, Department of Computer Science and Information Engineering, National Taiwan University
published: Aug. 5, 2008,   recorded: July 2008,   views: 6618




In many applications, data appear with a huge number of instances as well as features. The linear Support Vector Machine (SVM) is one of the most popular tools for handling such large-scale sparse data. This paper presents a novel dual coordinate descent method for linear SVM with L1- and L2-loss functions. The proposed method is simple and reaches an epsilon-accurate solution in O(log(1/epsilon)) iterations. Experiments indicate that the method is much faster than state-of-the-art solvers such as Pegasos, TRON, SVMperf, and a recent primal coordinate descent implementation.
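The core idea can be sketched as follows. This is a minimal, illustrative implementation of dual coordinate descent for the L1-loss linear SVM, not the authors' optimized code: it maintains the primal vector w = sum_i alpha_i y_i x_i so that each coordinate's partial gradient costs only one dot product. The function name, dense-matrix interface, and fixed epoch count are assumptions for illustration; the paper's implementation works on sparse data with shrinking and a convergence check.

```python
import numpy as np

def dual_cd_svm(X, y, C=1.0, epochs=50, seed=0):
    """Sketch of dual coordinate descent for the L1-loss linear SVM.

    Solves min_alpha 0.5 * alpha^T Q alpha - e^T alpha
    subject to 0 <= alpha_i <= C, where Q_ij = y_i y_j x_i^T x_j,
    while maintaining w = sum_i alpha_i y_i x_i.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    # Diagonal of Q; y_i^2 = 1, so Q_ii = x_i^T x_i.
    Qii = np.einsum('ij,ij->i', X, X)
    for _ in range(epochs):
        # Random permutation of coordinates each pass tends to speed convergence.
        for i in rng.permutation(n):
            if Qii[i] == 0.0:
                continue
            # Partial gradient of the dual objective w.r.t. alpha_i.
            G = y[i] * (w @ X[i]) - 1.0
            # One-variable Newton step, projected onto [0, C].
            new_a = min(max(alpha[i] - G / Qii[i], 0.0), C)
            if new_a != alpha[i]:
                # Update w incrementally instead of recomputing it.
                w += (new_a - alpha[i]) * y[i] * X[i]
                alpha[i] = new_a
    return w, alpha
```

Because each inner step touches only one instance, a full pass costs O(nnz), which is what makes the method attractive for large sparse problems. For the L2-loss variant, the paper adds a diagonal term to Q and removes the upper bound C.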

Download slides: icml08_chang_dcd_01.pdf (439.8 KB)


