A Quasi-Newton Approach to Nonsmooth Convex Optimization

author: Jin Yu, NICTA, Australia's ICT Research Centre of Excellence
published: Aug. 29, 2008,   recorded: July 2008,   views: 6444





We extend the well-known BFGS quasi-Newton method and its limited-memory variant (LBFGS) to the optimization of nonsmooth convex objectives. This is done in a rigorous fashion by generalizing three components of BFGS to subdifferentials: the local quadratic model, the identification of a descent direction, and the Wolfe line search conditions. We apply the resulting sub(L)BFGS algorithm to L2-regularized risk minimization with the binary hinge loss, and its direction-finding component to L1-regularized risk minimization with the logistic loss. In both settings our generic algorithms perform comparably to, or better than, their counterparts in specialized state-of-the-art solvers.
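The setting the abstract describes can be illustrated with a toy sketch: the standard LBFGS two-loop recursion applied to a subgradient of the L2-regularized binary hinge loss, with a simple backtracking (Armijo) line search standing in for the paper's subgradient Wolfe conditions. This is not the authors' sub(L)BFGS implementation; all function names and the data below are illustrative only.

```python
import numpy as np

def hinge_objective(w, X, y, lam):
    """L2-regularized hinge risk: lam/2 * ||w||^2 + mean(max(0, 1 - y * Xw))."""
    margins = 1.0 - y * (X @ w)
    return 0.5 * lam * (w @ w) + np.mean(np.maximum(0.0, margins))

def hinge_subgradient(w, X, y, lam):
    """One element of the subdifferential (choosing 0 at the hinge kink)."""
    margins = 1.0 - y * (X @ w)
    active = margins > 0
    g = -(y[active, None] * X[active]).sum(axis=0) / len(y)
    return lam * w + g

def lbfgs_direction(g, s_list, y_list):
    """Standard LBFGS two-loop recursion, here fed a subgradient."""
    q = g.astype(float).copy()
    alphas = []
    for s, yv in zip(reversed(s_list), reversed(y_list)):
        a = (s @ q) / (yv @ s)
        alphas.append(a)
        q -= a * yv
    if s_list:  # scale by the most recent curvature pair
        s, yv = s_list[-1], y_list[-1]
        q *= (s @ yv) / (yv @ yv)
    for (s, yv), a in zip(zip(s_list, y_list), reversed(alphas)):
        b = (yv @ q) / (yv @ s)
        q += (a - b) * s
    return -q  # descent direction: -(approximate inverse Hessian) @ g

# Toy separable problem (illustrative data, not from the paper).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = np.sign(X @ w_true)
lam, w = 0.1, np.zeros(3)
s_list, y_list = [], []
for _ in range(50):
    g = hinge_subgradient(w, X, y, lam)
    d = lbfgs_direction(g, s_list, y_list)
    # Backtracking Armijo search; the paper instead generalizes the
    # Wolfe conditions to subgradients.
    t, f0 = 1.0, hinge_objective(w, X, y, lam)
    while hinge_objective(w + t * d, X, y, lam) > f0 + 1e-4 * t * (g @ d) and t > 1e-10:
        t *= 0.5
    w_new = w + t * d
    s, yv = w_new - w, hinge_subgradient(w_new, X, y, lam) - g
    if s @ yv > 1e-10:  # keep only curvature pairs with y^T s > 0
        s_list.append(s); y_list.append(yv)
        if len(s_list) > 10:  # limited memory
            s_list.pop(0); y_list.pop(0)
    w = w_new
```

Because the two-loop recursion keeps the implicit inverse-Hessian approximation positive definite, the returned direction satisfies g·d < 0 even when g is only a subgradient, so the backtracking search always makes progress on this toy problem; handling the kinks rigorously is exactly what the paper's generalized model and line search address.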


Download slides: icml08_yu_aqna_01.pdf (1.4 MB)



Reviews and comments:

Comment 1: mike wei, September 10, 2009 at 11:23 p.m.:

Gets too detailed!!! Solve some problems [C++] :->

This is fun, but listening to you makes me try to sleep .... ai yo wei yeah
