An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization
published: Dec. 5, 2015, recorded: October 2015, views: 2052
We propose a distributed first-order augmented Lagrangian (DFAL) algorithm to minimize the sum of composite convex functions, where each term in the sum is a private cost function belonging to a node, and only nodes connected by an edge can directly communicate with each other. This optimization model abstracts a number of applications in distributed sensing and machine learning. We show that any limit point of DFAL iterates is optimal; and for any ε > 0, an ε-optimal and ε-feasible solution can be computed within O(log(1/ε)) DFAL iterations, which require O(ψ_max^{1.5}/d_min · 1/ε) proximal gradient computations and communications per node in total, where ψ_max denotes the largest eigenvalue of the graph Laplacian, and d_min is the minimum degree of the graph. We also propose an asynchronous version of DFAL by incorporating randomized block coordinate descent methods; and demonstrate the efficiency of DFAL on large-scale sparse-group LASSO problems.
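To illustrate the proximal gradient computations counted in the bound above, here is a minimal single-node sketch of a proximal gradient method for a composite problem of the LASSO form 0.5·||Ax − b||² + λ·||x||₁. This is only the basic building block; the distributed augmented Lagrangian coordination and asynchronous block updates of DFAL itself are not shown, and the function and variable names are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(A, b, lam, steps=500):
    # Minimize 0.5 * ||Ax - b||^2 + lam * ||x||_1 by alternating a gradient
    # step on the smooth term with the prox of the nonsmooth term.
    # Step size 1/L, where L = ||A||_2^2 bounds the gradient's Lipschitz constant.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)          # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

With A the identity, the method reduces to a single soft-thresholding of b, which makes the fixed point easy to verify by hand.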