Online gradient descent for LS regression: Non-asymptotic bounds and application to bandits

author: Prashanth L.A., SequeL lab, INRIA Lille - Nord Europe
published: Nov. 7, 2013,   recorded: September 2013,   views: 2421


We propose a stochastic gradient descent (SGD) method with randomization of samples for solving least squares regression. We consider a "big data" regime where both the dimension d of the data and the number T of training samples are large. Through finite-time analyses we provide performance bounds for this method, both in high probability and in expectation. In particular, we show that, with probability 1-\delta, an \epsilon-approximation of the least squares solution can be computed in O(d log(1/\delta)/\epsilon^2) complexity, irrespective of the number of samples T.

Next, we improve the computational complexity of online learning algorithms that must frequently recompute least squares regression estimates of parameters. We propose two SGD schemes with randomization that efficiently track the true solutions of the regression problems, achieving an O(d) improvement in complexity, where d is the dimension of the data. The first algorithm assumes strong convexity of the regression problem, and we provide error bounds both in expectation and in high probability (the latter is often needed to give theoretical guarantees for higher-level algorithms). The second algorithm handles the case where strong convexity cannot be guaranteed and uses adaptive regularization; again, we give error bounds in both expectation and high probability. We apply our approaches to the linear bandit algorithms PEGE and ConfidenceBall and demonstrate significant gains in complexity in both cases. Since strong convexity is guaranteed by the PEGE algorithm, we lose only logarithmic factors in its regret performance. In the ConfidenceBall algorithm, on the other hand, we regularize adaptively to ensure strong convexity, which results in an O(n^{1/5}) deterioration of the regret.
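To make the core idea concrete, here is a minimal sketch (not the authors' exact algorithm) of SGD for least squares with uniform randomization of samples: at each step one of the T training points is drawn uniformly at random, a single stochastic gradient step is taken with a decaying step size, and the iterates are averaged (Polyak–Ruppert style). The step-size schedule, seed, and iteration count below are illustrative assumptions.

```python
import numpy as np

def sgd_least_squares(X, y, n_iter=20000, step=0.1):
    """Minimize (1/2) E[(x^T theta - y)^2] by SGD with uniform sampling.

    X: (T, d) design matrix, y: (T,) targets.
    Each iteration samples one data point uniformly at random, so the
    per-iteration cost is O(d), independent of the sample size T.
    """
    T, d = X.shape
    rng = np.random.default_rng(0)          # illustrative fixed seed
    theta = np.zeros(d)
    avg = np.zeros(d)
    for k in range(1, n_iter + 1):
        i = rng.integers(T)                  # uniform sample randomization
        grad = (X[i] @ theta - y[i]) * X[i]  # gradient of (1/2)(x^T theta - y)^2
        theta -= (step / np.sqrt(k)) * grad  # decaying step size c/sqrt(k)
        avg += (theta - avg) / k             # running Polyak-Ruppert average
    return avg
```

Returning the averaged iterate rather than the last one is what typically yields the O(1/\epsilon^2)-type guarantees without strong-convexity constants entering the step-size choice.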


Download slides: lsoldm2013_prashanth_ls_regression_01.pdf (412.2 KB)
