MapReduce/Bigtable for Distributed Optimization

author: Slav Petrov, Google Research New York, Google, Inc.
published: Jan. 13, 2011, recorded: December 2010
Description

For large datasets, running gradient-based optimization, for example to minimize the log-likelihood of maximum entropy models, can be very time-consuming. Distributed methods are therefore appealing, and a number of distributed gradient optimization strategies have been proposed, including distributed gradient computation, asynchronous updates, and iterative parameter mixtures. In this paper, we evaluate these strategies with regard to their accuracy and speed over MapReduce/Bigtable and discuss the techniques needed for high performance.
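Of the strategies listed above, iterative parameter mixing is the simplest to express in the MapReduce model: each mapper trains independently on its shard of the data, and the reducer averages the per-shard weight vectors before the next epoch. The sketch below is an illustration, not the paper's implementation; the logistic-regression objective, the uniform averaging, and all function names are assumptions for the example.

```python
# Hypothetical sketch of iterative parameter mixing (not the paper's code):
# each "mapper" runs one SGD pass over its own data shard; a "reducer"
# uniformly averages the resulting weight vectors, and the average seeds
# the next epoch.
import math

def sgd_pass(w, shard, lr=0.1):
    # One stochastic-gradient pass for logistic regression
    # (maximizing the log-likelihood) over a single shard.
    for x, y in shard:
        p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
        w = [wi + lr * (y - p) * xi for wi, xi in zip(w, x)]
    return w

def iterative_parameter_mixing(shards, dim, epochs=10):
    w = [0.0] * dim
    for _ in range(epochs):
        # "Map": train independently on each shard, starting from the mixed w.
        per_shard = [sgd_pass(list(w), shard) for shard in shards]
        # "Reduce": mix (uniformly average) the per-shard weights.
        w = [sum(ws) / len(per_shard) for ws in zip(*per_shard)]
    return w

# Toy data: two shards; each example is ([bias, feature], label),
# with label 1 iff the feature is positive.
shards = [
    [([1.0, 1.0], 1), ([1.0, -1.0], 0)],
    [([1.0, 2.0], 1), ([1.0, -2.0], 0)],
]
w = iterative_parameter_mixing(shards, dim=2)
```

After a few mixing epochs the averaged weight on the discriminative feature becomes positive, as expected; the appeal of this scheme is that mappers need no communication during an epoch, at the cost of slower convergence than synchronized distributed-gradient updates.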

Download slides: nipsworkshops2010_petrov_mrb_01.pdf (1.4 MB)

Reviews and comments:

Comment1 basanti kumari, August 10, 2013 at 7:42 p.m.:

Good job. Please speak in Hindi; I can't understand.


Comment2 basanti kumari, August 10, 2013 at 7:45 p.m.:

Please send me a video of Google Bigtable at this ID.
Thanks for your workshop.
