Gradient Boosted Decision Trees on Hadoop

author: Jerry Ye, Yahoo! Research Silicon Valley
published: Jan. 13, 2011, recorded: December 2010, views: 24106




Stochastic Gradient Boosted Decision Trees (GBDT) is one of the most widely used learning algorithms in machine learning today. It is adaptable, easy to interpret, and produces highly accurate models. However, most implementations today are computationally expensive and require all training data to fit in main memory. As training data grows ever larger, there is strong motivation to parallelize the GBDT algorithm. Parallelizing decision tree training is intuitive, and various approaches have been explored in the existing literature. Stochastic boosting, on the other hand, is inherently a sequential process and has not previously been applied to distributed decision trees. In this paper, we describe a distributed implementation of GBDT that utilizes MPI on the Hadoop grid environment, as presented by us at CIKM in 2009.
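The sequential boosting loop the abstract refers to can be sketched as follows. This is a minimal single-machine illustration with squared-error loss and depth-1 trees (stumps) on one feature; the function names (`fit_stump`, `fit_gbdt`) are illustrative and do not come from the paper's MPI/Hadoop implementation:

```python
# Minimal sketch of gradient boosting for regression: each round fits a
# decision stump to the current residuals (the negative gradient of the
# squared-error loss) and adds it to the ensemble with a learning rate.

def fit_stump(xs, residuals):
    """Fit a depth-1 regression tree: pick the threshold minimizing SSE."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    best = None
    for k in range(1, len(xs)):
        thr = (xs[order[k - 1]] + xs[order[k]]) / 2.0
        left = [residuals[i] for i in order[:k]]
        right = [residuals[i] for i in order[k:]]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, thr, lm, rm)
    _, thr, lm, rm = best
    return lambda x: lm if x <= thr else rm

def fit_gbdt(xs, ys, n_rounds=50, lr=0.1):
    """Boost stumps sequentially; each round depends on the previous preds."""
    base = sum(ys) / len(ys)           # initial constant model
    preds = [base] * len(xs)
    stumps = []
    for _ in range(n_rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: base + lr * sum(s(x) for s in stumps)

# Usage: learn a simple step function.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
ys = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
model = fit_gbdt(xs, ys)
```

Note how each round's residuals depend on all previous rounds' predictions: this is the sequential dependency that makes stochastic boosting hard to distribute, and why the parallelism in the paper's setting lies inside the training of each tree rather than across boosting rounds.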


Download slides: nipsworkshops2010_ye_gbd_01.pdf (1.6 MB)

