Semi-supervised Learning of Compact Document Representations with Deep Networks
published: Aug. 1, 2008, recorded: July 2008, views: 4536
Finding a good representation of text documents is crucial in document retrieval and classification systems. Currently, the most popular representation is a simple vector of counts storing the number of occurrences of each word in the document. This representation falls short in describing the dependence between similar words, and it cannot disambiguate phenomena such as synonymy and polysemy. In this paper, we propose an algorithm that learns text document representations based on recent advances in training deep networks. This technique can efficiently produce a very compact and informative representation of a document. Our experiments show that this algorithm compares favorably with similar algorithms that produce sparse and binary representations. Unlike other models, this method is trained with both an unsupervised and a supervised objective. We show that it is very advantageous to exploit even a few labeled samples during training, and that extremely compact representations can be learned by using deep, non-linear models.
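As background for the abstract's critique, the following is a minimal sketch (not the authors' implementation; the vocabulary and function names are hypothetical) of the word-count representation it describes: each document becomes a vector whose entries count the occurrences of each vocabulary word.

```python
# Hedged sketch of the bag-of-words count representation described in the
# abstract. This is illustrative only; the vocabulary and function below are
# hypothetical, not taken from the paper.
from collections import Counter

def count_vector(document, vocabulary):
    """Map a document to a vector of per-word occurrence counts."""
    counts = Counter(document.lower().split())
    return [counts[word] for word in vocabulary]

vocab = ["deep", "network", "document", "retrieval"]
vec = count_vector("Deep network for document retrieval with a deep model", vocab)
# vec == [2, 1, 1, 1]: "deep" appears twice, the other vocabulary words once.
# As the abstract notes, such counts cannot capture dependence between
# similar words: two documents using synonyms share no nonzero entries.
```

A deep network, as proposed in the paper, instead maps these high-dimensional count vectors to a compact, dense code.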
Download slides: icml08_szummer_sslcdr_01.pdf (659.9 KB)