Deep Learning for Contrasting Meaning Representation and Composition
published: Dec. 1, 2017, recorded: August 2017, views: 934
Contrasting meaning is a basic aspect of semantics, and sentiment can be regarded as a special case of it. In this talk, we discuss our deep learning approaches to two basic problems: learning representations for contrasting meaning at the lexical level, and performing semantic composition to obtain representations for larger text spans, e.g., phrases and sentences. We first present our neural network models for learning distributed representations that encode contrasting meaning among words. We discuss how the models utilize both distributional statistics and lexical resources to achieve state-of-the-art performance on the benchmark dataset, the GRE “most contrasting word” questions. Building on lexical representations, the next basic problem is to learn representations for larger text spans through semantic composition. In the second half of the talk, we focus on deep learning models that learn composition functions by considering both compositional and non-compositional factors. The models can effectively obtain representations for phrases and sentences, and they demonstrate state-of-the-art performance on different sentiment analysis benchmarks, including the Stanford Sentiment Treebank and the datasets used in SemEval Sentiment Analysis in Twitter.
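As a rough illustration of the lexical-level task, consider how a GRE "most contrasting word" question could be answered once a contrast-encoding embedding space is available: pick the candidate whose vector is least similar to the target word's vector. The sketch below is a hypothetical simplification, not the talk's actual models; the toy vectors and the cosine-similarity criterion are assumptions made for illustration only.

```python
# Hypothetical sketch: answering a "most contrasting word" question with
# word embeddings. Assumes an embedding space tuned so that contrasting
# words sit far apart; the 3-d vectors below are invented toy values.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def most_contrasting(target, candidates):
    """Return the candidate word whose vector is least similar to the target."""
    return min(candidates, key=lambda w: cosine(target, candidates[w]))

# Toy example: target "hot" with three made-up candidate vectors.
hot = [0.9, 0.1, 0.2]
candidates = {
    "warm": [0.8, 0.2, 0.1],
    "cold": [-0.9, -0.1, -0.2],
    "red":  [0.3, 0.9, 0.1],
}
print(most_contrasting(hot, candidates))  # -> cold
```

In the talk's models, the embedding space itself is learned from distributional statistics combined with lexical resources, which is what makes such a nearest-contrast lookup meaningful.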