Learning Structural Correspondences Across Different Linguistic Domains with Synchronous Neural Language Models
published: Jan. 11, 2013, recorded: December 2012, views: 3033
We introduce a novel framework for learning structural correspondences between two linguistic domains by training synchronous neural language models on both domains simultaneously with co-regularization. We present positive preliminary results indicating that our framework learns similar feature representations for correlated objects across domains, and may therefore serve as an effective approach to transfer learning across different linguistic domains.
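The abstract does not spell out the training objective, but the core idea (two language models trained jointly, with a co-regularizer pulling the representations of correlated objects together) can be illustrated with a minimal sketch. The log-bilinear-style embedding matrices, the aligned word pairs, and the hyperparameters below are all hypothetical placeholders, not the authors' actual model.

```python
# Illustrative sketch only: assumes per-domain word embeddings and an L2
# co-regularizer tying embeddings of aligned (correlated) word pairs.
# All names, pairs, and hyperparameters here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
V1, V2, d = 100, 120, 16                     # vocab sizes of the two domains, embedding dim
E1 = 0.01 * rng.standard_normal((V1, d))     # domain-1 word embeddings
E2 = 0.01 * rng.standard_normal((V2, d))     # domain-2 word embeddings
aligned = [(3, 7), (10, 2)]                  # hypothetical correlated word pairs (id1, id2)
lam = 0.1                                    # co-regularization strength

def coreg_loss_and_grads(E1, E2, aligned, lam):
    """L2 penalty pulling embeddings of correlated objects together."""
    g1, g2 = np.zeros_like(E1), np.zeros_like(E2)
    loss = 0.0
    for i, j in aligned:
        diff = E1[i] - E2[j]
        loss += lam * float(np.dot(diff, diff))
        g1[i] += 2.0 * lam * diff
        g2[j] -= 2.0 * lam * diff
    return loss, g1, g2

# One gradient step on the co-regularizer alone; in the full framework this
# term would be added to each domain's own language-modelling loss.
loss, g1, g2 = coreg_loss_and_grads(E1, E2, aligned, lam)
lr = 0.05
E1 -= lr * g1
E2 -= lr * g2
print(f"co-regularization loss: {loss:.6f}")
```

Under this reading, the co-regularizer is what encourages the two otherwise independent language models to assign similar feature representations to correlated objects, which is the property the preliminary results report.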
Download slides: nipsworkshops2012_gouws_neural_language_01.pdf (1.1 MB)