International Conference on Learning Representations (ICLR) 2016, San Juan

21 Lectures · May 2, 2016

About

ICLR is an annual conference sponsored by the Computational and Biological Learning Society.

It is well understood that the performance of machine learning methods is heavily dependent on the choice of data representation (or features) to which they are applied. The rapidly developing field of representation learning is concerned with questions surrounding how we can best learn meaningful and useful representations of data. We take a broad view of the field, and include in it topics such as deep learning and feature learning, metric learning, kernel learning, compositional models, non-linear structured prediction, and issues regarding non-convex optimization.

Despite the importance of representation learning to machine learning and to application areas such as vision, speech, audio and NLP, there was previously no dedicated venue for researchers who share a common interest in this topic. The goal of ICLR has been to help fill this void.

Uploaded videos:

Opening Remarks

Opening
06:43 · May 27, 2016 · 4,165 views

Keynote Talks

Deep Robotic Learning
Sergey Levine · 35:37 · May 27, 2016 · 12,837 views

Should Model Architecture Reflect Linguistic Structure?
Chris Dyer · 34:34 · May 27, 2016 · 7,929 views

Guaranteed Non-convex Learning Algorithms through Tensor Factorization
Animashree Anandkumar · 39:39 · May 27, 2016 · 4,796 views

Beyond Backpropagation: Uncertainty Propagation
Neil D. Lawrence · 39:29 · May 27, 2016 · 5,655 views

Incorporating Structure in Deep Learning
Raquel Urtasun · 39:17 · May 27, 2016 · 13,523 views

Best Paper Awards

Neural Programmer-Interpreters
Scott Reed · 15:35 · May 27, 2016 · 5,880 views

Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
Song Han · 17:19 · May 27, 2016 · 20,022 views

Lectures

Regularizing RNNs by Stabilizing Activations
David Scott Krueger · 16:19 · May 27, 2016 · 2,827 views

BlackOut: Speeding up Recurrent Neural Network Language Models With Very Large Vocabularies
Shihao Ji · 17:11 · May 27, 2016 · 2,115 views

The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations
Felix Hill · 16:08 · May 27, 2016 · 2,571 views

Towards Universal Paraphrastic Sentence Embeddings
John Wieting · 17:29 · May 27, 2016 · 2,333 views

Convergent Learning: Do different neural networks learn the same representations?
Jason Yosinski · 19:18 · May 27, 2016 · 10,110 views

Net2Net: Accelerating Learning via Knowledge Transfer
Tianqi Chen · 15:52 · May 27, 2016 · 4,110 views

Variational Gaussian Process
Dustin Tran · 16:19 · May 27, 2016 · 3,153 views

The Variational Fair Autoencoder
Christos Louizos · 15:16 · May 27, 2016 · 2,607 views

A note on the evaluation of generative models
Lucas Theis · 16:47 · May 27, 2016 · 2,748 views

Neural Networks with Few Multiplications
Zhouhan Lin · 10:35 · May 27, 2016 · 2,329 views

Order-Embeddings of Images and Language
Ivan Vendrov · 15:55 · May 27, 2016 · 4,023 views

Generating Images from Captions with Attention
Elman Mansimov · 12:33 · May 27, 2016 · 2,584 views

Density Modeling of Images using a Generalized Normalization Transformation
Johannes Ballé · 18:34 · Jun 15, 2016 · 4,189 views