About
Model order selection, the trade-off between a model's resolution and its statistical reliability, is one of the fundamental questions in machine learning. It has been studied in detail in the context of supervised learning with i.i.d. samples, but has received relatively little attention beyond this domain. The goal of our workshop is to draw attention to the question of model order selection in other domains, to share ideas and approaches across domains, and to identify promising directions for future research. Our interest covers ways of defining model complexity in different domains, examples of practical problems where intelligent model order selection yields an advantage over simplistic approaches, and new theoretical tools for the analysis of model order selection. The domains of interest span all problems that cannot be directly mapped to supervised learning with i.i.d. samples, including, but not limited to, reinforcement learning, active learning, learning with delayed, partial, or indirect feedback, and learning with submodular functions.
Examples of first steps toward defining the complexity of models in reinforcement learning, applying a trade-off between model complexity and empirical performance, and analyzing it can be found in [1-4]. An intriguing research direction emerging from these works is the simultaneous analysis of the exploration-exploitation and model order selection trade-offs. Such an analysis makes it possible to design and analyze models that adapt their complexity as they continue to explore and observe new data. Potential practical applications of such models include contextual bandits (for example, in personalizing recommendations on the web [5]) and Markov decision processes.
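The complexity-versus-fit trade-off described above can be sketched as penalized empirical risk minimization: among nested models of increasing order, pick the one minimizing empirical loss plus a complexity penalty that shrinks with the sample size. This is a minimal illustrative sketch; the sqrt(k/n) penalty and the constants are assumptions for illustration, not the criteria from the works cited in [1-4].

```python
import math

def select_order(empirical_losses, n, c=0.5):
    """Pick the model order k minimizing

        empirical_losses[k] + c * sqrt(k / n),

    a generic complexity-penalized criterion (the penalty form is
    illustrative, not taken from the workshop's references).

    empirical_losses[k] is the training loss of the order-k model;
    n is the number of samples; c trades off fit against complexity.
    """
    def penalized(k):
        return empirical_losses[k] + c * math.sqrt(k / n)
    return min(range(len(empirical_losses)), key=penalized)

# Hypothetical training losses: richer models fit the data better,
# so the raw loss decreases monotonically with the order k ...
losses = [0.90, 0.40, 0.20, 0.18, 0.17, 0.165]

# ... but the penalty grows with k, so an intermediate order wins.
best_small = select_order(losses, n=100)     # small sample: low order
best_large = select_order(losses, n=10000)   # more data: higher order
```

The point of the sketch is the adaptivity mentioned above: as more data arrives (larger n), the penalty shrinks and the selected model order can grow, which is the behavior one would want from a learner that adapts its complexity while it continues to explore and observe.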
Workshop homepage: http://people.kyb.tuebingen.mpg.de/seldin/fimos.html
Uploaded videos:
Introduction · Jan 25, 2012 · 3084 Views

Invited Talks
Model Selection in Markovian Processes · Jan 25, 2012 · 4223 Views
Autonomous Exploration in Reinforcement Learning · Jan 25, 2012 · 4414 Views
Model Selection in Exploration · Jan 25, 2012 · 4062 Views
Future Information Minimization as PAC Bayes regularization in Reinforcement Lea... · Jan 25, 2012 · 4805 Views

Lectures
BErMin: A Model Selection Algorithm for Reinforcement Learning Problems · Jan 25, 2012 · 4077 Views
Selecting the state representation in reinforcement Learning · Jan 25, 2012 · 4021 Views

Poster Spotlights
Poster session · Jan 25, 2012 · 4389 Views