Manifold Embeddings for Model-Based Reinforcement Learning of Neurostimulation Policies
published: Aug. 26, 2009, recorded: June 2009, views: 4131
Real-world reinforcement learning problems often exhibit nonlinear, continuous-valued, noisy, partially observable state spaces that are prohibitively expensive to explore. The formal reinforcement learning framework, unfortunately, has not been successfully demonstrated in a real-world domain having all of these constraints. We approach this domain with a two-part solution. First, we overcome continuous-valued, partially observable state spaces by constructing manifold embeddings of the system's underlying dynamics, which serve as a complete state-space representation. We then define a generative model over this manifold to learn a policy offline. The model-based approach is preferred because it allows the learning problem to be simplified by domain knowledge. In this work we formally integrate manifold embeddings into the reinforcement learning framework, summarize a spectral method for estimating the embedding parameters, and demonstrate the model-based approach in a complex domain: adaptive seizure suppression of an epileptic neural system.
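The state-reconstruction step described above can be illustrated with a delay-coordinate embedding, the standard construction behind manifold embeddings of an observed dynamical system. The sketch below is a minimal illustration and not the authors' implementation: the function names (delay_embed, nn_predict) are hypothetical, the data are synthetic, and the embedding dimension and lag are fixed by hand, whereas the lecture describes a spectral method for estimating those parameters from data.

```python
import numpy as np

def delay_embed(x, dim, lag):
    """Delay-coordinate embedding of a scalar time series.

    Row t of the result is [x[t], x[t+lag], ..., x[t+(dim-1)*lag]],
    a point on the reconstructed manifold of the underlying dynamics.
    """
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[k * lag: k * lag + n] for k in range(dim)])

def nn_predict(embedded, query, horizon=1):
    """One-step predictive model on the embedded manifold.

    Finds the embedded state nearest to `query` and returns the state
    observed `horizon` steps later -- a simple stand-in for the
    generative model over the manifold from which a policy is learned
    offline.
    """
    valid = embedded[:-horizon]  # states whose future is known
    idx = np.argmin(np.linalg.norm(valid - query, axis=1))
    return embedded[idx + horizon]

if __name__ == "__main__":
    # Noisy scalar observations of a partially observed oscillator
    # (illustrative data only).
    t = np.linspace(0, 40, 2000)
    obs = np.sin(t) + 0.5 * np.sin(2.3 * t) + 0.05 * np.random.randn(t.size)

    # Embedding parameters fixed by hand here; a spectral method, as in
    # the lecture, would estimate dim and lag from the data instead.
    states = delay_embed(obs, dim=4, lag=10)

    # Roll the model forward from the most recent reconstructed state.
    pred = nn_predict(states, states[-1])
    print("current state:   ", np.round(states[-1], 3))
    print("predicted next:  ", np.round(pred, 3))
```

In the system described in the talk, the same embedded state representation would feed a richer generative model of the epileptic neural dynamics, over which the seizure-suppression policy is learned offline.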