
Stochastic Control as a Non-Equilibrium Statistical Physics: Gauge Invariant Bellman Equation

Published on Oct 16, 2012 · 4067 Views

In Stochastic Control (SC) one minimizes the average cost-to-go, consisting of the cost-of-control (the amount of effort), the cost-of-space (where one wants the system to be), and the target cost (where one …
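As a minimal sketch of the cost-to-go described above (not taken from the talk): the snippet below Monte Carlo estimates the average cost for a controlled 1-D Langevin process dx = u(x) dt + sqrt(2κ) dW. The quadratic potential V(x) = x²/2, terminal cost Φ(x) = x²/2, and the linear feedback u(x) = -k·x are illustrative assumptions, not the lecture's choices.

```python
import numpy as np

def average_cost_to_go(k=1.0, kappa=0.1, T=1.0, dt=1e-3,
                       n_paths=2000, seed=0):
    """Monte Carlo estimate of E[ int_0^T (u^2/2 + V(x)) dt + Phi(x_T) ]
    under the feedback control u(x) = -k*x (Euler-Maruyama discretization)."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    x = np.zeros(n_paths)          # all sample paths start at the origin
    cost = np.zeros(n_paths)
    for _ in range(n_steps):
        u = -k * x                 # illustrative linear feedback control
        # cost-of-control (u^2/2) plus cost-of-space V(x) = x^2/2
        cost += (0.5 * u**2 + 0.5 * x**2) * dt
        # controlled Langevin step with noise strength kappa
        x += u * dt + np.sqrt(2.0 * kappa * dt) * rng.standard_normal(n_paths)
    cost += 0.5 * x**2             # target (terminal) cost Phi(x_T) = x_T^2/2
    return cost.mean()
```

Sweeping the feedback gain `k` and rerunning the estimate exposes the trade-off the abstract refers to: stronger control lowers the cost-of-space term while raising the cost-of-control term.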

Chapter list

00:00 Stochastic Control as a Non-Equilibrium Statistical Physics: Gauge Invariant Bellman Equation
00:18 Acknowledgements
00:57 Outline
06:46 Stochastic Langevin processes and control
11:21 Gauge-invariant cost functional
14:23 Two ways to define current (flux) and Poincaré duality
18:21 Currents and nontrivial topologies
18:49 Currents in nontrivial topologies
19:33 Large Deviations Theory
20:26 Density and current density: microscopic view
21:34 Weak noise: Most probable trajectories
21:38 Typical trajectories in the limit of weak noise: Tubes
21:39 Hamiltonian/Euler/Kolmogorov approach
37:58 Change of variables: from control field to current/density
39:36 Stationary gauge-invariant Bellman equation: ergodic control
41:33 Gauge invariance
46:04 Exponential representation: linear Bellman equation and current density functional
46:58 Money in exponent (matching case)
48:21 Flux (angular velocity) control
48:23 Optimal cost of equilibrium
50:19 Time-dependent gauge-invariant Bellman equation: variational derivation
51:02 Time-dependent GIBE derivation (continued)
51:28 Gauge symmetry for time-dependent GIBE
52:09 Summary