Distributed Learning Dynamics Convergence in Routing Games

author: Alexandre Bayen, Department of Electrical Engineering and Computer Sciences, UC Berkeley
published: Oct. 25, 2016, recorded: August 2016, views: 1088
Description

With the emergence of smartphone-based sensing for mobility as the main sensing paradigm of the last decade, radically new information sets have become available to the driving public. This information enables commuters to make repeated decisions on a daily basis, based on the anticipated state of the network. This repeated decision-making process creates interesting patterns in the transportation network, in which users might (or might not) reach an equilibrium, depending on the information at their disposal (for example, whether or not they know what other users of the network are experiencing or doing). The present talk starts with a brief presentation of the state of the art in traffic monitoring, leading to new results in routing games. Routing games offer a simple yet powerful model of congestion in traffic networks, both in transportation and in communication systems. Congestion in such systems is affected by the combined decisions of the agents (drivers or routers), so modeling the agents' decision process is important, not only to estimate and predict the behavior of the system, but also to control it. This decision process is often called learning, as agents "learn" information about the system or about the other agents. We propose and study different models of learning with the following requirement: the joint learning dynamics should converge asymptotically to the Nash equilibrium of the game. In particular, we focus on two important properties: is the model robust to stochastic perturbations (such as measurement noise)? And does the model allow heterogeneous learning (may different agents follow different learning strategies)? We study these questions using tools from online learning theory and stochastic approximation theory.
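As a concrete illustration (a minimal sketch, not the specific learning model studied in the talk), convergence of a standard online learning rule to the Nash equilibrium of a routing game can be seen in Pigou's classic two-route network: route 1 has latency l1(x) = x (x is the fraction of flow on it) and route 2 has constant latency l2 = 1, so the unique equilibrium sends all flow to route 1. Here the aggregate route split follows Hedge (multiplicative-weights) dynamics:

```python
import math

# Pigou's two-route network: latency of route 1 grows with its flow x,
# route 2 has constant latency 1. The unique Nash equilibrium puts all
# flow on route 1 (where l1(1) = 1 = l2).
def latencies(x):
    return x, 1.0

# Hedge / multiplicative-weights dynamics on the aggregate route split:
# at each step, each route's weight is scaled by exp(-eta * latency).
eta = 0.1          # learning rate (illustrative choice)
w = [1.0, 1.0]     # route weights
for _ in range(2000):
    x = w[0] / (w[0] + w[1])   # fraction of flow on route 1
    l1, l2 = latencies(x)
    w[0] *= math.exp(-eta * l1)
    w[1] *= math.exp(-eta * l2)

x = w[0] / (w[0] + w[1])
print(x)   # flow on route 1 approaches the equilibrium value 1
```

Because route 1 has strictly lower latency whenever x < 1, its weight grows relative to route 2's, and the route split converges toward the equilibrium; at equilibrium the two latencies equalize, which is the defining property of a Nash flow.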
We then present experimental results obtained with an online gaming application in which distributed players can play the routing game: they connect to the web app and participate by iteratively making route decisions and observing the outcomes. We show preliminary results from data collected with the application. In particular, we propose and solve a model estimation problem to estimate the learning dynamics of the players, compare the model's predictions with the players' actual behavior, and discuss extensions and open questions.
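The flavor of such an estimation problem can be sketched as follows (a hypothetical toy version, not the estimator used in the talk): given an observed trajectory of route choices, recover the unknown learning rate of a Hedge player in the same Pigou game by least-squares fitting over a grid of candidate rates:

```python
import math

def hedge_flows(eta, steps=200, x0=0.5):
    """Flow on route 1 over time under Hedge dynamics in Pigou's game
    (route latencies l1(x) = x and l2 = 1)."""
    r = math.log(x0 / (1 - x0))   # log-ratio of the two route weights
    xs = []
    for _ in range(steps):
        x = 1 / (1 + math.exp(-r))
        xs.append(x)
        r += eta * (1 - x)        # latency advantage of route 1: l2 - l1
    return xs

# "Observed" trajectory, here generated with a ground-truth learning rate
observed = hedge_flows(eta=0.07)

# Model estimation: pick the learning rate whose simulated trajectory
# best fits the observations (least squares over a grid of candidates).
def sse(eta):
    return sum((a - b) ** 2 for a, b in zip(hedge_flows(eta), observed))

eta_hat = min((i / 100 for i in range(1, 21)), key=sse)
print(eta_hat)
```

On real player data the observed trajectory is noisy and the candidate dynamics richer, but the structure is the same: simulate each candidate learning model forward and keep the one whose predictions best match the recorded behavior.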
