Learning and Solving Many-Player Games through a Cluster-Based Representation
published: July 30, 2008, recorded: July 2008, views: 4274
In addressing the challenge of exponential scaling with the number of agents, we adopt a cluster-based representation to approximately solve asymmetric games of very many players. A cluster groups together agents with a similar “strategic view” of the game. We learn the clustered approximation from data consisting of strategy profiles and payoffs, which may be obtained from observations of play or from access to a simulator. Using our clustering, we construct a reduced “twins” game in which each cluster is associated with two players of the reduced game. This makes our representation individually responsive, because we align the interests of every individual agent with the strategy of its cluster. Our approach provides agents with higher payoffs and lower regret on average than model-free methods as well as previous cluster-based methods, and requires only a few observations for learning to be successful. The “twins” approach is shown to be an important component of providing these low-regret approximations.
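To make the clustering step concrete, here is a minimal sketch (not the authors' implementation) of grouping agents by their "strategic view": each agent is represented by the vector of payoffs it received across sampled strategy profiles, and agents with similar payoff responses are clustered together. The data, the k-means routine, and the farthest-point initialization are all illustrative assumptions.

```python
import numpy as np

# Hypothetical toy data: 6 agents, two underlying strategic types.
# Each row holds one agent's observed payoffs across 4 sampled profiles.
payoffs = np.array([
    [1.0, 2.0, 1.1, 2.1],   # type A
    [1.1, 2.1, 1.0, 2.0],   # type A
    [0.9, 1.9, 1.0, 2.2],   # type A
    [5.0, 4.0, 5.1, 4.1],   # type B
    [5.1, 4.1, 5.0, 4.0],   # type B
    [4.9, 3.9, 5.2, 4.2],   # type B
])

def cluster_agents(X, k, iters=20):
    """Naive k-means over per-agent payoff vectors: agents whose payoffs
    respond similarly across sampled profiles share a 'strategic view'."""
    # Deterministic farthest-point initialization (adequate for this toy).
    centers = X[[0]]
    for _ in range(k - 1):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2).min(axis=1)
        centers = np.vstack([centers, X[d.argmax()]])
    # Standard Lloyd iterations: assign, then recompute cluster means.
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        centers = np.stack([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = cluster_agents(payoffs, k=2)
# In the reduced "twins" game, each cluster contributes two players: the
# cluster aggregate and an individual "twin" free to deviate, so a cluster
# strategy is only stable if no single member gains by deviating from it.
reduced_players = 2 * len(set(labels))
```

With the toy data above, the first three agents land in one cluster and the last three in the other, and the 6-player game reduces to a 4-player twins game.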
Download slides: uai08_ficici_lsmpg.pdf (211.1 KB)
Download slides: uai08_ficici_lsmpg_01.ppt (314.5 KB)