2002 MultiagentLearningUsingaVariableLearningRate


Subject Headings: Multi-Agent Learning Algorithm; WoLF Algorithm.

Notes

Cited By

Quotes

Author Keywords

Abstract

Learning to act in a multiagent environment is a difficult problem since the normal definition of an optimal policy no longer applies. The optimal policy at any moment depends on the policies of the other agents. This creates a situation of learning a moving target. Previous learning algorithms have one of two shortcomings depending on their approach. They either converge to a policy that may not be optimal against the specific opponents' policies, or they may not converge at all. In this article we examine this learning problem in the framework of stochastic games. We look at a number of previous learning algorithms showing how they fail at one of the above criteria. We then contribute a new reinforcement learning technique using a variable learning rate to overcome these shortcomings. Specifically, we introduce the WoLF principle, "Win or Learn Fast", for varying the learning rate. We examine this technique theoretically, proving convergence in self-play on a restricted class of iterated matrix games. We also present empirical results on a variety of more general stochastic games, in situations of self-play and otherwise, demonstrating the wide applicability of this method.
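The WoLF ("Win or Learn Fast") principle described above varies the learning rate: the agent adapts quickly when it is doing worse than some baseline ("losing") and slowly when it is doing better ("winning"). The following is a minimal, illustrative Python sketch of a WoLF-style policy hill-climbing learner in that spirit. The class name, parameter names, and the exact comparison against an average policy are assumptions for illustration, not a verbatim reproduction of the paper's algorithm.

<pre>
# Illustrative WoLF-style policy hill-climbing sketch (assumed names and details).
import random
from collections import defaultdict

class WoLFPHCAgent:
    def __init__(self, n_actions, alpha=0.1, gamma=0.9,
                 delta_win=0.01, delta_lose=0.04):
        # delta_lose > delta_win: learn fast when "losing", slowly when "winning".
        self.n_actions = n_actions
        self.alpha, self.gamma = alpha, gamma
        self.delta_win, self.delta_lose = delta_win, delta_lose
        self.Q = defaultdict(lambda: [0.0] * n_actions)                    # action values
        self.pi = defaultdict(lambda: [1.0 / n_actions] * n_actions)       # current policy
        self.avg_pi = defaultdict(lambda: [1.0 / n_actions] * n_actions)   # average policy
        self.counts = defaultdict(int)

    def act(self, state):
        # Sample an action from the current mixed (stochastic) policy.
        return random.choices(range(self.n_actions), weights=self.pi[state])[0]

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update of the action value.
        best_next = max(self.Q[next_state])
        self.Q[state][action] += self.alpha * (
            reward + self.gamma * best_next - self.Q[state][action])

        # Maintain a running average of the policy played in this state.
        self.counts[state] += 1
        c = self.counts[state]
        for a in range(self.n_actions):
            self.avg_pi[state][a] += (self.pi[state][a] - self.avg_pi[state][a]) / c

        # WoLF test: "winning" if the current policy's expected value beats
        # the average policy's expected value; pick the learning rate accordingly.
        expected_current = sum(p * q for p, q in zip(self.pi[state], self.Q[state]))
        expected_average = sum(p * q for p, q in zip(self.avg_pi[state], self.Q[state]))
        delta = self.delta_win if expected_current > expected_average else self.delta_lose

        # Hill-climb: shift probability mass toward the greedy action by at most delta.
        greedy = max(range(self.n_actions), key=lambda a: self.Q[state][a])
        for a in range(self.n_actions):
            if a == greedy:
                continue
            step = min(self.pi[state][a], delta / (self.n_actions - 1))
            self.pi[state][a] -= step
            self.pi[state][greedy] += step
</pre>

In self-play on a small matrix game (e.g., matching pennies), two such agents would each call act and update every round; the asymmetric learning rate is what allows the joint policies to settle rather than cycle.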

References


Author: Manuela Veloso, Michael Bowling
title: Multiagent Learning Using a Variable Learning Rate
doi: 10.1016/S0004-3702(02)00121-2
year: 2002