k-Armed Bandit Maximization (MAB) Task


A k-Armed Bandit Maximization (MAB) Task is an online rewards-maximization task in which a decision-making agent makes a finite sequence of choices among [math]\displaystyle{ k }[/math] independent reward-generating systems (arms) so as to maximize its cumulative reward.
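As an informal illustration of this choose-and-observe loop, the following minimal sketch simulates a hypothetical [math]\displaystyle{ k }[/math]-armed bandit with Bernoulli rewards and an epsilon-greedy agent (one simple policy among many; the arm means, epsilon value, and function names are illustrative assumptions, not part of the task definition):

```python
import random

def run_epsilon_greedy(arm_means, n_steps=1000, epsilon=0.1, seed=0):
    """Simulate an epsilon-greedy agent on a k-armed Bernoulli bandit.

    arm_means: hypothetical success probabilities of each arm (hidden from the agent).
    """
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k            # number of pulls per arm
    estimates = [0.0] * k       # running sample-mean estimate of each arm's reward
    total_reward = 0.0

    for _ in range(n_steps):
        # Explore with probability epsilon, otherwise exploit the current best estimate.
        if rng.random() < epsilon:
            arm = rng.randrange(k)
        else:
            arm = max(range(k), key=lambda a: estimates[a])

        # Pull the chosen arm: Bernoulli reward drawn from that arm's hidden mean.
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0

        # Incremental update of the sample-mean estimate for the pulled arm.
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward

    return total_reward, estimates

if __name__ == "__main__":
    # Three hypothetical arms; the agent must discover that the third arm pays best.
    print(run_epsilon_greedy([0.2, 0.5, 0.7]))
```

With a small epsilon the agent spends most pulls on its current best estimate while still occasionally sampling the other arms, which is one concrete instance of the exploration–exploitation tradeoff discussed in the references below.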



References

2020

  • (Wikipedia, 2020) ⇒ https://en.wikipedia.org/wiki/Multi-armed_bandit Retrieved:2020-3-25.
    • In probability theory, the multi-armed bandit problem (sometimes called the K- or N-armed bandit problem) is a problem in which a fixed limited set of resources must be allocated between competing (alternative) choices in a way that maximizes their expected gain, when each choice's properties are only partially known at the time of allocation, and may become better understood as time passes or by allocating resources to the choice. This is a classic reinforcement learning problem that exemplifies the exploration–exploitation tradeoff dilemma. The name comes from imagining a gambler at a row of slot machines (sometimes known as "one-armed bandits"), who has to decide which machines to play, how many times to play each machine and in which order to play them, and whether to continue with the current machine or try a different machine. The multi-armed bandit problem also falls into the broad category of stochastic scheduling. In the problem, each machine provides a random reward from a probability distribution specific to that machine. The objective of the gambler is to maximize the sum of rewards earned through a sequence of lever pulls. The crucial tradeoff the gambler faces at each trial is between "exploitation" of the machine that has the highest expected payoff and "exploration" to get more information about the expected payoffs of the other machines. The trade-off between exploration and exploitation is also faced in machine learning. In practice, multi-armed bandits have been used to model problems such as managing research projects in a large organization like a science foundation or a pharmaceutical company. In early versions of the problem, the gambler begins with no initial knowledge about the machines. Herbert Robbins in 1952, realizing the importance of the problem, constructed convergent population selection strategies in "some aspects of the sequential design of experiments". A theorem, the Gittins index, first published by John C. Gittins, gives an optimal policy for maximizing the expected discounted reward.
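The objective of maximizing the sum of rewards quoted above is commonly restated in the bandit literature as minimizing expected cumulative regret. Using notation assumed here for illustration (not taken from the quoted passage), with [math]\displaystyle{ \mu_a }[/math] the mean reward of arm [math]\displaystyle{ a }[/math], [math]\displaystyle{ \mu^{*} = \max_a \mu_a }[/math], [math]\displaystyle{ a_t }[/math] the arm pulled at step [math]\displaystyle{ t }[/math], and [math]\displaystyle{ T }[/math] the horizon:

[math]\displaystyle{ R(T) = T\,\mu^{*} - \mathbb{E}\!\left[\sum_{t=1}^{T} \mu_{a_t}\right] }[/math]

A policy whose regret [math]\displaystyle{ R(T) }[/math] grows sublinearly in [math]\displaystyle{ T }[/math] eventually concentrates its pulls on the best arm.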

1989

  • (Gittins, 1989) ⇒ J. C. Gittins. (1989). “Multi-Armed Bandit Allocation Indices." John Wiley & Sons, Ltd., ISBN 0-471-92059-2.

1985

  • (Berry & Fristedt, 1985) ⇒ Donald A. Berry, and Bert Fristedt. (1985). “Bandit Problems: Sequential allocation of experiments." Chapman & Hall, ISBN 0-412-24810-7.

1952

  • (Robbins, 1952) ⇒ Herbert Robbins. (1952). “Some Aspects of the Sequential Design of Experiments." In: Bulletin of the American Mathematical Society, 58(5).
