Q-Learning Algorithm
From GM-RKB
A Q-Learning Algorithm is a model-free reinforcement learning algorithm that searches for an optimal action-selection policy for any given finite Markov decision process.
- See: Action-Value Function, Reinforcement Learning, Markov Decision Process, Temporal Difference Learning
References
2016
- (Wikipedia, 2016) ⇒ http://wikipedia.org/wiki/Q-learning Retrieved:2016-3-31.
- Q-learning is a model-free reinforcement learning technique. Specifically, Q-learning can be used to find an optimal action-selection policy for any given (finite) Markov decision process (MDP). It works by learning an action-value function that ultimately gives the expected utility of taking a given action in a given state and following the optimal policy thereafter. A policy is a rule that the agent follows in selecting actions, given the state it is in. When such an action-value function is learned, the optimal policy can be constructed by simply selecting the action with the highest value in each state. One of the strengths of Q-learning is that it is able to compare the expected utility of the available actions without requiring a model of the environment. Additionally, Q-learning can handle problems with stochastic transitions and rewards, without requiring any adaptations. It has been proven that for any finite MDP, Q-learning eventually finds an optimal policy, in the sense that the expected value of the total reward over all successive steps, starting from the current state, is the maximum achievable.
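The action-value learning described above can be sketched as tabular Q-learning on a hypothetical toy MDP. The environment below (a five-state chain where only the rightmost state yields reward), the hyperparameters, and the function name `q_learning` are illustrative assumptions, not part of the original article; the update line is the standard Q-learning temporal-difference rule.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def q_learning(n_states=5, n_actions=2, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning on a hypothetical 1-D chain MDP:
    states 0..n_states-1; action 0 moves left, action 1 moves right;
    reward 1.0 only on reaching the rightmost (terminal) state."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = random.randrange(n_states - 1)  # random non-terminal start state
        while s != n_states - 1:
            # epsilon-greedy action selection from the current Q estimates
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            # deterministic toy transitions; real MDPs may be stochastic
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: model-free, off-policy temporal difference
            # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
# Extract the greedy policy: pick the highest-valued action in each state,
# as the quoted passage describes; "right" (1) is optimal everywhere here.
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(4)]
```

Note that no transition or reward model is consulted when choosing actions: the agent compares learned Q-values only, which is the model-free property the passage emphasizes.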
2011
- (Peter Stone, 2011a) ⇒ Peter Stone. (2011). "Q-Learning." In: (Sammut & Webb, 2011) p. 819