# Neural Network Model (NNet) Training Algorithm

A Neural Network Model (NNet) Training Algorithm is a model-based training algorithm whose learning metamodel is a neural network metamodel, and which can be implemented by a neural network training system (to solve a neural network training task).

**Context:**
- It can (typically) require Training Weights as input.
- It can (typically) require Training Epochs as input.
- It can be a member of a Neural Network Framework.
- It can range from being a Supervised Neural Network Training Algorithm to being an Unsupervised Neural Network Training Algorithm.
- It can range from being a Feed-Forward Network Training Algorithm to being a Radial Basis Function Network Training Algorithm.
- It can range from being a Single-Layer ANN Training Algorithm to being a Multi-Layer ANN Training Algorithm (e.g. a deep NNet training algorithm).
- It can range from being a Regularized ANN Training Algorithm to being an Unregularized ANN Training Algorithm.
- It can follow the sequence:
  1. `While the "not done" heuristic is true:`
  2. `pick a random training example` [math]\displaystyle{ (x, y) }[/math].
  3. `create the input feature vector` [math]\displaystyle{ \hat{x} }[/math].
  4. `create the target vector` [math]\displaystyle{ \hat{y} }[/math] `for label` [math]\displaystyle{ y }[/math] (often a one-hot coding for classification).
  5. `compute the output vector` (by performing a forward pass).
  6. `update the weights using the weight-update heuristic` [math]\displaystyle{ h }[/math], to hopefully bring a future output vector [math]\displaystyle{ \hat{y}^* }[/math] closer to the target vector [math]\displaystyle{ \hat{y} }[/math].

- ...
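The training sequence above can be sketched as a minimal stochastic-gradient-descent loop. This is an illustrative example, not a definitive implementation: it assumes a single sigmoid unit trained on a squared-error loss, a fixed epoch budget as the "not done" heuristic, and the helper names (`forward`, `train`) are invented for this sketch.

```python
# A minimal sketch of the NNet training loop, assuming a single-layer
# (one sigmoid unit) network, squared-error loss, and SGD as the
# weight-update heuristic h. Names here are illustrative, not canonical.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(weights, x):
    # Forward pass: weighted sum of the input feature vector through a sigmoid unit.
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)))

def train(examples, n_inputs, epochs=10000, lr=1.0):
    # Training Weights input: small random initial values.
    weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    for _ in range(epochs):                       # "not done" heuristic: fixed epoch budget
        x, y = random.choice(examples)            # pick a random training example (x, y)
        y_hat = forward(weights, x)               # compute the output vector (forward pass)
        grad = (y_hat - y) * y_hat * (1 - y_hat)  # error signal: squared error + sigmoid derivative
        weights = [w - lr * grad * xi             # weight-update heuristic h (one SGD step)
                   for w, xi in zip(weights, x)]
    return weights

# Usage: learn logical AND; the third input is a bias fixed at 1.0.
random.seed(0)
data = [([0, 0, 1.0], 0), ([0, 1, 1.0], 0), ([1, 0, 1.0], 0), ([1, 1, 1.0], 1)]
w = train(data, n_inputs=3)
```

After training, `forward(w, x)` should output a value above 0.5 only for the (1, 1) input; a multi-layer version would repeat the same loop with backpropagation providing the per-layer error signals.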

**Example(s):**
- ...

**Counter-Example(s):**
- ...

**See:** Stochastic Gradient Descent, Self-Organizing Map.

## References

### 2016

- (Krakovsky, 2016) ⇒ Marina Krakovsky. (2016). “Reinforcement Renaissance.” In: Communications of the ACM, 59(8). doi:10.1145/2949662
- QUOTE: The two types of learning — reinforcement learning and deep learning through deep neural networks — complement each other beautifully, says Sutton. “Deep learning is the greatest thing since sliced bread, but it quickly becomes limited by the data,” he explains. “If we can use reinforcement learning to automatically generate data, even if the data is more weakly labeled than having humans go in and label everything, there can be much more of it because we can generate it automatically, so these two together really fit well.” Despite the buzz around DeepMind, combining reinforcement learning with neural networks is not new. TD-Gammon, a backgammon-playing program developed by IBM's Gerald Tesauro in 1992, was a neural network that learned to play backgammon through reinforcement learning (the TD in the name stands for Temporal-Difference learning, still a dominant algorithm in reinforcement learning). “Back then, computers were 10,000 times slower per dollar, which meant you couldn't have very deep networks because those are harder to train …” “Deep reinforcement learning is just a buzzword for traditional reinforcement learning combined with deeper neural networks,” he says.

### 2011

- (Sammut & Webb, 2011) ⇒ Claude Sammut, and Geoffrey I. Webb. (2011). “Neural Networks.” In: Encyclopedia of Machine Learning. Springer.

### 2006

- (Bishop, 2006) ⇒ Christopher M. Bishop. (2006). “Pattern Recognition and Machine Learning.” Springer, Information Science and Statistics.

### 1995

- (Bishop, 1995) ⇒ Christopher M. Bishop. (1995). “Neural Networks for Pattern Recognition.” Oxford University Press.

### 1988

- (Kohonen, 1988) ⇒ Teuvo Kohonen. (1988). “An Introduction to Neural Computing.” In: Neural Networks, 1(1).

### 1986

- (Rumelhart et al., 1986) ⇒ David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. (1986). “Learning Internal Representations by Error Propagation.” In: Parallel Distributed Processing: Explorations in the Microstructure of Cognition, David E. Rumelhart and J. L. McClelland, eds., vol. 1.