Single-Layer Perceptron Training System

From GM-RKB

A Single-Layer Perceptron Training System is a Single-Layer ANN Training System that implements a Single-Layer Perceptron Algorithm to solve a Single-Layer Perceptron Training Task.



References

2015

  • (Raschka, 2015) ⇒ Sebastian Raschka. (2015). "Python Machine Learning." Packt Publishing.

Here, the output value is the class label predicted by the unit step function that we defined earlier, and the simultaneous update of each weight [math]\displaystyle{ w_j }[/math] in the weight vector [math]\displaystyle{ \mathbf{w} }[/math] can be more formally written as:

[math]\displaystyle{ w_j:= w_j+ \Delta w_j }[/math]

The value of [math]\displaystyle{ \Delta w_j }[/math], which is used to update the weight [math]\displaystyle{ w_j }[/math] , is calculated by the perceptron learning rule:

[math]\displaystyle{ \Delta w_j=\eta \left( y^{(i)}-\hat{y}^{(i)} \right) x_j^{(i)} }[/math]

where [math]\displaystyle{ \eta }[/math] is the learning rate (a constant between 0.0 and 1.0), [math]\displaystyle{ y^{(i)} }[/math] is the true class label of the i-th training sample, and [math]\displaystyle{ \hat{y}^{(i)} }[/math] is the predicted class label (...)
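The update rule above can be sketched in a few lines of NumPy; the function name, the default learning rate, and the toy values are illustrative, not from the text:

```python
import numpy as np

def perceptron_update(w, x_i, y_i, y_hat_i, eta=0.1):
    """Simultaneous update of every weight w_j:
    w_j := w_j + eta * (y_i - y_hat_i) * x_j^(i)."""
    return w + eta * (y_i - y_hat_i) * x_i

# A misclassified sample (y = 1, y_hat = -1) shifts w toward x;
# a correctly classified sample (y == y_hat) leaves w unchanged.
w = np.zeros(3)
x = np.array([1.0, 2.0, -1.0])
w = perceptron_update(w, x, y_i=1, y_hat_i=-1)
```

Note that when the prediction is correct, the factor (y - y_hat) is zero, so all weights stay put; only errors drive learning.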

Now, before we jump into the implementation in the next section, let us summarize what we just learned in a simple figure that illustrates the general concept of the perceptron:

 Raschka Perceptron.png

The preceding figure illustrates how the perceptron receives the inputs of a sample [math]\displaystyle{ x }[/math] and combines them with the weights [math]\displaystyle{ w }[/math] to compute the net input. The net input is then passed on to the activation function (here: the unit step function), which generates a binary output of -1 or +1, the predicted class label of the sample. During the learning phase, this output is used to calculate the error of the prediction and to update the weights.
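The whole cycle described above — net input, unit-step activation, and error-driven weight updates — can be sketched as a minimal training loop. This is an illustrative implementation under the assumption of a separate bias term and linearly separable toy data; the names `predict` and `fit` are not from the source:

```python
import numpy as np

def predict(w, b, x):
    """Net input z = w·x + b, then unit step: +1 if z >= 0, else -1."""
    z = np.dot(w, x) + b
    return 1 if z >= 0.0 else -1

def fit(X, y, eta=0.1, epochs=10):
    """Cycle over the data, applying the perceptron rule to each sample."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            # Error term eta * (y - y_hat); zero when the prediction is right.
            update = eta * (y_i - predict(w, b, x_i))
            w += update * x_i
            b += update
    return w, b

# Toy linearly separable data: class follows the sign of the first feature.
X = np.array([[2.0, 1.0], [1.5, -1.0], [-2.0, 0.5], [-1.0, -1.5]])
y = np.array([1, 1, -1, -1])
w, b = fit(X, y)
```

On linearly separable data like this, the perceptron convergence theorem guarantees the loop eventually classifies every sample correctly.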

1957

  • (Rosenblatt, 1957) ⇒ Rosenblatt, F. (1957). "The perceptron, a perceiving and recognizing automaton (Project Para)". Cornell Aeronautical Laboratory.
    • PREFACE: The work described in this report was supported as a part of the internal research program of the Cornell Aeronautical Laboratory, Inc. The concepts discussed had their origins in some independent research by the author in the field of physiological psychology, in which the aim has been to formulate a brain analogue useful in analysis. This area of research has been of active interest to the author for five or six years. The perceptron concept is a recent product of this research program; the current effort is aimed at establishing the technical and economic feasibility of the perceptron.

1943

  • (McCulloch & Pitts, 1943) ⇒ McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 5(4), 115-133.
    • ABSTRACT: Because of the “all-or-none” character of nervous activity, neural events and the relations among them can be treated by means of propositional logic. It is found that the behavior of every net can be described in these terms, with the addition of more complicated logical means for nets containing circles; and that for any logical expression satisfying certain conditions, one can find a net behaving in the fashion it describes. It is shown that many particular choices among possible neurophysiological assumptions are equivalent, in the sense that for every net behaving under one assumption, there exists another net which behaves under the other and gives the same results, although perhaps not in the same time. Various applications of the calculus are discussed.