Neural Network Forward Pass


A Neural Network Forward Pass is a Forward Propagation Algorithm that feeds input values into the neural network and computes the network's predicted output values.



References

2018a

import numpy as np

# forward pass of a 3-layer neural network:
f = lambda x: 1.0 / (1.0 + np.exp(-x))  # activation function (sigmoid)
x = np.random.randn(3, 1)  # random input vector of three numbers (3x1)
W1, b1 = np.random.randn(4, 3), np.random.randn(4, 1)  # first hidden layer parameters
W2, b2 = np.random.randn(4, 4), np.random.randn(4, 1)  # second hidden layer parameters
W3, b3 = np.random.randn(1, 4), np.random.randn(1, 1)  # output layer parameters
h1 = f(np.dot(W1, x) + b1)  # calculate first hidden layer activations (4x1)
h2 = f(np.dot(W2, h1) + b2)  # calculate second hidden layer activations (4x1)
out = np.dot(W3, h2) + b3  # output neuron (1x1)

In the above code, W1, W2, W3, b1, b2, b3 are the learnable parameters of the network; in practice they are set by training rather than drawn at random.
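Because np.dot generalizes from vectors to matrices, the same three lines also evaluate the network on a whole batch of inputs at once. A minimal sketch, assuming the definitions above and a hypothetical batch of 5 inputs stored as columns:

X = np.random.randn(3, 5)    # 5 input vectors stacked as columns (3x5)
H1 = f(np.dot(W1, X) + b1)   # first hidden layer for the whole batch (4x5)
H2 = f(np.dot(W2, H1) + b2)  # second hidden layer (4x5)
Out = np.dot(W3, H2) + b3    # one output per input (1x5)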

2018b

initialize network weights (often small random values)
do
    forEach training example named ex
        prediction = neural-net-output(network, ex)  // forward pass
        actual = teacher-output(ex)
        compute error (prediction - actual) at the output units
        compute Δw_h for all weights from hidden layer to output layer  // backward pass
        compute Δw_i for all weights from input layer to hidden layer   // backward pass continued
        update network weights  // input layer not modified by error estimate
until all examples classified correctly or another stopping criterion satisfied
return the network

The lines labeled "backward pass" can be implemented using the backpropagation algorithm, which calculates the gradient of the error of the network with respect to the network's modifiable weights.[1]
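As one concrete reading of this pseudocode, here is a minimal NumPy sketch of the loop for a one-hidden-layer network with sigmoid units and a squared-error objective. All sizes, the toy training set, and the learning rate are illustrative assumptions, not part of the quoted source:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), np.zeros((4, 1))  # input -> hidden
W2, b2 = rng.standard_normal((1, 4)), np.zeros((1, 1))  # hidden -> output
lr = 0.1  # learning rate (assumed)

# toy training set: 8 random inputs with arbitrary 0/1 targets (illustrative only)
examples = [(rng.standard_normal((3, 1)), rng.integers(0, 2)) for _ in range(8)]

for epoch in range(1000):                  # stopping criterion: fixed epoch budget
    for x, target in examples:
        # forward pass
        h = sigmoid(W1 @ x + b1)           # hidden activations (4x1)
        prediction = sigmoid(W2 @ h + b2)  # output (1x1)
        # error (prediction - actual) at the output unit
        error = prediction - target
        # backward pass: deltas for output and hidden layers
        delta_out = error * prediction * (1 - prediction)  # (1x1)
        delta_hid = (W2.T @ delta_out) * h * (1 - h)       # (4x1)
        # update network weights (gradient descent step)
        W2 -= lr * delta_out @ h.T
        b2 -= lr * delta_out
        W1 -= lr * delta_hid @ x.T
        b1 -= lr * delta_hid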

2018c

  • (Raven Protocol, 2018) ⇒ "Forward propagation" In: https://medium.com/ravenprotocol/everything-you-need-to-know-about-neural-networks-6fcc7a15cb4 Retrieved: 2018-07-15
    • QUOTE: Forward propagation is the process of feeding input values to the neural network and getting an output, which we call the predicted value. Sometimes we refer to forward propagation as inference. When we feed the input values to the neural network's first layer, they pass through without any operations. The second layer takes values from the first layer, applies multiplication, addition and activation operations, and passes the resulting value to the next layer. The same process repeats for subsequent layers, and finally we get an output value from the last layer.

      (Figure: Forward Propagation)
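The layer-by-layer flow described in the quote can be sketched as a simple loop. This is a minimal illustration; the layer sizes and the sigmoid activation are assumptions, not taken from the quoted article:

import numpy as np

def forward(x, layers, f=lambda z: 1.0 / (1.0 + np.exp(-z))):
    """Feed x through each (W, b) pair in turn: multiply, add, activate."""
    for W, b in layers:
        x = f(W @ x + b)
    return x

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), np.zeros((4, 1))),  # layer 1
          (rng.standard_normal((2, 4)), np.zeros((2, 1)))]  # layer 2
print(forward(rng.standard_normal((3, 1)), layers))         # predicted values (2x1)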

1997

1) FORWARD PASS
   Run all input data for one time slice 1 < t ≤ T through the BRNN and determine all predicted outputs.
   a) Do forward pass just for forward states (from t = 1 to t = T) and backward states (from t = T to t = 1).
   b) Do forward pass for output neurons.
2) BACKWARD PASS
   Calculate the part of the objective function derivative for the time slice 1 < t ≤ T used in the forward pass.
   a) Do backward pass for output neurons.
   b) Do backward pass just for forward states (from t = T to t = 1) and backward states (from t = 1 to t = T).
3) UPDATE WEIGHTS
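A minimal NumPy sketch of step 1 only, the BRNN forward pass: forward states run left to right, backward states run right to left, and the output neurons combine both per time slice. All shapes, weight names, and the tanh activation are illustrative assumptions, not taken from the 1997 source:

import numpy as np

rng = np.random.default_rng(0)
T, d_in, d_h = 6, 3, 4              # sequence length, input size, state size
X = rng.standard_normal((T, d_in))  # input data for time slices t = 1..T

Wf, Uf = rng.standard_normal((d_h, d_in)), rng.standard_normal((d_h, d_h))  # forward states
Wb, Ub = rng.standard_normal((d_h, d_in)), rng.standard_normal((d_h, d_h))  # backward states
Wo = rng.standard_normal((1, 2 * d_h))                                      # output neurons

# 1a) forward pass for forward states (t = 1..T) and backward states (t = T..1)
hf = np.zeros((T, d_h))
hb = np.zeros((T, d_h))
for t in range(T):
    prev = hf[t - 1] if t > 0 else np.zeros(d_h)
    hf[t] = np.tanh(Wf @ X[t] + Uf @ prev)
for t in reversed(range(T)):
    nxt = hb[t + 1] if t < T - 1 else np.zeros(d_h)
    hb[t] = np.tanh(Wb @ X[t] + Ub @ nxt)

# 1b) forward pass for output neurons: combine both state sequences per time slice
outputs = np.array([Wo @ np.concatenate([hf[t], hb[t]]) for t in range(T)])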

  1. Paul J. Werbos (1994). The Roots of Backpropagation: From Ordered Derivatives to Neural Networks and Political Forecasting. New York, NY: John Wiley & Sons.