Fully-Connected Neural Network Layer


A Fully-Connected Neural Network Layer is a Neural Network Layer in which every artificial neuron (or graph node) is connected to every neuron of the adjacent layers, but not to any neuron within the same layer.
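
A minimal sketch in NumPy (illustrative; the class name FullyConnectedLayer and the weight initialization are assumptions, not taken from the references below) of how such a layer realizes this connectivity:

    import numpy as np

    class FullyConnectedLayer:
        # Dense layer: every unit of the adjacent (previous) layer connects to every unit of this layer.
        def __init__(self, n_inputs, n_outputs):
            # n_inputs * n_outputs weights, one per connection between the two adjacent layers.
            self.W = 0.01 * np.random.randn(n_inputs, n_outputs)
            self.b = np.zeros(n_outputs)

        def forward(self, x):
            # x: activation vector from the previous layer; units within this layer do not connect to each other.
            return x @ self.W + self.b

For example, FullyConnectedLayer(3, 4).forward(np.ones(3)) maps a 3-dimensional input to 4 hidden units, mirroring the architectures described in the quotes below.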



References

2017a

2017b

Left: A 2-layer Neural Network (one hidden layer of 4 neurons (or units) and one output layer with 2 neurons), and three inputs. Right: A 3-layer neural network with three inputs, two hidden layers of 4 neurons each and one output layer. Notice that in both cases there are connections (synapses) between neurons across layers, but not within a layer.
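
As an illustration of what these layer sizes imply (an editorial addition, not part of the quoted caption): the left network has 3·4 + 4·2 = 20 weights and 4 + 2 = 6 biases (26 learnable parameters), while the right network has 3·4 + 4·4 + 4·1 = 32 weights and 4 + 4 + 1 = 9 biases (41 learnable parameters). A short Python sketch with the layer sizes taken from the caption:

    def count_parameters(layer_sizes):
        # Consecutive layers are fully connected: weights = n_in * n_out, plus one bias per output unit.
        weights = sum(n_in * n_out for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
        biases = sum(layer_sizes[1:])
        return weights + biases

    print(count_parameters([3, 4, 2]))     # left network:  20 weights + 6 biases = 26
    print(count_parameters([3, 4, 4, 1]))  # right network: 32 weights + 9 biases = 41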

2017c

  • (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Network_topology#Fully_connected_network Retrieved:2017-12-17.
    • In a fully connected network, all nodes are interconnected. (In graph theory this is called a complete graph.) The simplest fully connected network is a two-node network. A fully connected network doesn't need to use packet switching or broadcasting. However, since the number of connections grows quadratically with the number of nodes, [math]\displaystyle{ c = \frac{n(n-1)}{2}, }[/math] this makes it impractical for large networks. This kind of topology does not trip and affect other nodes in the network.
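
As an illustration of this quadratic growth (an editorial addition, not part of the quoted passage): a fully connected network of [math]\displaystyle{ n = 10 }[/math] nodes requires [math]\displaystyle{ c = \frac{10 \cdot 9}{2} = 45 }[/math] links, while [math]\displaystyle{ n = 100 }[/math] nodes already requires [math]\displaystyle{ \frac{100 \cdot 99}{2} = 4950 }[/math].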