Artificial Neural Network (ANN)


An Artificial Neural Network (ANN) is a neural network composed of artificial neurons and artificial neural connections.



References

2018

2017a

2017b

  • (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Artificial_neural_network Retrieved:2017-12-17.
    • Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve performance on) tasks by considering examples, generally without task-specific programming.

      For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as "cat" or "no cat" and using the results to identify cats in other images. They do this without any prior knowledge about cats, e.g., that they have fur, tails, whiskers and cat-like faces. Instead, they evolve their own set of relevant characteristics from the learning material that they process.

      An ANN is based on a collection of connected units or nodes called artificial neurons (analogous to biological neurons in an animal brain). Each connection (synapse) between neurons can transmit a signal from one to another. The receiving (postsynaptic) neuron can process the signal(s) and then signal neurons connected to it.

      In common ANN implementations, the synapse signal is a real number, and the output of each neuron is calculated by a non-linear function of the sum of its inputs. Neurons and synapses typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal that it sends across the synapse. Neurons may have a threshold such that only if the aggregate signal crosses that threshold is the signal sent.

      Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input), to the last (output) layer, possibly after traversing the layers multiple times.

      The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology.

      Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis.
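
The neuron behaviour described in the quoted passage (a non-linear function applied to a weighted sum of inputs, optionally gated by a threshold) can be illustrated with a small sketch. This is a minimal Python/NumPy example, not code from the quoted source; the specific weights, bias, and sigmoid activation are assumptions chosen for illustration.

    import numpy as np

    def artificial_neuron(inputs, weights, bias):
        # Aggregate incoming signals as a weighted sum, then apply a
        # non-linear (sigmoid) activation, as described in the quote above.
        total = np.dot(weights, inputs) + bias
        return 1.0 / (1.0 + np.exp(-total))

    # Three incoming connections with assumed example weights.
    x = np.array([0.5, -1.0, 2.0])
    w = np.array([0.8, 0.2, -0.5])
    print(artificial_neuron(x, w, bias=0.1))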

2015a

2015b

2015c

2009

2005

Feedforward neural networks, whose typical example is the single-layer perceptron, consist of neurons arranged in layers. Information flows in one direction only. Neurons in a layer are connected only to neurons in the preceding layer. Multi-layer networks usually consist of an input layer, one or more hidden layers, and an output layer. Such a system may be treated as a non-linear function approximation block: [math]\displaystyle{ y = f(u) }[/math].
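
A minimal sketch of such a feedforward block, treated as a non-linear mapping [math]\displaystyle{ y = f(u) }[/math], is given below; the layer sizes, random weights, and tanh activation are assumptions made purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed layer sizes: 3 inputs -> 5 hidden units -> 2 outputs.
    W1, b1 = rng.standard_normal((5, 3)), np.zeros(5)
    W2, b2 = rng.standard_normal((2, 5)), np.zeros(2)

    def feedforward(u):
        # One forward pass: input layer -> hidden layer -> output layer.
        h = np.tanh(W1 @ u + b1)   # hidden layer with non-linear activation
        return W2 @ h + b2         # output layer: the network computes y = f(u)

    print(feedforward(np.array([1.0, -0.5, 0.2])))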

Recurrent neural networks. Such networks contain at least one feedback loop: the output signals of a layer are connected back to its inputs. This causes dynamic effects during network operation. The input signals of such a layer consist of the external inputs and the layer's output states from the previous step.
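
The feedback described above can be sketched as a layer whose input at each step combines the external input with the layer's own output from the previous step. This is an illustrative simple-recurrent (Elman-style) update in Python/NumPy, not the source's exact formulation; the dimensions and tanh activation are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_state = 2, 4
    W_in = rng.standard_normal((n_state, n_in))      # weights on external inputs
    W_rec = rng.standard_normal((n_state, n_state))  # weights on fed-back outputs

    def recurrent_step(u, prev_output):
        # The new layer output depends on the current input and on the
        # layer's previous output, i.e. the feedback loop.
        return np.tanh(W_in @ u + W_rec @ prev_output)

    state = np.zeros(n_state)
    for u in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
        state = recurrent_step(u, state)  # dynamic effect: state carries over between steps
    print(state)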

Cellular networks. In this type of neural network, neurons are arranged in a lattice. Connections (usually non-linear) may appear between the closest neurons. The typical example of such a network is the Kohonen Self-Organising Map.
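
As an illustration of the lattice arrangement, a Kohonen Self-Organising Map update step can be sketched as below; the grid size, learning rate, and Gaussian neighbourhood function are assumptions chosen only for the example.

    import numpy as np

    rng = np.random.default_rng(2)
    grid, dim = (5, 5), 3                    # assumed 5x5 lattice of 3-D weight vectors
    weights = rng.random(grid + (dim,))
    coords = np.argwhere(np.ones(grid))      # lattice coordinates of each neuron

    def som_update(x, lr=0.1, sigma=1.0):
        # Move the best-matching unit and its lattice neighbours towards input x.
        flat = weights.reshape(-1, dim)                       # view onto the weight lattice
        bmu = np.argmin(np.linalg.norm(flat - x, axis=1))     # best-matching unit
        dist = np.linalg.norm(coords - coords[bmu], axis=1)   # lattice distance to the BMU
        h = np.exp(-(dist ** 2) / (2 * sigma ** 2))           # Gaussian neighbourhood
        flat += lr * h[:, None] * (x - flat)                  # pull neighbours towards x

    som_update(rng.random(dim))
    print(weights[0, 0])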


2000

  • (Valpola, 2000) ⇒ Harri Valpola. (2000). “Bayesian Ensemble Learning for Nonlinear Factor Analysis.” PhD Dissertation, Helsinki University of Technology.
    • QUOTE: artificial neural network: A model which consists of simple building-blocks. The development of such models has been inspired by neurobiological findings. The building-blocks are termed neurons in analogy to biological brain.

1999

  • (Zaiane, 1999) ⇒ Osmar Zaiane. (1999). “Glossary of Data Mining Terms.” University of Alberta, Computing Science CMPUT-690: Principles of Knowledge Discovery in Databases.
    • QUOTE: Artificial Neural Networks: Non-linear predictive models that learn through training and resemble biological neural networks in structure.