2018 GeneralizedLearningwithReservoirComputing

From GM-RKB

Subject Headings: Reservoir Computing

Notes

Cited By

Quotes

Abstract

We investigate how machine learning schemes known as Reservoir Computers (RCs) learn concepts such as 'similar' and 'different', and other relationships between pairs of inputs, and generalize these concepts to previously unseen types of data. RCs work by feeding input data into a high-dimensional dynamical system of neuron-like units called a 'reservoir' and using regression to train 'output weights' to produce the desired response. We study two RC architectures that broadly resemble neural dynamics. We show that an RC trained to identify relationships between image pairs drawn from a subset of handwritten digits (0-5) from the MNIST database generalizes the learned relationships to images of handwritten digits (6-9) unseen during training. We consider simple relationships between the input image pair such as: same digits (digits from the same class), same digits but one rotated 90 degrees, same digits but one blurred, different digits, etc. In this dataset, digits that are marked 'same' may still vary substantially because they come from different handwriting samples. Additionally, using a database of depth maps of images taken from a moving camera, we show that an RC trained to learn relationships such as 'similar' (e.g., same scene, different camera perspectives) and 'different' (different scenes) is able to generalize its learning to visual scenes that are very different from those used in training. An RC, being a dynamical system, lends itself to easy interpretation through clustering and analysis of the underlying dynamics that allow for generalization. We show that in response to different inputs, the high-dimensional reservoir state can reach different attractors (i.e., patterns), with different attractors representative of corresponding input-pair relationships.
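The abstract's description of an RC — input fed into a high-dimensional dynamical system of neuron-like units, with only the 'output weights' trained by regression — can be illustrated with a minimal echo-state-style sketch. All dimensions, scalings, and data below are hypothetical placeholders, not the authors' actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: input dimension, reservoir size, sequence length.
n_in, n_res, T = 28, 100, 200

# Random, fixed input and recurrent weights; W is rescaled so its
# spectral radius is below 1, a common choice for stable dynamics.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(U):
    """Drive the reservoir with an input sequence U (T x n_in)."""
    r = np.zeros(n_res)
    states = np.empty((len(U), n_res))
    for t, u in enumerate(U):
        r = np.tanh(W @ r + W_in @ u)  # nonlinear reservoir update
        states[t] = r
    return states

# Only the readout is trained, here by ridge regression on the
# collected reservoir states (placeholder inputs and targets).
U = rng.standard_normal((T, n_in))
Y = rng.integers(0, 2, (T, 1)).astype(float)
R = run_reservoir(U)
beta = 1e-6
W_out = np.linalg.solve(R.T @ R + beta * np.eye(n_res), R.T @ Y)
pred = R @ W_out
```

The key design choice the abstract highlights is that the reservoir weights (`W`, `W_in`) stay fixed; learning reduces to a linear regression for `W_out`, which is why RCs can train on far fewer examples than fully trained deep networks.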
We investigate the attractor structure by clustering the high-dimensional reservoir states using dimensionality reduction techniques such as Principal Component Analysis (PCA). Thus, as opposed to training on the entire high-dimensional reservoir state, the reservoir only needs to learn these attractors (patterns), allowing it to perform well with very few training examples compared to conventional machine learning techniques such as deep learning. We find that RCs can identify and generalize not only linear and nonlinear relationships but also combinations of relationships, providing robust and effective image-pair classification. We find that RCs perform significantly better than state-of-the-art neural network classification techniques, such as convolutional and deep Siamese Neural Networks (SNNs), in generalization tasks on both the MNIST dataset and scenes from a moving-camera dataset. Using small datasets, our work helps bridge the gap between explainable machine learning and biologically inspired learning through analogies, and points to new directions in the investigation of learning processes.
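The PCA-based clustering of reservoir states described above can be sketched as follows. The recorded states here are random placeholders standing in for the paper's reservoir trajectories; the projection to the top principal components is the standard SVD construction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder for recorded reservoir states, one row per time step
# (in the paper these would come from driving the reservoir with
# image pairs of a given relationship category).
states = rng.standard_normal((200, 100))

# PCA via SVD of the mean-centered state matrix.
X = states - states.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
components = Vt[:2]            # top two principal directions
projected = X @ components.T   # reservoir states in 2-D PCA space
var_ratio = S**2 / np.sum(S**2)  # variance explained per component
```

Plotting `projected` for states driven by different input-pair relationships is one way to visualize whether trajectories settle into distinct attractor-like clusters, as the abstract describes.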

1 Introduction

2 Data and Methods

(...)

(...) we study the Single Reservoir architecture (Fig. 3(a)). However, there is some evidence that analogy processing involves two steps: 1) the brain generates individual mental representations of the different inputs, and 2) the brain maps between them based on their structural similarity, or relationship [29] (...)


Figure 3: (a) Reservoir architecture with the input state of the two images at time [math]\displaystyle{ t }[/math] denoted by [math]\displaystyle{ \vec{u}(t) }[/math], the reservoir state at a single time by [math]\displaystyle{ \vec{r}(t) }[/math], and the output state by [math]\displaystyle{ \vec{y}(t) }[/math]. (b) One image pair from the 'rotated 90°' category of the MNIST dataset, split vertically and fed into the reservoir in columns of 1-pixel width (drawn wider here for ease of visualization).
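The column-wise input scheme in Figure 3(b) — each image of the pair split vertically and fed in one pixel-column per time step — can be sketched as below. The 28×28 image size matches MNIST; the stacking order of the two images within each input vector is an assumption for illustration:

```python
import numpy as np

# Hypothetical 28x28 grayscale digit images forming one input pair.
img_a = np.zeros((28, 28))
img_b = np.zeros((28, 28))

# Following the figure: at time step t the reservoir receives the
# t-th 1-pixel-wide column of each image, concatenated into one
# input vector, so the pair becomes a 28-step sequence of
# 56-dimensional inputs u(t).
seq = np.stack(
    [np.concatenate([img_a[:, t], img_b[:, t]]) for t in range(28)]
)
```

Each row of `seq` would serve as one input vector [math]\displaystyle{ \vec{u}(t) }[/math] driving the reservoir.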

3 Results

4 Conclusion

References


Michelle Girvan, Sanjukta Krishnagopal, and Yiannis Aloimonos (2018). "Generalized Learning with Reservoir Computing."