1x1 Convolutional Layer
- It can shrink (or expand) the number of channels of an input volume while leaving its width and height unchanged.
- See: ConvNet, Inception Network.
- (Ng, 2016) ⇒ Andrew Ng. (2016). “Neural Networks - Networks in Networks and 1x1 Convolutions.” YouTube Lecture.
- QUOTE: A 1x1 convolution matrix essentially squashes the depth dimension of an input volume (W x H x D) leaving its width and height intact (W x H x 1). An n x n convolution matrix, in comparison, not only squashes the depth dimension of an input volume - it could, based on its dimensions, input volume dimensions and pad values, alter the width and height of the input volume too.
- (Lin et al., 2013) ⇒ Min Lin, Qiang Chen, and Shuicheng Yan. (2013). “Network in Network.” arXiv preprint arXiv:1312.4400
- QUOTE: ... The cross channel parametric pooling layer is also equivalent to a convolution layer with 1x1 convolution kernel. ...
- OpenReview discussion: https://openreview.net/forum?id=ylE6yojDR5yqX&noteId=ylE6yojDR5yqX
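As the Lin et al. quote notes, a 1x1 convolution is equivalent to applying the same linear map across channels at every spatial position. A minimal NumPy sketch of this equivalence (the function name, shapes, and weight layout are illustrative assumptions, not from the sources above):

```python
import numpy as np

def conv1x1(x, w):
    """Apply a 1x1 convolution as a per-pixel linear map over channels.

    x: input volume of shape (H, W, C_in)
    w: filter weights of shape (C_in, C_out), one column per 1x1 filter
    Returns a volume of shape (H, W, C_out).
    """
    # A 1x1 convolution is just a matrix multiply applied independently
    # at every spatial position; tensordot contracts the channel axis.
    return np.tensordot(x, w, axes=([2], [0]))

# Example (shapes chosen for illustration): shrink a 28x28x192 volume
# to 28x28x32 by convolving with 32 filters of size 1x1x192.
rng = np.random.default_rng(0)
x = rng.standard_normal((28, 28, 192))
w = rng.standard_normal((192, 32))
y = conv1x1(x, w)
print(y.shape)  # (28, 28, 32)
```

The width and height are untouched; only the depth changes from 192 to 32, which is how the Inception Network uses 1x1 convolutions to cut computation before larger filters.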