Convolutional Autoencoder


A Convolutional Autoencoder is an Autoencoder whose encoder and decoder are built from convolutional network layers (typically convolutional layers in the encoder and transposed-convolutional layers in the decoder), so that it can learn image features in an unsupervised manner.
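
As a minimal sketch of this definition (assuming PyTorch and 28x28 grayscale inputs; both choices are illustrative, not part of the definition), the encoder compresses the image with convolutional layers and the decoder reconstructs it with transposed-convolutional layers, trained to minimize reconstruction error:

    import torch
    import torch.nn as nn

    class ConvAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            # Encoder: stride-2 convolutions halve the spatial resolution (28 -> 14 -> 7).
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Decoder: transposed convolutions restore the resolution (7 -> 14 -> 28).
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # Train by minimizing the reconstruction error on a dummy batch.
    model = ConvAutoencoder()
    x = torch.rand(8, 1, 28, 28)
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()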



References

2017b

  • (Guo et al., 2017) ⇒ Xifeng Guo, Xinwang Liu, En Zhu, and Jianping Yin. (2017). “Deep Clustering with Convolutional Autoencoders.” In: International Conference on Neural Information Processing, pp. 373-382. Springer, Cham.
    • QUOTE: ... The structure of proposed Convolutional AutoEncoders (CAE) for MNIST. In the middle there is a fully connected autoencoder whose embedded layer is composed of only 10 neurons. The rest are convolutional layers and convolutional transpose layers (some work refers to as Deconvolutional layer). The network can be trained directly in an end-to-end manner.
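
The structure described in the quote above can be sketched as follows, assuming PyTorch; the layer widths and MNIST-sized 28x28 inputs are illustrative choices, not the exact configuration of Guo et al. (2017). It shows convolutional layers, a fully connected embedded layer of only 10 neurons in the middle, transposed-convolutional layers for reconstruction, and end-to-end training on the reconstruction loss:

    import torch
    import torch.nn as nn

    class CAE(nn.Module):
        def __init__(self, embedded_dim=10):
            super().__init__()
            # Convolutional encoder: 28x28 -> 14x14 -> 7x7 feature maps.
            self.conv = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Fully connected autoencoder in the middle; the embedded layer has only 10 neurons.
            self.embed = nn.Linear(64 * 7 * 7, embedded_dim)
            self.unembed = nn.Linear(embedded_dim, 64 * 7 * 7)
            # Convolutional-transpose (deconvolutional) decoder: 7x7 -> 14x14 -> 28x28.
            self.deconv = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            h = self.conv(x).flatten(1)
            z = self.embed(h)                        # 10-dimensional embedded representation
            h = self.unembed(z).view(-1, 64, 7, 7)
            return self.deconv(h), z

    # End-to-end training step on a dummy MNIST-sized batch.
    model = CAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(16, 1, 28, 28)
    recon, z = model(x)
    loss = nn.functional.mse_loss(recon, x)          # reconstruction objective
    opt.zero_grad(); loss.backward(); opt.step()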

2017d

  • (Chen et al., 2017) ⇒ Min Chen, Xiaobo Shi, Yin Zhang, Di Wu, and Mohsen Guizani. (2017). “Deep Features Learning for Medical Image Analysis with Convolutional Autoencoder Neural Network.” In: IEEE Transactions on Big Data.
    • ABSTRACT: At present, computed tomography (CT) is widely used to assist diagnosis. In particular, computer-aided diagnosis (CAD) based on artificial intelligence (AI) is an extremely important research field in intelligent healthcare. However, it is a great challenge to establish an adequate labeled dataset for CT analysis assistance, due to privacy and security issues. Therefore, this paper proposes a convolutional autoencoder deep learning framework to support unsupervised image feature learning for lung nodules from unlabeled data, which needs only a small amount of labeled data for efficient feature learning. Through comprehensive experiments, it shows that the proposed scheme is superior to other approaches, effectively solving the intrinsically labor-intensive problem of artificial image labeling. Moreover, it verifies that the proposed convolutional autoencoder approach can be extended to similarity measurement of lung nodule images. In particular, the features extracted through unsupervised learning are also applicable in other related scenarios.
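
The general idea of using a convolutional autoencoder for feature learning with few labels can be sketched as follows (assuming PyTorch; the encoder layout, shapes, and the two-class classifier are illustrative, not Chen et al.'s exact framework): the encoder is trained on unlabeled images, frozen, and reused as a feature extractor for a small classifier fitted on the few available labeled examples:

    import torch
    import torch.nn as nn

    # Encoder assumed to come from a convolutional autoencoder already trained on unlabeled images.
    encoder = nn.Sequential(
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    )
    for p in encoder.parameters():
        p.requires_grad = False                    # freeze the unsupervised features

    # Small classifier fitted on a few labeled examples (e.g. nodule vs. non-nodule).
    classifier = nn.Sequential(nn.Flatten(), nn.Linear(32 * 7 * 7, 2))
    opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)

    labeled_x = torch.rand(4, 1, 28, 28)           # small labeled batch (illustrative shapes)
    labeled_y = torch.tensor([0, 1, 0, 1])
    logits = classifier(encoder(labeled_x))
    loss = nn.functional.cross_entropy(logits, labeled_y)
    opt.zero_grad(); loss.backward(); opt.step()   # only the classifier is updated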

2011

  • (Masci et al., 2011) ⇒ Jonathan Masci, Ueli Meier, Dan Cireşan, and Jürgen Schmidhuber. (2011). “Stacked Convolutional Auto-encoders for Hierarchical Feature Extraction.” In: International Conference on Artificial Neural Networks (ICANN 2011).
    • ABSTRACT: We present a novel convolutional auto-encoder (CAE) for unsupervised feature learning. A stack of CAEs forms a convolutional neural network (CNN). Each CAE is trained using conventional on-line gradient descent without additional regularization terms. A max-pooling layer is essential to learn biologically plausible features consistent with those found by previous approaches. Initializing a CNN with filters of a trained CAE stack yields superior performance on a digit (MNIST) and an object recognition (CIFAR10) benchmark.
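
The initialization scheme described in the abstract can be sketched as follows, assuming PyTorch; the single CAE with a max-pooling encoder and an upsampling decoder is an illustrative simplification of Masci et al.'s layer-wise stacking procedure. A CAE is trained on unlabeled data, and its learned filters are then copied into the first convolutional layer of a CNN classifier before supervised training:

    import torch
    import torch.nn as nn

    # Single CAE with a max-pooling layer in the encoder.
    encoder = nn.Sequential(
        nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(),
        nn.MaxPool2d(2),                              # 28 -> 14
    )
    decoder = nn.Sequential(
        nn.Upsample(scale_factor=2),                  # 14 -> 28
        nn.Conv2d(16, 1, 5, padding=2), nn.Sigmoid(),
    )

    x = torch.rand(8, 1, 28, 28)                      # unlabeled digit images
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    loss = nn.functional.mse_loss(decoder(encoder(x)), x)
    opt.zero_grad(); loss.backward(); opt.step()      # unsupervised reconstruction training

    # CNN classifier whose first convolution is initialized from the trained CAE filters.
    cnn = nn.Sequential(
        nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(16 * 14 * 14, 10),
    )
    cnn[0].load_state_dict(encoder[0].state_dict())   # copy pretrained filters before fine-tuning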
