Yann LeCun is a person.
- Professional Homepage: http://yann.lecun.com/
- Google Scholar Author Page: http://scholar.google.com/citations?user=WLN3QrAAAAAJ
- Question: What are the likely AI advancements in the next 5 to 10 years?
- Yann LeCun: There are a number of areas in which people are working hard and making promising advances:
- deep learning combined with reasoning and planning
- deep model-based reinforcement learning (which involves unsupervised predictive learning)
- recurrent neural nets augmented with differentiable memory modules, e.g.:
- Memory Networks (FAIR)
- Stack-Augmented RNN (FAIR)
- Neural Turing Machine (DeepMind)
- End-to-End MemNN (FAIR/NYU)
- and the flurry of follow-up papers.
- generative/predictive models trained with adversarial training
- “differentiable programming”: this is the idea of viewing a program (or a circuit) as a graph of differentiable modules that can be trained with backprop. This points towards the possibility of not just learning to recognize patterns (as with feed-forward neural nets) but to produce algorithms (with loops, recursion, subroutines, etc). There are a few papers on this from DeepMind, FAIR and others, but it’s rather preliminary at the moment.
- Hierarchical planning and hierarchical reinforcement learning: this is the problem of learning to decompose a complex task into simpler subtasks. It seems like a requirement for intelligent systems.
- Learning predictive models of the world in an unsupervised fashion (e.g. video prediction)
- If significant progress is made along these directions in the next few years, we might see the emergence of considerably more intelligent AI agents for dialog systems, question-answering, adaptive robot control and planning, etc.
A big challenge is to devise unsupervised/predictive learning methods that would allow very large-scale neural nets to “learn how the world works” by watching videos, reading textbooks, etc., without requiring explicitly human-annotated data.
This may eventually lead to machines that have learned enough about the world that we see them as having “common sense”.
It may take 5 years, 10 years, 20 years, or more. We don’t really know.
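The memory-augmented models listed above (Memory Networks, Neural Turing Machines, End-to-End MemNN) all share a differentiable read operation: a soft attention over memory slots. A minimal numpy sketch of that operation (the `attend` function and its dot-product scoring are simplified assumptions for illustration, not the exact FAIR or DeepMind formulations):

```python
import numpy as np

def attend(query, memory):
    """Differentiable soft read over memory slots.

    query:  vector of shape (d,)
    memory: matrix of shape (n, d), one row per slot
    Returns a convex combination of the slots, weighted by a
    softmax over dot-product match scores.
    """
    scores = memory @ query      # match score per slot, shape (n,)
    scores = scores - scores.max()  # numerical stabilization for the softmax
    weights = np.exp(scores)
    weights = weights / weights.sum()  # attention distribution (sums to 1)
    return weights @ memory      # soft read, shape (d,)
```

Because every step is differentiable, gradients flow through the read back into whatever produced the query, which is what allows these models to be trained end-to-end with backprop.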
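The “differentiable programming” idea, a program viewed as a graph of differentiable modules trained with backprop, can be sketched in a few lines of numpy. The module classes, the chain-shaped “program,” and the toy regression task below are my own illustrative assumptions, not any specific published system:

```python
import numpy as np

class Linear:
    """A differentiable module: y = x @ W."""
    def __init__(self, n_in, n_out, rng):
        self.W = rng.standard_normal((n_in, n_out)) * 0.1
    def forward(self, x):
        self.x = x
        return x @ self.W
    def backward(self, grad, lr):
        grad_x = grad @ self.W.T           # gradient w.r.t. input (pre-update W)
        self.W -= lr * (self.x.T @ grad)   # SGD step on this module's weights
        return grad_x

class ReLU:
    """A differentiable module with no parameters."""
    def forward(self, x):
        self.mask = x > 0
        return x * self.mask
    def backward(self, grad, lr):
        return grad * self.mask

def run_program(program, x):
    """Execute the 'program': a chain of differentiable modules."""
    for module in program:
        x = module.forward(x)
    return x

rng = np.random.default_rng(0)
program = [Linear(2, 8, rng), ReLU(), Linear(8, 1, rng)]

X = rng.standard_normal((64, 2))
y = X.sum(axis=1, keepdims=True)           # toy target: y = x0 + x1

initial_loss = np.mean((run_program(program, X) - y) ** 2)
for _ in range(500):
    out = run_program(program, X)
    grad = 2.0 * (out - y) / len(X)        # d(MSE)/d(out)
    for module in reversed(program):       # backprop through the program graph
        grad = module.backward(grad, lr=0.1)
final_loss = np.mean((run_program(program, X) - y) ** 2)
```

A chain is the simplest case; the same forward/backward discipline extends to graphs with branches, loops, and subroutines, which is what the “producing algorithms, not just recognizing patterns” point is getting at.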
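The last research direction, learning a predictive model of the world without labels, can be shown in miniature: fit a next-state predictor purely from an observed trajectory, with no human annotation. In this toy sketch (my own assumption) the “world” is an unknown 2-D rotation, and the learner sees only consecutive states:

```python
import numpy as np

# Unknown world dynamics: a rotation (the learner never sees this matrix).
theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Generate an unlabeled trajectory x_{t+1} = x_t @ A (row vectors).
T = 200
traj = np.empty((T, 2))
traj[0] = [1.0, 0.0]
for t in range(T - 1):
    traj[t + 1] = traj[t] @ A

# The "supervision" is just the data itself: (state, next state) pairs.
X, Y = traj[:-1], traj[1:]

# Fit a linear predictor W by gradient descent on next-step prediction error.
W = np.zeros((2, 2))
for _ in range(400):
    grad = 2.0 / len(X) * X.T @ (X @ W - Y)
    W -= 0.3 * grad
```

The predictor recovers the dynamics from raw observations alone; video prediction is the same idea with frames as states and a deep net as the predictor.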
- (Zhang & LeCun, 2015) ⇒ Xiang Zhang, and Yann LeCun. (2015). “Text Understanding from Scratch.” In: arXiv preprint arXiv:1502.01710.
- (Zhang, Zhao, & LeCun, 2015) ⇒ Xiang Zhang, Junbo Zhao, and Yann LeCun. (2015). “Character-level Convolutional Networks for Text Classification.” In: Advances in Neural Information Processing Systems, pp. 649-657.
- (Sermanet et al., 2014) ⇒ Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, and Yann LeCun. (2014). “OverFeat: Integrated Recognition, Localization and Detection Using Convolutional Networks.” In: International Conference on Learning Representations (ICLR 2014).
- (LeCun et al., 2012) ⇒ Yann LeCun, Léon Bottou, Genevieve B. Orr, and Klaus-Robert Müller. (2012). “Efficient BackProp.” In: Neural Networks: Tricks of the Trade. Springer.
- (Chopra et al., 2005) ⇒ Sumit Chopra, Raia Hadsell, and Yann LeCun. (2005). “Learning a Similarity Metric Discriminatively, with Application to Face Verification.” In: Proceedings of 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). ISBN:0-7695-2372-2 doi:10.1109/CVPR.2005.202
- (LeCun et al., 1998) ⇒ Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. (1998). “Gradient-based Learning Applied to Document Recognition.” In: Proceedings of the IEEE, 86(11). doi:10.1109/5.726791
- (Bromley et al., 1993) ⇒ Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Sackinger, and Roopak Shah. (1993). “Signature Verification Using a ‘Siamese’ Time Delay Neural Network.” In: Advances in Neural Information Processing Systems (NIPS 1993). doi:10.1142/S0218001493000339
- (LeCun et al., 1989) ⇒ Yann LeCun, Bernhard Boser, John S. Denker, Donnie Henderson, Richard E. Howard, Wayne Hubbard, and Lawrence D. Jackel. (1989). “Backpropagation Applied to Handwritten Zip Code Recognition.” In: Neural Computation, 1(4).