Joshua B. Tenenbaum
Joshua B. Tenenbaum is a person (a cognitive scientist and professor at MIT who works on computational models of learning and cognition).
- See: Dimensionality Reduction Algorithm, Knowledge-rich Machine Learning Algorithm, Hierarchical Latent Dirichlet Allocation Metamodel.
References
- Professional Homepage: http://web.mit.edu/cocosci/josh.html
- DBLP Author Page: http://www.informatik.uni-trier.de/~ley/db/indices/a-tree/t/Tenenbaum:Joshua_B=.html
- Google Scholar Author Profile: http://scholar.google.com/citations?user=rRJ9wTJMUB8C
2017
- (Ullman et al., 2017) ⇒ Tomer D. Ullman, Elizabeth Spelke, Peter Battaglia, and Joshua B. Tenenbaum. (2017). “Mind Games: Game Engines As An Architecture for Intuitive Physics.” In: Trends in Cognitive Sciences, 21(9). doi:10.1016/j.tics.2017.05.012
2011
- (Tenenbaum et al., 2011) ⇒ Joshua B. Tenenbaum, Charles Kemp, Thomas L. Griffiths, and Noah D. Goodman. (2011). “How to Grow a Mind: Statistics, Structure, and Abstraction.” In: Science, 331(6022). doi:10.1126/science.1192788
2008
- (Goodman et al., 2008) ⇒ Noah D. Goodman, Vikash K. Mansinghka, Daniel M. Roy, Keith Bonawitz, and Joshua B. Tenenbaum. (2008). “Church: A Language for Generative Models.” In: Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI 2008).
2007
- (Griffiths et al., 2007) ⇒ Thomas L. Griffiths, Mark Steyvers, and Joshua B. Tenenbaum. (2007). “Topics in Semantic Representation.” In: Psychological Review, 114(2). doi:10.1037/0033-295X.114.2.211
2004
- (Griffiths et al., 2004) ⇒ Thomas L. Griffiths, Mark Steyvers, David M. Blei, and Joshua B. Tenenbaum. (2004). “Integrating Topics and Syntax.” In: Advances in Neural Information Processing Systems 17 (NIPS 2004).
2003
- (Blei et al., 2003) ⇒ David M. Blei, Thomas L. Griffiths, Michael I. Jordan, and Joshua B. Tenenbaum. (2003). “Hierarchical Topic Models and the Nested Chinese Restaurant Process.” In: Advances in Neural Information Processing Systems 16 (NIPS 2003).
2000
- (Tenenbaum et al., 2000) ⇒ Joshua B. Tenenbaum, Vin de Silva, Thomas L. Griffiths, and John C. Langford. (2000). “A Global Geometric Framework for Nonlinear Dimensionality Reduction.” In: Science, 290(5500).
- ABSTRACT: Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs - 30,000 auditory nerve fibers or [math]10^6[/math] optic nerve fibers - a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure.
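- The approach described in this abstract became known as the Isomap algorithm. As a minimal illustrative sketch (not the authors' reference implementation), its three steps, a local neighborhood graph, shortest-path geodesic distances, and classical MDS, can be written in Python as follows, assuming NumPy, SciPy, and scikit-learn are available and that the chosen n_neighbors yields a connected graph:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

def isomap(X, n_neighbors=10, n_components=2):
    """Sketch of Isomap: neighborhood graph -> geodesics -> classical MDS."""
    # Step 1: easily measured local metric information, i.e. distances
    # from each point to its k nearest neighbors.
    knn = kneighbors_graph(X, n_neighbors=n_neighbors, mode="distance")
    # Step 2: estimate the global geometry by approximating geodesic
    # distances as shortest paths through the neighborhood graph (Dijkstra).
    # Assumes the graph is connected; otherwise D contains infinities.
    D = shortest_path(knn, method="D", directed=False)
    # Step 3: classical MDS on the geodesic distances; double-center the
    # squared distance matrix and embed with the top eigenvectors.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n             # centering matrix
    B = -0.5 * J @ (D ** 2) @ J
    eigvals, eigvecs = np.linalg.eigh(B)            # ascending order
    idx = np.argsort(eigvals)[::-1][:n_components]  # top components
    return eigvecs[:, idx] * np.sqrt(np.maximum(eigvals[idx], 0.0))

# Example: unroll the classic "swiss roll" manifold to two dimensions.
from sklearn.datasets import make_swiss_roll
X, _ = make_swiss_roll(n_samples=1000, random_state=0)
Y = isomap(X, n_neighbors=12, n_components=2)
```

- Unlike PCA applied directly to X, the embedding Y reflects distances measured along the curved manifold rather than straight-line distances through the ambient space, which is what lets the method recover nonlinear degrees of freedom.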