2006 PatternRecognitionAndMachineLearning

From GM-RKB

Subject Headings: Textbook, Supervised Learning Algorithm, Probability Distribution Function, Linear Model Regression Algorithm, Linear Model Classification Algorithm, Neural Network Learning Algorithm, Kernel Learning Algorithm, Sparse Kernel Machine, Graphical Model Learning Algorithm, Mixture Model, EM Algorithm, Approximate Inferencing, Sampling Method, Continuous Latent Variable, Sequence Dataset Learning Algorithm, Ensemble Learning Algorithm.

Notes

Cited By


Quotes

Book Overview

The dramatic growth in practical applications for machine learning over the last ten years has been accompanied by many important developments in the underlying algorithms and techniques. For example, Bayesian methods have grown from a specialist niche to become mainstream, while graphical models have emerged as a general framework for describing and applying probabilistic techniques. The practical applicability of Bayesian methods has been greatly enhanced by the development of a range of approximate inference algorithms such as variational Bayes and expectation propagation, while new models based on kernels have had a significant impact on both algorithms and applications. This completely new textbook reflects these recent developments while providing a comprehensive introduction to the fields of pattern recognition and machine learning. It is aimed at advanced undergraduates or first-year PhD students, as well as researchers and practitioners. No previous knowledge of pattern recognition or machine learning concepts is assumed. Familiarity with multivariate calculus and basic linear algebra is required, and some experience in the use of probabilities would be helpful though not essential as the book includes a self-contained introduction to basic probability theory. The book is suitable for courses on machine learning, statistics, computer science, signal processing, computer vision, data mining, and bioinformatics. Extensive support is provided for course instructors, including 431 exercises, graded according to difficulty. Example solutions for a subset of the exercises are available from the book web site, while solutions for the remainder can be obtained by instructors from the publisher. The book is supported by a great deal of additional material, and the reader is encouraged to visit the book web site for the latest information. ...

1. Introduction

1.1. Example: Polynomial Curve Fitting

We begin by introducing a simple regression problem, which we shall use as a running example throughout this chapter to motivate a number of key concepts. Suppose we observe a real-valued input variable x and we wish to use this observation to predict the value of a real-valued target variable t. For the present purposes, it is instructive to consider an artificial example using synthetically generated data because we then know the precise process that generated the data for comparison against any learned model. The data for this example is generated from the function sin(2πx) with random noise included in the target values, as described in detail in Appendix A.

...
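The curve-fitting setup excerpted above is straightforward to reproduce. The following is a minimal Python sketch of it, assuming inputs drawn uniformly on [0, 1], Gaussian noise on the targets, and an ordinary least-squares polynomial fit; the sample size, noise level, and polynomial order are illustrative choices, not values taken from the book.

 import numpy as np
 
 # Sketch of the Section 1.1 setup: targets are sin(2*pi*x) plus
 # Gaussian noise. N, M, and noise_std are illustrative, not the
 # book's exact values (see Appendix A of the book for those).
 rng = np.random.default_rng(0)
 N, M, noise_std = 10, 3, 0.3
 
 x = rng.uniform(0.0, 1.0, N)                               # real-valued inputs
 t = np.sin(2 * np.pi * x) + rng.normal(0.0, noise_std, N)  # noisy targets
 
 # Least-squares fit of an order-M polynomial y(x, w) = sum_j w_j x^j.
 Phi = np.vander(x, M + 1, increasing=True)   # design matrix of powers of x
 w, *_ = np.linalg.lstsq(Phi, t, rcond=None)  # minimize sum-of-squares error
 
 # Predict at new inputs and compare against the noise-free generator.
 x_new = np.linspace(0.0, 1.0, 5)
 y_new = np.vander(x_new, M + 1, increasing=True) @ w
 print(np.column_stack([x_new, y_new, np.sin(2 * np.pi * x_new)]))

With these settings the fitted cubic tracks sin(2πx) reasonably well; raising M toward N reproduces the over-fitting behavior that Chapter 1 goes on to analyze.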

1.2. Probability Theory

1.3. Model Selection

1.4. The Curse of Dimensionality

1.5. Decision Theory

2. Probability Distributions

3. Linear Models for Regression

4. Linear Models for Classification

5. Neural Networks

6. Kernel Methods

7. Sparse Kernel Machines

8. Graphical Models

9. Mixture Models and EM

10. Approximate Inference

11. Sampling Methods

12. Continuous Latent Variables

13. Sequential Data

14. Combining Models


Christopher M. Bishop. (2006). "Pattern Recognition and Machine Learning." Springer. http://research.microsoft.com/en-us/um/people/cmbishop/prml/