2007 AdaptiveNonparametricMarkovModels

From GM-RKB

Subject Headings: Nonparametric Markov Models, Image Restoration Task, Nonparametric Entropy Estimation.

Notes

Cited By

Quotes

Abstract

Regularity in data is what fundamentally distinguishes it from random noise. Describing this regularity in generic, yet powerful, ways is one of the key problems in signal processing. One way of capturing image regularity is by incorporating a priori information into the image model itself. Approaches that extract such prior information from training data have limited utility because effective training sets are lacking for most applications. Unsupervised approaches typically encode prior information via parametric models and work best only when the data conforms to that model. Certain kinds of problems do not adhere to strict models, requiring unsupervised approaches to be adaptive. Statistical-inference methodologies that allow us to learn the underlying structure and variability in the data form important tools in adaptive signal processing.

This dissertation presents an adaptive Markov-random-field (MRF) image model that automatically learns the local statistical dependencies via data-driven nonparametric techniques. We use this model to create adaptive algorithms for processing images. We incorporate prior information, when available, through optimal Bayesian frameworks. We enforce optimality criteria based on fundamental information-theoretic concepts that capture the functional dependence and information content in the data.

We employ this adaptive-MRF framework to effectively solve several classic problems in image processing, computer vision, and medical image analysis. Inferring the statistical structure underlying corrupted images enables us to restore images without enforcing strong models on the signal. The restoration iteratively improves the predictability of pixel intensities from their neighborhoods by decreasing their joint entropy. When the nature of the noise is known, we present an effective empirical-Bayesian reconstruction strategy. We also present a method to optimally estimate the uncorrupted-signal statistics from the observed corrupted-signal statistics by minimizing a KL-divergence measure. We apply this adaptive-MRF framework to classify tissues in magnetic resonance (MR) images of the human brain by maximizing the mutual information between the classification labels and image data, capturing their mutual dependency. The generic formulation enables the method to adapt to different MR modalities, noise, inhomogeneities, and partial-voluming. We incorporate a priori information via probabilistic brain-tissue atlases. We use a similar strategy for texture segmentation, using fast threshold-dynamics-based level-set techniques for regularization.
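To make the nonparametric-entropy idea concrete, the sketch below estimates the joint entropy of pixel-neighborhood intensity vectors with a Gaussian Parzen window, the kind of estimator the abstract refers to. This is only an illustration, not the dissertation's algorithm: the image arrays, the kernel width `sigma`, and the random-subset density approximation are all assumptions made for brevity.

```python
import numpy as np

def neighborhood_samples(img, radius=1):
    """Collect the intensity vector of every (2r+1)x(2r+1) neighborhood of interior pixels."""
    h, w = img.shape
    samples = []
    for i in range(radius, h - radius):
        for j in range(radius, w - radius):
            patch = img[i - radius:i + radius + 1, j - radius:j + radius + 1]
            samples.append(patch.ravel())
    return np.asarray(samples)                     # shape: (num_pixels, (2r+1)^2)

def parzen_joint_entropy(samples, sigma=0.1, subset=500, seed=0):
    """Parzen-window (Gaussian-kernel) estimate of the joint entropy of the samples.

    The density at each sample is estimated from a random subset of kernel
    centres; the entropy estimate is the mean negative log-density (in nats).
    """
    rng = np.random.default_rng(seed)
    n, d = samples.shape
    ref = samples[rng.choice(n, size=min(subset, n), replace=False)]
    log_norm = -0.5 * d * np.log(2.0 * np.pi * sigma ** 2)
    log_dens = np.empty(n)
    for k, x in enumerate(samples):
        log_k = log_norm - np.sum((ref - x) ** 2, axis=1) / (2.0 * sigma ** 2)
        m = log_k.max()                            # log-sum-exp for numerical stability
        log_dens[k] = m + np.log(np.exp(log_k - m).sum()) - np.log(len(ref))
    return -log_dens.mean()

# A smooth image has lower neighborhood entropy than the same image plus noise.
rng = np.random.default_rng(1)
smooth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = smooth + rng.normal(scale=0.2, size=smooth.shape)
print(parzen_joint_entropy(neighborhood_samples(smooth)),
      parzen_joint_entropy(neighborhood_samples(noisy)))
```

In this toy setting the noisy image yields a higher neighborhood entropy, which is the quantity the restoration procedure described above drives down.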

Maximum-a-Posteriori (MAP) Estimation

Sometimes we have a priori information about the physical process whose parameters we want to estimate. Such information can come either from scientific knowledge of the physical process or from previous empirical evidence. We can encode such prior information in terms of a PDF on the parameter to be estimated. Essentially, we treat the parameter [math]\displaystyle{ \theta }[/math] as the value of an RV. The associated probabilities [math]\displaystyle{ P(\theta) }[/math] are called the prior probabilities. We refer to inference based on such priors as Bayesian inference. Bayes' theorem shows how to incorporate prior information into the estimation process:

[math]\displaystyle{ P(\theta \vert {\bf x}) = \frac{ P({\bf x} \vert \theta)\, P(\theta) }{ P({\bf x}) } \qquad (35) }[/math]

The term on the left hand side of the equation is called the posterior. On the right hand side, the numerator is the product of the likelihood term and the prior term. The denominator serves as a normalization term so that the posterior PDF integrates to unity. Thus, Bayesian inference produces the maximum a posteriori (MAP) estimate

[math]\displaystyle{ \mathop{\mbox{argmax }}_{\theta} P(\theta \vert {\bf x}) = \mathop{\mbox{argmax }}_{\theta} P({\bf x} \vert \theta)\, P(\theta). \qquad (36) }[/math]
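As a concrete illustration of Equation (36), the hypothetical example below computes a MAP estimate for the mean of Gaussian data under a Gaussian prior by maximizing log P(x|θ) + log P(θ) over a grid of θ values, dropping the constant evidence P(x), and checks it against the closed-form posterior mode of this conjugate model. All numerical values are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

# Hypothetical conjugate model: x_i ~ N(theta, sigma^2), prior theta ~ N(mu0, tau^2).
rng = np.random.default_rng(0)
sigma, mu0, tau = 1.0, 0.0, 2.0
x = rng.normal(loc=1.5, scale=sigma, size=50)          # observed data

# Grid search for the theta maximizing log P(x | theta) + log P(theta);
# the evidence P(x) is constant in theta and can be dropped, as in Eq. (36).
thetas = np.linspace(-5.0, 5.0, 10001)
log_prior = -0.5 * ((thetas - mu0) / tau) ** 2
log_lik = np.array([-0.5 * np.sum(((x - t) / sigma) ** 2) for t in thetas])
theta_map = thetas[np.argmax(log_lik + log_prior)]

# Closed-form posterior mode of the Gaussian-Gaussian model, for comparison.
n = len(x)
theta_exact = (x.sum() / sigma**2 + mu0 / tau**2) / (n / sigma**2 + 1.0 / tau**2)
print(theta_map, theta_exact)                          # agree to grid resolution
```

Because the Gaussian prior is conjugate here, the grid-search MAP estimate matches the analytic posterior mode up to the grid spacing; in the nonconjugate, nonparametric settings treated in the dissertation no such closed form exists, which is why iterative optimization is used instead.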


Author: Suyash P. Awate
Title: Adaptive Nonparametric Markov Models and Information-Theoretic Methods for Image Restoration and Segmentation
Title URL: http://www.cs.utah.edu/~suyash/Dissertation.htm
Year: 2007