2007 PartiallyObservableMarkovDecisi


Subject Headings: Partially Observable Markov Decision Process, Spoken Dialog.

Notes

Cited By

Quotes

Abstract

In a spoken dialog system, determining which action a machine should take in a given situation is a difficult problem because automatic speech recognition is unreliable and hence the state of the conversation can never be known with certainty. Much of the research in spoken dialog systems centres on mitigating this uncertainty and recent work has focussed on three largely disparate techniques: parallel dialog state hypotheses, local use of confidence scores, and automated planning. While in isolation each of these approaches can improve action selection, taken together they currently lack a unified statistical framework that admits global optimization. In this paper we cast a spoken dialog system as a partially observable Markov decision process (POMDP). We show how this formulation unifies and extends existing techniques to form a single principled framework. A number of illustrations are used to show qualitatively the potential benefits of POMDPs compared to existing techniques, and empirical results from dialog simulations are presented which demonstrate significant quantitative gains. Finally, some of the key challenges to advancing this method - in particular scalability - are briefly outlined.
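The central object in the POMDP formulation described above is the belief state: a probability distribution over all possible dialog states that the system updates after every machine action and noisy ASR observation, instead of committing to a single recognized hypothesis. The Python sketch below is illustrative only and not taken from the paper; the state names, actions, and probability tables are assumptions. It shows the standard POMDP belief update b'(s') ∝ P(o'|s',a) · Σ_s P(s'|s,a) · b(s) applied to a toy slot-filling dialog.

# A minimal sketch (not from the paper) of the POMDP belief update that
# underlies the framework described above: the true dialog state is never
# observed directly, so the system maintains a distribution ("belief")
# over dialog-state hypotheses and revises it after each action/observation.
#
#   b'(s') ∝ P(o' | s', a) * sum_s P(s' | s, a) * b(s)
#
# All state, action, and observation names below are hypothetical.

def belief_update(belief, action, observation, transition, observation_model):
    """Return the updated belief over dialog states.

    belief            : dict state -> probability, current belief b(s)
    action            : machine action a just taken
    observation       : ASR-derived observation o' just received
    transition        : dict (state, action, next_state) -> P(s'|s, a)
    observation_model : dict (next_state, action, observation) -> P(o'|s', a)
    """
    new_belief = {}
    next_states = {s2 for (_, _, s2) in transition}
    for s_next in next_states:
        # Predict the next state by summing over all current hypotheses.
        predicted = sum(
            transition.get((s, action, s_next), 0.0) * p
            for s, p in belief.items()
        )
        # Weight the prediction by how likely the observation is in s_next.
        new_belief[s_next] = observation_model.get(
            (s_next, action, observation), 0.0) * predicted

    total = sum(new_belief.values())
    if total == 0.0:
        return belief  # observation impossible under the model; keep old belief
    return {s: p / total for s, p in new_belief.items()}


if __name__ == "__main__":
    # Toy example: the user wants either London or Boston; ASR is noisy.
    belief = {"want_london": 0.5, "want_boston": 0.5}

    # The user goal is assumed fixed during the dialog (identity transitions).
    transition = {(s, "ask_city", s): 1.0 for s in belief}

    # The recognizer hears the correct city 80% of the time.
    observation_model = {
        ("want_london", "ask_city", "heard_london"): 0.8,
        ("want_london", "ask_city", "heard_boston"): 0.2,
        ("want_boston", "ask_city", "heard_boston"): 0.8,
        ("want_boston", "ask_city", "heard_london"): 0.2,
    }

    belief = belief_update(belief, "ask_city", "heard_london",
                           transition, observation_model)
    print(belief)  # {'want_london': 0.8, 'want_boston': 0.2}

In this toy run, a single "heard_london" observation shifts the belief toward want_london while retaining the competing hypothesis, which is the behaviour that parallel dialog state hypotheses and local confidence scores each approximate in isolation, and that the POMDP view captures in one statistical framework.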

References


Jason D. Williams, and Steve Young. (2007). "Partially Observable Markov Decision Processes for Spoken Dialog Systems." In: Computer Speech & Language. doi:10.1016/j.csl.2006.06.008