2006 MaximumMarginPlanning


Subject Headings: Imitation Learning.

Notes

Cited By

Quotes

Author Keywords

connectionism and neural nets; problem solving, control methods, and search theory

Abstract

Imitation learning of sequential, goal-directed behavior by standard supervised techniques is often difficult. We frame learning such behaviors as a maximum margin structured prediction problem over a space of policies. In this approach, we learn mappings from features to costs so that an optimal policy in an MDP with these costs mimics the expert's behavior. Further, we demonstrate a simple, provably efficient approach to structured maximum margin learning, based on the subgradient method, that leverages existing fast algorithms for inference. Although the technique is general, it is particularly relevant in problems where A* and dynamic programming approaches make learning policies tractable in problems beyond the limitations of a QP formulation. We demonstrate our approach applied to route planning for outdoor mobile robots, where the behavior a designer wishes a planner to execute is often clear, while specifying cost functions that engender this behavior is a much more difficult task.
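
The abstract describes the core learning step: adjust a weight vector w, which maps features to costs, so that the expert's demonstrated plan becomes the minimum-cost plan, with a loss-augmented planner serving as the inference routine inside a subgradient update. Below is a minimal sketch of that idea, assuming a toy 4-connected grid MDP with per-cell feature vectors, Dijkstra's algorithm as the planner, and an illustrative 0/1 per-cell margin loss; the function names (shortest_path, mmp_subgradient_step), the parameters (lam, lr), and the cost clipping are assumptions made for this sketch, not details taken from the paper.

import heapq
import numpy as np

def shortest_path(costs, start, goal):
    """Dijkstra over a 4-connected grid of per-cell costs; returns the cell sequence."""
    rows, cols = costs.shape
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, np.inf):
            continue
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + costs[nr, nc]
                if nd < dist.get((nr, nc), np.inf):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

def feature_counts(path, features):
    """Sum the per-cell feature vectors along a path."""
    return sum(features[r, c] for r, c in path)

def mmp_subgradient_step(w, features, expert_path, start, goal, lam=1e-2, lr=0.1):
    """One subgradient step on the regularized structured hinge loss."""
    rows, cols, _ = features.shape
    # Per-cell cost under the current weights, clipped so Dijkstra stays valid
    # (a simplification for this sketch).
    costs = np.maximum(features @ w, 1e-6)
    # Loss augmentation: cells off the expert path look cheaper by a margin,
    # so the planner is pushed toward the most violated alternative plan.
    loss = np.ones((rows, cols))
    for r, c in expert_path:
        loss[r, c] = 0.0
    augmented = np.maximum(costs - loss, 1e-6)
    competitor = shortest_path(augmented, start, goal)
    # Subgradient: expert feature counts minus competitor feature counts,
    # plus the regularization term.
    g = feature_counts(expert_path, features) - feature_counts(competitor, features) + lam * w
    return w - lr * g

# Toy usage: random per-cell features and a straight-line "expert" demonstration.
rng = np.random.default_rng(0)
features = rng.random((5, 5, 3))
expert = [(0, j) for j in range(5)]
w = np.zeros(3)
for _ in range(50):
    w = mmp_subgradient_step(w, features, expert, start=(0, 0), goal=(0, 4))

Each step raises the cost of features the competitor plan uses and lowers the cost of features the expert plan uses, which is the sense in which the learned cost function makes the expert's behavior optimal.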

References


Nathan D. Ratliff, J. Andrew Bagnell, and Martin A. Zinkevich. (2006). "Maximum Margin Planning." In: Proceedings of the 23rd International Conference on Machine Learning (ICML 2006). doi:10.1145/1143844.1143936