2009 PrimalSparseMaxMarginMarkovNetw

From GM-RKB

Subject Headings: Max-Margin Markov Network.

Notes

Cited By

Quotes

Author Keywords

L1-Norm Max-Margin Markov Networks, Primal Sparsity, Dual Sparsity

Abstract

Max-margin Markov networks (M3N) have shown great promise in structured prediction and relational learning. Due to the KKT conditions, the M3N enjoys dual sparsity. However, the existing M3N formulation does not enjoy primal sparsity, which is a desirable property for selecting significant features and reducing the risk of over-fitting. In this paper, we present an l1-norm regularized max-margin Markov network (l1-M3N), which enjoys dual and primal sparsity simultaneously. To learn an l1-M3N, we present three methods: projected sub-gradient, cutting-plane, and a novel EM-style algorithm based on an equivalence between l1-M3N and an adaptive M3N. We perform extensive empirical studies on both synthetic and real data sets. Our experimental results show that: (1) l1-M3N can effectively select significant features; (2) l1-M3N performs as well as the pseudo-primal-sparse Laplace M3N in prediction accuracy, while consistently outperforming other competing methods that enjoy either primal or dual sparsity alone; and (3) the EM-style algorithm is more robust than the other two in prediction accuracy and time efficiency.
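The "primal sparsity" the abstract refers to is the tendency of l1 regularization to drive many weight-vector entries exactly to zero. The following is a minimal illustrative sketch of that effect (not the paper's M3N learning algorithm): proximal gradient descent with the l1 soft-thresholding operator on a toy least-squares objective. All function names and data here are hypothetical, chosen for illustration only.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: shrink each entry toward 0 by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_regularized_fit(X, y, lam=0.5, lr=0.003, iters=3000):
    """Minimize 0.5*||Xw - y||^2 + lam*||w||_1 via proximal gradient (ISTA)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y)                      # gradient of the smooth part
        w = soft_threshold(w - lr * grad, lr * lam)   # l1 proximal step
    return w

# Toy data: only 3 of 20 features actually matter.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(100)

w_hat = l1_regularized_fit(X, y)
print("nonzero weights:", np.count_nonzero(w_hat))
```

With the l1 penalty, the irrelevant weights are thresholded exactly to zero, so the recovered model is primal-sparse; an l2 penalty in the same position would only shrink them toward zero without zeroing them out.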

References

Eric P. Xing, Jun Zhu, and Bo Zhang (2009). "Primal Sparse Max-margin Markov Networks." In: Proceedings of KDD-2009. doi:10.1145/1557019.1557132