2013 IndexedBlockCoordinateDescentforLargeScaleLinearClassificationwithLimitedMemory


Subject Headings: Linear Classification, Block Coordinate Descent, Approximate Nearest-Neighbor Search, Dual Sparsity.

Notes

Cited By

Quotes

Author Keywords

Abstract

Linear Classification has achieved complexity linear in the data size. However, in many applications, the data contain a large number of samples that do not help improve the quality of the model but still cost substantial I/O and memory to process. In this paper, we show how a Block Coordinate Descent method based on a Nearest-Neighbor Index can significantly reduce this cost when learning a dual-sparse model. In particular, we employ a truncated loss function to induce a series of convex programs with superior dual sparsity, and we solve each dual using Indexed Block Coordinate Descent, which makes use of Approximate Nearest Neighbor (ANN) search to select active dual variables without incurring I/O cost on irrelevant samples. We prove that, despite the bias and weak guarantees of ANN queries, the proposed algorithm converges globally to the solution defined on the entire dataset, with sublinear complexity per iteration. Experiments under both sufficient-memory and limited-memory conditions show that the proposed approach learns many times faster than other state-of-the-art solvers without sacrificing accuracy.
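The abstract compresses the whole pipeline into a few clauses, so a small sketch may help fix the idea. Below is a minimal Python sketch (not the authors' implementation) of block coordinate descent on the dual of a hinge-loss linear classifier, where each outer step updates only a small block of dual variables chosen by proximity to the current decision hyperplane. A brute-force margin scan stands in for the paper's ANN index, the selection rule is a simplification, and names such as select_block and indexed_bcd are illustrative, not from the paper.

import numpy as np

def select_block(w, X, y, block_size):
    """Pick the samples closest to the decision boundary.

    An indexed solver would answer this with an ANN query instead of
    scanning every sample; this brute-force ranking is a stand-in.
    """
    margins = y * (X @ w)
    return np.argsort(margins)[:block_size]

def indexed_bcd(X, y, C=1.0, block_size=64, n_outer=100, seed=0):
    """Block coordinate descent on the SVM dual, one block per outer step."""
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)                                 # maintains w = sum_i alpha_i y_i x_i
    sq_norms = np.einsum("ij,ij->i", X, X)          # precomputed x_i^T x_i
    rng = np.random.default_rng(seed)
    for _ in range(n_outer):
        block = select_block(w, X, y, block_size)
        rng.shuffle(block)
        for i in block:
            if sq_norms[i] == 0.0:
                continue
            grad = y[i] * (X[i] @ w) - 1.0          # dual gradient for sample i
            new_ai = np.clip(alpha[i] - grad / sq_norms[i], 0.0, C)
            w += (new_ai - alpha[i]) * y[i] * X[i]  # keep w consistent with alpha
            alpha[i] = new_ai
    return w

The point of the index in the real algorithm is that the block selection is answered in sublinear time without touching irrelevant samples on disk; the brute-force stand-in above scans everything, so it illustrates only the update structure, not the I/O savings or the convergence guarantee that the paper derives for its ANN-based selection.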

References


@inproceedings{2013_IndexedBlockCoordinateDescent,
  author    = {Ian En-Hsu Yen and Chun-Fu Chang and Ting-Wei Lin and Shan-Wei Lin and Shou-De Lin},
  title     = {Indexed Block Coordinate Descent for Large-scale Linear Classification with Limited Memory},
  booktitle = {Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2013)},
  doi       = {10.1145/2487575.2487626},
  year      = {2013}
}