2013 FastandScalablePolynomialKernel


Subject Headings:

Notes

Cited By

Quotes

Author Keywords

Abstract

Approximation of non-linear kernels using random feature mapping has been successfully employed in large-scale data analysis applications, accelerating the training of kernel machines. While previous random feature mappings run in [math]\displaystyle{ O(ndD) }[/math] time for [math]\displaystyle{ n }[/math] training samples in [math]\displaystyle{ d }[/math]-dimensional space and [math]\displaystyle{ D }[/math] random feature maps, we propose a novel randomized tensor product technique, called Tensor Sketching, for approximating any polynomial kernel in [math]\displaystyle{ O(n(d + D\log{D})) }[/math] time. Also, we introduce both absolute and relative error bounds for our approximation to guarantee the reliability of our estimation algorithm. Empirically, Tensor Sketching achieves higher accuracy and often runs orders of magnitude faster than the state-of-the-art approach for large-scale real-world datasets.
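The core idea described in the abstract is to build, for a degree-[math]\displaystyle{ p }[/math] polynomial kernel, one Count Sketch per tensor-product factor and combine them by FFT-based circular convolution, giving a [math]\displaystyle{ D }[/math]-dimensional feature map at cost [math]\displaystyle{ O(d + D\log{D}) }[/math] per factor and sample. The following is a minimal illustrative sketch of that technique in Python, not the authors' reference implementation; the function name `tensor_sketch` and the parameters `degree`, `D`, and `seed` are chosen for illustration only.

```python
import numpy as np

def tensor_sketch(X, degree=2, D=1024, seed=0):
    """Map each row of X to a D-dimensional random feature vector whose
    inner products approximate the polynomial kernel (x . y) ** degree.
    Per-sample cost is O(degree * (d + D log D)).  Illustrative sketch."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # One independent Count Sketch (bucket hash h: [d] -> [D] and sign
    # hash s: [d] -> {-1, +1}) per factor of the tensor product.
    buckets = rng.integers(0, D, size=(degree, d))
    signs = rng.choice([-1.0, 1.0], size=(degree, d))

    # Multiplying the FFTs of the per-factor Count Sketches and inverting
    # computes their circular convolution, i.e. a Count Sketch of the
    # degree-fold tensor product of each row with itself.
    prod_fft = np.ones((n, D), dtype=complex)
    for q in range(degree):
        cs = np.zeros((n, D))
        for j in range(d):
            cs[:, buckets[q, j]] += signs[q, j] * X[:, j]
        prod_fft *= np.fft.fft(cs, axis=1)
    return np.real(np.fft.ifft(prod_fft, axis=1))


# Example: compare the sketched Gram matrix with the exact degree-2 kernel.
X = np.random.randn(200, 10)
Z = tensor_sketch(X, degree=2, D=4096)
exact = (X @ X.T) ** 2
approx = Z @ Z.T
print(np.abs(exact - approx).mean() / np.abs(exact).mean())
```

Under these assumptions, `Z @ Z.T` approximates the exact polynomial-kernel Gram matrix, with the approximation error shrinking as `D` grows.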

References


Ninh Pham, and Rasmus Pagh. (2013). "Fast and Scalable Polynomial Kernels via Explicit Feature Maps." doi:10.1145/2487575.2487591