word2vec-like System

A word2vec-like System is a distributional word embedding training system that applies a word2vec algorithm (based on work by Tomáš Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean, et al.).
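
For reference, the [[skip-gram model]] objective from [[Mikolov et al., 2013a]] that such systems optimize: given a training sequence of words <math>w_1, \ldots, w_T</math>, the word vectors are trained to maximize the average log-probability of the context words within a window of size <math>c</math> around each input word:

:<math>\frac{1}{T} \sum_{t=1}^{T} \sum_{-c \le j \le c,\, j \ne 0} \log p(w_{t+j} \mid w_t),</math>

where <math>p(w_{t+j} \mid w_t)</math> is a softmax over inner products of output and input word vectors (in practice approximated with hierarchical softmax or negative sampling).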

== References ==

=== 2014 ===
* ([[Rei & Briscoe, 2014]]) ⇒ [[Marek Rei]], and [[Ted Briscoe]]. ([[2014]]). “[http://www.aclweb.org/anthology/W14-1608 Looking for Hyponyms in Vector Space].” In: Proceedings of CoNLL-2014.
** QUOTE: [[word2vec-like System|Word2vec]]: [[We]] created word representations using the [[word2vec-like System|word2vec toolkit]]<ref>https://code.google.com/p/word2vec/</ref>. The tool is based on a [[feedforward neural network language model]], with modifications to make [[representation learning]] more efficient ([[Mikolov et al., 2013a]]). [[We]] make use of the [[skip-gram model]], which takes each [[word in a sequence]] as an input to a [[log-linear classifier]] with a [[continuous projection layer]], and [[predicts word]]s within a [[text window|certain range before and after the input word]]. The [[text window size|window size]] was set to 5 and [[vector]]s were trained with both 100 and 500 dimensions.
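
A minimal sketch of the quoted setup, using the gensim reimplementation of word2vec rather than the original C toolkit (assumes gensim 4.x; the toy sentences below are placeholders, not the paper's training corpus):

<syntaxhighlight lang="python">
from gensim.models import Word2Vec

# Placeholder corpus: a list of tokenized sentences. The quoted paper trains
# on a much larger corpus; these two sentences are only for illustration.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["dogs", "and", "cats", "are", "animals"],
]

# Skip-gram configuration matching the quoted description:
# sg=1 selects the skip-gram model (sg=0 would select CBOW),
# window=5 predicts words within 5 positions before and after the input word,
# vector_size=100 matches one of the two quoted settings (the other is 500).
model = Word2Vec(
    sentences,
    sg=1,
    window=5,
    vector_size=100,
    min_count=1,  # keep every token; only sensible for this toy corpus
)

print(model.wv["cat"])  # the learned 100-dimensional vector for "cat"
</syntaxhighlight>

Changing <code>vector_size</code> to 500 reproduces the second configuration mentioned in the quote.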


=== 2013 ===
* ([[Mikolov et al., 2013a]]) ⇒ [[Tomas Mikolov]], [[Kai Chen]], [[Greg Corrado]], and [[Jeffrey Dean]]. ([[2013]]). “[http://arxiv.org/abs/1301.3781 Efficient Estimation of Word Representations in Vector Space].” In: Proceedings of ICLR-2013 Workshop.
