Text-String Probability Function Training Task

From GM-RKB
Revision as of 14:07, 21 March 2018

A Text-String Probability Function Training Task is a probability function generation task that requires the creation of a text string probability function structure.



References

2013

  • (Collins, 2013a) ⇒ Michael Collins. (2013). "Chapter 1 - Language Modeling." Course notes for NLP, Columbia University. http://www.cs.columbia.edu/~mcollins/lm-spring2013.pdf
    • QUOTE: Definition 1 (Language Model) A language model consists of a finite set [math]\displaystyle{ \mathcal{V} }[/math], and a function [math]\displaystyle{ p(x_1, x_2, \ldots, x_n) }[/math] such that:
      1. For any [math]\displaystyle{ \langle x_1 \ldots x_n \rangle \in \mathcal{V}^{\dagger}, p(x_1, x_2, \ldots, x_n) \ge 0 }[/math]
      2. In addition, [math]\displaystyle{ \sum_{\langle x_1 \ldots x_n \rangle \in \mathcal{V}^{\dagger}} p(x_1, x_2, \ldots, x_n) = 1 }[/math]
    • Hence [math]\displaystyle{ p(x_1, x_2, \ldots, x_n) }[/math] is a probability distribution over the sentences in [math]\displaystyle{ \mathcal{V}^{\dagger} }[/math].
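Definition 1 above can be illustrated with a toy model. The sketch below (all names and numbers are illustrative, not from Collins' notes) is a unigram model in which every sentence ends with a STOP symbol and p(x_1 ... x_n) = q(x_1) · ... · q(x_n), with q a probability distribution over V ∪ {STOP}. The two conditions of the definition are then checked by enumerating sentences up to a length bound: every probability is non-negative, and the partial sum approaches 1 as the bound grows.

```python
import itertools

VOCAB = ["the", "dog", "runs"]
STOP = "STOP"
# q: a probability distribution over VOCAB ∪ {STOP} (illustrative values)
q = {"the": 0.4, "dog": 0.2, "runs": 0.1, STOP: 0.3}

def p(sentence):
    """Probability of a sentence, given as a tuple of words ending in STOP."""
    if not sentence or sentence[-1] != STOP:
        return 0.0  # only sequences ending in STOP count as sentences
    prob = 1.0
    for word in sentence:
        prob *= q[word]
    return prob  # Condition 1: always >= 0, since every q value is >= 0

# Condition 2: sum p over all sentences in V† equals 1.  We verify it
# numerically: the partial sum over sentences of length <= 11 should be
# close to 1 (it equals 1 - 0.7**11 for these q values).
total = 0.0
for length in range(1, 12):
    for body in itertools.product(VOCAB, repeat=length - 1):
        total += p(body + (STOP,))
print(round(total, 4))  # partial sum, close to 1
```

The STOP symbol is what makes the infinite sum converge: each extra word multiplies in at most the total non-STOP mass (0.7 here), so sentence probabilities form a geometric series over lengths.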
