NLTK Python Toolkit


An NLTK Python Toolkit is a broad-coverage Python-based NLP Toolkit.



References

2014

  • http://www.nltk.org/
    • QUOTE: NLTK is a leading platform for building Python programs to work with human language data. It provides easy-to-use interfaces to over 50 corpora and lexical resources such as WordNet, along with a suite of text processing libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning, and an active discussion forum.

      Thanks to a hands-on guide introducing programming fundamentals alongside topics in computational linguistics, NLTK is suitable for linguists, engineers, students, educators, researchers, and industry users alike. NLTK is available for Windows, Mac OS X, and Linux. Best of all, NLTK is a free, open source, community-driven project.

       NLTK has been called “a wonderful tool for teaching, and working in, computational linguistics using Python,” and “an amazing library to play with natural language.”

      Natural Language Processing with Python provides a practical introduction to programming for language processing. Written by the creators of NLTK, it guides the reader through the fundamentals of writing Python programs, working with corpora, categorizing text, analyzing linguistic structure, and more. The book is being updated for Python 3 and NLTK 3. (The original Python 2 version is still available at http://nltk.org/book_1ed.)
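
A minimal illustrative sketch of the pipeline the quote describes (tokenization, tagging, stemming), assuming NLTK 3 with the required data packages installed; the sample sentence is an assumption for demonstration:

      import nltk

      # One-time data downloads (package names as used by NLTK 3).
      nltk.download("punkt")
      nltk.download("averaged_perceptron_tagger")

      text = "NLTK provides easy-to-use interfaces to over 50 corpora."
      tokens = nltk.word_tokenize(text)   # tokenization
      tagged = nltk.pos_tag(tokens)       # part-of-speech tagging
      stemmer = nltk.PorterStemmer()
      stems = [stemmer.stem(t) for t in tokens]  # stemming

      print(tagged)
      print(stems)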


2009


  • http://www.nltk.org/code
    • NLTK includes the following software modules (~120k lines of Python code):
    • Corpus readers: interfaces to many corpora
    • Tokenizers: whitespace, newline, blankline, word, treebank, sexpr, regexp, Punkt sentence segmenter
    • Stemmers: Porter, Lancaster, regexp
    • Taggers: regexp, n-gram, backoff, Brill, HMM, TnT
    • Chunkers: regexp, n-gram, named-entity (tagging and chunking are sketched together after this list)
    • Parsers: recursive descent, shift-reduce, chart, feature-based, probabilistic, dependency, … (see the chart-parsing sketch below)
    • Semantic interpretation: untyped lambda calculus, first-order models, DRT, glue semantics, hole semantics, parser interface
    • WordNet: WordNet interface, lexical relations, similarity, interactive browser (see the WordNet sketch below)
    • Classifiers: decision tree, maximum entropy, naive Bayes, Weka interface, megam (see the classifier sketch below)
    • Clusterers: expectation maximization, agglomerative, k-means (see the clustering sketch below)
    • Metrics: accuracy, precision, recall, windowdiff, distance metrics, inter-annotator agreement coefficients, word association measures, rank correlation (see the metrics sketch below)
    • Estimation: uniform, maximum likelihood, Lidstone, Laplace, expected likelihood, heldout, cross-validation, Good-Turing, Witten-Bell (see the estimation sketch below)
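
A short sketch of the tagger and chunker interfaces (NLTK 3 API; the sentence is an assumption, and the named data packages must be downloaded first):

      import nltk

      # Assumes punkt, averaged_perceptron_tagger, maxent_ne_chunker,
      # and words have been installed via nltk.download().
      tokens = nltk.word_tokenize("Arthur works for Acme Corp. in New York.")
      tagged = nltk.pos_tag(tokens)   # tagger interface
      tree = nltk.ne_chunk(tagged)    # named-entity chunker; returns an nltk.Tree
      print(tree)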
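
A chart-parsing sketch over a toy context-free grammar (the grammar is an illustrative assumption, not one shipped with NLTK; NLTK 3 API):

      import nltk

      grammar = nltk.CFG.fromstring("""
      S -> NP VP
      NP -> Det N
      VP -> V NP
      Det -> 'the'
      N -> 'dog' | 'cat'
      V -> 'saw'
      """)
      parser = nltk.ChartParser(grammar)
      for tree in parser.parse("the dog saw the cat".split()):
          print(tree)  # prints the single parse tree for this sentence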
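
A WordNet sketch covering lookup, lexical relations, and a similarity measure (assumes the wordnet corpus has been downloaded):

      from nltk.corpus import wordnet as wn

      dog = wn.synset("dog.n.01")
      cat = wn.synset("cat.n.01")
      print(dog.definition())                          # gloss for the synset
      print([lemma.name() for lemma in dog.lemmas()])  # lexical relations
      print(dog.path_similarity(cat))                  # taxonomy-based similarity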
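
A naive Bayes classifier sketch in the style of the NLTK book's name-gender example; the feature extractor and the training names are illustrative assumptions:

      import nltk

      def gender_features(name):
          # Hypothetical single-feature extractor for illustration.
          return {"last_letter": name[-1]}

      train = ([(gender_features(n), "female") for n in ["Anna", "Maria", "Julia"]]
               + [(gender_features(n), "male") for n in ["John", "Mark", "Peter"]])

      classifier = nltk.NaiveBayesClassifier.train(train)
      print(classifier.classify(gender_features("Laura")))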
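
A k-means clustering sketch (the toy vectors are assumptions; requires NumPy):

      import numpy
      from nltk.cluster import KMeansClusterer, euclidean_distance

      vectors = [numpy.array(v) for v in ([1, 1], [1, 2], [8, 8], [9, 8])]
      clusterer = KMeansClusterer(2, euclidean_distance)
      # assign_clusters=True returns a cluster index per input vector.
      print(clusterer.cluster(vectors, assign_clusters=True))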
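
A metrics sketch computing set-based precision and recall (the reference and test sets are assumptions):

      from nltk.metrics import precision, recall

      reference = {"DT", "NN", "VB"}
      test = {"DT", "NN", "JJ"}
      print(precision(reference, test))  # 2 of 3 retrieved items are correct
      print(recall(reference, test))     # 2 of 3 reference items are retrieved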
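
An estimation sketch applying Laplace (add-one) smoothing to a toy frequency distribution (the sample text is an assumption):

      from nltk.probability import FreqDist, LaplaceProbDist

      fd = FreqDist("the cat sat on the mat".split())
      pd = LaplaceProbDist(fd, bins=fd.B() + 1)  # reserve one bin for unseen samples
      print(pd.prob("the"))     # smoothed probability of a seen word
      print(pd.prob("unseen"))  # small but nonzero probability for an unseen word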