NLTK

Latest version: v3.9.1

0.9.1

NLTK:
- new interface for text categorization corpora
- new corpus readers: RTE, Movie Reviews, Question Classification, Brown Corpus
- fixed a bug in ConcatenatedCorpusView that caused iteration to fail unless it started at the beginning of the corpus
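
A rough sketch of the new categorized-corpus interface listed above, shown with the
method names used by current NLTK (which replaced "items" with fileids); the
movie_reviews corpus must be installed via nltk.download():

    from nltk.corpus import movie_reviews

    print(movie_reviews.categories())           # ['neg', 'pos']
    pos_files = movie_reviews.fileids('pos')    # reviews in the positive category
    first_review = movie_reviews.words(pos_files[0])  # tokens of a single review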

Data:
- Question classification data, included with permission of Li & Roth
- Reuters 21578 Corpus, ApteMod version, from CPAN
- Movie Reviews corpus (sentiment polarity), included with permission of Lillian Lee
- Corpus for Recognising Textual Entailment (RTE) Challenges 1, 2 and 3
- Brown Corpus (reverted to original file structure: ca01-cr09)
- Penn Treebank corpus sample (simplified implementation, new readers treebank_raw and treebank_chunk)
- Minor redesign of corpus readers, to use filenames instead of "items" to identify parts of a corpus

Contrib:
- theorem_prover: Prover9, tableau, MaltParser, Mace4, glue semantics, docs (Dan Garrette, Ewan Klein)
- drt: improved drawing, conversion to FOL (Dan Garrette)
- gluesemantics: GUI demonstration, abstracted LFG code, documentation (Dan Garrette)
- readability: various text readability scores (Thomas Jakobsen, Thomas Skardal)
- toolbox: code to normalize toolbox databases (Greg Aumann)

Book:
- many improvements in early chapters in response to reader feedback
- updates for revised corpus readers
- moved unicode section to chapter 3
- work on engineering.txt (not included in 0.9.1)

Distributions:
- Fixed installation for Mac OS 10.5 (Joshua Ritterman)
- Generalized doctest_driver to work with doc_contrib

0.9

NLTK:
- New naming of packages and modules, and more functions imported into the
top-level nltk namespace, e.g. nltk.chunk.Regexp -> nltk.RegexpParser,
nltk.tokenize.Line -> nltk.LineTokenizer, nltk.stem.Porter -> nltk.PorterStemmer,
nltk.parse.ShiftReduce -> nltk.ShiftReduceParser (see the sketch after this list)
- processing class names changed from verbs to nouns, e.g.
StemI -> StemmerI, ParseI -> ParserI, ChunkParseI -> ChunkParserI, ClassifyI -> ClassifierI
- all tokenizers are now available as subclasses of TokenizerI,
selected tokenizers are also available as functions, e.g. wordpunct_tokenize()
- rewritten ngram tagger code, collapsed lookup tagger with unigram tagger
- improved tagger API, permitting training in the initializer
- new system for deprecating code so that users are notified of name changes.
- support for reading feature cfgs to parallel reading cfgs (parse_featcfg())
- text classifier package, maxent (GIS, IIS), naive Bayes, decision trees, weka support
- more consistent tree printing
- wordnet's morphy stemmer now accessible via stemmer package
- RSLP Portuguese stemmer (originally developed by Viviane Moreira Orengo, reimplemented by Tiago Tresoldi)
- promoted ieer_rels.py to the sem package
- improvements to WordNet package (Jussi Salmela)
- more regression tests, and support for checking coverage of tests
- miscellaneous bugfixes
- remove numpy dependency
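
A minimal sketch of the noun-style, top-level API described above, using names
introduced in 0.9 that still exist in current NLTK (the chunk grammar is
illustrative only):

    import nltk

    tokens = nltk.wordpunct_tokenize("The stemmer and parser now use noun-style names.")
    stems = [nltk.PorterStemmer().stem(t) for t in tokens]

    # chunkers and parsers follow the same convention
    chunker = nltk.RegexpParser("NP: {<DT>?<JJ>*<NN>}")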

Data:
- new corpus reader implementation, refactored syntax corpus readers
- new data package: corpora, grammars, tokenizers, stemmers, samples
- CESS-ESP Spanish Treebank and corpus reader
- CESS-CAT Catalan Treebank and corpus reader
- Alpino Dutch Treebank and corpus reader
- MacMorpho POS-tagged Brazilian Portuguese news text and corpus reader
- trained model for Portuguese sentence segmenter
- Floresta Portuguese Treebank version 7.4 and corpus reader
- TIMIT player audio support

Contrib:
- BioReader (contributed by Carlos Rodriguez)
- TnT tagger (contributed by Sam Huston)
- wordnet browser (contributed by Jussi Salmela, requires wxpython)
- lpath interpreter (contributed by Haejoong Lee)
- timex -- regular expression-based temporal expression tagger

Book:
- polishing of early chapters
- introductions to parts 1, 2, 3
- improvements in book processing software (xrefs, avm & gloss formatting, javascript clipboard)
- updates to book organization, chapter contents
- corrections throughout suggested by readers (acknowledged in preface)
- more consistent use of US spelling throughout
- all examples redone to work with single import statement: "import nltk"
- reordered chapters: 5->7->8->9->11->12->5
* language engineering in part 1 to broaden the appeal
of the earlier part of the book and to talk more about
evaluation and baselines at an earlier stage
* concentrate the partial and full parsing material in part 2,
and remove the specialized feature-grammar material into part 3

Distributions:
- streamlined Mac installation (Joshua Ritterman)
- included Mac distribution with ISO image

0.8

Code:
- changed nltk.__init__ imports to explicitly import names from top-level modules
- changed corpus.util to use the 'rb' flag for opening files, to fix problems
reading corpora under MSWindows
- updated stale examples in engineering.txt
- extended feature structure interface to permit chained features, e.g. fs['F','G']
- further misc improvements to test code plus some bugfixes
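
A minimal sketch of the chained-feature access mentioned above, written against
the current nltk.FeatStruct API (the feature names are illustrative):

    import nltk

    fs = nltk.FeatStruct(F=nltk.FeatStruct(G='hello'))
    print(fs['F']['G'])   # step-by-step access
    print(fs['F', 'G'])   # chained feature-path access added in this release
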
Tutorials:
- rewritten opening section of tagging chapter
- reorganized some exercises

0.8b2

Code (major):
- new corpus package, obsoleting old corpora package
- supports caching, slicing, corpus search path
- more flexible API
- global updates so all NLTK modules use new corpus package
- moved nltk/contrib to separate top-level package nltk_contrib
- changed wordpunct tokenizer to use \w instead of a-zA-Z0-9,
as this is more robust for languages other than English,
with implications for many corpus readers that use it (see the sketch after this list)
- known bug: certain re-entrant structures in featstruct
- known bug: when the LHS of an edge contains an ApplicationExpression,
variable values in the RHS bindings aren't copied over when the
fundamental rule applies
- known bug: HMM tagger is broken
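
The \w change mentioned above matters mainly for non-ASCII text; a rough
illustration using plain regular expressions (the actual tokenizer pattern in
NLTK may differ in detail):

    import re

    text = "São Paulo, café!"
    print(re.findall(r"[a-zA-Z0-9]+", text))   # ['S', 'o', 'Paulo', 'caf']
    print(re.findall(r"\w+|[^\w\s]+", text))   # ['São', 'Paulo', ',', 'café', '!']
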
Tutorials:
- global updates to NLTK and docs
- ongoing polishing
Corpora:
- treebank sample reverted to published multi-file structure
Contrib:
- DRT and Glue Semantics code (nltk_contrib.drt, nltk_contrib.gluesemantics, by Dan Garrette)

0.8b1

Code (major):
- changed package name to nltk
- import all top-level modules into nltk, reducing need for import statements
- reorganization of sub-package structures to simplify imports
- new featstruct module, unifying old featurelite and featurestructure modules
- FreqDist now inherits from dict, fd.count(sample) becomes fd[sample]
- FreqDist initializer permits: fd = FreqDist(len(token) for token in text)
- made numpy optional
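
A minimal sketch of the FreqDist changes listed above (in current NLTK, FreqDist
inherits from collections.Counter, but the dict-style access shown here still
applies; the word list is illustrative):

    import nltk

    words = "the quick brown fox jumps over the lazy dog".split()
    fd = nltk.FreqDist(len(w) for w in words)   # initializer accepts a generator
    print(fd[3])   # count of 3-letter words, formerly fd.count(3)
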
Code (minor):
- changed GrammarFile initializer to accept filename
- consistent tree display format
- fixed loading process for WordNet and TIMIT that prevented code installation if data not installed
- taken more care with unicode types
- incorporated pcfg code into cfg module
- moved cfg, tree, featstruct to top level
- new filebroker module to make handling of example grammar files more transparent
- more corpus readers (webtext, abc)
- added cfg.covers() to check that a grammar covers a sentence
- simple text-based wordnet browser
- known bug: parse/featurechart.py uses incorrect apply() function
Corpora:
- csv data file to document NLTK corpora
Contrib:
- added Glue semantics code (contrib.glue, by Dan Garrette)
- Punkt sentence segmenter port (contrib.punkt, by Willy)
- added LPath interpreter (contrib.lpath, by Haejoong Lee)
- extensive work on classifiers (contrib.classifier*, Sumukh Ghodke)
Tutorials:
- polishing on parts I, II
- more illustrations, data plots, summaries, exercises
- continuing to make prose more accessible to non-linguistic audience
- new default import that all chapters presume: from nltk.book import *
Distributions:
- updated to latest version of numpy
- removed WordNet installation instructions as WordNet is now included in corpus distribution
- added pylab (matplotlib)

0.7.5

Code:
- improved WordNet and WordNet-Similarity interface
- the Lancaster Stemmer (contributed by Steven Tomcavage)
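
A rough sketch of the two additions above, using the current NLTK API (module
paths differed in 0.7.5, and the wordnet corpus must be installed):

    import nltk
    from nltk.corpus import wordnet as wn

    print(nltk.LancasterStemmer().stem("maximum"))   # 'maxim'
    dog, cat = wn.synset('dog.n.01'), wn.synset('cat.n.01')
    print(dog.path_similarity(cat))   # WordNet-Similarity-style relatedness score
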
Corpora:
- Web text samples
- BioCreAtIvE-PPI - a corpus for protein-protein interactions
- Switchboard Telephone Speech Corpus Sample (via Talkbank)
- CMU Problem Reports Corpus sample
- CONLL2002 POS+NER data
- Patient Information Leaflet corpus
- WordNet 3.0 data files
- English wordlists: basic English, frequent words
Tutorials:
- more improvements to text and images
