* Add support for a tokenizer for splitting words into tokens
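A tokenizer of this kind splits raw text into candidate word tokens before any frequency lookup. A minimal sketch of such a splitter (a hypothetical stand-in, not the library's actual implementation) might look like:

```python
import re

def simple_tokenizer(text):
    """Split text into lowercase word tokens (hypothetical example splitter)."""
    # \w+ captures runs of letters, digits, and underscores; punctuation is dropped
    return re.findall(r"\w+", text.lower())

tokens = simple_tokenizer("Spell-checking, made simple!")
# tokens == ["spell", "checking", "made", "simple"]
```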
0.3.1
* Add full python 2.7 support for foreign dictionaries
0.3.0
* Ensure all checks against the word frequency are lower case
* Slightly better performance on edit distance of 2
0.2.2
* Minor package fix for non-wheel deployments
0.2.1
* Ignore case for language identifiers
0.2.0
* Changed the `words` function to `split_words` to differentiate it from the `word_frequency.words` function
* Added ***Portuguese*** dictionary: `pt`
* Added an encoding argument to the `gzip.open` and `open` calls used for dictionary loading and exporting
* Use `__slots__` for class objects
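An explicit encoding matters for non-ASCII dictionaries such as the Portuguese one. A self-contained sketch (the file name and sample data are hypothetical) of round-tripping a gzipped word-frequency dictionary with UTF-8:

```python
import gzip
import json
import os
import tempfile

# Sample word-frequency data with non-ASCII characters (hypothetical)
freq = {"olá": 3, "mundo": 2}

# Export: write the dictionary as UTF-8 encoded, gzipped JSON
path = os.path.join(tempfile.mkdtemp(), "pt.json.gz")
with gzip.open(path, "wt", encoding="utf-8") as fobj:
    json.dump(freq, fobj, ensure_ascii=False)

# Load: the matching encoding argument ensures accented words survive the round trip
with gzip.open(path, "rt", encoding="utf-8") as fobj:
    loaded = json.load(fobj)
```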