I removed the *ngram_sequential* and *ngram_overlap* stemmers from the *Term_Matrix*, *transform_vec_docs* and *vocabulary_parser* methods of the corresponding *tokenizer*, *docs_matrix* and *utils* classes. I had overlooked the fact that n-gram stemming operates on the whole corpus, whereas these three methods process each document vector separately, so the stemmers cannot be applied there.
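For context, here is a minimal sketch of the distinction (assumed helpers such as *prefix_counts*, *stem* and the frequency threshold are hypothetical, not the package's actual implementation): an n-gram stemmer picks a stem based on n-gram frequencies gathered over the entire vocabulary, so running it on a single document's vector in isolation can yield a different stem for the same word.

```cpp
// Sketch (hypothetical helpers, not the package's implementation) of why
// n-gram stemming is corpus-level: the stem chosen for a word depends on
// prefix frequencies over the *whole* vocabulary, so stemming each document
// vector in isolation can return a different stem for the same word.
#include <cstddef>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

// Count how often each length-n prefix occurs in a word list.
std::unordered_map<std::string, int> prefix_counts(const std::vector<std::string>& words,
                                                   std::size_t n) {
  std::unordered_map<std::string, int> counts;
  for (const auto& w : words)
    if (w.size() >= n) ++counts[w.substr(0, n)];
  return counts;
}

// Return the longest prefix of 'word' (length >= 3) shared by at least
// 'min_count' words; fall back to the full word if none qualifies.
std::string stem(const std::string& word, const std::vector<std::string>& words,
                 int min_count = 2) {
  for (std::size_t n = word.size(); n >= 3; --n) {
    auto counts = prefix_counts(words, n);
    if (counts[word.substr(0, n)] >= min_count) return word.substr(0, n);
  }
  return word;
}

int main() {
  // Vocabulary of the whole corpus vs. the vocabulary of a single document.
  std::vector<std::string> corpus_vocab = {"running", "runner", "jumping"};
  std::vector<std::string> doc_vocab    = {"running", "jumping"};

  std::cout << stem("running", corpus_vocab) << "\n";  // "runn": prefix shared with "runner"
  std::cout << stem("running", doc_vocab)    << "\n";  // "running": no shared prefix in this doc
}
```

The same word stems to *runn* at the corpus level but stays unstemmed within the single document, which is why per-document methods like *Term_Matrix* cannot host these stemmers.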
I also updated the package documentation.
I modified the *secondary_n_grams* function in the *tokenization.cpp* source file to fix a bug.