ConvoKit

Latest version: v3.1.0

Page 2 of 3

2.5.1

This release includes a new method `from_pandas` in the Corpus class that should simplify the Corpus creation process.

It generates a ConvoKit corpus from pandas dataframes of speakers, utterances, and conversations.

A notebook demonstrating the use of this method can be found [here](https://github.com/CornellNLP/Cornell-Conversational-Analysis-Toolkit/blob/master/examples/corpus_from_pandas.ipynb).
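
As a sketch of what the inputs to `from_pandas` look like (the column names below follow the demo notebook; check the linked notebook for the authoritative schema), the three dataframes might be built like this:

```python
import pandas as pd

# Hypothetical minimal inputs for Corpus.from_pandas: an utterances
# dataframe indexed by utterance id, plus speaker and conversation
# dataframes indexed by their respective ids.
utterances_df = pd.DataFrame({
    "id": ["u0", "u1"],
    "speaker": ["alice", "bob"],
    "conversation_id": ["u0", "u0"],
    "reply_to": [None, "u0"],
    "timestamp": [0, 1],
    "text": ["Hello!", "Hi there."],
}).set_index("id")

speakers_df = pd.DataFrame({"id": ["alice", "bob"]}).set_index("id")
conversations_df = pd.DataFrame({"id": ["u0"]}).set_index("id")

# With ConvoKit installed, the corpus could then be built with something like:
# corpus = Corpus.from_pandas(utterances_df, speakers_df, conversations_df)
print(list(utterances_df.columns))
```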

2.5

This release contains an implementation of the [Expected Conversational Context Framework](https://convokit.cornell.edu/documentation/expected_context_model.html), and [associated demos](https://github.com/CornellNLP/Cornell-Conversational-Analysis-Toolkit/tree/master/convokit/expected_context_framework/demos).

2.4

The following changes were implemented as part of the v2.4 release.

Public-facing functionality

ConvoKitMatrix and Vectors

Vectors and Matrices now get first-class treatment in ConvoKit. Vector data can now be stored in a ConvoKitMatrix object that is integrated with the Corpus and its objects, allowing for straightforward access from Corpus component objects, user-friendly display of vectors data, and more. Read our [introduction to vectors](https://github.com/CornellNLP/Cornell-Conversational-Analysis-Toolkit/blob/master/examples/vectors/vector_demo.ipynb) for more details.

Accordingly, we have re-implemented the relevant Transformers that were already using array or vector-like data to leverage this new data structure, namely:
- PromptTypes
- HyperConvo
- BoWTransformer
- BoWClassifier - now renamed to VectorClassifier
- PairedBoW - now renamed to PairedVectorClassifier

The last two Transformers can now be used for any general vector data, as opposed to just bag-of-words vector data.
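
The underlying idea can be sketched with a toy stand-in (the names `ToyMatrix` and `get_vector` are illustrative, not the actual ConvoKitMatrix API): a named matrix whose rows are keyed by object ids, so a vector can be looked up directly from an utterance or speaker id.

```python
# Toy model of a corpus-level vector store keyed by object ids.
# Names here (ToyMatrix, get_vector) are illustrative, not ConvoKit's API.
class ToyMatrix:
    def __init__(self, name, ids, columns, rows):
        self.name = name                      # e.g. "bow_vectors"
        self.columns = columns                # feature names
        self._row_of = {i: r for r, i in enumerate(ids)}
        self._rows = rows                     # list of lists, id-aligned

    def get_vector(self, obj_id):
        """Return the stored vector for a given utterance/speaker id."""
        return self._rows[self._row_of[obj_id]]

m = ToyMatrix("bow_vectors", ids=["u0", "u1"],
              columns=["hello", "hi"], rows=[[1, 0], [0, 1]])
print(m.get_vector("u1"))  # → [0, 1]
```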

Metadata deletion
We have implemented a formal way to delete metadata attributes from a Corpus component object. Prior to this, metadata attributes were deleted from objects individually -- leading to possible inconsistencies between the ConvoKitIndex (that tracks what metadata attributes currently exist) and the Corpus component objects. To rectify this, we now **disallow deletion of metadata attributes from objects individually.** Such deletion should instead be carried out using the Corpus method `delete_metadata()`.
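
A minimal model of why corpus-level deletion keeps the index consistent (hypothetical classes, not ConvoKit's internals): deleting the attribute from the index and from every object in a single operation means the two can never disagree.

```python
# Toy corpus: an index of known metadata keys plus per-object meta dicts.
# ToyCorpus is a hypothetical stand-in, not ConvoKit's Corpus class.
class ToyCorpus:
    def __init__(self, utterances):
        self.utterances = utterances          # id -> metadata dict
        self.index = {"utterance": set()}
        for meta in utterances.values():
            self.index["utterance"].update(meta)

    def delete_metadata(self, obj_type, attribute):
        """Remove an attribute from the index AND from every object,
        so the index can never drift out of sync."""
        self.index[obj_type].discard(attribute)
        for meta in self.utterances.values():
            meta.pop(attribute, None)

corpus = ToyCorpus({"u0": {"toxicity": 0.1}, "u1": {"toxicity": 0.9}})
corpus.delete_metadata("utterance", "toxicity")
print(corpus.index["utterance"], corpus.utterances["u0"])  # set() {}
```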

Other changes
- FightingWords and BoWTransformer now have default `text_func` values for the three main component types: utterance, speaker, and conversation.
- `corpus.iterate_by()` is now deprecated.
- The API of PromptTypes has been modified: rather than selecting types of prompt and response utterances to use in the constructor, we now give users the option to select prompts and responses as arguments to the `fit` and `transform` calls.

Other internal changes
- In light of SIGDIAL 2020, we have a new [video introduction](https://www.youtube.com/watch?v=nofzyxM4h1k) and [Jupyter notebook tutorial](https://github.com/CornellNLP/Cornell-Conversational-Analysis-Toolkit/blob/master/examples/Introduction_to_ConvoKit.ipynb) introducing new users to ConvoKit.
- ConvoKitIndex now tracks a list of class types for each metadata attribute, instead of a single class type. This will lead to changes in `index.json` during dumps of any currently existing corpora, but will have no compatibility issues with loading from existing corpora.
- We updated the following demos that make use of Vectors and PromptTypes: [PromptTypes](https://github.com/CornellNLP/Cornell-Conversational-Analysis-Toolkit/blob/master/examples/prompt-types/prompt-type-demo.ipynb) and [Predicting conversations gone awry](https://github.com/CornellNLP/Cornell-Conversational-Analysis-Toolkit/blob/master/examples/conversations-gone-awry/Conversations_Gone_Awry_Prediction.ipynb)

2.3.2

This release describes changes that have happened since the v2.3 release, and includes changes from both v2.3.1 and v2.3.2.

Functionality

Naming changes

- `Utterance.root` has been renamed to `Utterance.conversation_id`
- `User` has been renamed to `Speaker`. Functions with 'user' in the name have been renamed accordingly
- `User.name` has been renamed to `Speaker.id`

(Backwards compatibility will be maintained for all the deprecated attributes and functions.)
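
The backwards-compatibility pattern can be sketched as a deprecated property that forwards to the new attribute and emits a warning (an illustrative sketch, not ConvoKit's actual implementation):

```python
import warnings

class Speaker:
    def __init__(self, id):
        self.id = id

    @property
    def name(self):
        # Deprecated alias kept for backwards compatibility.
        warnings.warn("Speaker.name is deprecated; use Speaker.id",
                      DeprecationWarning, stacklevel=2)
        return self.id

s = Speaker("alice")
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    value = s.name          # still works, but warns
print(value, len(caught))   # alice 1
```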

Corpus

- Corpus now allows users to generate `pandas` DataFrames for its internal components using `get_conversations_dataframe()`, `get_utterances_dataframe()`, and `get_speakers_dataframe()`.
- `Conversation` objects have a `get_chronological_speaker_list()` method for getting a chronological list of conversation participants
- `Conversation`'s `print_conversation_structure()` method has a new argument `limit` for limiting the number of utterances displayed.

Transformers

- New `invalid_val` argument for `HyperConvo` that automatically replaces NaN values with the value specified in `invalid_val`.
- `FightingWords.summarize()` now provides labelled plots

Bug fixes

- Fixed minor bug in `download()` when downloading Reddit corpora.
- Fixed bugs in `HyperConvo` that were causing NaN warnings and incorrect calculation. Fixed minor bug that was causing HyperConvo annotations to not be JSON-serializable.
- Fixed bug in `Classifier` and `BoWClassifier` that was causing inconsistent behaviour for compressed vs. uncompressed vector metadata


Other changes

- Warnings in ConvoKit for deprecation have been made more consistent.
- We now have continuous integration for pushes and pull requests! Thanks to mwilbz for helping set this up.

2.3

Functionality

Transformers' new summarize() functionality

Some Transformers now have a summarize() function that summarizes the annotated corpus (i.e. annotated by a transform() call) in a way that gives the user a high-level view / interpretation of the annotated metadata.

New Transformers

We introduce several new Transformers: Classifier, Bag-of-Words Classifier, Ranker, Pairer, Paired Prediction, Paired Bag-of-Words Prediction, Fighting Words, and (Conversational) Forecaster (with variants: Bag-of-Words and CRAFT).

New TextProcessor

We introduce TextCleaner, which does text cleaning for online text data. This cleaner depends on the *clean-text* package.

Enhanced Conversation functionality
- Conversation.check_integrity() can be used to check if a conversation has a valid and intact reply-to chain (i.e. only one root utterance, every utterance specified by reply-to exists, etc.)
- Conversation.print_conversation_structure() is a way of pretty-printing a Conversation's thread structure (whether displaying just its utterances' ids, texts, or other details is customizable)
- Conversation.get_chronological_utterance_list() provides a list of the Conversation's utterances sorted from earliest to latest timestamp

**Tree operations**
- Conversation.traverse() allows for Conversations to be traversed as a tree structure, e.g. breadth-first, depth-first, pre-order, post-order. Specifically, traverse() returns an iterator of Utterances or UtteranceNodes (a wrapper class for working with Utterances in a conversational tree setting)
- Conversation allows for subtree extraction using any arbitrary utterance in the Conversation as the new root
- Conversation.get_root_to_leaf_paths() returns all the root to leaf paths in the conversation tree
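
The tree operations above can be sketched over a plain reply-to mapping (a minimal stand-in for a Conversation, not ConvoKit's classes):

```python
from collections import deque

# reply_to: utterance id -> parent id (None for the root)
reply_to = {"a": None, "b": "a", "c": "a", "d": "b"}
children = {}
for utt, parent in reply_to.items():
    children.setdefault(parent, []).append(utt)
root = children[None][0]

def bfs(root):
    """Breadth-first traversal, in the spirit of Conversation.traverse('bfs')."""
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(children.get(node, []))
    return order

def root_to_leaf_paths(node, path=()):
    """All root-to-leaf paths, like Conversation.get_root_to_leaf_paths()."""
    path = path + (node,)
    kids = children.get(node, [])
    if not kids:
        return [path]
    return [p for k in kids for p in root_to_leaf_paths(k, path)]

print(bfs(root))                 # ['a', 'b', 'c', 'd']
print(root_to_leaf_paths(root))  # [('a', 'b', 'd'), ('a', 'c')]
```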

Other changes

Public-facing interface changes
- All Corpus objects now support a full set of all possible object iterators (e.g. User.iter_utterances() or Corpus.iter_users()) with selector functions (i.e. filters that select which corpus objects are yielded)
- Corpus has new methods for checking for the presence of corpus objects, e.g. corpus.has_utterance(), corpus.has_conversation(), corpus.has_user()
- A random User / Utterance / Conversation can be obtained from a Corpus with corpus.random_user() / corpus.random_utterance() / corpus.random_conversation()
- User objects now have ids, not names. Corpus.get_usernames() and User.name are deprecated (in favor of Corpus.get_user_ids() and User.id respectively) and print a warning when used.
- Corpora can be mutated to only include specific Conversations by using Corpus.filter_conversations_by()
- Corpus filtering by utterance is no longer supported to avoid encouraging Corpus mutations that break Conversation reply-to chains. Corpus.filter_utterances_by() is now deprecated and no longer usable.
- Corpus object (i.e. User, Utterance, Conversation) ids and metadata keys must now be strings or None. It used to be that any Hashable object could be used, but this posed problems for corpus dumping to and loading from JSON.
- Deletion of a metadata key for one object results in deletion of that metadata key for all objects of that object type
- Corpus.dump() automatically increments the version number of the Corpus by 1.
- Corpus.download() now has a *use_local* boolean parameter that allows offline users to skip the online check for a new dataset version and uses the local version by default.
- Fixed a bug where specified conversation and user metadata were not getting excluded correctly during Corpus initialisation step
- `__str__` is now implemented to provide a concise human-readable string display of the Corpus object (that hides private variables)
- Fixed some bugs with Hypergraph motif counting
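
The selector pattern above works like a filter passed to the iterator; a minimal sketch with hypothetical stand-in classes (not ConvoKit's own):

```python
# Toy stand-ins for ConvoKit objects, to illustrate selector-based iteration.
class ToyUtterance:
    def __init__(self, id, speaker, text):
        self.id, self.speaker, self.text = id, speaker, text

class ToyCorpus:
    def __init__(self, utts):
        self._utts = utts

    def iter_utterances(self, selector=lambda u: True):
        """Yield only the utterances that pass the selector."""
        return (u for u in self._utts if selector(u))

corpus = ToyCorpus([ToyUtterance("u0", "alice", "hi"),
                    ToyUtterance("u1", "bob", "hello")])
ids = [u.id for u in corpus.iter_utterances(lambda u: u.speaker == "bob")]
print(ids)  # ['u1']
```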

Internal changes
- Corpus initialisation and dumping have been heavily refactored to improve future maintainability.
- There is a new CorpusObject parent class that User, Utterance, and Conversation inherit from. This parent class implements some shared functionality for all Corpus objects.
- Corpus now uses a ConvokitIndex object to correctly track the metadata state of itself and its Corpus objects. Previously, this index was computed on the spot when Corpus.dump() was called, and referred to when loading a Corpus. However, any changes to a loaded Corpus object would not update the internal index of the Corpus, meaning the index could be inconsistent with the Corpus state.
- Corpus objects (Corpus, User, Utterance, Conversation) all use a ConvokitMeta object instead of a simple dict() for their metadata. This change is necessary to ensure that updates to the metadata (key additions / deletions) are reflected in ConvokitIndex. However, because ConvokitMeta inherits from the dict class, there is no change to how users should work with the .meta attribute.
- Users and Utterances now have 'owner' attributes to indicate the Corpus they belong to. This change is necessary for the maintaining of a consistent index. (Conversations have always had this attribute.)
- Introduces optional dependencies on the *clean-text* and *torch* packages for sanitizing text under the FightingWords Transformer and running a neural network as part of the Forecaster-CRAFT Transformer respectively.
- A single script for running all existing test suites has been created to speed up testing before deployment: *tests/run_all_tests.py*
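
The ConvokitMeta idea described above can be sketched as a dict subclass that notifies a shared index when keys change (an illustrative sketch, not the actual implementation):

```python
# Toy index + dict subclass, in the spirit of ConvokitIndex/ConvokitMeta.
# Class names here are illustrative, not ConvoKit's.
class ToyIndex:
    def __init__(self):
        self.keys = set()

class ToyMeta(dict):
    """Behaves like a plain dict for users, but keeps the index in sync."""
    def __init__(self, index):
        super().__init__()
        self._index = index

    def __setitem__(self, key, value):
        self._index.keys.add(key)       # index learns about new keys
        super().__setitem__(key, value)

    def __delitem__(self, key):
        super().__delitem__(key)
        self._index.keys.discard(key)   # index forgets removed keys

idx = ToyIndex()
meta = ToyMeta(idx)
meta["score"] = 0.5
print(idx.keys)  # {'score'}
```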

2.2

Updates to various parts of ConvoKit:

Text processing

Added support for creating Transformers that compute utterance attributes. Also updated support for dependency-parsing text. An example of how this new functionality can be used is found [here](https://github.com/CornellNLP/Cornell-Conversational-Analysis-Toolkit/blob/master/examples/text-processing/text_preprocessing_demo.ipynb).

Corpus

Added functionality to

* support loading and storing auxiliary data
* handle vector representations
* organize users' activities within conversations
* build dataframes containing attributes of various objects

Prompt types

Updated the code used to compute prompt types and phrasing motifs, deprecating the old QuestionTypology module. An example of how the updated code is used can be found [here](https://github.com/CornellNLP/Cornell-Conversational-Analysis-Toolkit/blob/master/examples/prompt-types/prompt-type-demo.ipynb) and [here](https://github.com/CornellNLP/Cornell-Conversational-Analysis-Toolkit/blob/master/examples/prompt-types/prompt-type-wrapper-demo.ipynb).

User Conversation Diversity

Updated code used to compute linguistic divergence.

Other

Added support for pipelining, and some limited support for computing per-utterance attributes.

