MidiTok

Latest version: v3.0.5.post1

2.1.3

This big update brings a few important changes and improvements.

A new common tokenization workflow for all tokenizers.

We now distinguish three types of tokens:
1. Global MIDI tokens, which represent attributes and events affecting the music globally, such as the tempo or time signature;
2. Track tokens, representing values of distinct tracks such as the notes, chords or effects;
3. Time tokens, which serve to structure and place the previous categories of tokens in time.

All tokenizations now follow this pattern:

1. Preprocess the MIDI;
2. Gather global MIDI events (tempo...);
3. Gather track events (notes, chords);
4. If "one token stream", concatenate all global and track events and sort them by time of occurrence. Else, concatenate the global events to each sequence of track events;
5. Deduce the time events for all the sequences of events (only one if "one token stream");
6. Return the tokens, as a combination of lists of strings and lists of integers (token ids).

This considerably cleans up the code (DRY, fewer redundant methods) while bringing speedups, as the number of calls to sorting methods has been reduced.
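
Concretely, none of this changes the public API: a tokenize / decode round trip remains two calls. A minimal sketch (the MIDI path is a placeholder):

```python
from miditok import REMI, TokenizerConfig
from miditoolkit import MidiFile

# A tokenizer with default parameters; REMI is one tokenization among others
tokenizer = REMI(TokenizerConfig())

# Steps 1-5 above all happen inside midi_to_tokens
midi = MidiFile("path/to/file.mid")  # placeholder path
tokens = tokenizer.midi_to_tokens(midi)

# One TokSequence in "one token stream" mode, else one per track;
# each holds both .tokens (strings) and .ids (integers)
first_seq = tokens if not isinstance(tokens, list) else tokens[0]
print(first_seq.tokens[:8], first_seq.ids[:8])

# Decode back to a MIDI object
decoded_midi = tokenizer.tokens_to_midi(tokens)
```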

TL;DR: other changes

* New submodule `pytorch_data` offering PyTorch `Dataset` objects and a data collator, to be used when training a PyTorch model. Learn more in the documentation of the module (a usage sketch follows this list);
* `MIDILike`, `CPWord` and `Structured` now natively handle `Program` tokens in a multitrack / `one_token_stream` way;
* Time signature changes are now handled by `TSD`, `MIDILike` and `CPWord`;
* The `time_signature_range` config option is now more flexible / convenient.
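
As referenced above, a rough usage sketch of the `pytorch_data` submodule, chaining it with `tokenize_midi_dataset` (paths are placeholders, and the exact class signatures should be double-checked against the module's documentation):

```python
from pathlib import Path

from torch.utils.data import DataLoader

from miditok import REMI, TokenizerConfig
from miditok.pytorch_data import DataCollator, DatasetTok

tokenizer = REMI(TokenizerConfig())

# Tokenize a folder of MIDIs to JSON files, saving the tokenizer
# config under a custom file name (new in this release)
midi_paths = list(Path("midis").glob("**/*.mid"))
tokenizer.tokenize_midi_dataset(
    midi_paths, Path("tokens"), tokenizer_config_file_name="tokenizer.conf"
)

# Wrap the JSON token files in a Dataset and batch them with the
# "all-in-one" collator, padding with the tokenizer's PAD id
dataset = DatasetTok(
    list(Path("tokens").glob("**/*.json")),
    min_seq_len=16,
    max_seq_len=512,
)
collator = DataCollator(pad_token=tokenizer["PAD_None"])
dataloader = DataLoader(dataset, batch_size=8, collate_fn=collator)
```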

Changelog

* #61: new `pytorch_data` submodule, with `DatasetTok` and `DatasetJsonIO` classes. This module is only loaded if `torch` is installed in the Python environment;
* #61: the `tokenize_midi_dataset()` method now has a `tokenizer_config_file_name` argument, allowing you to save the tokenizer config with a custom file name;
* #61: "all-in-one" `DataCollator` object to be used with PyTorch `DataLoader`s;
* #62: `Structured` and `MIDILike` now natively handle `Program` tokens. When `config.use_programs` is True, a `Program` token is added before each `Pitch`/`NoteOn`/`NoteOff` token to associate its instrument, and MIDIs are treated as a single stream of tokens, whereas otherwise each track is converted into an independent token sequence (see the sketch after this changelog);
* #62: the `miditok.utils.remove_duplicated_notes` method can now remove notes with the same pitch and onset time, regardless of their offset time / duration;
* #62: `miditok.utils.merge_same_program_tracks` is now called in `preprocess_midi` when `config.use_programs` is True;
* #62: big refactor of the `REMI` codebase, which now has all the features of `REMIPlus`, with cleaner code and speedups (fewer calls to sorting). The `REMIPlus` class is now essentially a wrapper around `REMI` with programs and time signatures enabled;
* #62: `TSD` and `MIDILike` now encode and decode time signature changes;
* #63 (ilya16): `Tempo` tokens can now be created on a logarithmic scale instead of the default linear scale;
* c53a008 and 5d1c12e: the `track_to_tokens` and `tokens_to_track` methods are now partially removed: they are protected for the classes that still rely on them, and removed from the others. These methods were meant for internal calls and were not recommended for use; the `midi_to_tokens` method is recommended instead;
* #65 (ilya16): changes `time_signature_range` into a dictionary `{denom_i: [num_i1, ..., num_in]}` or `{denom_i: (min_num_i, max_num_i)}`;
* #65 (ilya16): fix in the formula computing the number of ticks per bar;
* #66: adds an option to `TokenizerConfig` to delete successive tempo / time signature changes carrying the same value during MIDI preprocessing;
* #66: now using xdist for tests, a big speedup on GitHub Actions (thanks ilya16!);
* #66: `CPWord` and `Octuple` now follow the common tokenization workflow;
* #66: as a consequence of the previous point, `OctupleMono` is removed, as there was no record of its use. It is now equivalent to `Octuple` without `config.use_programs`;
* #66: `CPWord` now handles time signature changes;
* #66: tests for tempo and time signature changes are now more robust; exceptions were removed and fixed;
* 5a6378b: `save_tokens` now by default doesn't save programs if `config.use_programs` is False.
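
The sketch referenced in the `use_programs` entry above (hypothetical file path):

```python
from miditok import MIDILike, TokenizerConfig
from miditoolkit import MidiFile

# With use_programs, a Program token is inserted before each
# NoteOn/NoteOff, and all tracks share a single token stream
tokenizer = MIDILike(TokenizerConfig(use_programs=True))
print(tokenizer.one_token_stream)  # True

midi = MidiFile("multitrack.mid")  # placeholder path
tokens = tokenizer.midi_to_tokens(midi)  # a single TokSequence here
```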

Compatibility

* Calls to the `track_to_tokens` and `tokens_to_track` methods are no longer supported. If you used these methods, you can replace them with `midi_to_tokens` and `tokens_to_midi` (or simply `__call__` the tokenizer) while selecting the appropriate token sequences / tracks;
* `time_signature_range` now needs to be given as a dictionary (see the example below);
* Due to changes in the order of the vocabularies of `Octuple` (programs are now optional), tokenizers and tokens created with previous versions will not be compatible unless the vocabulary order is swapped, with index 3 moved to 5.
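
For example, the new dictionary form maps each denominator to either an explicit list of numerators or a (min, max) numerator range (values here are illustrative):

```python
from miditok import TokenizerConfig

config = TokenizerConfig(
    use_time_signatures=True,
    time_signature_range={
        8: [3, 6, 12],  # 3/8, 6/8 and 12/8
        4: (1, 6),      # 1/4 through 6/4
    },
)
```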

2.1.2

Thanks to Kapitan11, who spotted bugs when decoding tokens given as ids / integers (#59), this update brings a few fixes that solve them, along with tests ensuring that the input / output (i/o) formats of the tokenizers are well handled in every case.
The documentation has also been updated on this subject, which was unclear until now.

Changes

* 394dc4d: fix in the `MuMIDI` and `Octuple` token encodings, which performed the preprocessing steps twice;
* 394dc4d: the code of the [single-track tests](tests/test_one_track.py) has been improved and now covers tempos for most tokenizations;
* 394dc4d: `MuMIDI` can now decode tempo tokens;
* 394dc4d: the `_in_as_seq` decorator is now used solely for the `tokens_to_midi()` method, and removed from `tokens_to_track()`, which explicitly expects a `TokSequence` object as argument (089fa74);
* 089fa74: the `_in_as_seq` decorator now handles all token id input formats as it should;
* 9fe7639: fix in `TSD` decoding with multiple input sequences when not in `one_token_stream` mode;
* 9fe7639: adding i/o tests for input ids;
* 8c2349b: the `unique_track` property is renamed to `one_token_stream`, as it is more explicit and accurate;
* 8c2349b: new `convert_sequence_to_tokseq` method, which can convert any input sequence holding ids (integers), tokens (strings) or events (`Event`) into a `TokSequence` or a list of `TokSequence` objects, in the appropriate format depending on the tokenizer. This method is used by the `_in_as_seq` decorator;
* 8c2349b: new `io_format` tokenizer property, returning the tokenizer's i/o format as a tuple of strings. Their meanings are: *I* for instrument (for non-`one_token_stream` tokenizers), *T* for token, *C* for sub-token class (for multi-vocabulary tokenizers). A short sketch follows this list;
* Minor code lint improvements.
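
The sketch referenced above, showing what the i/o handling now guarantees: decoding works from raw integer ids as well as from `TokSequence` objects (placeholder path):

```python
from miditok import TSD, TokenizerConfig
from miditoolkit import MidiFile

tokenizer = TSD(TokenizerConfig())
print(tokenizer.io_format)  # ("I", "T"): one token sequence per instrument

midi = MidiFile("file.mid")  # placeholder path
tokens = tokenizer.midi_to_tokens(midi)

# Decoding from plain lists of ids: the _in_as_seq decorator now
# converts them into TokSequence objects automatically
ids = [seq.ids for seq in tokens]
decoded_midi = tokenizer.tokens_to_midi(ids)
```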

Compatibility

* All good 🙌

2.1.1

Changes

* 220f384: fix in `learn_bpe()` for tokenizers in `unique_track` mode;
* 30d5546: fixes in data augmentation (on tokens) in `unique_track` mode: 1) it was skipping files (detected as drums), and 2) it now augments all pitches except drum ones (as opposed to all of them before);
* 30d5546: the tokenizer now creates `Program` tokens from the `tokenizer.config.programs` values given by the user.

Compatibility

* If you used custom `Program` tokens, make sure to give `(-1, 128)` as the `programs` argument of your tokenizer's config (`TokenizerConfig`). This is already the default, so this message only applies if you gave something else.

2.1.0

Major change

This "mid-size" update brings a new `TokenizerConfig` object, holding any tokenizer's configuration. This object is now used to instantiate all tokenizers, and replaces the now removed `beat_res`, `nb_velocities`, `pitch_range` and `additional_tokens` arguments. It allows to simplify the code, reduce exceptions, and expose a simplified way to custom tokenizers.
You can read the documentation and example to see how to use it.
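
For instance, arguments that were previously passed one by one to the tokenizer's constructor are now grouped in the config. A minimal sketch with a few common options:

```python
from miditok import REMI, TokenizerConfig

config = TokenizerConfig(
    pitch_range=(21, 109),
    beat_res={(0, 4): 8, (4, 12): 4},
    nb_velocities=32,
    use_chords=True,
    use_tempos=True,
)
tokenizer = REMI(config)
```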

Changes

* e586b1f: new `TokenizerConfig` object to hold the configuration and instantiate tokenizers;
* 26a67a6 (tingled): fix in `__repr__`;
* 9970ec4: fix in the `CPWord` token type graph;
* 69e64a7: the `max_bar_embedding` argument of `REMIPlus` is now set to False by default;
* 62292d6 (Kapitan11): `load_params` is now a private method, and the documentation has been updated for this feature;
* 3aeb7ff: removed the deprecated "slow" BPE methods;
* f8ca854 (ilya16): fix of the `PitchBend` time attribute in the `merge_tracks` method;
* b12d270: `TSD` now natively handles `Program` tokens, the same way `REMIPlus` does. With the `use_programs` option, MIDIs are converted into a single token sequence for all tracks, instead of one sequence per track;
* Other minor code, lint and docstring improvements.

Compatibility

* To use this update on your current / previous projects, you will need to update your code, specifically the way you create tokenizers. This doesn't apply to code creating tokenizers from a config file (`params` argument);
* Slow BPE methods are removed. If you still use them, we encourage you to switch to the new fast ones. Models trained with the old slow tokenizers will need to keep being used with them.

2.0.6

Changes

* 811bd68 (#40, #41): adding the `MMM` tokenizer ([Multi-Track Music Machine](https://arxiv.org/abs/2008.06048)).
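
A minimal usage sketch (this release predates `TokenizerConfig`, so the tokenizer is created with its default arguments here):

```python
from miditok import MMM
from miditoolkit import MidiFile

# MMM represents a multitrack MIDI as a single token sequence,
# with each track enclosed in dedicated track start / end tokens
tokenizer = MMM()
midi = MidiFile("file.mid")  # placeholder path
tokens = tokenizer.midi_to_tokens(midi)
```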

Compatibility

* All good 🙌

2.0.5

Changes

* f9f63d0 (related to #37): adding a compatibility check to the `learn_bpe` method (a usage sketch follows this list);
* f1af66a: fixing an issue when loading tokens in `learn_bpe` with a `unique_track`-compatible tokenizer (`REMIPlus`), which caused no BPE learning;
* f1af66a: in `learn_bpe`, checking that the total number of unique base tokens (chars) is lower than the target vocabulary size;
* 47b6166: handling multi-vocabulary indexing with tokens present in all vocabularies, e.g. special tokens.
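
The sketch referenced above: a hedged example of the `learn_bpe` call these fixes concern (argument names follow the 2.0.x documentation; paths are placeholders):

```python
from pathlib import Path

from miditok import REMI

tokenizer = REMI()  # created with default arguments in 2.0.x

# Learn BPE from previously tokenized JSON files; vocab_size must
# be greater than the number of unique base tokens, per the check above
tokenizer.learn_bpe(
    vocab_size=1000,
    tokens_paths=list(Path("tokens").glob("**/*.json")),
)
```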

Compatibility

* All good 🙌
