tslearn

Latest version: v0.6.3

0.3.1

Fixed

* Fixed a bug in `TimeSeriesSVC` and `TimeSeriesSVR` that caused a user-provided
`gamma` to be ignored (it was always treated as if it were `"auto"`) when the
`"gak"` kernel was used
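
A minimal usage sketch of the fixed behaviour, assuming toy data generated
with `random_walk_blobs` (parameter values below are illustrative, not taken
from the release notes):

```python
from tslearn.generators import random_walk_blobs
from tslearn.svm import TimeSeriesSVC

# Two blobs of short random walks, just to have something to fit.
X, y = random_walk_blobs(n_ts_per_blob=10, sz=32, d=1, n_blobs=2, random_state=0)

# With this fix, the explicit gamma is actually used by the GAK kernel
# instead of being silently replaced by "auto".
clf = TimeSeriesSVC(kernel="gak", gamma=0.5)
clf.fit(X, y)
print(clf.predict(X[:3]))
```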

0.3

Changed

* `GlobalAlignmentKernelKMeans` is deprecated in favor of `KernelKMeans`, which
accepts various kernels ("gak" being the default); see the sketch below
* `ShapeletModel` is now called `LearningShapelets` to be more explicit about
which shapelet-based classifier is implemented. `ShapeletModel` is still
available as an alias, but is now considered part of the private API
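
A minimal sketch of the renamed clustering estimator, assuming toy data from
`random_walks` (values are illustrative):

```python
from tslearn.clustering import KernelKMeans  # replaces GlobalAlignmentKernelKMeans
from tslearn.generators import random_walks

X = random_walks(n_ts=20, sz=32, d=1, random_state=0)

# "gak" is the default kernel, so specifying it here is only for clarity.
km = KernelKMeans(n_clusters=2, kernel="gak", random_state=0).fit(X)
print(km.labels_)
```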

Added

* Python 3.8 support
* `dtw_path_from_metric` allows one to pick a dedicated ground metric on top
of which the DTW algorithm is run (see the sketch after this list)
* Nearest-neighbor search on SAX representations (with a custom distance)
* Computation of pairwise distance matrices between SAX representations
* `PiecewiseAggregateApproximation` can now handle variable lengths
* `ShapeletModel` is now serializable to JSON and pickle formats
* Multivariate datasets from the UCR/UEA archive are now available through
`UCR_UEA_datasets().load_dataset(...)`
* `ShapeletModel` now accepts variable-length time series datasets; a `max_size`
parameter has been introduced to save room at fit time for possibly longer
series to be fed to the model afterwards
* `ShapeletModel` now accepts a `scale` parameter that drives time series
pre-processing for better convergence
* `ShapeletModel` now has a public `history_` attribute that stores
loss and accuracy along fit epochs
* SAX and variants now accept a `scale` parameter that drives time series
pre-processing to fit the N(0,1) underlying hypothesis for SAX
* `TimeSeriesKMeans` now has a `transform` method that returns distances to
centroids
* A new `matrix_profile` module is added that allows `MatrixProfile` to be
computed using the stumpy library or using a naive "numpy" implementation.
* A new `early_classification` module is added that offers early classification
estimators
* A new `neural_network` module is added that offers multi-layer perceptron
estimators for classification and regression
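
As announced above for `dtw_path_from_metric`, here is a hedged sketch of
running DTW on top of a user-chosen ground metric (the toy series and the
"sqeuclidean" choice are illustrative):

```python
import numpy
from tslearn.metrics import dtw_path_from_metric

s1 = numpy.array([[1.0], [2.0], [3.0], [4.0]])
s2 = numpy.array([[1.0], [3.0], [4.0]])

# DTW alignment where each pointwise cost is the squared Euclidean distance.
path, sim = dtw_path_from_metric(s1, s2, metric="sqeuclidean")
print(path)  # list of (i, j) index pairs along the optimal path
print(sim)   # accumulated ground-metric cost along that path
```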

Fixed

* Estimators that can operate on variable-length time series now allow
test-time datasets to have a different length from the one that was passed
at fit time (see the sketch after this list)
* Bugfix in `kneighbors()` methods.
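
A hedged illustration of this variable-length behaviour, using
`KNeighborsTimeSeriesClassifier` with DTW on toy series (values are
illustrative):

```python
from tslearn.utils import to_time_series_dataset
from tslearn.neighbors import KNeighborsTimeSeriesClassifier

# Ragged training set: to_time_series_dataset pads to the longest series with NaNs.
X_train = to_time_series_dataset([[1, 2, 3, 4], [1, 1, 2, 2, 3]])
y_train = [0, 1]

# Test series longer than anything seen at fit time.
X_test = to_time_series_dataset([[1, 2, 3, 4, 5, 6]])

clf = KNeighborsTimeSeriesClassifier(n_neighbors=1, metric="dtw")
print(clf.fit(X_train, y_train).predict(X_test))
```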

Removed

* Support for Python 2 is dropped

0.2.5

Changed

* `dtw_barycenter_averaging` is made faster by using vectorized computations
* `dtw_barycenter_averaging` can be restarted several times to reach better
local optima using a new `n_init` parameter, set to 1 by default (see the
sketch after this list)
* Functions `load_timeseries_txt` and `save_timeseries_txt` from the utils
module have been renamed `load_time_series_txt` and `save_time_series_txt`.
The old names can still be used but are considered deprecated and have been
removed from the public API documentation for the sake of harmonization
* Default value for the maximum number of iterations to train `ShapeletModel`
and `SerializableShapeletModel` is now set to 10,000 (used to be 100)
* `TimeSeriesScalerMeanVariance` and `TimeSeriesScalerMinMax` now ignore any
NaNs when calling their respective `transform` methods in order to better
mirror scikit-learn's handling of missing data in preprocessing.
* `KNeighborsTimeSeries` now accepts variable-length time series as inputs
when used with metrics that can deal with them (e.g., DTW)
* When constrained DTW is used, if the name of the constraint is not given but
its parameter is set, that is now considered sufficient to identify the
constraint.
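
As mentioned above for `dtw_barycenter_averaging`, a minimal sketch of the
`n_init` restart mechanism on toy data (parameter values are illustrative):

```python
from tslearn.barycenters import dtw_barycenter_averaging
from tslearn.generators import random_walks

X = random_walks(n_ts=10, sz=32, d=1, random_state=0)

# Run DBA from 5 random initializations and keep the best barycenter found.
barycenter = dtw_barycenter_averaging(X, n_init=5)
print(barycenter.shape)  # (32, 1)
```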

Added

* `KNeighborsTimeSeriesRegressor` is a new regressor based on
k-nearest-neighbors that accepts the same metrics as
`KNeighborsTimeSeriesClassifier`
* A `set_weights` method is added to the `ShapeletModel` and
`SerializableShapeletModel` estimators
* `subsequence_path` and `subsequence_cost_matrix` are now part of the public
API and properly documented as such with an example use case in which more than
one path could be of interest (cf. `plot_sdtw.py`)
* `verbose` levels can be set for all functions / classes that use `joblib`
for parallel computations; these levels are passed on to `joblib`
* conversion functions are provided in the `utils` module to interact with
other Python time series packages (`pyts`, `sktime`, `cesium`, `seglearn`,
`tsfresh`, `stumpy`, `pyflux`)
* `dtw_barycenter_averaging_subgradient` is now available to compute DTW
barycenter based on subgradient descent
* `dtw_limited_warping_length` is provided as a way to compute DTW under an
upper-bound constraint on the warping path length
* `BaseModelPackage` is a base class for serializing models to HDF5, JSON and
pickle formats. `h5py` is added to the requirements for HDF5 support.
* `BaseModelPackage` is used to add serialization functionality to the
following models: `GlobalAlignmentKernelKMeans`, `TimeSeriesKMeans`,
`KShape`, `KNeighborsTimeSeries`, `KNeighborsTimeSeriesClassifier`,
`PiecewiseAggregateApproximation`, `SymbolicAggregateApproximation`,
and `OneD_SymbolicAggregateApproximation`
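
A hedged sketch of this serialization support on a fitted `TimeSeriesKMeans`
model (the file name and toy data are illustrative):

```python
from tslearn.clustering import TimeSeriesKMeans
from tslearn.generators import random_walks

X = random_walks(n_ts=20, sz=32, d=1, random_state=0)
km = TimeSeriesKMeans(n_clusters=2, metric="dtw", random_state=0).fit(X)

# Write the fitted model to JSON, then load it back into a new estimator.
km.to_json("km_dtw.json")
km_restored = TimeSeriesKMeans.from_json("km_dtw.json")
print(km_restored.cluster_centers_.shape)  # (2, 32, 1)
```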

0.2.4

Fixed

* The `tests` subdirectory is now a Python package and hence included in
wheels

0.2.2

Fixed

* The way the version number is retrieved in `setup.py` was not working
properly on Python 3.4 (and made the install script fail), so the previous
approach was restored

0.2.1

Added

* A `RuntimeWarning` is raised when an `'itakura'` constraint is set
that is unfeasible given the provided shapes.
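
An illustrative sketch of when this warning is expected to fire, using toy
series whose lengths make the constraint unfeasible:

```python
import numpy
from tslearn.metrics import dtw

s_short = numpy.arange(5, dtype=float)
s_long = numpy.arange(50, dtype=float)

# With a maximum slope of 2, aligning a length-5 series with a length-50 one
# is not feasible under the Itakura parallelogram, hence the RuntimeWarning.
dist = dtw(s_short, s_long, global_constraint="itakura", itakura_max_slope=2.0)
print(dist)
```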

Fixed

* `'itakura'` and `'sakoe_chiba'` were swapped in `metrics.compute_mask`
