- Support hashing the `folded_tensor.length` field (via a UserList), which is convenient for caching
- Improve error messaging when refolding with missing dims
0.3.4
- Fix a `data_dims` access issue
- Marginally improve the speed of handling FoldedTensors in standard torch operations
- Use default torch types (e.g. `torch.float32` or `torch.int64`), as in the sketch below
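A minimal sketch of the dtype behavior, assuming the usual torch defaults (`torch.int64` for integer data, `torch.float32` for float data):

```python
from foldedtensor import as_folded_tensor

# Integer input should now default to torch.int64, float input to torch.float32
ints = as_folded_tensor([[1, 2], [3]])
floats = as_folded_tensor([[1.0, 2.0], [3.0]])
print(ints.dtype, floats.dtype)  # expected: torch.int64 torch.float32
```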
0.3.3
- Handle empty inputs (e.g. `as_folded_tensor([[[], []], [[]]])`) by returning an empty tensor
- Correctly bubble up errors when converting inputs with varying nesting depth (e.g. `as_folded_tensor([1, [2, 3]])`), as shown below
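A minimal sketch of both behaviors; the exact exception type raised for mismatched depths is not specified here, so a generic `Exception` is caught:

```python
from foldedtensor import as_folded_tensor

# Fully empty nested input now converts to an empty tensor instead of failing
empty = as_folded_tensor([[[], []], [[]]])
print(empty.shape)

# Inputs with inconsistent nesting depth now surface a clear conversion error
try:
    as_folded_tensor([1, [2, 3]])
except Exception as err:
    print(type(err).__name__, err)
```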
0.3.2
- Allow `as_folded_tensor` to be used with no extra arguments, as a simple padding function (see the sketch below)
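A minimal usage sketch, assuming zero is used as the padding value:

```python
from foldedtensor import as_folded_tensor

# With only the nested data, as_folded_tensor acts as a plain padding function
padded = as_folded_tensor([[1, 2, 3], [4, 5]])
print(padded)
# expected: a 2 x 3 tensor where the shorter row is right-padded with zeros
```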
0.3.1
- Enable sharing FoldedTensor instances in a multiprocessing + CUDA context by auto-cloning the indexer before fork-pickling an instance
- Distribute arm64 wheels for macOS
0.3.0
- Allow dims after the last foldable dim during list conversion (e.g. embeddings), as in the sketch below
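A minimal sketch of a trailing, non-foldable embedding dimension; the dimension names are illustrative, not part of the API:

```python
from foldedtensor import as_folded_tensor

# The two foldable dims are named; the trailing 2-element "embedding" axis
# comes after the last foldable dim and is kept as a regular dense dimension
data = [
    [[1.0, 2.0], [3.0, 4.0]],  # sample 1: two words, each with a 2-dim vector
    [[5.0, 6.0]],              # sample 2: one word
]
ft = as_folded_tensor(
    data,
    data_dims=("sample", "word"),
    full_names=("sample", "word"),
)
print(ft.shape)  # expected: (2, 2, 2), padded along the word dimension
```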