Huggingface-hub

Latest version: v0.26.2


0.22.0

Discuss the release in our [Community Tab](https://huggingface.co/spaces/Wauplin/huggingface_hub/discussions/5). Feedback is welcome! 🤗

✨ InferenceClient

Support for inference tools continues to improve in `huggingface_hub`. On the menu in this release? A new `chat_completion` API and fully typed inputs/outputs!

Chat-completion API!

A long-awaited API has just landed in `huggingface_hub`! `InferenceClient.chat_completion` follows most of OpenAI's API, making it much easier to integrate with existing tools.

Technically speaking, it uses the same backend as the `text-generation` task but requires a preprocessing step to format the list of messages into a single text prompt. The chat template is rendered server-side when models are powered by [TGI](https://huggingface.co/docs/text-generation-inference/index), which is the case for most LLMs: Llama, Zephyr, Mistral, Gemma, etc. Otherwise, the templating happens client-side, which requires the `minijinja` package to be installed. We are actively working on bridging this gap, with the aim of rendering all templates server-side in the future.
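To picture the client-side preprocessing step described above, here is a minimal, hand-rolled sketch of flattening a message list into a single prompt. The Zephyr-style `<|role|>` format is hard-coded purely for illustration; real chat templates ship with each model and are rendered with `minijinja`, not with code like this:

```python
# Illustrative only: a simplified, hand-rolled chat template.
# Actual templates are model-specific Jinja templates rendered by minijinja.
def render_chat_prompt(messages):
    """Flatten a list of chat messages into a single text prompt."""
    parts = []
    for message in messages:
        parts.append(f"<|{message['role']}|>\n{message['content']}")
    # Leave the prompt open so the model generates the assistant's reply
    parts.append("<|assistant|>\n")
    return "\n".join(parts)

messages = [{"role": "user", "content": "What is the capital of France?"}]
print(render_chat_prompt(messages))
```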

```py
>>> from huggingface_hub import InferenceClient
>>> messages = [{"role": "user", "content": "What is the capital of France?"}]
>>> client = InferenceClient("HuggingFaceH4/zephyr-7b-beta")

# Chat completion
>>> client.chat_completion(messages, max_tokens=100)
ChatCompletionOutput(
    choices=[
        ChatCompletionOutputChoice(
            finish_reason='eos_token',
            index=0,
            message=ChatCompletionOutputChoiceMessage(
                content='The capital of France is Paris. The official name of the city is "Ville de Paris" (City of Paris) and the name of the country\'s governing body, which is located in Paris, is "La République française" (The French Republic). \nI hope that helps! Let me know if you need any further information.'
            )
        )
    ],
    created=1710498360
)

# Stream new tokens one by one
>>> for token in client.chat_completion(messages, max_tokens=10, stream=True):
...     print(token)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content='The', role='assistant'), index=0, finish_reason=None)], created=1710498504)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content=' capital', role='assistant'), index=0, finish_reason=None)], created=1710498504)
(...)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content=' may', role='assistant'), index=0, finish_reason=None)], created=1710498504)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content=None, role=None), index=0, finish_reason='length')], created=1710498504)
```


* Implement `InferenceClient.chat_completion` + use new types for text-generation by Wauplin in [2094](https://github.com/huggingface/huggingface_hub/pull/2094)
* Fix InferenceClient.text_generation for non-tgi models by Wauplin in [2136](https://github.com/huggingface/huggingface_hub/pull/2136)
* [#2153](https://github.com/huggingface/huggingface_hub/pull/2153) by Wauplin

Inference types

We are currently working towards more consistency in task definitions across the Hugging Face ecosystem. This is no easy job, but a major milestone has recently been achieved! All inputs and outputs of the main ML tasks are now fully specified as JSON schema objects. This is the first brick needed to have consistent expectations when running inference across our stack: transformers (Python), transformers.js (TypeScript), Inference API (Python), Inference Endpoints (Python), Text Generation Inference (Rust), Text Embeddings Inference (Rust), InferenceClient (Python), Inference.js (TypeScript), etc.

Integrating those definitions will require more work but `huggingface_hub` is one of the first tools to integrate them. As a start, **all `InferenceClient` return values are now typed dataclasses.** Furthermore, typed dataclasses have been generated for all tasks' inputs and outputs. This means you can now integrate them in your own library to ensure consistency with the Hugging Face ecosystem. Specifications are open-source (see [here](https://github.com/huggingface/huggingface.js/tree/main/packages/tasks/src/tasks)) meaning anyone can access and contribute to them. Python's generated classes are documented [here](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_types).

Here is a short example showcasing the new output types:

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.object_detection("people.jpg")
[
    ObjectDetectionOutputElement(
        score=0.9486683011054993,
        label='person',
        box=ObjectDetectionBoundingBox(xmin=59, ymin=39, xmax=420, ymax=510)
    ),
    ...
]
```


Note that those dataclasses are backward-compatible with the dict-based interface that was previously in use. In the example above, both `ObjectDetectionBoundingBox(...).xmin` and `ObjectDetectionBoundingBox(...)["xmin"]` are correct, even though the former should be the preferred solution from now on.
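To picture how a dataclass can keep the dict-based interface working, here is a toy sketch. The class below is hypothetical and only illustrates the pattern; it is not the actual `huggingface_hub` implementation:

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    """Toy dataclass that also supports dict-style item access."""
    xmin: int
    ymin: int
    xmax: int
    ymax: int

    def __getitem__(self, key):
        # Delegate item access to attribute lookup so box["xmin"] == box.xmin
        return getattr(self, key)

box = BoundingBox(xmin=59, ymin=39, xmax=420, ymax=510)
print(box.xmin)     # attribute access (preferred going forward)
print(box["xmin"])  # legacy dict-style access still works
```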

* Generate inference types + start using output types by Wauplin in [2036](https://github.com/huggingface/huggingface_hub/pull/2036)
* Add = None at optional parameters by LysandreJik in [2095](https://github.com/huggingface/huggingface_hub/pull/2095)
* Fix inference types shared between tasks by Wauplin in [2125](https://github.com/huggingface/huggingface_hub/pull/2125)

🧩 ModelHubMixin

[`ModelHubMixin`](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.ModelHubMixin) is an object that can be used as a parent class for the objects in your library in order to provide built-in serialization methods to upload and download pretrained models from the Hub. This mixin is adapted into a [`PyTorchModelHubMixin`](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) that can serialize and deserialize any PyTorch model. The 0.22 release brings its share of improvements to these classes:
1. Better support for init values. If you instantiate a model with custom arguments, the values are automatically stored in a `config.json` file and restored when reloading the model from pretrained weights. This should unlock integrations with external libraries in a much smoother way.
2. Library authors integrating the hub mixin can now define custom metadata for their library: library name, tags, docs URL and repo URL. These need to be defined only once when integrating the library. Any model pushed to the Hub using the library will then be easily discoverable thanks to those tags.
3. A base modelcard is generated for each saved model. This modelcard includes default tags (e.g. `model_hub_mixin`) and custom tags from the library (see 2.). You can extend/modify this modelcard by overriding the `generate_model_card` method.

```python
>>> import torch
>>> import torch.nn as nn
>>> from huggingface_hub import PyTorchModelHubMixin


# Define your Pytorch model exactly the same way you are used to
>>> class MyModel(
...         nn.Module,
...         PyTorchModelHubMixin,  # multiple inheritance
...         library_name="keras-nlp",
...         tags=["keras"],
...         repo_url="https://github.com/keras-team/keras-nlp",
...         docs_url="https://keras.io/keras_nlp/",
...         # ^ optional metadata to generate model card
...     ):
...     def __init__(self, hidden_size: int = 512, vocab_size: int = 30000, output_size: int = 4):
...         super().__init__()
...         self.param = nn.Parameter(torch.rand(hidden_size, vocab_size))
...         self.linear = nn.Linear(vocab_size, output_size)
...
...     def forward(self, x):
...         return self.linear(x + self.param)

# 1. Create model
>>> model = MyModel(hidden_size=128)

# Config is automatically created based on input + default values
>>> model._hub_mixin_config
{"hidden_size": 128, "vocab_size": 30000, "output_size": 4}

# 2. (optional) Save model to local directory
>>> model.save_pretrained("path/to/my-awesome-model")

# 3. Push model weights to the Hub
>>> model.push_to_hub("my-awesome-model")

# 4. Initialize model from the Hub => config has been preserved
>>> model = MyModel.from_pretrained("username/my-awesome-model")
>>> model._hub_mixin_config
{"hidden_size": 128, "vocab_size": 30000, "output_size": 4}

# Model card has been correctly populated
>>> from huggingface_hub import ModelCard
>>> card = ModelCard.load("username/my-awesome-model")
>>> card.data.tags
["keras", "pytorch_model_hub_mixin", "model_hub_mixin"]
>>> card.data.library_name
"keras-nlp"
```


For more details on how to integrate these classes, check out the [integration guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/integrations#a-more-complex-approach-class-inheritance).

* Fix `ModelHubMixin`: pass config when `__init__` accepts `**kwargs` by Wauplin in [2058](https://github.com/huggingface/huggingface_hub/pull/2058)
* [PyTorchModelHubMixin] Fix saving model with shared tensors by NielsRogge in [2086](https://github.com/huggingface/huggingface_hub/pull/2086)
* Correctly inject config in `PytorchModelHubMixin` by Wauplin in [2079](https://github.com/huggingface/huggingface_hub/pull/2079)
* Fix passing kwargs in PytorchHubMixin by Wauplin in [2093](https://github.com/huggingface/huggingface_hub/pull/2093)
* Generate modelcard in `ModelHubMixin` by Wauplin in [2080](https://github.com/huggingface/huggingface_hub/pull/2080)
* Fix ModelHubMixin: save config only if doesn't exist by Wauplin in [2105](https://github.com/huggingface/huggingface_hub/pull/2105)
* Fix ModelHubMixin - kwargs should be passed correctly when reloading by Wauplin in [2099](https://github.com/huggingface/huggingface_hub/pull/2099)
* Fix ModelHubMixin when kwargs and config are both passed by Wauplin in [2138](https://github.com/huggingface/huggingface_hub/pull/2138)
* ModelHubMixin overwrite config if preexistant by Wauplin in [2142](https://github.com/huggingface/huggingface_hub/pull/2142)

🛠️ Misc improvements

`HfFileSystem` download speed was limited by some internal logic in `fsspec`. We've now updated the `get_file` and `read` implementations to improve their download speed to a level similar to `hf_hub_download`.

* Fast download in hf file system by Wauplin in [2143](https://github.com/huggingface/huggingface_hub/pull/2143)

We are aiming to move all errors raised by `huggingface_hub` into a single module, `huggingface_hub.errors`, to improve the developer experience. This work started as a community contribution from Y4suyuki.

* Start defining custom errors in one place by Y4suyuki in [2122](https://github.com/huggingface/huggingface_hub/pull/2122)
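The pattern being aimed for can be sketched as a single module exposing one exception hierarchy, so callers have one import location and one base class to catch. This is an illustrative mock-up with made-up class names, not the actual contents of `huggingface_hub.errors`:

```python
# errors.py -- hypothetical sketch of a centralized errors module.
# Grouping every exception under one base class lets callers catch
# either a specific failure or anything raised by the library.

class HubError(Exception):
    """Base class for every error raised by the library."""

class RepositoryNotFoundError(HubError):
    """Raised when the requested repo does not exist."""

class RevisionNotFoundError(HubError):
    """Raised when the requested revision does not exist."""

# Callers can now catch broadly without importing each error type:
try:
    raise RevisionNotFoundError("unknown revision 'not-a-branch'")
except HubError as err:
    print(type(err).__name__, err)
```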

The `HfApi` class now accepts a `headers` parameter that is then passed to every HTTP call made to the Hub.

* Allow passing custom headers to HfApi by Wauplin in [2098](https://github.com/huggingface/huggingface_hub/pull/2098)
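Conceptually, client-level headers just get merged with per-call headers before each request goes out. A small sketch of the assumed merge semantics, with per-call values winning over client defaults (an illustration, not the library's actual code):

```python
def merge_headers(client_headers, call_headers=None):
    """Combine client-wide default headers with per-call overrides."""
    merged = dict(client_headers)
    merged.update(call_headers or {})  # per-call values take precedence
    return merged

defaults = {"user-agent": "my-app/1.0", "x-custom": "abc"}
print(merge_headers(defaults, {"x-custom": "override"}))
```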

📚 More documentation in Korean!

* [i18n-KO] Translated `package_reference/overview.md` to Korean by jungnerd in [2113](https://github.com/huggingface/huggingface_hub/pull/2113)

💔 Breaking changes

- The new types returned by `InferenceClient` methods should be backward compatible; in particular, values can still be accessed either as attributes (`.my_field`) or as items (i.e. `["my_field"]`). However, dataclasses and dicts do not always behave exactly the same, so you might notice some breaking changes. Those breaking changes should be very limited.

- `ModelHubMixin` internals changed quite a bit, breaking *some* use cases. We don't believe these were widely used, and changing them should benefit 99% of integrations. If you witness any inconsistency or error in your integration, please let us know and we will do our best to mitigate the problem. One of the biggest changes is that the config values are no longer attached to the mixin instance as `instance.config` but as `instance._hub_mixin_config`. The `.config` attribute was mistakenly introduced in `0.20.x`, so we hope it has not been used much yet.

- `huggingface_hub.file_download.http_user_agent` has been removed in favor of the officially documented `huggingface_hub.utils.build_hf_headers`. It had been deprecated since `0.18.x`.

Small fixes and maintenance

⚙️ CI optimization

The CI pipeline has been greatly improved, especially thanks to the efforts from bmuskalla. Most tests now pass in under 3 minutes, against 8 to 10 minutes previously. Some long-running tests have been greatly simplified, and all tests now run in parallel with `python-xdist`, thanks to complete decoupling between them.

We are now also using the great [`uv`](https://github.com/astral-sh/uv) installer instead of `pip` in our CI, which saves around 30-40s per pipeline.

* More optimized tests by Wauplin in [2054](https://github.com/huggingface/huggingface_hub/pull/2054)
* Enable `python-xdist` on all tests by bmuskalla in [2059](https://github.com/huggingface/huggingface_hub/pull/2059)
* do not list all models by Wauplin in [2061](https://github.com/huggingface/huggingface_hub/pull/2061)
* update ruff by Wauplin in [2071](https://github.com/huggingface/huggingface_hub/pull/2071)
* Use uv in CI to speed-up requirements install by Wauplin in [2072](https://github.com/huggingface/huggingface_hub/pull/2072)


⚙️ fixes
* Fix Space variable when updatedAt is missing by Wauplin in [2050](https://github.com/huggingface/huggingface_hub/pull/2050)
* Fix tests involving temp directory on macOS by bmuskalla in [2052](https://github.com/huggingface/huggingface_hub/pull/2052)
* fix glob no magic by lhoestq in [2056](https://github.com/huggingface/huggingface_hub/pull/2056)
* Point out that the token must have write scope by bmuskalla in [2053](https://github.com/huggingface/huggingface_hub/pull/2053)
* Fix commonpath in read-only filesystem by stevelaskaridis in [2073](https://github.com/huggingface/huggingface_hub/pull/2073)
* rm unnecessary early makedirs by poedator in [2092](https://github.com/huggingface/huggingface_hub/pull/2092)
* Fix unhandled filelock issue by Wauplin in [2108](https://github.com/huggingface/huggingface_hub/pull/2108)
* Handle .DS_Store files in _scan_cache_repos by sealad886 in [2112](https://github.com/huggingface/huggingface_hub/pull/2112)
* Fix REPO_API_REGEX by Wauplin in [2119](https://github.com/huggingface/huggingface_hub/pull/2119)
* Fix uploading to HF proxy by Wauplin in [2120](https://github.com/huggingface/huggingface_hub/pull/2120)
* Fix --delete in huggingface-cli upload command by Wauplin in [2129](https://github.com/huggingface/huggingface_hub/pull/2129)
* Explicitly fail on Keras3 by Wauplin in [2107](https://github.com/huggingface/huggingface_hub/pull/2107)
* Fix serverless naming by Wauplin in [2137](https://github.com/huggingface/huggingface_hub/pull/2137)

⚙️ internal
* tag as 0.22.0.dev + remove deprecated code by Wauplin in [2049](https://github.com/huggingface/huggingface_hub/pull/2049)
* Some cleaning by Wauplin in [2070](https://github.com/huggingface/huggingface_hub/pull/2070)
* Fix test test_delete_branch_on_missing_branch_fails by Wauplin in [2088](https://github.com/huggingface/huggingface_hub/pull/2088)


Significant community contributions

The following contributors have made significant changes to the library over the last release:

* Y4suyuki
* Start defining custom errors in one place ([2122](https://github.com/huggingface/huggingface_hub/pull/2122))
* bmuskalla
* Enable `python-xdist` on all tests ([2059](https://github.com/huggingface/huggingface_hub/pull/2059))

0.21.4

Release v0.21 introduced a breaking change making it impossible to save a `PytorchModelHubMixin`-based model that has shared tensors. This has been fixed in https://github.com/huggingface/huggingface_hub/pull/2086.

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v0.21.3...v0.21.4

0.21.3

More details in https://github.com/huggingface/huggingface_hub/pull/2058.

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v0.21.2...v0.21.3

0.21.2

See https://github.com/huggingface/huggingface_hub/pull/2056. (+https://github.com/huggingface/huggingface_hub/pull/2050 shipped as v0.21.1).

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v0.21.0...v0.21.2

0.21.0

Discuss the release [in our Community Tab](https://huggingface.co/spaces/Wauplin/huggingface_hub/discussions/4). Feedback is welcome! 🤗

🖇️ Dataclasses everywhere!

All objects returned by the `HfApi` client are now dataclasses!

In the past, objects were variously dataclasses, typed dictionaries, non-typed dictionaries, or even plain classes. This is now all harmonized, with the goal of improving the developer experience.

Kudos goes to the community for implementing and testing the whole harmonization process. Thanks again for the contributions!

* Use dataclasses for all objects returned by HfApi [1911](https://github.com/huggingface/huggingface_hub/pull/1911) by Ahmedniz1 in [#1974](https://github.com/huggingface/huggingface_hub/pull/1974)
* Updating HfApi objects to use dataclass by Ahmedniz1 in [1988](https://github.com/huggingface/huggingface_hub/pull/1988)
* Dataclasses for objects returned hf api by NouamaneELGueddarii in [1993](https://github.com/huggingface/huggingface_hub/pull/1993)

💾 FileSystem

The [`HfFileSystem`](https://huggingface.co/docs/huggingface_hub/main/en/guides/hf_file_system) class implements the [`fsspec`](https://filesystem-spec.readthedocs.io/en/latest/) interface to allow loading and writing files with a filesystem-like interface. The interface is heavily used by the `datasets` library, and this release further improves the efficiency and robustness of the integration.

* Pass revision in path to AbstractBufferedFile init by albertvillanova in [1948](https://github.com/huggingface/huggingface_hub/pull/1948)
* [HfFileSystem] Fix `rm` on branch by lhoestq in [1957](https://github.com/huggingface/huggingface_hub/pull/1957)
* Retry fetching data on 502 error in `HfFileSystem` by mariosasko in [1981](https://github.com/huggingface/huggingface_hub/pull/1981)
* Add HfFileSystemStreamFile by lhoestq in [1967](https://github.com/huggingface/huggingface_hub/pull/1967)
* [HfFileSystem] Copy non lfs files by lhoestq in [1996](https://github.com/huggingface/huggingface_hub/pull/1996)
* Add `HfFileSystem.url` method by mariosasko in [2027](https://github.com/huggingface/huggingface_hub/pull/2027)

🧩 Pytorch Hub Mixin

The [`PyTorchModelHubMixin`](https://huggingface.co/docs/huggingface_hub/main/en/guides/integrations#a-concrete-example-pytorch) class lets you upload ANY PyTorch model to the Hub in a few lines of code. More precisely, it is a class that any `nn.Module` can inherit from to add the `from_pretrained`, `save_pretrained` and `push_to_hub` helpers to your class. It handles serialization and deserialization of weights and configs for you and enables download counts on the Hub.

With this release, we've fixed 2 pain points holding users back from using this mixin:
1. Configs are now better handled. The mixin automatically detects if the base class defines a config, saves it on the Hub and then injects it at load time, either as a dictionary or a dataclass depending on the base class's expectations.
2. Weights are now saved as `.safetensors` files instead of PyTorch pickles for safety reasons. Loading from previous PyTorch pickles is still supported, but we are moving toward completely deprecating them in the mid to long term.

* Better config support in ModelHubMixin by Wauplin in [2001](https://github.com/huggingface/huggingface_hub/pull/2001)
* Use safetensors by default for `PyTorchModelHubMixin` by bmuskalla in [2033](https://github.com/huggingface/huggingface_hub/pull/2033)

✨ InferenceClient improvements

The audio-to-audio task is now supported by the `InferenceClient`!

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> audio_output = client.audio_to_audio("audio.flac")
>>> for i, item in enumerate(audio_output):
...     with open(f"output_{i}.flac", "wb") as f:
...         f.write(item["blob"])
```


* Added audio to audio in inference client by Ahmedniz1 in [2020](https://github.com/huggingface/huggingface_hub/pull/2020)

Also fixed a few things:
* Fix intolerance for new field in TGI stream response: 'index' by danielpcox in [2006](https://github.com/huggingface/huggingface_hub/pull/2006)
* Fix optional model in tabular tasks by Wauplin in [2018](https://github.com/huggingface/huggingface_hub/pull/2018)
* Added best_of to non-TGI ignored parameters by dopc in [1949](https://github.com/huggingface/huggingface_hub/pull/1949)

📤 Model serialization

With the aim of harmonizing repo structures and file serialization on the Hub, we added a new `serialization` module with a first helper, [`split_state_dict_into_shards`](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/serialization), that takes a state dict and splits it into shards. The implementation is mostly taken from `transformers` and aims to be reused by other libraries in the ecosystem. It seamlessly supports `torch`, `tensorflow` and `numpy` weights, and can be easily extended to other frameworks.

This is a first step in the harmonization process and more loading/saving helpers will be added soon.

* Framework-agnostic `split_state_dict_into_shards` helper by Wauplin in [1938](https://github.com/huggingface/huggingface_hub/pull/1938)
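The core idea behind such a shard-splitting helper is plain byte-size bookkeeping: walk the state dict in order and start a new shard whenever adding the next tensor would exceed the size budget. A framework-agnostic sketch of that idea (illustrative only; tensor sizes are passed in directly here instead of being computed from real tensors, and the function name is made up):

```python
def split_into_shards(tensor_sizes, max_shard_size):
    """Group tensor names into shards of at most max_shard_size bytes.

    tensor_sizes: dict mapping tensor name -> size in bytes.
    A single tensor larger than the budget still gets its own shard.
    """
    shards, current, current_size = [], [], 0
    for name, size in tensor_sizes.items():
        # Flush the current shard if this tensor would overflow it
        if current and current_size + size > max_shard_size:
            shards.append(current)
            current, current_size = [], 0
        current.append(name)
        current_size += size
    if current:
        shards.append(current)
    return shards

sizes = {"embed": 400, "layer1": 300, "layer2": 300, "head": 500}
print(split_into_shards(sizes, max_shard_size=700))
```

The real helper additionally produces an index file mapping each tensor to its shard, but the grouping logic follows this shape.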

📚 Documentation

🌐 Translations

The community is actively working to translate `huggingface_hub` into other languages. We now have docs available in Simplified Chinese ([here](https://huggingface.co/docs/huggingface_hub/main/cn/index)) and in French ([here](https://huggingface.co/docs/huggingface_hub/main/fr/index)) to help democratize good machine learning!

* [i18n-CN] Translated some files to simplified Chinese [1915](https://github.com/huggingface/huggingface_hub/pull/1915) by 2404589803 in [#1916](https://github.com/huggingface/huggingface_hub/pull/1916)
* Update .github workflow to build cn docs on PRs by Wauplin in [1931](https://github.com/huggingface/huggingface_hub/pull/1931)
* [i18n-FR] Translated files in french and reviewed them by JibrilEl in [2024](https://github.com/huggingface/huggingface_hub/pull/2024)

Docs misc

* Document `base_model` in modelcard metadata by Wauplin in [1936](https://github.com/huggingface/huggingface_hub/pull/1936)
* Update the documentation of add_collection_item by FremyCompany in [1958](https://github.com/huggingface/huggingface_hub/pull/1958)
* Docs[i18n-en]: added pkgx as an installation method to the docs by michaelessiet in [1955](https://github.com/huggingface/huggingface_hub/pull/1955)
* Added `hf_transfer` extra into `setup.py` and `docs/` by jamesbraza in [1970](https://github.com/huggingface/huggingface_hub/pull/1970)
* Documenting CLI default for `download --repo-type` by jamesbraza in [1986](https://github.com/huggingface/huggingface_hub/pull/1986)
* Update repository.md by xmichaelmason in [2010](https://github.com/huggingface/huggingface_hub/pull/2010)

Docs fixes

* Fix URL in `get_safetensors_metadata` docstring by Wauplin in [1951](https://github.com/huggingface/huggingface_hub/pull/1951)
* Fix grammar by Anthonyg5005 in [2003](https://github.com/huggingface/huggingface_hub/pull/2003)
* Fix doc by jordane95 in [2013](https://github.com/huggingface/huggingface_hub/pull/2013)
* typo fix by Decryptu in [2035](https://github.com/huggingface/huggingface_hub/pull/2035)

🛠️ Misc improvements

Creating a commit with an invalid README will fail early instead of uploading all LFS files before failing to commit.

* Fail early on invalid metadata by Wauplin in [1934](https://github.com/huggingface/huggingface_hub/pull/1934)

Added a `revision_exists` helper, working similarly to `repo_exists` and `file_exists`:

```py
>>> from huggingface_hub import revision_exists
>>> revision_exists("google/gemma-7b", "float16")
True
>>> revision_exists("google/gemma-7b", "not-a-revision")
False
```


* Add `revision_exists` helper by Wauplin in [2042](https://github.com/huggingface/huggingface_hub/pull/2042)

`InferenceEndpoint.wait(...)` now raises an error if the endpoint is in a failed state.

* raise on failed inference endpoint by Wauplin in [1935](https://github.com/huggingface/huggingface_hub/pull/1935)

Improved the progress bar shown when downloading a file.

* improve http_get by Wauplin in [1954](https://github.com/huggingface/huggingface_hub/pull/1954)

Other stuff:
* added will not echo message to the login token message by vtrenton in [1925](https://github.com/huggingface/huggingface_hub/pull/1925)
* Raise if repo is disabled by Wauplin in [1965](https://github.com/huggingface/huggingface_hub/pull/1965)
* Fix timezone in datetime parsing by Wauplin in [1982](https://github.com/huggingface/huggingface_hub/pull/1982)
* retry on any 5xx on upload by Wauplin in [2026](https://github.com/huggingface/huggingface_hub/pull/2026)


💔 Breaking changes

- Classes `ModelFilter` and `DatasetFilter` are deprecated when listing models and datasets in favor of a simpler API that lets you pass the parameters directly to `list_models` and `list_datasets`.

```py
>>> from huggingface_hub import list_models, ModelFilter

# use
>>> list_models(language="zh")
# instead of
>>> list_models(filter=ModelFilter(language="zh"))
```


Cleaner, right? `ModelFilter` and `DatasetFilter` will still be supported until `v0.24` release.

* Deprecate `ModelFilter/DatasetFilter` by druvdub in [2028](https://github.com/huggingface/huggingface_hub/pull/2028)
* List models tweaks by julien-c in [2044](https://github.com/huggingface/huggingface_hub/pull/2044)

- In the inference client, [`ModelStatus.compute_type`](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.inference._common.ModelStatus) is not a string anymore but a dictionary with more detailed information (instance type + number of replicas). This breaking change reflects a server-side update.

* Fix ModelStatus compute type by Wauplin in [2047](https://github.com/huggingface/huggingface_hub/pull/2047)

Small fixes and maintenance

⚙️ fixes

* Make GitRefs backward comp by Wauplin in [1960](https://github.com/huggingface/huggingface_hub/pull/1960)
* Fix pagination when listing discussions by Wauplin in [1962](https://github.com/huggingface/huggingface_hub/pull/1962)
* Fix inconsistent `warnings.warn` in repocard.py by Wauplin in [1980](https://github.com/huggingface/huggingface_hub/pull/1980)
* fix: actual error won't be raised while `force_download=True` by scruel in [1983](https://github.com/huggingface/huggingface_hub/pull/1983)
* Fix download from private renamed repo by Wauplin in [1999](https://github.com/huggingface/huggingface_hub/pull/1999)
* Disable tqdm progress bar if no TTY attached by mssalvatore in [2000](https://github.com/huggingface/huggingface_hub/pull/2000)
* Deprecate legacy parameters in update_repo_visibility by Wauplin in [2014](https://github.com/huggingface/huggingface_hub/pull/2014)
* Fix getting widget_data from model_info by Wauplin in [2041](https://github.com/huggingface/huggingface_hub/pull/2041)

⚙️ internal

* prepare for 0.21.0 by Wauplin in [1928](https://github.com/huggingface/huggingface_hub/pull/1928)
* Remove PRODUCTION_TOKEN by Wauplin in [1937](https://github.com/huggingface/huggingface_hub/pull/1937)
* Add reminder for model card consistency by Wauplin in [1979](https://github.com/huggingface/huggingface_hub/pull/1979)
* Finished migration from `setup.cfg` to `pyproject.toml` by jamesbraza in [1971](https://github.com/huggingface/huggingface_hub/pull/1971)
* Newer `pre-commit` by jamesbraza in [1987](https://github.com/huggingface/huggingface_hub/pull/1987)
* Removed now unnecessary setup.cfg path variable by jamesbraza in [1990](https://github.com/huggingface/huggingface_hub/pull/1990)
* Added `toml-sort` tool by jamesbraza in [1972](https://github.com/huggingface/huggingface_hub/pull/1972)
* update name of dummy dataset user by Wauplin in [2019](https://github.com/huggingface/huggingface_hub/pull/2019)

Significant community contributions

The following contributors have made significant changes to the library over the last release:

* 2404589803
* [i18n-CN] Translated some files to simplified Chinese [1915](https://github.com/huggingface/huggingface_hub/pull/1915) ([#1916](https://github.com/huggingface/huggingface_hub/pull/1916))
* jamesbraza
* Added `hf_transfer` extra into `setup.py` and `docs/` ([1970](https://github.com/huggingface/huggingface_hub/pull/1970))
* Finished migration from `setup.cfg` to `pyproject.toml` ([1971](https://github.com/huggingface/huggingface_hub/pull/1971))
* Documenting CLI default for `download --repo-type` ([1986](https://github.com/huggingface/huggingface_hub/pull/1986))
* Newer `pre-commit` ([1987](https://github.com/huggingface/huggingface_hub/pull/1987))
* Removed now unnecessary setup.cfg path variable ([1990](https://github.com/huggingface/huggingface_hub/pull/1990))
* Added `toml-sort` tool ([1972](https://github.com/huggingface/huggingface_hub/pull/1972))
* Ahmedniz1
* Use dataclasses for all objects returned by HfApi [1911](https://github.com/huggingface/huggingface_hub/pull/1911) ([#1974](https://github.com/huggingface/huggingface_hub/pull/1974))
* Updating HfApi objects to use dataclass ([1988](https://github.com/huggingface/huggingface_hub/pull/1988))
* Added audio to audio in inference client ([2020](https://github.com/huggingface/huggingface_hub/pull/2020))
* druvdub
* Deprecate `ModelFilter/DatasetFilter` ([2028](https://github.com/huggingface/huggingface_hub/pull/2028))
* JibrilEl
* [i18n-FR] Translated files in french and reviewed them ([2024](https://github.com/huggingface/huggingface_hub/pull/2024))
* bmuskalla
* Use safetensors by default for `PyTorchModelHubMixin` ([2033](https://github.com/huggingface/huggingface_hub/pull/2033))

0.20.3

This patch release fixes an issue when retrieving the locally saved token using `huggingface_hub.HfFolder.get_token`. For the record, this method is planned for deprecation in favor of `huggingface_hub.get_token`, which is more robust and versatile. The issue came from a breaking change introduced in https://github.com/huggingface/huggingface_hub/pull/1895, which means only `0.20.x` is affected.

For more details, please refer to https://github.com/huggingface/huggingface_hub/pull/1966.

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v0.20.2...v0.20.3
