Huggingface-hub

Latest version: v0.30.1


```
You are about to delete tag v1.0 on model Wauplin/my-cool-model
Proceed? [Y/n] y
Tag v1.0 deleted on Wauplin/my-cool-model
```


For more details, check out the [CLI guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/cli#huggingface-cli-tag).

* CLI Tag Functionality by bilgehanertan in 2172

🧩 ModelHubMixin

The `ModelHubMixin` got a set of nice improvements to generate model cards and handle custom data types in the `config.json` file. More info in the [integration guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/integrations#advanced-usage).

* `ModelHubMixin`: more metadata + arbitrary config types + proper guide by Wauplin in 2230
* Fix ModelHubMixin when class is a dataclass by Wauplin in 2159
* Do not document private attributes of ModelHubMixin by Wauplin in 2216
* Add support for pipeline_tag in ModelHubMixin by Wauplin in 2228

βš™οΈ Other

In a shared environment, it is now possible to set a custom path as the `HF_TOKEN_PATH` environment variable so that each user of the cluster has their own access token.

* Support `HF_TOKEN_PATH` as environment variable by Wauplin in 2185
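
The resolution logic boils down to "use the custom path if set, otherwise fall back to the default cache location". A minimal sketch of that behavior (illustrative, not the library's exact code):

```python
import os
from pathlib import Path

def resolve_token_path() -> Path:
    """Return the access token path, honoring HF_TOKEN_PATH if set.

    Falls back to the default location used by `huggingface_hub`.
    """
    custom = os.environ.get("HF_TOKEN_PATH")
    if custom:
        return Path(custom).expanduser()
    return Path.home() / ".cache" / "huggingface" / "token"
```

On a shared cluster, each user would simply `export HF_TOKEN_PATH=/home/<user>/.hf_token` in their shell profile.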

Thanks to Y4suyuki and lappemic, most custom errors defined in `huggingface_hub` are now aggregated in a single module. This makes it easy to import them with `from huggingface_hub.errors import ...`.

* Define errors in errors.py by Y4suyuki in 2170
* Define errors in errors file by lappemic in 2202
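
Centralizing exceptions gives callers a single import site and a common base class to catch. A sketch of the pattern (class names mirror real ones such as `RepositoryNotFoundError`, but this is an illustrative stand-in, not the actual module):

```python
# errors.py: one module aggregating a library's custom exceptions
class HfHubError(Exception):
    """Base class so callers can catch any hub-related error."""

class RepositoryNotFoundError(HfHubError):
    """Raised when a repo cannot be found (or requires authentication)."""

class GatedRepoError(RepositoryNotFoundError):
    """Raised when a repo exists but access is gated."""

# Hypothetical caller: one import line covers all error handling,
# e.g. `from huggingface_hub.errors import RepositoryNotFoundError`
def fetch_repo(repo_id: str) -> str:
    if repo_id.startswith("gated/"):
        raise GatedRepoError(f"Access to {repo_id} is restricted")
    return repo_id
```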

Fixed `HFSummaryWriter` (a class to seamlessly log tensorboard events to the Hub) to work with either the `tensorboardX` or the `torch.utils.tensorboard` implementation, depending on the user's setup.

* Import SummaryWriter from either tensorboardX or torch.utils by Wauplin in 2205

Listing files with `HfFileSystem` is now drastically faster, thanks to awgr. Values returned from the cache are no longer deep-copied, which was unfortunately the most time-consuming part of the process. If you want to modify values returned by `HfFileSystem`, you will need to copy them beforehand. This is expected to be a very limited drawback.

* fix: performance of _ls_tree by awgr in 2103
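
The trade-off is easy to reproduce: when a cache hands back its internal objects instead of deep copies, in-place edits by the caller leak into the cache. A stdlib toy illustrating the caveat (not `HfFileSystem`'s actual code):

```python
import copy

class ListingCache:
    """Toy cache that, like the optimized listing, returns stored values directly."""
    def __init__(self):
        self._store = {"repo": [{"name": "a.txt", "size": 1}]}

    def ls(self, path):
        return self._store[path]  # no deep copy: fast, but shared with the caller

cache = ListingCache()
entries = cache.ls("repo")
entries[0]["size"] = 999                 # mutates the cached value too!
assert cache.ls("repo")[0]["size"] == 999

safe = copy.deepcopy(cache.ls("repo"))   # copy before modifying...
safe[0]["size"] = 1                      # ...and the cache stays untouched
```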

Progress bars in `huggingface_hub` got some flexibility!
It is now possible to provide a name to a tqdm bar (similar to `logging.getLogger`) and to enable/disable only some progress bars. More details in [this guide](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/utilities#configure-progress-bars).

```py
>>> from huggingface_hub.utils import tqdm, disable_progress_bars
>>> disable_progress_bars("peft.foo")

# No progress bars for `peft.foo.bar`
>>> for _ in tqdm(range(5), name="peft.foo.bar"):
...     pass

# But for `peft` yes
>>> for _ in tqdm(range(5), name="peft"):
...     pass
100%|█████████████████| 5/5 [00:00<00:00, 117817.53it/s]
```


* Implement hierarchical progress bar control in huggingface_hub by lappemic in 2217
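
The hierarchy works like `logging` logger names: disabling `peft.foo` also disables `peft.foo.bar`, but leaves `peft` itself enabled. A rough sketch of that prefix matching (illustrative, not `huggingface_hub`'s implementation):

```python
_disabled: set = set()

def disable(name: str) -> None:
    """Mark a progress-bar group (and implicitly its children) as disabled."""
    _disabled.add(name)

def is_disabled(name: str) -> bool:
    """A bar is disabled if its name or any dotted ancestor was disabled."""
    parts = name.split(".")
    return any(".".join(parts[:i]) in _disabled for i in range(1, len(parts) + 1))

disable("peft.foo")
assert is_disabled("peft.foo.bar")   # child of a disabled group
assert not is_disabled("peft")       # parent stays enabled
```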

💔 Breaking changes

`--local-dir-use-symlinks` and `--resume-download`

As part of the download process revamp, some breaking changes have been introduced. However, we believe the benefits outweigh the cost of the change. Breaking changes include:
- a `.cache/huggingface/` folder is now present at the root of the local dir. It only contains file locks, metadata, and partially downloaded files. If you need to, you can safely delete this folder without corrupting the data inside the root folder. However, you should expect a longer recovery time if you re-run your download command.
- `--local-dir-use-symlinks` is not used anymore and will be ignored. It is no longer possible to symlink your local dir to the cache directory. Thanks to the `.cache/huggingface/` folder, it shouldn't be needed anyway.
- `--resume-download` has been deprecated and will be ignored. Resuming failed downloads is now always activated by default. If you need to force a new download, use `--force-download`.
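
If you ever need to reclaim space, the bookkeeping folder mentioned above can be cleared with a few lines (a sketch: per the notes, deleting it never corrupts downloaded data, it only slows down resuming):

```python
import shutil
from pathlib import Path

def clear_download_metadata(local_dir: str) -> bool:
    """Remove the `.cache/huggingface/` bookkeeping folder inside a local dir.

    Returns True if the folder existed and was deleted, False otherwise.
    """
    meta = Path(local_dir) / ".cache" / "huggingface"
    if meta.is_dir():
        shutil.rmtree(meta)
        return True
    return False
```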

Inference Types

As part of 2237 (Grammar and Tools support), we've updated the return values of `InferenceClient.chat_completion` and `InferenceClient.text_generation` to exactly match the TGI output. The attributes of the returned objects did not change, but the class definitions themselves did. Expect errors if you previously had `from huggingface_hub import TextGenerationOutput` in your code. This is however not a common usage, since those objects are instantiated by `huggingface_hub` directly.

Expected breaking changes

Some other breaking changes were expected (and announced since 0.19.x):
- `list_files_info` is definitively removed in favor of `get_paths_info` and `list_repo_tree`
- `WebhookServer.run` is definitively removed in favor of `WebhookServer.launch`
- `api_endpoint` in ModelHubMixin `push_to_hub`'s method is definitively removed in favor of the `HF_ENDPOINT` environment variable

Check 2156 for more details.

Small fixes and maintenance

βš™οΈ CI optimization

βš™οΈ fixes
* Fix HF_ENDPOINT not handled correctly by Wauplin in 2155
* Fix proxy if dynamic endpoint by Wauplin (direct commit on main)
* Update the note message when logging in to make it easier to understand and clearer by lh0x00 in 2163
* Fix URL when uploading to proxy by Wauplin in 2167
* Fix SafeTensorsInfo initialization by Wauplin in 2190
* Doc cli download timeout by zioalex in 2198
* Fix Typos in CONTRIBUTION.md and Formatting in README.md by lappemic in 2201
* change default model card by Wauplin (direct commit on main)
* Add returns documentation for save_pretrained by alexander-soare in 2226
* Update cli.md by QuinnPiers in 2242
* add warning tip that list_deployed_models only searches over cache by MoritzLaurer in 2241
* Respect default timeouts in `hf_file_system` by Wauplin in 2253
* Update harmonized token param desc and type def by lappemic in 2252
* Better document download attribute by Wauplin in 2250
* Correctly check inference endpoint is ready by Wauplin in 2229
* Add support for `updatedRefs` in WebhookPayload by Wauplin in 2169

βš™οΈ internal
* prepare for 0.23 by Wauplin in 2156
* lint by Wauplin (direct commit on main)
* quick fix by Wauplin (direct commit on main)
* Fix CI (inference tests, dataset viewer user, mypy) by Wauplin in 2208
* link by Wauplin (direct commit on main)
* Fix circular imports in eager mode? by Wauplin in 2211
* Drop generic from InferenceAPI framework list by Wauplin in 2240
* Remove test sort by acsending likes by Wauplin in 2243
* Delete legacy tests in `TestHfHubDownloadRelativePaths` + implicit delete folder is ok by Wauplin in 2259
* small doc clarification by julien-c in [2261](https://github.com/huggingface/huggingface_hub/pull/2261)

Significant community contributions

The following contributors have made significant changes to the library over the last release:

* lappemic
* Fix Typos in CONTRIBUTION.md and Formatting in README.md ([2201](https://github.com/huggingface/huggingface_hub/pull/2201))
* Define errors in errors file ([2202](https://github.com/huggingface/huggingface_hub/pull/2202))
* [wip] Implement hierarchical progress bar control in huggingface_hub ([2217](https://github.com/huggingface/huggingface_hub/pull/2217))
* Update harmonized token param desc and type def ([2252](https://github.com/huggingface/huggingface_hub/pull/2252))
* bilgehanertan
* User API endpoints ([2147](https://github.com/huggingface/huggingface_hub/pull/2147))
* CLI Tag Functionality ([2172](https://github.com/huggingface/huggingface_hub/pull/2172))
* cjfghk5697
* 🌐 [i18n-KO] Translated `guides/repository.md` to Korean ([2124](https://github.com/huggingface/huggingface_hub/pull/2124))
* 🌐 [i18n-KO] Translated `package_reference/inference_client.md` to Korean ([2178](https://github.com/huggingface/huggingface_hub/pull/2178))
* 🌐 [i18n-KO] Translated `package_reference/utilities.md` to Korean ([2196](https://github.com/huggingface/huggingface_hub/pull/2196))
* SeungAhSon
* 🌐 [i18n-KO] Translated `guides/model_cards.md` to Korean ([2128](https://github.com/huggingface/huggingface_hub/pull/2128))
* 🌐 [i18n-KO] Translated `reference/login.md` to Korean ([2151](https://github.com/huggingface/huggingface_hub/pull/2151))
* 🌐 [i18n-KO] Translated package_reference/hf_file_system.md to Korean ([2174](https://github.com/huggingface/huggingface_hub/pull/2174))
* seoulsky-field
* 🌐 [i18n-KO] Translated `guides/community.md` to Korean ([2126](https://github.com/huggingface/huggingface_hub/pull/2126))
* Y4suyuki
* Define errors in errors.py ([2170](https://github.com/huggingface/huggingface_hub/pull/2170))
* harheem
* 🌐 [i18n-KO] Translated `guides/cli.md` to Korean ([2131](https://github.com/huggingface/huggingface_hub/pull/2131))
* 🌐 [i18n-KO] Translated `reference/inference_endpoints.md` to Korean ([2180](https://github.com/huggingface/huggingface_hub/pull/2180))
* seoyoung-3060
* 🌐 [i18n-KO] Translated `guides/search.md` to Korean ([2134](https://github.com/huggingface/huggingface_hub/pull/2134))
* 🌐 [i18n-KO] Translated `package_reference/file_download.md` to Korean ([2184](https://github.com/huggingface/huggingface_hub/pull/2184))
* 🌐 [i18n-KO] Translated package_reference/serialization.md to Korean ([2233](https://github.com/huggingface/huggingface_hub/pull/2233))
* boyunJang
* 🌐 [i18n-KO] Translated `guides/inference.md` to Korean ([2130](https://github.com/huggingface/huggingface_hub/pull/2130))
* 🌐 [i18n-KO] Translated `package_reference/collections.md` to Korean ([2214](https://github.com/huggingface/huggingface_hub/pull/2214))
* 🌐 [i18n-KO] Translated `package_reference/space_runtime.md` to Korean ([2213](https://github.com/huggingface/huggingface_hub/pull/2213))
* 🌐 [i18n-KO] Translated `guides/manage-spaces.md` to Korean ([2220](https://github.com/huggingface/huggingface_hub/pull/2220))
* nuatmochoi
* 🌐 [i18n-KO] Translated `guides/webhooks_server.md` to Korean ([2145](https://github.com/huggingface/huggingface_hub/pull/2145))
* 🌐 [i18n-KO] Translated `package_reference/cache.md` to Korean ([2191](https://github.com/huggingface/huggingface_hub/pull/2191))
* fabxoe
* 🌐 [i18n-KO] Translated `package_reference/tensorboard.md` to Korean ([2173](https://github.com/huggingface/huggingface_hub/pull/2173))
* 🌐 [i18n-KO] Translated `package_reference/inference_types.md` to Korean ([2171](https://github.com/huggingface/huggingface_hub/pull/2171))
* 🌐 [i18n-KO] Translated `package_reference/hf_api.md` to Korean ([2165](https://github.com/huggingface/huggingface_hub/pull/2165))
* 🌐 [i18n-KO] Translated `package_reference/mixins.md` to Korean ([2166](https://github.com/huggingface/huggingface_hub/pull/2166))
* junejae
* 🌐 [i18n-KO] Translated `guides/upload.md` to Korean ([2139](https://github.com/huggingface/huggingface_hub/pull/2139))
* 🌐 [i18n-KO] Translated `reference/repository.md` to Korean ([2189](https://github.com/huggingface/huggingface_hub/pull/2189))
* heuristicwave
* 🌐 [i18n-KO] Translating `guides/hf_file_system.md` to Korean ([2146](https://github.com/huggingface/huggingface_hub/pull/2146))
* usr-bin-ksh
* 🌐 [i18n-KO] Translated `guides/inference_endpoints.md` to Korean ([2164](https://github.com/huggingface/huggingface_hub/pull/2164))

0.999

```py
_update_metadata_model_index(existing_results, new_results, overwrite=True)
```

```
[{'dataset': {'name': 'IMDb', 'type': 'imdb'},
  'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.999}],
  'task': {'name': 'Text Classification', 'type': 'text-classification'}}]
```


2. Add new metric to existing result

```py
new_results = deepcopy(existing_results)
new_results[0]["metrics"][0]["name"] = "Recall"
new_results[0]["metrics"][0]["type"] = "recall"
```

```
[{'dataset': {'name': 'IMDb', 'type': 'imdb'},
  'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.995},
              {'name': 'Recall', 'type': 'recall', 'value': 0.995}],
  'task': {'name': 'Text Classification', 'type': 'text-classification'}}]
```


3. Add new result

```py
new_results = deepcopy(existing_results)
new_results[0]["dataset"] = {'name': 'IMDb-2', 'type': 'imdb_2'}
```

```
[{'dataset': {'name': 'IMDb', 'type': 'imdb'},
  'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.995}],
  'task': {'name': 'Text Classification', 'type': 'text-classification'}},
 {'dataset': {'name': 'IMDb-2', 'type': 'imdb_2'},
  'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.995}],
  'task': {'name': 'Text Classification', 'type': 'text-classification'}}]
```
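
The behavior in these examples can be summarized as: results are matched on their dataset/task pair, metrics are matched on type, and `overwrite` decides whether an existing metric value may be replaced. A simplified sketch of that merge logic (illustrative, not the actual repocard implementation):

```python
def merge_results(existing, new, overwrite=False):
    """Merge model-index results lists, mimicking the examples above."""
    # shallow-ish copy so the caller's `existing` list is never mutated
    merged = [dict(r, metrics=[dict(m) for m in r["metrics"]]) for r in existing]
    for result in new:
        key = (result["dataset"]["type"], result["task"]["type"])
        match = next(
            (r for r in merged if (r["dataset"]["type"], r["task"]["type"]) == key),
            None,
        )
        if match is None:
            merged.append(result)  # new dataset/task pair: append a new result
            continue
        for metric in result["metrics"]:
            current = next(
                (m for m in match["metrics"] if m["type"] == metric["type"]), None
            )
            if current is None:
                match["metrics"].append(metric)  # new metric type: append it
            elif overwrite:
                current.update(metric)  # replace the value only if overwrite=True
    return merged
```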


* ENH Add update metadata to repocard by lvwerra in 844

Improvements and bug fixes

* Keras: Saving history in a JSON file by merveenoyan in 861
* space after uri by leondz in 866

0.30.1

Patch release to fix https://github.com/huggingface/huggingface_hub/issues/2967.

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v0.30.0...v0.30.1

0.30.0

🚀 Ready. Xet. Go!

This might just be our biggest update in the past two years! Xet is a groundbreaking new protocol for storing large objects in Git repositories, designed to replace Git LFS. Unlike LFS, which deduplicates files, Xet operates at the chunk level, making it a game-changer for AI builders collaborating on massive models and datasets. Our Python integration is powered by [xet-core](https://github.com/huggingface/xet-core), a Rust-based package that handles all the low-level details.
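
Why chunk-level matters: change a few bytes in a huge file and file-level dedup (LFS) re-uploads everything, while chunk-level dedup only ships the chunks that changed. A toy illustration with fixed-size chunks (real Xet uses content-defined chunking inside xet-core, which also survives insertions that shift byte offsets):

```python
import hashlib

CHUNK_SIZE = 4  # tiny on purpose; real chunks are tens of KB

def chunk_hashes(data: bytes) -> list:
    """Split data into fixed-size chunks and hash each one."""
    return [hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
            for i in range(0, len(data), CHUNK_SIZE)]

v1 = b"AAAABBBBCCCCDDDD"
v2 = b"AAAABBBBXXXXDDDD"  # one chunk modified in place

old = set(chunk_hashes(v1))
changed = [h for h in chunk_hashes(v2) if h not in old]
assert len(changed) == 1  # only the modified chunk must be uploaded
```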

You can start using Xet today by installing the optional dependency:

```bash
pip install -U huggingface_hub[hf_xet]
```


With that, you can seamlessly download files from Xet-enabled repositories! And don't worry: everything remains fully backward-compatible if you're not ready to upgrade yet.

**Blog post:** [Xet on the Hub](https://huggingface.co/blog/xet-on-the-hub)
**Docs:** [Storage backends → Xet](https://huggingface.co/docs/hub/en/storage-backends#xet)

> [!TIP]
> Want to store your own files with Xet? We’re gradually rolling out support on the Hugging Face Hub, so `hf_xet` uploads may need to be enabled for your repo. Join the [waitlist](https://huggingface.co/join/xet) to get onboarded soon!

This is the result of collaborative work by bpronan, hanouticelina, rajatarya, jsulz, assafvayner, Wauplin, + many others on the infra/Hub side!
* Xet download workflow by hanouticelina in 2875
* Add ability to enable/disable xet storage on a repo by hanouticelina in 2893
* Xet upload workflow by hanouticelina in 2887
* Xet Docs for huggingface_hub by rajatarya in 2899
* Adding Token Refresh Xet Tests by rajatarya in 2932
* Using a two stage download path for xet files. by bpronan in 2920
* add `xetEnabled` as an expand property by hanouticelina in 2907
* Xet integration by Wauplin in 2958


⚑ Enhanced InferenceClient

The `InferenceClient` has received significant updates and improvements in this release, making it more robust and easy to work with.

We’re thrilled to introduce **Cerebras** and **Cohere** as official inference providers! This expansion strengthens the Hub as the go-to entry point for running inference on open-weight models.

* Add Cohere as an Inference Provider by alexrs-cohere in 2888
* Add Cerebras provider by Wauplin in 2901
* remove cohere from testing and fix quality by hanouticelina in 2902

Novita is now our 3rd provider to support the text-to-video task, after Fal.ai and Replicate:

```py
from huggingface_hub import InferenceClient

client = InferenceClient(provider="novita")

video = client.text_to_video(
    "A young man walking on the street",
    model="Wan-AI/Wan2.1-T2V-14B",
)
```

* [Inference Providers] Add text-to-video support for Novita by hanouticelina in 2922

It is now possible to centralize billing on your organization rather than on individual accounts! This helps companies manage their budget and set limits at a team level. The organization must be subscribed to Enterprise Hub.

```py
from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai", bill_to="openai")
image = client.text_to_image(
    "A majestic lion in a fantasy forest",
    model="black-forest-labs/FLUX.1-schnell",
)
image.save("lion.png")
```

* Support bill_to in InferenceClient by Wauplin in 2940

Handling long-running inference tasks just got easier! To prevent request timeouts, we've introduced asynchronous calls for text-to-video inference. We expect more providers to adopt the same structure soon, ensuring better robustness and developer experience.

* [Inference Providers] Async calls for fal.ai by hanouticelina in 2927
* update polling interval by hanouticelina in 2937
* [Inference Providers] Fix status and response URLs when polling text-to-video results with fal-ai by hanouticelina in 2943
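
The async pattern boils down to: submit the job, then poll a status URL at a fixed interval until a result is ready. A generic sketch of such a loop (a hypothetical helper, not the client's actual internals):

```python
import time

def poll_until_done(check_status, interval=1.0, timeout=30.0):
    """Call `check_status()` repeatedly until it returns a non-None result.

    Raises TimeoutError if no result arrives within `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = check_status()  # e.g. an HTTP GET on the job's status URL
        if result is not None:
            return result
        time.sleep(interval)
    raise TimeoutError("generation did not finish in time")
```

Usage would look like `video = poll_until_done(lambda: fetch_result(job_url), interval=5)`, where `fetch_result` is whatever retrieves the provider's job status.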

Miscellaneous improvements:

* [Bot] Update inference types by HuggingFaceInfra in 2832
* Update `InferenceClient` docstring to reflect that `token=False` is no longer accepted by abidlabs in 2853
* [Inference providers] Root-only base URLs by Wauplin in 2918
* Add prompt in image_to_image type by Wauplin in 2956
* [Inference Providers] fold OpenAI support into `provider` parameter by hanouticelina in 2949
* clean up some inference stuff by Wauplin in 2941
* regenerate cassettes by hanouticelina in 2925
* Fix payload model name when model id is a URL by hanouticelina in 2911
* [InferenceClient] Fix token initialization and add more tests by hanouticelina in 2921
* [Inference Providers] check inference provider mapping for HF Inference API by hanouticelina in 2948

✨ New Features and Improvements

This release also includes several other notable features and improvements.

It's now possible to pass a path with wildcards to the upload command instead of using the `--include=...` option:

```bash
huggingface-cli upload my-cool-model *.safetensors
```

* Added support for Wildcards in huggingface-cli upload by devesh-2002 in 2868
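
Conceptually, a wildcard path is equivalent to listing the folder and filtering with the pattern, much like `--include`. A sketch with stdlib `fnmatch` (illustrative, not the CLI's code):

```python
import fnmatch

files = ["model.safetensors", "model-00002.safetensors", "config.json", "README.md"]

# keep only the files matching the user's wildcard pattern
matched = [f for f in files if fnmatch.fnmatch(f, "*.safetensors")]
assert matched == ["model.safetensors", "model-00002.safetensors"]
```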

Deploying an Inference Endpoint from the [Model Catalog](https://endpoints.huggingface.co/catalog) just got 100x easier! Simply select which model to deploy and we handle the rest to guarantee the best hardware and settings for your dedicated endpoints.

```py
from huggingface_hub import create_inference_endpoint_from_catalog

endpoint = create_inference_endpoint_from_catalog("unsloth/DeepSeek-R1-GGUF")
endpoint.wait()

endpoint.client.chat_completion(...)
```


* Support deploy Inference Endpoint from model catalog by Wauplin in 2892

The `ModelHubMixin` got two small updates:
- authors can provide a paper URL that will be added to all model cards pushed by the library.
- dataclasses are now supported for any init arg (previously only for `config`).

* Add paper URL to hub mixin by NielsRogge in 2917
* [HubMixin] handle dataclasses in all args, not only 'config' by Wauplin in 2928
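
Dataclass support means init args can round-trip through `config.json`. A sketch of the idea with stdlib `dataclasses` (the mixin's real (de)serialization is more involved; the config class here is hypothetical):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class TrainingConfig:
    hidden_size: int = 128
    dropout: float = 0.1

config = TrainingConfig(hidden_size=256)
payload = json.dumps(asdict(config))           # what would land in config.json
restored = TrainingConfig(**json.loads(payload))  # what the mixin rebuilds on load
assert restored == config
```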

You can now sort by name, size, last updated, and last used when using the `delete-cache` command:

```bash
huggingface-cli delete-cache --sort=size
```

* feat: add `--sort` arg to `delete-cache` to sort by size by AlpinDale in 2815
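
Each sort key maps naturally onto a `sorted(...)` key function over the cached repo entries. A sketch of how the ordering might work (the entry fields here are hypothetical, not the actual cache data model):

```python
entries = [
    {"repo": "b-model", "size": 20_000, "last_used": 300},
    {"repo": "a-model", "size": 5_000, "last_used": 100},
]

by_size = sorted(entries, key=lambda e: e["size"], reverse=True)  # biggest first
by_name = sorted(entries, key=lambda e: e["repo"])                # alphabetical

assert by_size[0]["repo"] == "b-model"
assert by_name[0]["repo"] == "a-model"
```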

Since late 2024, it has been possible to manage the LFS files stored in a repo from the UI (see [docs](https://huggingface.co/docs/hub/storage-limits#how-can-i-free-up-storage-space-in-my-accountorganization)). This release makes it possible to do the same programmatically. The goal is to enable users to free up storage space in their private repositories.

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> lfs_files = api.list_lfs_files("username/my-cool-repo")

# Filter files to delete based on a combination of `filename`, `pushed_at`, `ref` or `size`.
# e.g. select only LFS files in the "checkpoints" folder
>>> lfs_files_to_delete = (lfs_file for lfs_file in lfs_files if lfs_file.filename.startswith("checkpoints/"))

# Permanently delete LFS files
>>> api.permanently_delete_lfs_files("username/my-cool-repo", lfs_files_to_delete)
```


> [!WARNING]
> This is a power-user tool to use carefully. Deleting LFS files from a repo is an irreversible action.

* Support permanently deleting LFS files by Wauplin in 2954

💔 Breaking Changes

`labels` has been removed from the `InferenceClient.zero_shot_classification` and `InferenceClient.zero_shot_image_classification` tasks in favor of `candidate_labels`. A proper deprecation warning had been in place beforehand.

* Prepare for 0.30 by Wauplin in 2878

πŸ› οΈ Small Fixes and Maintenance

πŸ› Bug and Typo Fixes

* Fix revision bug in _upload_large_folder.py by yuantuo666 in 2879
* bug fix in inference_endpoint wait function for proper waiting on update by Ajinkya-25 in 2867
* Update SpaceHardware enum by Wauplin in 2891
* Fix: Restore sys.stdout in notebook_login after error by LEEMINJOO in 2896
* Remove link to unmaintained model card app Space by davanstrien in 2897
* Fixing a typo in chat_completion example by Wauplin in 2910
* chore: Link to Authentication by FL33TW00D in 2905
* Handle file-like objects in curlify by hanouticelina in 2912
* Fix typos by omahs in 2951
* Add expanduser and expandvars to path envvars by FredHaa in 2945

πŸ—οΈ Internal

Thanks to the work previously introduced by the `diffusers` team, we've published a GitHub Action that runs code style tooling on demand on Pull Requests, making the life of contributors and reviewers easier.

* add style bot GitHub action by hanouticelina in 2898
* fix style bot GH action by hanouticelina in 2906
* Fix bot style GH action (again) by hanouticelina in 2909

Other minor updates:

* Fix prerelease CI by Wauplin in 2877
* Update update-inference-types.yaml by Wauplin in 2926
* [Internal] Fix check parameters script by hanouticelina in 2957

Significant community contributions

The following contributors have made significant changes to the library over the last release:

* Ajinkya-25
* bug fix in inference_endpoint wait function for proper waiting on update (2867)
* abidlabs
* Update `InferenceClient` docstring to reflect that `token=False` is no longer accepted (2853)
* devesh-2002
* Added support for Wildcards in huggingface-cli upload (2868)
* alexrs-cohere
* Add Cohere as an Inference Provider (2888)
* NielsRogge
* Add paper URL to hub mixin (2917)
* AlpinDale
* feat: add `--sort` arg to `delete-cache` to sort by size (2815)
* FredHaa
* Add expanduser and expandvars to path envvars (2945)
* omahs
* Fix typos (2951)

0.29.3

Added client-side support for Cerebras and Cohere providers for upcoming official launch on the Hub.

Cerebras: https://github.com/huggingface/huggingface_hub/pull/2901.
Cohere: https://github.com/huggingface/huggingface_hub/pull/2888.

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v0.29.2...v0.29.3

0.29.2

This patch release includes two fixes:

- Fix payload model name when model id is a URL https://github.com/huggingface/huggingface_hub/pull/2911
- Fix: Restore sys.stdout in notebook_login after error https://github.com/huggingface/huggingface_hub/pull/2896


**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v0.29.1...v0.29.2
