Huggingface-hub

Latest version: v0.27.0


```
You are about to delete tag v1.0 on model Wauplin/my-cool-model
Proceed? [Y/n] y
Tag v1.0 deleted on Wauplin/my-cool-model
```


For more details, check out the [CLI guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/cli#huggingface-cli-tag).

* CLI Tag Functionality by bilgehanertan in 2172

🧩 ModelHubMixin

The `ModelHubMixin` got a set of nice improvements to generate model cards and to handle custom data types in the `config.json` file. More info in the [integration guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/integrations#advanced-usage).

* `ModelHubMixin`: more metadata + arbitrary config types + proper guide by Wauplin in 2230
* Fix ModelHubMixin when class is a dataclass by Wauplin in 2159
* Do not document private attributes of ModelHubMixin by Wauplin in 2216
* Add support for pipeline_tag in ModelHubMixin by Wauplin in 2228
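For context, the core idea behind `ModelHubMixin` is a `save_pretrained`/`from_pretrained` pair that serializes the init config to `config.json` so a model can be re-instantiated later. The following is a minimal stdlib-only sketch of that pattern, not the library's actual implementation (`TinyConfigMixin` and its internals are hypothetical):

```python
import json
import os
import tempfile

class TinyConfigMixin:
    """Hypothetical sketch of the ModelHubMixin idea: persist the init
    config as config.json so the object can be re-instantiated later."""

    def __init__(self, **config):
        self._config = config

    def save_pretrained(self, save_directory):
        # Write the init config next to where weights would be saved.
        os.makedirs(save_directory, exist_ok=True)
        with open(os.path.join(save_directory, "config.json"), "w") as f:
            json.dump(self._config, f)

    @classmethod
    def from_pretrained(cls, save_directory):
        # Re-instantiate the class from the stored config.
        with open(os.path.join(save_directory, "config.json")) as f:
            return cls(**json.load(f))

with tempfile.TemporaryDirectory() as tmp:
    TinyConfigMixin(hidden_size=16, pipeline_tag="text-classification").save_pretrained(tmp)
    reloaded = TinyConfigMixin.from_pretrained(tmp)
    print(reloaded._config["hidden_size"])  # 16
```

The real mixin additionally handles model cards, arbitrary config types, and Hub upload/download, which this sketch deliberately omits.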

⚙️ Other

In a shared environment, it is now possible to set a custom path via the `HF_TOKEN_PATH` environment variable so that each user of the cluster has their own access token.

* Support `HF_TOKEN_PATH` as environment variable by Wauplin in 2185
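The resolution logic is conceptually simple: prefer `HF_TOKEN_PATH` when set, else fall back to the default token location under `HF_HOME`. A stdlib-only sketch of that assumed behavior (`resolve_token_path` is hypothetical; the real logic lives inside `huggingface_hub`):

```python
import os

def resolve_token_path(environ):
    """Hypothetical sketch: prefer HF_TOKEN_PATH, otherwise fall back
    to <HF_HOME or ~/.cache/huggingface>/token."""
    if environ.get("HF_TOKEN_PATH"):
        return environ["HF_TOKEN_PATH"]
    hf_home = environ.get(
        "HF_HOME",
        os.path.join(os.path.expanduser("~"), ".cache", "huggingface"),
    )
    return os.path.join(hf_home, "token")

# Each cluster user can point at their own token file:
print(resolve_token_path({"HF_TOKEN_PATH": "/cluster/alice/token"}))  # /cluster/alice/token
```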

Thanks to Y4suyuki and lappemic, most custom errors defined in `huggingface_hub` are now aggregated in the same module, making it easy to import them with `from huggingface_hub.errors import ...`.

* Define errors in errors.py by Y4suyuki in 2170
* Define errors in errors file by lappemic in 2202

Fixed `HFSummaryWriter` (a class to seamlessly log tensorboard events to the Hub) to work with either the `tensorboardX` or the `torch.utils` implementation, depending on the user's setup.

* Import SummaryWriter from either tensorboardX or torch.utils by Wauplin in 2205

The speed of listing files with `HfFileSystem` has been drastically improved, thanks to awgr. The values returned from the cache are no longer deep-copied, which was unfortunately the most time-consuming part of the process. Users who want to modify values returned by `HfFileSystem` now need to copy them beforehand. This is expected to be a very limited drawback.

* fix: performance of _ls_tree by awgr in 2103

Progress bars in `huggingface_hub` got some flexibility!
It is now possible to provide a name to a tqdm bar (similar to `logging.getLogger`) and to enable/disable only some progress bars. More details in [this guide](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/utilities#configure-progress-bars).

```py
>>> from huggingface_hub.utils import tqdm, disable_progress_bars
>>> disable_progress_bars("peft.foo")

# No progress bars for `peft.foo.bar`
>>> for _ in tqdm(range(5), name="peft.foo.bar"):
...     pass

# But for `peft` yes
>>> for _ in tqdm(range(5), name="peft"):
...     pass
100%|█████████████████| 5/5 [00:00<00:00, 117817.53it/s]
```


* Implement hierarchical progress bar control in huggingface_hub by lappemic in 2217
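The hierarchical matching works like `logging` logger names: a bar named `a.b.c` is affected if `a.b.c`, `a.b`, or `a` has been disabled. A stdlib-only sketch of that assumed rule (`disable`/`is_disabled` are hypothetical helpers, not the library's functions):

```python
# Hypothetical sketch of hierarchical progress-bar name matching.
_disabled = set()

def disable(name):
    _disabled.add(name)

def is_disabled(name):
    # A bar is disabled if its name or any dotted prefix was disabled,
    # mirroring logging.getLogger-style hierarchies.
    parts = name.split(".")
    return any(".".join(parts[:i]) in _disabled for i in range(1, len(parts) + 1))

disable("peft.foo")
print(is_disabled("peft.foo.bar"))  # True  (child of a disabled group)
print(is_disabled("peft"))          # False (parent is not affected)
```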

💔 Breaking changes

`--local-dir-use-symlink` and `--resume-download`

As part of the download process revamp, some breaking changes have been introduced. However, we believe the benefits outweigh the cost of the change. Breaking changes include:
- a `.cache/huggingface/` folder is now present at the root of the local dir. It only contains file locks, metadata, and partially downloaded files. If you need to, you can safely delete this folder without corrupting the data inside the root folder. However, you should expect a longer recovery time if you re-run your download command.
- `--local-dir-use-symlink` is no longer used and will be ignored. It is no longer possible to symlink your local dir to the cache directory. Thanks to the `.cache/huggingface/` folder, it shouldn't be needed anyway.
- `--resume-download` has been deprecated and will be ignored. Resuming failed downloads is now always activated by default. If you need to force a new download, use `--force-download`.

Inference Types

As part of 2237 (Grammar and Tools support), we've updated the return values of `InferenceClient.chat_completion` and `InferenceClient.text_generation` to exactly match the TGI output. The attributes of the returned objects did not change, but the class definitions themselves did. Expect errors if you previously had `from huggingface_hub import TextGenerationOutput` in your code. This is however not a common usage, since those objects are instantiated by `huggingface_hub` directly.

Expected breaking changes

Some other breaking changes were expected (and announced since 0.19.x):
- `list_files_info` is definitively removed in favor of `get_paths_info` and `list_repo_tree`
- `WebhookServer.run` is definitively removed in favor of `WebhookServer.launch`
- the `api_endpoint` argument of `ModelHubMixin`'s `push_to_hub` method is definitively removed in favor of the `HF_ENDPOINT` environment variable

Check 2156 for more details.

Small fixes and maintenance

⚙️ CI optimization

⚙️ fixes
* Fix HF_ENDPOINT not handled correctly by Wauplin in 2155
* Fix proxy if dynamic endpoint by Wauplin (direct commit on main)
* Update the note message when logging in to make it easier to understand and clearer by lh0x00 in 2163
* Fix URL when uploading to proxy by Wauplin in 2167
* Fix SafeTensorsInfo initialization by Wauplin in 2190
* Doc cli download timeout by zioalex in 2198
* Fix Typos in CONTRIBUTION.md and Formatting in README.md by lappemic in 2201
* change default model card by Wauplin (direct commit on main)
* Add returns documentation for save_pretrained by alexander-soare in 2226
* Update cli.md by QuinnPiers in 2242
* add warning tip that list_deployed_models only searches over cache by MoritzLaurer in 2241
* Respect default timeouts in `hf_file_system` by Wauplin in 2253
* Update harmonized token param desc and type def by lappemic in 2252
* Better document download attribute by Wauplin in 2250
* Correctly check inference endpoint is ready by Wauplin in 2229
* Add support for `updatedRefs` in WebhookPayload by Wauplin in 2169

⚙️ internal
* prepare for 0.23 by Wauplin in 2156
* lint by Wauplin (direct commit on main)
* quick fix by Wauplin (direct commit on main)
* Fix CI (inference tests, dataset viewer user, mypy) by Wauplin in 2208
* link by Wauplin (direct commit on main)
* Fix circular imports in eager mode? by Wauplin in 2211
* Drop generic from InferenceAPI framework list by Wauplin in 2240
* Remove test sort by acsending likes by Wauplin in 2243
* Delete legacy tests in `TestHfHubDownloadRelativePaths` + implicit delete folder is ok by Wauplin in 2259
* small doc clarification by julien-c in [2261](https://github.com/huggingface/huggingface_hub/pull/2261)

Significant community contributions

The following contributors have made significant changes to the library over the last release:

* lappemic
* Fix Typos in CONTRIBUTION.md and Formatting in README.md ([2201](https://github.com/huggingface/huggingface_hub/pull/2201))
* Define errors in errors file ([2202](https://github.com/huggingface/huggingface_hub/pull/2202))
* [wip] Implement hierarchical progress bar control in huggingface_hub ([2217](https://github.com/huggingface/huggingface_hub/pull/2217))
* Update harmonized token param desc and type def ([2252](https://github.com/huggingface/huggingface_hub/pull/2252))
* bilgehanertan
* User API endpoints ([2147](https://github.com/huggingface/huggingface_hub/pull/2147))
* CLI Tag Functionality ([2172](https://github.com/huggingface/huggingface_hub/pull/2172))
* cjfghk5697
* 🌐 [i18n-KO] Translated `guides/repository.md` to Korean ([2124](https://github.com/huggingface/huggingface_hub/pull/2124))
* 🌐 [i18n-KO] Translated `package_reference/inference_client.md` to Korean ([2178](https://github.com/huggingface/huggingface_hub/pull/2178))
* 🌐 [i18n-KO] Translated `package_reference/utilities.md` to Korean ([2196](https://github.com/huggingface/huggingface_hub/pull/2196))
* SeungAhSon
* 🌐 [i18n-KO] Translated `guides/model_cards.md` to Korean ([2128](https://github.com/huggingface/huggingface_hub/pull/2128))
* 🌐 [i18n-KO] Translated `reference/login.md` to Korean ([2151](https://github.com/huggingface/huggingface_hub/pull/2151))
* 🌐 [i18n-KO] Translated package_reference/hf_file_system.md to Korean ([2174](https://github.com/huggingface/huggingface_hub/pull/2174))
* seoulsky-field
* 🌐 [i18n-KO] Translated `guides/community.md` to Korean ([2126](https://github.com/huggingface/huggingface_hub/pull/2126))
* Y4suyuki
* Define errors in errors.py ([2170](https://github.com/huggingface/huggingface_hub/pull/2170))
* harheem
* 🌐 [i18n-KO] Translated `guides/cli.md` to Korean ([2131](https://github.com/huggingface/huggingface_hub/pull/2131))
* 🌐 [i18n-KO] Translated `reference/inference_endpoints.md` to Korean ([2180](https://github.com/huggingface/huggingface_hub/pull/2180))
* seoyoung-3060
* 🌐 [i18n-KO] Translated `guides/search.md` to Korean ([2134](https://github.com/huggingface/huggingface_hub/pull/2134))
* 🌐 [i18n-KO] Translated `package_reference/file_download.md` to Korean ([2184](https://github.com/huggingface/huggingface_hub/pull/2184))
* 🌐 [i18n-KO] Translated package_reference/serialization.md to Korean ([2233](https://github.com/huggingface/huggingface_hub/pull/2233))
* boyunJang
* 🌐 [i18n-KO] Translated `guides/inference.md` to Korean ([2130](https://github.com/huggingface/huggingface_hub/pull/2130))
* 🌐 [i18n-KO] Translated `package_reference/collections.md` to Korean ([2214](https://github.com/huggingface/huggingface_hub/pull/2214))
* 🌐 [i18n-KO] Translated `package_reference/space_runtime.md` to Korean ([2213](https://github.com/huggingface/huggingface_hub/pull/2213))
* 🌐 [i18n-KO] Translated `guides/manage-spaces.md` to Korean ([2220](https://github.com/huggingface/huggingface_hub/pull/2220))
* nuatmochoi
* 🌐 [i18n-KO] Translated `guides/webhooks_server.md` to Korean ([2145](https://github.com/huggingface/huggingface_hub/pull/2145))
* 🌐 [i18n-KO] Translated `package_reference/cache.md` to Korean ([2191](https://github.com/huggingface/huggingface_hub/pull/2191))
* fabxoe
* 🌐 [i18n-KO] Translated `package_reference/tensorboard.md` to Korean ([2173](https://github.com/huggingface/huggingface_hub/pull/2173))
* 🌐 [i18n-KO] Translated `package_reference/inference_types.md` to Korean ([2171](https://github.com/huggingface/huggingface_hub/pull/2171))
* 🌐 [i18n-KO] Translated `package_reference/hf_api.md` to Korean ([2165](https://github.com/huggingface/huggingface_hub/pull/2165))
* 🌐 [i18n-KO] Translated `package_reference/mixins.md` to Korean ([2166](https://github.com/huggingface/huggingface_hub/pull/2166))
* junejae
* 🌐 [i18n-KO] Translated `guides/upload.md` to Korean ([2139](https://github.com/huggingface/huggingface_hub/pull/2139))
* 🌐 [i18n-KO] Translated `reference/repository.md` to Korean ([2189](https://github.com/huggingface/huggingface_hub/pull/2189))
* heuristicwave
* 🌐 [i18n-KO] Translating `guides/hf_file_system.md` to Korean ([2146](https://github.com/huggingface/huggingface_hub/pull/2146))
* usr-bin-ksh
* 🌐 [i18n-KO] Translated `guides/inference_endpoints.md` to Korean ([2164](https://github.com/huggingface/huggingface_hub/pull/2164))

```py
_update_metadata_model_index(existing_results, new_results, overwrite=True)

[{'dataset': {'name': 'IMDb', 'type': 'imdb'},
  'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.999}],
  'task': {'name': 'Text Classification', 'type': 'text-classification'}}]
```


2. Add new metric to existing result

```py
new_results = deepcopy(existing_results)
new_results[0]["metrics"][0]["name"] = "Recall"
new_results[0]["metrics"][0]["type"] = "recall"

[{'dataset': {'name': 'IMDb', 'type': 'imdb'},
  'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.995},
              {'name': 'Recall', 'type': 'recall', 'value': 0.995}],
  'task': {'name': 'Text Classification', 'type': 'text-classification'}}]
```


3. Add new result
```py
new_results = deepcopy(existing_results)
new_results[0]["dataset"] = {'name': 'IMDb-2', 'type': 'imdb_2'}

[{'dataset': {'name': 'IMDb', 'type': 'imdb'},
  'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.995}],
  'task': {'name': 'Text Classification', 'type': 'text-classification'}},
 {'dataset': {'name': 'IMDb-2', 'type': 'imdb_2'},
  'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.995}],
  'task': {'name': 'Text Classification', 'type': 'text-classification'}}]
```


* ENH Add update metadata to repocard by lvwerra in 844
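The merge rule illustrated above can be summarized as: results are matched on `(task, dataset)`; a matching metric of the same type is overwritten only when `overwrite=True`; a new metric type is appended; an unmatched result is appended whole. A stdlib-only approximation of that rule (`merge_results` is a hypothetical stand-in, not the library's `_update_metadata_model_index`):

```python
from copy import deepcopy

def merge_results(existing, new, overwrite=False):
    """Hypothetical sketch of the model-index merge rule shown above."""
    merged = deepcopy(existing)
    for new_res in new:
        match = next((r for r in merged
                      if r["task"] == new_res["task"]
                      and r["dataset"] == new_res["dataset"]), None)
        if match is None:
            # No result for this (task, dataset) pair: append it whole.
            merged.append(deepcopy(new_res))
            continue
        for metric in new_res["metrics"]:
            existing_metric = next(
                (m for m in match["metrics"] if m["type"] == metric["type"]), None)
            if existing_metric is None:
                match["metrics"].append(deepcopy(metric))   # new metric type
            elif overwrite:
                existing_metric.update(metric)              # replace value

    return merged

existing = [{"task": {"type": "text-classification"},
             "dataset": {"name": "IMDb", "type": "imdb"},
             "metrics": [{"type": "accuracy", "value": 0.995}]}]
new = deepcopy(existing)
new[0]["metrics"][0]["value"] = 0.999
print(merge_results(existing, new, overwrite=True)[0]["metrics"][0]["value"])  # 0.999
```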

Improvements and bug fixes

* Keras: Saving history in a JSON file by merveenoyan in 861
* space after uri by leondz in 866

0.27.0

📦 Introducing DDUF tooling
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/DDUF/DDUF-Banner.svg" alt="DDUF Banner"/>

DDUF (**D**DUF's **D**iffusion **U**nified **F**ormat) is a single-file format for diffusion models that aims to unify the different model distribution methods and weight-saving formats by packaging all model components into a single file. Check out [the DDUF documentation](https://huggingface.co/docs/hub/en/dduf) for more details and context.

The `huggingface_hub` library now provides tooling to handle DDUF files in Python. It includes helpers to **read** and **export** DDUF files, and built-in rules to validate file integrity.

How to write a DDUF file?

```python
>>> from huggingface_hub import export_folder_as_dduf

# Export "path/to/FLUX.1-dev" folder as a DDUF file
>>> export_folder_as_dduf("FLUX.1-dev.dduf", folder_path="path/to/FLUX.1-dev")
```

How to read a DDUF file?
```python
>>> import json
>>> import safetensors.torch
>>> from huggingface_hub import read_dduf_file

# Read DDUF metadata (only metadata is loaded, lightweight operation)
>>> dduf_entries = read_dduf_file("FLUX.1-dev.dduf")

# Returns a mapping filename <> DDUFEntry
>>> dduf_entries["model_index.json"]
DDUFEntry(filename='model_index.json', offset=66, length=587)

# Load the `model_index.json` content
>>> json.loads(dduf_entries["model_index.json"].read_text())
{'_class_name': 'FluxPipeline', '_diffusers_version': '0.32.0.dev0', '_name_or_path': 'black-forest-labs/FLUX.1-dev', 'scheduler': ['diffusers', 'FlowMatchEulerDiscreteScheduler'], 'text_encoder': ['transformers', 'CLIPTextModel'], 'text_encoder_2': ['transformers', 'T5EncoderModel'], 'tokenizer': ['transformers', 'CLIPTokenizer'], 'tokenizer_2': ['transformers', 'T5TokenizerFast'], 'transformer': ['diffusers', 'FluxTransformer2DModel'], 'vae': ['diffusers', 'AutoencoderKL']}

# Load VAE weights using safetensors
>>> with dduf_entries["vae/diffusion_pytorch_model.safetensors"].as_mmap() as mm:
...     state_dict = safetensors.torch.load(mm)
```

⚠️ Note that this is a very early version of the parser. The API and implementation can evolve in the near future.
👉 More details about the API in the documentation [here](https://huggingface.co/docs/huggingface_hub/v0.27.0/en/package_reference/serialization#dduf-file-format).

> DDUF parser v0.1 by Wauplin in 2692
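Since DDUF is built on top of a plain uncompressed zip archive, the gist of entry listing can be illustrated with the stdlib alone. This is a toy stand-in, not the official parser, and the demo file content is made up:

```python
import io
import json
import zipfile

# Build a toy archive in memory to stand in for a DDUF file.
# Real DDUF files store model components uncompressed (ZIP_STORED).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_STORED) as zf:
    zf.writestr("model_index.json", json.dumps({"_class_name": "FluxPipeline"}))

# List entries and read one, similar in spirit to read_dduf_file's mapping.
with zipfile.ZipFile(buf) as zf:
    entries = {info.filename: info.file_size for info in zf.infolist()}
    index = json.loads(zf.read("model_index.json"))

print(sorted(entries))       # ['model_index.json']
print(index["_class_name"])  # FluxPipeline
```

The real parser adds offsets suitable for memory-mapping and integrity validation rules, which this sketch does not attempt.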

💾 Serialization
Following the introduction of the torch serialization module in `0.22.*` and the support for saving torch state dicts to disk in `0.24.*`, we now provide helpers to **load** torch state dicts from disk.
By centralizing these functionalities in `huggingface_hub`, we ensure a consistent implementation across the HF ecosystem while allowing external libraries to benefit from standardized weight handling.
```python
>>> from huggingface_hub import load_torch_model, load_state_dict_from_file

# Load a state dict from a single file
>>> state_dict = load_state_dict_from_file("path/to/weights.safetensors")

# Directly load weights into a PyTorch model
>>> model = ...  # a PyTorch model
>>> load_torch_model(model, "path/to/checkpoint")
```

More details in the [serialization package reference](https://huggingface.co/docs/huggingface_hub/v0.27.0/en/package_reference/serialization#loading-tensors).

> [Serialization] support loading torch state dict from disk by hanouticelina in 2687
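Under the hood, loading a checkpoint directory typically means reading an index file that maps each tensor name to its shard, then loading only the shards needed. A stdlib-only sketch of that pattern using plain JSON files as stand-in "shards" (real checkpoints use safetensors or pickle; `load_sharded` and the file names are hypothetical):

```python
import json
import os
import tempfile

def load_sharded(checkpoint_dir):
    """Hypothetical sketch: merge shards listed in an index file."""
    # The index maps tensor names to the shard file that contains them.
    with open(os.path.join(checkpoint_dir, "index.json")) as f:
        weight_map = json.load(f)["weight_map"]
    state_dict = {}
    for shard_name in set(weight_map.values()):
        with open(os.path.join(checkpoint_dir, shard_name)) as f:
            state_dict.update(json.load(f))
    return state_dict

with tempfile.TemporaryDirectory() as tmp:
    # Write a single toy shard plus the index that points at it.
    with open(os.path.join(tmp, "shard-0.json"), "w") as f:
        json.dump({"embed.weight": [1, 2]}, f)
    with open(os.path.join(tmp, "index.json"), "w") as f:
        json.dump({"weight_map": {"embed.weight": "shard-0.json"}}, f)
    print(load_sharded(tmp))  # {'embed.weight': [1, 2]}
```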

We added a flag to `save_torch_state_dict()` helper to properly handle model saving in distributed environments, aligning with existing implementations across the Hugging Face ecosystem:

> [Serialization] Add is_main_process argument to save_torch_state_dict() by hanouticelina in 2648

A bug with shared tensor handling reported in [transformers#35080](https://github.com/huggingface/transformers/issues/35080) has been fixed:

> add argument to pass shared tensors keys to discard by hanouticelina in 2696

✨ HfApi

The following changes align the client with server-side updates in how security metadata is handled and exposed in the API responses. In particular, the repository security status returned by `HfApi().model_info()` is now available in the `security_repo_status` field:

```diff
from huggingface_hub import HfApi

api = HfApi()

model = api.model_info("your_model_id", securityStatus=True)

# Get the security status info of your model
- security_info = model.securityStatus
+ security_info = model.security_repo_status
```


> - Update how file's security metadata is retrieved following changes in the API response by hanouticelina in 2621
> - Expose repo security status field in ModelInfo by hanouticelina in 2639

🌐 📚 Documentation

Thanks to miaowumiaomiaowu, more documentation is now available in Chinese! And thanks to 13579606 for reviewing these PRs. Check out the result [here](https://huggingface.co/docs/huggingface_hub/v0.27.0/cn/index).

> :memo:Translating docs to Simplified Chinese by miaowumiaomiaowu in 2689, 2704 and 2705.

💔 Breaking changes
A few breaking changes have been introduced:
- `RepoCardData` serialization now preserves `None` values in nested structures.
- `InferenceClient.image_to_image()` now takes a `target_size` argument instead of `height` and `width` arguments. This has been reflected in the `InferenceClient` async equivalent as well.
- `InferenceClient.table_question_answering()` no longer accepts a `parameter` argument. This has been reflected in the `InferenceClient` async equivalent as well.
- Due to low usage, `list_metrics()` has been removed from `HfApi`.

> - Do not remove None values in RepoCardData serialization by Wauplin in 2626
> - manually update chat completion params by hanouticelina in 2682
> - [Bot] Update inference types 2688
> - rm list_metrics by julien-c in 2702

⏳ Deprecations
Some deprecations have been introduced as well:

- Legacy token permission checks are deprecated, as they are no longer relevant with fine-grained tokens. This includes `is_write_action` in `build_hf_headers()` and `write_permission=True` in login methods. `get_token_permission` has been deprecated as well.
- The `labels` argument is deprecated in `InferenceClient.zero_shot_classification()` and `InferenceClient.image_zero_shot_classification()`. This has been reflected in the `InferenceClient` async equivalent as well.

> - Deprecate is_write_action and write_permission=True when login by Wauplin in 2632
> - Fix and deprecate get_token_permission by Wauplin in 2631
> - [Inference Client] fix param docstring and deprecate labels param in zero-shot classification tasks by hanouticelina in 2668

🛠️ Small fixes and maintenance
😌 QoL improvements
* Add utf8 encoding to read_text to avoid Windows charmap crash by tomaarsen in 2627
* Add user CLI unit tests by hanouticelina in 2628
* Update consistent error message (we can't do much about it) by Wauplin in 2641
* Warn about upload_large_folder if really large folder by Wauplin in 2656
* Support context manager in commit scheduler by Wauplin in 2670
* Fix autocompletion not working with ModelHubMixin by Wauplin in 2695
* Enable tqdm progress in cloud environments by cbensimon in 2698
🐛 Bug and typo fixes
* bugfix huggingface-cli command execution in python3.8 by PineApple777 in 2620
* Fix documentation link formatting in README_cn by BrickYo in 2615
* Update hf_file_system.md by SwayStar123 in 2616
* Fix download local dir edge case (remove lru_cache) by Wauplin in 2629
* Fix typos by omahs in 2634
* Fix ModelCardData's datasets typing by hanouticelina in 2644
* Fix HfFileSystem.exists() for deleted repos and update documentation by hanouticelina in 2643
* Fix max tokens default value in text generation and chat completion by hanouticelina in 2653
* Fix sorting properties by hanouticelina in 2655
* Don't write the ref file unless necessary by d8ahazard in 2657
* update attribute used in delete_collection_item docstring by davanstrien in 2659
* [🐛](src/huggingface_hub/utils/_cache_manager.py): Fix bug by ignoring specific files in cache manager by johnmai-dev in 2660
* Bug in model_card_consistency_reminder.yml by deanwampler in 2661
* [Inference Client] fix zero_shot_image_classification's parameters by hanouticelina in 2665
* Use asyncio.sleep in AsyncInferenceClient (not time.sleep) by Wauplin in 2674
* Make sure create_repo respect organization privacy settings by Wauplin in 2679
* Fix timestamp parsing to always include milliseconds by hanouticelina in 2683
* will be used by julien-c in 2701
* remove context manager when loading shards and handle mlx weights by hanouticelina in 2709
🏗️ internal
* prepare for release v0.27 by hanouticelina in 2622
* Support python 3.13 by hanouticelina in 2636
* Add CI to auto-generate inference types by Wauplin in 2600
* [InferenceClient] Automatically handle outdated task parameters by hanouticelina in 2633
* Fix logo in README when dark mode is on by hanouticelina in 2669
* Fix lint after ruff update by Wauplin in 2680
* Fix test_list_spaces_linked by Wauplin in 2707

0.26.5

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.26.3...v0.26.5

See 2696 for more details.

0.26.3

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.26.2...v0.26.3

See https://github.com/huggingface/huggingface_hub/pull/2683 for more details.

0.26.2

This patch release includes updates to align with recent API response changes:
- Update how file's security metadata is retrieved following changes in the API response (2621).
- Expose repo security status field in ModelInfo (2639).

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v0.26.1...v0.26.2
