Huggingface-hub

Latest version: v0.26.2


```
You are about to delete tag v1.0 on model Wauplin/my-cool-model
Proceed? [Y/n] y
Tag v1.0 deleted on Wauplin/my-cool-model
```


For more details, check out the [CLI guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/cli#huggingface-cli-tag).
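The transcript above comes from the new `huggingface-cli tag` command. If you prefer to stay in Python, the same operations are exposed on `HfApi`; a minimal sketch, assuming `create_tag`/`delete_tag` keep their documented signatures (the repo_id is illustrative):

```python
from huggingface_hub import HfApi

api = HfApi()
# Create an annotated tag on a model repo.
api.create_tag("Wauplin/my-cool-model", tag="v1.0", tag_message="First stable release")
# ...and delete it again.
api.delete_tag("Wauplin/my-cool-model", tag="v1.0")
```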

* CLI Tag Functionality by bilgehanertan in 2172

🧩 ModelHubMixin

The `ModelHubMixin` class got a set of nice improvements to generate model cards and to handle custom data types in the `config.json` file. More info in the [integration guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/integrations#advanced-usage).
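As a hedged illustration of what an integration can look like (class names and keyword arguments below follow the integration guide linked above; treat this as a sketch, not a definitive recipe):

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MyModel(
    nn.Module,
    PyTorchModelHubMixin,
    # Class-level metadata used to generate the model card (values are illustrative).
    library_name="my-cool-lib",
    tags=["my-cool-lib"],
    pipeline_tag="text-classification",
):
    def __init__(self, hidden_size: int = 128):
        super().__init__()
        self.linear = nn.Linear(hidden_size, 2)

# Init args with defaults are serialized to config.json when pushing:
# model = MyModel(hidden_size=256)
# model.push_to_hub("username/my-cool-model")
```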

* `ModelHubMixin`: more metadata + arbitrary config types + proper guide by Wauplin in 2230
* Fix ModelHubMixin when class is a dataclass by Wauplin in 2159
* Do not document private attributes of ModelHubMixin by Wauplin in 2216
* Add support for pipeline_tag in ModelHubMixin by Wauplin in 2228

⚙️ Other

In a shared environment, it is now possible to set a custom path via the `HF_TOKEN_PATH` environment variable so that each user of the cluster has their own access token.
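For example, a per-user setup could look like this (the path below is purely illustrative):

```python
import os

# Point huggingface_hub to a user-specific token file *before* importing the library,
# since the environment is read at import time (illustrative path).
os.environ["HF_TOKEN_PATH"] = os.path.expanduser("~/.config/my-cluster/hf_token")

from huggingface_hub import whoami

print(whoami())  # authenticates with the token stored at HF_TOKEN_PATH
```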

* Support `HF_TOKEN_PATH` as environment variable by Wauplin in 2185

Thanks to Y4suyuki and lappemic, most custom errors defined in `huggingface_hub` are now aggregated in the same module, which makes it easy to import them with `from huggingface_hub.errors import ...`.
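A minimal sketch, assuming the error classes you need are exposed in `huggingface_hub.errors` (names below match the current documentation; check your installed version):

```python
from huggingface_hub import hf_hub_download
from huggingface_hub.errors import HfHubHTTPError, RepositoryNotFoundError

try:
    hf_hub_download(repo_id="some-org/model-that-may-not-exist", filename="config.json")
except RepositoryNotFoundError:
    print("Repository not found (or you don't have access to it).")
except HfHubHTTPError as err:
    print(f"Hub request failed: {err}")
```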

* Define errors in errors.py by Y4suyuki in 2170
* Define errors in errors file by lappemic in 2202

Fixed `HFSummaryWriter` (a class to seamlessly log TensorBoard events to the Hub) to work with either the `tensorboardX` or the `torch.utils.tensorboard` implementation, depending on the user's setup.
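Usage itself is unchanged; a quick sketch (the `repo_id` is illustrative and running this will commit logs to that repo):

```python
from huggingface_hub import HFSummaryWriter

# Resolves SummaryWriter from tensorboardX or torch.utils.tensorboard, whichever is installed.
logger = HFSummaryWriter(repo_id="username/my-trainings", commit_every=5)
logger.add_scalar("train/loss", 0.42, global_step=1)
```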

* Import SummaryWriter from either tensorboardX or torch.utils by Wauplin in 2205

Listing files with `HfFileSystem` is now drastically faster, thanks to awgr. The values returned from the cache are no longer deep-copied, which was unfortunately the most time-consuming part of the process. If you want to modify values returned by `HfFileSystem`, you need to copy them beforehand. This is expected to be a very limited drawback.
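In practice, the caveat looks like this (the repo path is illustrative):

```python
from copy import deepcopy
from huggingface_hub import HfFileSystem

fs = HfFileSystem()
entries = fs.ls("datasets/wikitext", detail=True)  # may return objects from the internal cache

# Copy before mutating so the cached values stay untouched.
my_entries = deepcopy(entries)
my_entries[0]["custom_note"] = "annotated locally"
```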

* fix: performance of _ls_tree by awgr in 2103

Progress bars in `huggingface_hub` got some flexibility!
It is now possible to provide a name to a tqdm bar (similar to `logging.getLogger`) and to enable/disable only some progress bars. More details in [this guide](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/utilities#configure-progress-bars).

```py
>>> from huggingface_hub.utils import tqdm, disable_progress_bars
>>> disable_progress_bars("peft.foo")

# No progress bars for `peft.foo.bar`
>>> for _ in tqdm(range(5), name="peft.foo.bar"):
...     pass

# But for `peft` yes
>>> for _ in tqdm(range(5), name="peft"):
...     pass
100%|█████████████████| 5/5 [00:00<00:00, 117817.53it/s]
```


* Implement hierarchical progress bar control in huggingface_hub by lappemic in 2217

💔 Breaking changes

`--local-dir-use-symlinks` and `--resume-download`

As part of the download process revamp, some breaking changes have been introduced. However, we believe the benefits outweigh the cost of the change. Breaking changes include:
- a `.cache/huggingface/` folder is now present at the root of the local dir. It only contains file locks, metadata, and partially downloaded files. If you need to, you can safely delete this folder without corrupting the data inside the root folder. However, you should expect a longer recovery time if you re-run your download command.
- `--local-dir-use-symlinks` is not used anymore and will be ignored. It is no longer possible to symlink your local dir with the cache directory. Thanks to the `.cache/huggingface/` folder, it shouldn't be needed anyway.
- `--resume-download` has been deprecated and will be ignored. Resuming failed downloads is now always activated by default. If you need to force a new download, use `--force-download` (see the sketch after this list).
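
A minimal Python sketch of the new defaults (repo and filename are illustrative):

```python
from huggingface_hub import hf_hub_download

# Resuming an interrupted download is now always enabled; no extra flag is needed.
path = hf_hub_download(repo_id="Wauplin/my-cool-model", filename="config.json")

# To discard partially downloaded data and start from scratch:
path = hf_hub_download(repo_id="Wauplin/my-cool-model", filename="config.json", force_download=True)
```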

Inference Types

As part of 2237 (Grammar and Tools support), we've updated the return values of `InferenceClient.chat_completion` and `InferenceClient.text_generation` to exactly match TGI's output. The attributes of the returned objects did not change, but the class definitions themselves did. Expect errors if you previously had `from huggingface_hub import TextGenerationOutput` in your code. This is, however, not a common usage pattern since those objects are instantiated by `huggingface_hub` directly.

Expected breaking changes

Some other breaking changes were expected (and announced since 0.19.x):
- `list_files_info` is definitively removed in favor of `get_paths_info` and `list_repo_tree`
- `WebhookServer.run` is definitively removed in favor of `WebhookServer.launch`
- `api_endpoint` in `ModelHubMixin`'s `push_to_hub` method is definitively removed in favor of the `HF_ENDPOINT` environment variable

Check 2156 for more details.

Small fixes and maintenance

⚙️ CI optimization

⚙️ fixes
* Fix HF_ENDPOINT not handled correctly by Wauplin in 2155
* Fix proxy if dynamic endpoint by Wauplin (direct commit on main)
* Update the note message when logging in to make it easier to understand and clearer by lh0x00 in 2163
* Fix URL when uploading to proxy by Wauplin in 2167
* Fix SafeTensorsInfo initialization by Wauplin in 2190
* Doc cli download timeout by zioalex in 2198
* Fix Typos in CONTRIBUTION.md and Formatting in README.md by lappemic in 2201
* change default model card by Wauplin (direct commit on main)
* Add returns documentation for save_pretrained by alexander-soare in 2226
* Update cli.md by QuinnPiers in 2242
* add warning tip that list_deployed_models only searches over cache by MoritzLaurer in 2241
* Respect default timeouts in `hf_file_system` by Wauplin in 2253
* Update harmonized token param desc and type def by lappemic in 2252
* Better document download attribute by Wauplin in 2250
* Correctly check inference endpoint is ready by Wauplin in 2229
* Add support for `updatedRefs` in WebhookPayload by Wauplin in 2169

⚙️ internal
* prepare for 0.23 by Wauplin in 2156
* lint by Wauplin (direct commit on main)
* quick fix by Wauplin (direct commit on main)
* Fix CI (inference tests, dataset viewer user, mypy) by Wauplin in 2208
* link by Wauplin (direct commit on main)
* Fix circular imports in eager mode? by Wauplin in 2211
* Drop generic from InferenceAPI framework list by Wauplin in 2240
* Remove test sort by acsending likes by Wauplin in 2243
* Delete legacy tests in `TestHfHubDownloadRelativePaths` + implicit delete folder is ok by Wauplin in 2259
* small doc clarification by julien-c in [2261](https://github.com/huggingface/huggingface_hub/pull/2261)

Significant community contributions

The following contributors have made significant changes to the library over the last release:

* lappemic
* Fix Typos in CONTRIBUTION.md and Formatting in README.md ([2201](https://github.com/huggingface/huggingface_hub/pull/2201))
* Define errors in errors file ([2202](https://github.com/huggingface/huggingface_hub/pull/2202))
* [wip] Implement hierarchical progress bar control in huggingface_hub ([2217](https://github.com/huggingface/huggingface_hub/pull/2217))
* Update harmonized token param desc and type def ([2252](https://github.com/huggingface/huggingface_hub/pull/2252))
* bilgehanertan
* User API endpoints ([2147](https://github.com/huggingface/huggingface_hub/pull/2147))
* CLI Tag Functionality ([2172](https://github.com/huggingface/huggingface_hub/pull/2172))
* cjfghk5697
* 🌐 [i18n-KO] Translated `guides/repository.md` to Korean ([2124](https://github.com/huggingface/huggingface_hub/pull/2124))
* 🌐 [i18n-KO] Translated `package_reference/inference_client.md` to Korean ([2178](https://github.com/huggingface/huggingface_hub/pull/2178))
* 🌐 [i18n-KO] Translated `package_reference/utilities.md` to Korean ([2196](https://github.com/huggingface/huggingface_hub/pull/2196))
* SeungAhSon
* 🌐 [i18n-KO] Translated `guides/model_cards.md` to Korean ([2128](https://github.com/huggingface/huggingface_hub/pull/2128))
* 🌐 [i18n-KO] Translated `reference/login.md` to Korean ([2151](https://github.com/huggingface/huggingface_hub/pull/2151))
* 🌐 [i18n-KO] Translated package_reference/hf_file_system.md to Korean ([2174](https://github.com/huggingface/huggingface_hub/pull/2174))
* seoulsky-field
* 🌐 [i18n-KO] Translated `guides/community.md` to Korean ([2126](https://github.com/huggingface/huggingface_hub/pull/2126))
* Y4suyuki
* Define errors in errors.py ([2170](https://github.com/huggingface/huggingface_hub/pull/2170))
* harheem
* 🌐 [i18n-KO] Translated `guides/cli.md` to Korean ([2131](https://github.com/huggingface/huggingface_hub/pull/2131))
* 🌐 [i18n-KO] Translated `reference/inference_endpoints.md` to Korean ([2180](https://github.com/huggingface/huggingface_hub/pull/2180))
* seoyoung-3060
* 🌐 [i18n-KO] Translated `guides/search.md` to Korean ([2134](https://github.com/huggingface/huggingface_hub/pull/2134))
* 🌐 [i18n-KO] Translated `package_reference/file_download.md` to Korean ([2184](https://github.com/huggingface/huggingface_hub/pull/2184))
* 🌐 [i18n-KO] Translated package_reference/serialization.md to Korean ([2233](https://github.com/huggingface/huggingface_hub/pull/2233))
* boyunJang
* 🌐 [i18n-KO] Translated `guides/inference.md` to Korean ([2130](https://github.com/huggingface/huggingface_hub/pull/2130))
* 🌐 [i18n-KO] Translated `package_reference/collections.md` to Korean ([2214](https://github.com/huggingface/huggingface_hub/pull/2214))
* 🌐 [i18n-KO] Translated `package_reference/space_runtime.md` to Korean ([2213](https://github.com/huggingface/huggingface_hub/pull/2213))
* 🌐 [i18n-KO] Translated `guides/manage-spaces.md` to Korean ([2220](https://github.com/huggingface/huggingface_hub/pull/2220))
* nuatmochoi
* 🌐 [i18n-KO] Translated `guides/webhooks_server.md` to Korean ([2145](https://github.com/huggingface/huggingface_hub/pull/2145))
* 🌐 [i18n-KO] Translated `package_reference/cache.md` to Korean ([2191](https://github.com/huggingface/huggingface_hub/pull/2191))
* fabxoe
* 🌐 [i18n-KO] Translated `package_reference/tensorboard.md` to Korean ([2173](https://github.com/huggingface/huggingface_hub/pull/2173))
* 🌐 [i18n-KO] Translated `package_reference/inference_types.md` to Korean ([2171](https://github.com/huggingface/huggingface_hub/pull/2171))
* 🌐 [i18n-KO] Translated `package_reference/hf_api.md` to Korean ([2165](https://github.com/huggingface/huggingface_hub/pull/2165))
* 🌐 [i18n-KO] Translated `package_reference/mixins.md` to Korean ([2166](https://github.com/huggingface/huggingface_hub/pull/2166))
* junejae
* 🌐 [i18n-KO] Translated `guides/upload.md` to Korean ([2139](https://github.com/huggingface/huggingface_hub/pull/2139))
* 🌐 [i18n-KO] Translated `reference/repository.md` to Korean ([2189](https://github.com/huggingface/huggingface_hub/pull/2189))
* heuristicwave
* 🌐 [i18n-KO] Translating `guides/hf_file_system.md` to Korean ([2146](https://github.com/huggingface/huggingface_hub/pull/2146))
* usr-bin-ksh
* 🌐 [i18n-KO] Translated `guides/inference_endpoints.md` to Korean ([2164](https://github.com/huggingface/huggingface_hub/pull/2164))

1. Overwrite existing metric value

```py
new_results = deepcopy(existing_results)
new_results[0]["metrics"][0]["value"] = 0.999
_update_metadata_model_index(existing_results, new_results, overwrite=True)
```


```
[{'dataset': {'name': 'IMDb', 'type': 'imdb'},
  'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.999}],
  'task': {'name': 'Text Classification', 'type': 'text-classification'}}]
```


2. Add new metric to existing result

```py
new_results = deepcopy(existing_results)
new_results[0]["metrics"][0]["name"] = "Recall"
new_results[0]["metrics"][0]["type"] = "recall"
```


```
[{'dataset': {'name': 'IMDb', 'type': 'imdb'},
  'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.995},
              {'name': 'Recall', 'type': 'recall', 'value': 0.995}],
  'task': {'name': 'Text Classification', 'type': 'text-classification'}}]
```


3. Add new result
```py
new_results = deepcopy(existing_results)
new_results[0]["dataset"] = {'name': 'IMDb-2', 'type': 'imdb_2'}
```


```
[{'dataset': {'name': 'IMDb', 'type': 'imdb'},
  'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.995}],
  'task': {'name': 'Text Classification', 'type': 'text-classification'}},
 {'dataset': {'name': 'IMDb-2', 'type': 'imdb_2'},
  'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.995}],
  'task': {'name': 'Text Classification', 'type': 'text-classification'}}]
```


* ENH Add update metadata to repocard by lvwerra in 844

Improvements and bug fixes

* Keras: Saving history in a JSON file by merveenoyan in 861
* space after uri by leondz in 866

0.26.2

This patch release includes updates to align with recent API response changes:
- Update how a file's security metadata is retrieved following changes in the API response (2621).
- Expose repo security status field in ModelInfo (2639).

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v0.26.1...v0.26.2

0.26.1

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v0.26.0...v0.26.1

See https://github.com/huggingface/huggingface_hub/pull/2620 for more details.

0.26.0

🔐 Multiple access tokens support
Managing fine-grained access tokens locally just became much easier and more efficient!
Fine-grained tokens let you create tokens with specific permissions, making them especially useful in production environments or when working with external organizations, where strict access control is essential.

To make managing these tokens easier, we've added a ✨ **new set of CLI commands** ✨ that allow you to handle them programmatically:

- Store multiple tokens on your machine by simply logging in with the `login()` command for each token:
```bash
huggingface-cli login
```
- Switch between tokens and choose the one that will be used for all interactions with the Hub:
```bash
huggingface-cli auth switch
```
- List available access tokens on your machine:
```bash
huggingface-cli auth list
```
- Delete a specific token from your machine with:
```bash
huggingface-cli logout [--token-name TOKEN_NAME]
```

✅ Nothing changes if you are using the `HF_TOKEN` environment variable as it takes precedence over the token set via the CLI. More details in the [documentation](https://huggingface.co/docs/huggingface_hub/package_reference/authentication). 🤗
* Support multiple tokens locally by hanouticelina in 2549

⚡️ InferenceClient improvements
🖼️ Conversational VLMs support
Conversational vision-language models inference is now supported with `InferenceClient`'s chat completion!
```python
from huggingface_hub import InferenceClient

# works with a remote URL or a base64-encoded image URL
image_url = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"

client = InferenceClient("meta-llama/Llama-3.2-11B-Vision-Instruct")
output = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {"url": image_url},
                },
                {
                    "type": "text",
                    "text": "Describe this image in one sentence.",
                },
            ],
        },
    ],
)

print(output.choices[0].message.content)
# A determined figure of Lady Liberty stands tall, holding a torch aloft, atop a pedestal on an island.
```

🔧 More complete support for inference parameters
You can now pass additional inference parameters to more task methods in the `InferenceClient`, including: `image_classification`, `text_classification`, `image_segmentation`, `object_detection`, `document_question_answering` and more!
For more details, visit the [`InferenceClient` reference guide](https://huggingface.co/docs/huggingface_hub/en/package_reference/inference_client).
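For example, a hedged sketch of passing task-specific parameters (assuming `top_k` is among the newly supported parameters for `text_classification`; the model is illustrative):

```python
from huggingface_hub import InferenceClient

client = InferenceClient()
scores = client.text_classification(
    "This library keeps getting better!",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    top_k=2,  # one of the extra task parameters now accepted
)
print(scores)
```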

✅ Of course, all of those changes are also available in the `AsyncInferenceClient` async equivalent 🤗

* Support VLM in chat completion (+some specs updates) by Wauplin in 2556
* [Inference Client] Add task parameters and a maintenance script of these parameters by hanouticelina in 2561
* Document vision chat completion with Llama 3.2 11B V by Wauplin in 2569

✨ HfApi
`update_repo_settings` can now be used to switch the visibility status of a repo. It is a drop-in replacement for `update_repo_visibility`, which is deprecated and will be removed in version `v0.29.0`.
```diff
- update_repo_visibility(repo_id, private=True)
+ update_repo_settings(repo_id, private=True)
```
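
A minimal sketch using the new method (the repo_id is illustrative and the keyword is assumed to mirror the diff above):

```python
from huggingface_hub import HfApi

api = HfApi()
api.update_repo_settings("Wauplin/my-cool-model", private=True)   # make the repo private
api.update_repo_settings("Wauplin/my-cool-model", private=False)  # make it public again
```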

* Feature: switch visibility with update_repo_settings by WizKnight in 2541

📄 Daily papers API is now supported in `huggingface_hub`, enabling you to search for papers on the Hub and retrieve detailed paper information.
```python
>>> from huggingface_hub import HfApi

>>> api = HfApi()

# List all papers with "attention" in their title
>>> api.list_papers(query="attention")

# Get paper information for the "Attention Is All You Need" paper
>>> api.paper_info(id="1706.03762")
```

* Daily Papers API by hlky in 2554

🌐 📚 Documentation
Efforts from the Tamil-speaking community to translate guides and package references to Tamil! Check out the result [here](https://huggingface.co/docs/huggingface_hub/main/tm/index).
* Translated index.md and installation.md to Tamil by Raghul-M in 2555

💔 Breaking changes
A few breaking changes have been introduced:
- `cached_download()`, `url_to_filename()`, `filename_to_url()` methods are now completely removed. From now on, you will have to use `hf_hub_download()` to benefit from the new cache layout.
- `legacy_cache_layout` argument from `hf_hub_download()` has been removed as well.

These breaking changes have been announced with a regular deprecation cycle.
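As a quick migration sketch (repo and filename are illustrative):

```python
from huggingface_hub import hf_hub_download

# Previously: cached_download(url) — now removed.
local_path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
```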

Also, all templating-related utilities have been removed from `huggingface_hub`. Client-side templating is no longer necessary now that all conversational text-generation models in InferenceAPI are served with TGI.

* Prepare for release 0.26 by hanouticelina in 2579
* Remove templating utility by Wauplin in 2611

🛠️ Small fixes and maintenance
😌 QoL improvements
* docs: move translations to `i18n` by SauravMaheshkar in 2566
* Preserve card metadata format/ordering on load->save by hlky in 2570
* Remove raw HTML from error message content and improve request ID capture by hanouticelina in 2584
* [Inference Client] Factorize inference payload build by hanouticelina in 2601
* Use proper logging in auth module by hanouticelina in 2604

🐛 fixes
* Use repo_type in HfApi.grant_access url by albertvillanova in 2551
* Raise error if encountered in chat completion SSE stream by Wauplin in 2558
* Add 500 HTTP Error to retry list by farzadab in 2567
* Add missing documentation by adiaholic in 2572
* Serialization: take into account meta tensor when splitting the `state_dict` by SunMarc in 2591
* Fix snapshot download when `local_dir` is provided. by hanouticelina in 2592
* Fix PermissionError while creating '.no_exist/' directory in cache by Wauplin in 2594
* Fix 2609 - Import packaging by default by Wauplin in 2610

🏗️ internal
* Fix test by Wauplin in 2582
* Make SafeTensorsInfo.parameters a Dict instead of List by adiaholic in 2585
* Fix tests listing text generation models by Wauplin in 2593
* Skip flaky Repository test by Wauplin in 2595
* Support python 3.12 by hanouticelina in 2605

Significant community contributions

The following contributors have made significant changes to the library over the last release:

* SauravMaheshkar
* docs: move translations to `i18n` (2566)
* WizKnight
* Feature: switch visibility with update_repo_settings 2537 (2541)
* hlky
* Preserve card metadata format/ordering on load->save (2570)
* Daily Papers API (2554)
* Raghul-M
* Translated index.md and installation.md to Tamil (2555)

0.25.2

**Full Changelog**: [v0.25.1...v0.25.2](https://github.com/huggingface/huggingface_hub/compare/v0.25.1...v0.25.2)
For more details, refer to the related PR https://github.com/huggingface/huggingface_hub/pull/2592
