Huggingface-hub


0.17.3

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v0.17.2...v0.17.3

Fixing a bug when downloading files to a non-existent directory. In https://github.com/huggingface/huggingface_hub/pull/1590 we introduced a helper that raises a warning if there is not enough disk space to download a file. A bug made the helper raise an exception if the folder doesn't exist yet, as reported in https://github.com/huggingface/huggingface_hub/issues/1690. This hot-fix solves it thanks to https://github.com/huggingface/huggingface_hub/pull/1692, which recursively checks the parent directories if the full path doesn't exist. If the check keeps failing (for any `OSError`), we silently ignore the error and keep going: missing the warning is better than breaking downloads for legitimate users.

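The logic of the fix is roughly the following (a minimal sketch, not the library's exact code; the helper name and signature are assumptions):

```py
import os
import shutil
import warnings

def check_disk_space(expected_size: int, target_dir: str) -> None:
    # If the target directory doesn't exist yet, walk up to the
    # closest existing parent and check the available space there.
    path = os.path.abspath(target_dir)
    while not os.path.exists(path):
        parent = os.path.dirname(path)
        if parent == path:  # reached the filesystem root
            return
        path = parent
    try:
        if shutil.disk_usage(path).free < expected_size:
            warnings.warn("Not enough free disk space to download the file.")
    except OSError:
        # The warning is best-effort: breaking downloads for legitimate
        # users is worse than missing the warning.
        pass
```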
Check out these [release notes](https://github.com/huggingface/huggingface_hub/releases/tag/v0.17.0) to learn more about the v0.17 release.

0.17.2

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v0.17.1...v0.17.2

Fixing a bug when uploading files to a Space repo using the CLI. The command was trying to create the repo (even if it already existed) and failed because `space_sdk` was missing in that case. More details in https://github.com/huggingface/huggingface_hub/pull/1669.
Also updated the user-agent when using `huggingface-cli upload`. See https://github.com/huggingface/huggingface_hub/pull/1664.

Check out these [release notes](https://github.com/huggingface/huggingface_hub/releases/tag/v0.17.0) to learn more about the v0.17 release.

0.17.0

InferenceClient

All tasks are now supported! :boom:

Thanks to a massive community effort, all inference tasks are now supported in `InferenceClient`. Newly added tasks are:

* Object detection by dulayjm in 1548
* Text classification by martinbrose in 1606
* Token classification by martinbrose in 1607
* Translation by martinbrose in 1608
* Question answering by martinbrose in 1609
* Table question answering by martinbrose in 1612
* Fill mask by martinbrose in 1613
* Tabular classification by martinbrose in 1614
* Tabular regression by martinbrose in 1615
* Document question answering by martinbrose in 1620
* Visual question answering by martinbrose in 1621
* Zero shot classification by Wauplin in 1644

Documentation, including examples, for each of these tasks can be found [in this table](https://huggingface.co/docs/huggingface_hub/main/en/guides/inference#supported-tasks).

All those methods also support async mode using `AsyncInferenceClient`.

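All newly added tasks follow the same calling convention as the existing ones. A minimal example with object detection (the image file and output values are illustrative):

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.object_detection("cats.jpg")
[{'label': 'cat', 'score': 0.98, 'box': {'xmin': 54, 'ymin': 32, 'xmax': 320, 'ymax': 240}}, ...]
```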
Get InferenceAPI status

Knowing which models are (or are not) deployed on the Inference API service can be useful. This release introduces two new helpers:

1. `list_deployed_models` aims to help users discover which models are currently deployed, listed by task.
2. `get_model_status` aims to get the status of a specific model. That's useful if you already know which model you want to use.

Those two helpers are only available for the Inference API, not Inference Endpoints (or any other provider).

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

# Discover zero-shot-classification models currently deployed
>>> models = client.list_deployed_models()
>>> models["zero-shot-classification"]
['Narsil/deberta-large-mnli-zero-cls', 'facebook/bart-large-mnli', ...]

# Get status for a specific model
>>> client.get_model_status("bigcode/starcoder")
ModelStatus(loaded=True, state='Loaded', compute_type='gpu', framework='text-generation-inference')
```


* Add get_model_status function by sifisKoen in 1558
* Add list_deployed_models to inference client by martinbrose in 1622

A few fixes

* Send Accept: image/png as header for image tasks by Wauplin in 1567
* FIX `text_to_image` and `image_to_image` parameters by Wauplin in 1582
* Distinguish _bytes_to_dict and _bytes_to_list + fix issues by Wauplin in 1641
* Return whole response from feature extraction endpoint instead of assuming its shape by skulltech in 1648

Download and upload files... from the CLI :fire: :fire: :fire:

This is a long-awaited feature finally implemented! `huggingface-cli` now offers two new commands to easily transfer files to and from the Hub. The goal is to use them as a replacement for `git clone`, `git pull` and `git push`. Despite being less feature-complete than `git` (no `.git/` folder, no notion of local commits), it offers the flexibility required when working with large repositories.

**Download**


```
# Download a single file
>>> huggingface-cli download gpt2 config.json
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json

# Download files to a local directory
>>> huggingface-cli download gpt2 config.json --local-dir=./models/gpt2
./models/gpt2/config.json

# Download a subset of a repo
>>> huggingface-cli download bigcode/the-stack --repo-type=dataset --revision=v1.2 --include="data/python/*" --exclude="*.json" --exclude="*.zip"
Fetching 206 files: 100%|████████████████████████████████████████████| 206/206 [02:31<2:31, ?it/s]
/home/wauplin/.cache/huggingface/hub/datasets--bigcode--the-stack/snapshots/9ca8fa6acdbc8ce920a0cb58adcdafc495818ae7
```


**Upload**


```
# Upload a single file
huggingface-cli upload my-cool-model model.safetensors

# Upload an entire directory
huggingface-cli upload my-cool-model ./models

# Sync local Space with the Hub (upload new files except from logs/, delete removed files)
huggingface-cli upload Wauplin/space-example --repo-type=space --exclude="/logs/*" --delete="*" --commit-message="Sync local Space with Hub"
```


**Docs**

For more examples, check out the documentation:
- [`huggingface-cli download`](https://huggingface.co/docs/huggingface_hub/main/en/guides/download#download-from-the-cli)
- [`huggingface-cli upload`](https://huggingface.co/docs/huggingface_hub/main/en/guides/upload#upload-from-the-cli)

* Implemented CLI download functionality by martinbrose in 1617
* Implemented CLI upload functionality by martinbrose in 1618

:rocket: Space API

Some new features have been added to the Space API to:

* request persistent storage for a Space
* set a description to a Space's secrets
* set variables on a Space
* configure your Space (hardware, storage, secrets,...) in a single call when you create or duplicate it

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.create_repo(
...     repo_id=repo_id,
...     repo_type="space",
...     space_sdk="gradio",
...     space_hardware="t4-medium",
...     space_sleep_time="3600",
...     space_storage="large",
...     space_secrets=[{"key": "HF_TOKEN", "value": "hf_api_***"}, ...],
...     space_variables=[{"key": "MODEL_REPO_ID", "value": "user/repo"}, ...],
... )
```


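Existing Spaces can be reconfigured after creation as well. A short sketch using the helpers added or extended in this release (the `repo_id` value and the secret description are illustrative):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()

# Request persistent storage for an existing Space
>>> api.request_space_storage(repo_id, storage="large")

# Add a secret (with the new optional description) and a variable
>>> api.add_space_secret(repo_id, key="HF_TOKEN", value="hf_api_***", description="Token used to push results")
>>> api.add_space_variable(repo_id, key="MODEL_REPO_ID", value="user/repo")
```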
A special thanks to martinbrose, who contributed significantly to these new features.

* Request Persistent Storage by freddyaboulton in 1571
* Support factory reboot when restarting a Space by Wauplin in 1586
* Added support for secret description by martinbrose in 1594
* Added support for space variables by martinbrose in 1592
* Add settings for creating and duplicating spaces by martinbrose in 1625

:books: Documentation

A new section has been added to the [upload guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/upload#tips-and-tricks-for-large-uploads) with some tips about how to upload large models and datasets to the Hub and what the limits are when doing so.

* Tips to upload large models/datasets by Wauplin in 1565
* Add the hard limit of 50GB on LFS files by severo in 1624

:world_map: The documentation organization has been updated to support multiple languages. A community effort is underway to translate the docs for non-English speakers. More to come in the coming weeks!

* Add translation guide + update repo structure by Wauplin in 1602
* Fix i18n issue template links by Wauplin in 1627

Breaking change

The behavior of [`InferenceClient.feature_extraction`](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.feature_extraction) has been updated to fix a bug happening with certain models. The breaking change is that the shape of the array returned for `transformers` models has changed from `(sequence_length, hidden_size)` to `(1, sequence_length, hidden_size)`.

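In practice (a minimal example; the exact dimensions depend on the model):

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> embedding = client.feature_extraction("Hello world")
>>> embedding.shape  # note the new leading batch dimension
(1, 4, 768)
```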
* Return whole response from feature extraction endpoint instead of assuming its shape by skulltech in 1648

QOL improvements

**`HfApi` helpers:**

Two new helpers have been added to check if a file or a repo exists on the Hub:

```py
>>> from huggingface_hub import file_exists
>>> file_exists("bigcode/starcoder", "config.json")
True
>>> file_exists("bigcode/starcoder", "not-a-file")
False

>>> from huggingface_hub import repo_exists
>>> repo_exists("bigcode/starcoder")
True
>>> repo_exists("bigcode/not-a-repo")
False
```


* Check if repo or file exists by martinbrose in 1591

Also, `hf_hub_download` and `snapshot_download` are now part of `HfApi` (keeping the same syntax and behavior).

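For example (a sketch; the returned cache path will differ on your machine):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.hf_hub_download(repo_id="gpt2", filename="config.json")
'/home/user/.cache/huggingface/hub/models--gpt2/snapshots/.../config.json'
```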
* Add download alias for `hf_hub_download` to `HfApi` by Wauplin in 1580

**Download improvements:**

1. When a user tries to download a model but the disk is full, a warning is triggered.
2. When a user tries to download a model but an HTTP error happens, we still check locally whether the file exists.

* Check local files if (RepoNotFound, GatedRepo, HTTPError) while downloading files by jiamings in 1561
* Implemented check_disk_space function by martinbrose in 1590

Small fixes and maintenance

:gear: Doc fixes

* Fix table by stevhliu in 1577
* Improve docstrings for text generation by osanseviero in 1597
* Fix superfluous-typo by julien-c in 1611
* minor missing paren by julien-c in 1637
* update i18n template by Wauplin (direct commit on main)
* Add documentation for modelcard Metadata. Resolves by sifisKoen in 1448

:gear: Other fixes

* Add `missing_ok` option in `delete_repo` by Wauplin in 1640
* Implement `super_squash_history` in `HfApi` by Wauplin in 1639
* 1546 fix empty metadata on windows by Wauplin in 1547
* Fix tqdm by NielsRogge in 1629
* Fix bug 1634 (drop finishing spaces and EOL) by GBR-613 in 1638

:gear: Internal
* Prepare for 0.17 by Wauplin in 1540
* update mypy version + fix issues + remove deprecatedlist helper by Wauplin in 1628
* mypy traceck by Wauplin (direct commit on main)
* pin pydantic version by Wauplin (direct commit on main)
* Fix ci tests by Wauplin in 1630
* Fix test in contrib CI by Wauplin (direct commit on main)
* skip gated repo test on contrib by Wauplin (direct commit on main)
* skip failing test by Wauplin (direct commit on main)
* Fix fsspec tests in ci by Wauplin in 1635
* FIX windows CI by Wauplin (direct commit on main)
* FIX style issues by pinning black version by Wauplin (direct commit on main)
* forgot test case by Wauplin (direct commit on main)
* shorter is better by Wauplin (direct commit on main)


:hugs: Significant community contributions

The following contributors have made significant changes to the library over the last release:

* dulayjm
    * Add object detection to inference client (1548)
* martinbrose
    * Added support for secret description (1594)
    * Check if repo or file exists (1591)
    * Implemented check_disk_space function (1590)
    * Added support for space variables (1592)
    * Add settings for creating and duplicating spaces (1625)
    * Implemented CLI download functionality (1617)
    * Implemented CLI upload functionality (1618)
    * Add text classification to inference client (1606)
    * Add token classification to inference client (1607)
    * Add translation to inference client (1608)
    * Add question answering to inference client (1609)
    * Add table question answering to inference client (1612)
    * Add fill mask to inference client (1613)
    * Add visual question answering to inference client (1621)
    * Add document question answering to InferenceClient (1620)
    * Add tabular classification to inference client (1614)
    * Add tabular regression to inference client (1615)
    * Add list_deployed_models to inference client (1622)
* sifisKoen
    * Add get_model_status function (1558) (1559)
    * Add documentation for modelcard Metadata. Resolves (1448) (1631)

0.16.4

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v0.16.3...v0.16.4

Hotfix to avoid sharing `requests.Session` between processes. More information in https://github.com/huggingface/huggingface_hub/pull/1545. Internally, we create a Session object per thread to benefit from the HTTPSConnectionPool (i.e. do not reopen connections between calls). Due to an implementation bug, the Session object from the main thread was shared if a fork of the main process happened. The shared Session gets corrupted in the process, leading to random ConnectionErrors on rare occasions.

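The idea behind the fix, as a minimal sketch (not the library's exact implementation):

```py
import os
import threading
import requests

_thread_local = threading.local()

def get_session() -> requests.Session:
    # One Session per thread: connections are reused within a thread,
    # but never shared across threads or processes.
    if not hasattr(_thread_local, "session"):
        _thread_local.session = requests.Session()
    return _thread_local.session

# Make forked children start with fresh sessions instead of inheriting
# (and corrupting) the parent's open connections.
if hasattr(os, "register_at_fork"):
    os.register_at_fork(after_in_child=lambda: _thread_local.__dict__.clear())
```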
Check out [these release notes](https://github.com/huggingface/huggingface_hub/releases/tag/v0.16.2) to learn more about the v0.16 release.

0.16.3

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v0.16.2...v0.16.3

Hotfix to print the request ID if any `RequestException` happens. This is useful to help the team debug users' problems. The request ID is a generated UUID, unique for each HTTP call made to the Hub.

Check out [these release notes](https://github.com/huggingface/huggingface_hub/releases/tag/v0.16.2) to learn more about the v0.16 release.

0.16.2

Inference

Introduced in the `v0.15` release, the `InferenceClient` got a big update in this one. The client is now reaching a stable point in terms of features. The next updates will be focused on continuing to add support for new tasks.

Async client

Asyncio calls are supported thanks to `AsyncInferenceClient`. Based on `asyncio` and `aiohttp`, it allows you to make efficient concurrent calls to the Inference endpoint of your choice. Every task supported by `InferenceClient` is supported in its async version. Method inputs, outputs, and logic are strictly the same, except that you must await the coroutine.

```py
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()

>>> image = await client.text_to_image("An astronaut riding a horse on the moon.")
```


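Because the calls are coroutines, several requests can run concurrently with `asyncio.gather` (a sketch; the prompts are illustrative):

```py
import asyncio
from huggingface_hub import AsyncInferenceClient

async def main():
    client = AsyncInferenceClient()
    prompts = ["The huggingface_hub library is ", "Async clients are "]
    # Fire both requests at once and wait for all results.
    results = await asyncio.gather(
        *(client.text_generation(p, max_new_tokens=12) for p in prompts)
    )
    print(results)

asyncio.run(main())
```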
* Support asyncio with AsyncInferenceClient by Wauplin in 1524

Text-generation

Support for the text-generation task has been added. It is focused on fully supporting endpoints running on the [text-generation-inference](https://github.com/huggingface/text-generation-inference) framework. In fact, the code is heavily inspired by TGI's [Python client](https://github.com/huggingface/text-generation-inference/tree/main/clients/python), initially implemented by OlivierDehaene.

Text generation has 4 modes depending on the `details` (bool) and `stream` (bool) values. By default, a raw string is returned. If `details=True`, more information about the generated tokens is returned. If `stream=True`, generated tokens are returned one by one as soon as the server generates them. For more information, [check out the documentation](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.text_generation).

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

# stream=False, details=False
>>> client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'

# stream=True, details=True
>>> for details in client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True, stream=True):
...     print(details)
TextGenerationStreamResponse(token=Token(id=1425, text='100', logprob=-1.0175781, special=False), generated_text=None, details=None)
...
TextGenerationStreamResponse(token=Token(
        id=25,
        text='.',
        logprob=-0.5703125,
        special=False),
    generated_text='100% open source and built to be easy to use.',
    details=StreamDetails(finish_reason=<FinishReason.Length: 'length'>, generated_tokens=12, seed=None)
)
```


Of course, the async client also supports text-generation (see [docs](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.AsyncInferenceClient.text_generation)):

```py
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'
```


* prepare for tgi by Wauplin in 1511
* Support text-generation in InferenceClient by Wauplin in 1513

Zero-shot-image-classification

`InferenceClient` now supports zero-shot-image-classification (see [docs](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.zero_shot_image_classification)). Both the sync and async clients support it. It lets you classify an image based on a list of labels passed as input.


```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.zero_shot_image_classification(
...     "https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg",
...     labels=["dog", "cat", "horse"],
... )
[{"label": "dog", "score": 0.956}, ...]
```


Thanks to dulayjm for your contribution on this task!

* added zero shot image classification by dulayjm in 1528

Other

When using `InferenceClient`'s task methods (`text_to_image`, `text_generation`, `image_classification`,...) you don't have to pass a model id. By default, the client selects a model recommended for the task at hand and runs on the free public Inference API. This is useful for quick prototyping and testing. In a production-ready setup, we strongly recommend setting the model id/URL manually, as the recommended model may change at any time without prior notice, potentially leading to different and unexpected results in your workflow. Recommended models are the ones used by default on https://hf.co/tasks.

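Pinning a model for production is a one-liner (the model id is illustrative):

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient(model="prompthero/openjourney-v4")
>>> image = client.text_to_image("An astronaut riding a horse on the moon.")
```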
* Fetch inference model for task from API by Wauplin in 1510

It is now possible to configure headers and cookies to be sent when initializing the client: `InferenceClient(headers=..., cookies=...)`. All calls made with this client will then use these headers/cookies.

* Custom headers/cookies in InferenceClient by Wauplin in 1507

Commit API

CommitScheduler

The `CommitScheduler` is a new class that can be used to regularly push commits to the Hub. It watches for changes in a folder and creates a commit every 5 minutes if it detects a file change. One intended use case is to allow regular backups from a Space to a Dataset repository on the Hub. The scheduler is designed to remove the hassle of handling background commits while avoiding empty commits.

```py
>>> from huggingface_hub import CommitScheduler

# Schedule regular uploads every 10 minutes. Remote repo and local folder are created if they don't already exist.
>>> scheduler = CommitScheduler(
...     repo_id="report-translation-feedback",
...     repo_type="dataset",
...     folder_path=feedback_folder,
...     path_in_repo="data",
...     every=10,
... )
```


Check out [this guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/upload#scheduled-uploads) to understand how to use the `CommitScheduler`. It comes with [a Space](https://huggingface.co/spaces/Wauplin/space_to_dataset_saver) to showcase how to use it in 4 practical examples.

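As encouraged in the docs, `CommitScheduler.push_to_hub` can be overwritten to customize what happens at each scheduled push. A minimal sketch (the subclass name and printed message are illustrative):

```py
from huggingface_hub import CommitScheduler

class CustomScheduler(CommitScheduler):
    def push_to_hub(self):
        # Run custom logic (e.g. compress or deduplicate files)
        # before delegating to the default upload.
        print("Pushing scheduled commit...")
        return super().push_to_hub()
```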
* `CommitScheduler`: upload folder every 5 minutes by Wauplin in 1494
* Encourage to overwrite CommitScheduler.push_to_hub by Wauplin in 1506
* FIX Use token by default in CommitScheduler by Wauplin in 1509
* safer commit scheduler by Wauplin (direct commit on main)

HFSummaryWriter (tensorboard)

The Hugging Face Hub offers nice support for TensorBoard data. It automatically detects when TensorBoard traces (such as `tfevents` files) are pushed to the Hub and starts an instance to visualize them. This feature enables quick and transparent collaboration within your team when training models. In fact, more than [42k models](https://huggingface.co/models?library=tensorboard&sort=trending) are already using this feature!

With the `HFSummaryWriter` you can now take full advantage of the feature for your training, simply by updating a single line of code.

```py
>>> from huggingface_hub import HFSummaryWriter
>>> logger = HFSummaryWriter(repo_id="test_hf_logger", commit_every=15)
```


`HFSummaryWriter` inherits from `SummaryWriter` and acts as a drop-in replacement in your training scripts. The only addition is that every X minutes (e.g. 15 minutes), it pushes the logs directory to the Hub. Commits happen in the background to avoid blocking the main thread. If an upload crashes, the logs are kept locally and the training continues.

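Since it is a drop-in replacement, the rest of the logging code is unchanged (a sketch; the metric values are illustrative):

```py
from huggingface_hub import HFSummaryWriter

logger = HFSummaryWriter(repo_id="test_hf_logger", commit_every=15)
for step, loss in enumerate([0.9, 0.7, 0.5]):
    # Standard SummaryWriter API: written locally, pushed to the Hub
    # in the background every 15 minutes.
    logger.add_scalar("train/loss", loss, step)
```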
For more information on how to use it, check out this [documentation page](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/tensorboard). Please note that this is still an experimental feature so feedback is very welcome.

* Experimental hf logger by Wauplin in 1456

CommitOperationCopy

It is now possible to copy a file in a repo on the Hub. The copy can only happen within a repo and only for LFS files. Files can also be copied between different revisions. More information [here](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitOperationCopy).

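A minimal sketch of a copy commit (the repo id and paths are illustrative):

```py
from huggingface_hub import CommitOperationCopy, HfApi

api = HfApi()
api.create_commit(
    repo_id="user/repo",
    operations=[
        # Copy an LFS file to a new path within the same repo
        CommitOperationCopy(
            src_path_in_repo="model.safetensors",
            path_in_repo="backup/model.safetensors",
        ),
    ],
    commit_message="Duplicate weights file",
)
```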
* add CommitOperationCopy by lhoestq in 1495
* Use CommitOperationCopy in hffs by Wauplin in 1497
* Batch fetch_lfs_files_to_copy by lhoestq in 1504

Breaking changes

`ModelHubMixin` got updated (after a deprecation cycle):
- Kwargs must now be used instead of passing everything as positional args
- It is no longer possible to pass `model_id` as `username/repo_name@revision` in `ModelHubMixin`. The revision must be passed as a separate `revision` argument if needed (see the sketch below).

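In code, the change looks like this (a sketch with a hypothetical mixin subclass; repo name and revision are illustrative):

```py
from huggingface_hub import ModelHubMixin

class MyModel(ModelHubMixin):
    """Hypothetical subclass for illustration."""
    @classmethod
    def _from_pretrained(cls, *, model_id, revision, **kwargs):
        # Real subclasses load weights here; this stub just records the args.
        obj = cls()
        obj.model_id, obj.revision = model_id, revision
        return obj

# No longer supported: MyModel.from_pretrained("username/repo_name@my-branch")
# Pass the revision as a separate keyword argument instead:
model = MyModel.from_pretrained("username/repo_name", revision="my-branch")
```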
* Remove deprecated code for v0.16.x by Wauplin in 1492

Bug fixes and small improvements

Doc fixes

* [doc build] Use secrets by mishig25 in 1501
* Migrate doc files to Markdown by Wauplin in 1522
* fix doc example by Wauplin (direct commit on main)
* Update readme and contributing guide by Wauplin in 1534

HTTP fixes

An `x-request-id` header is now sent by default with every request made to the Hub. This should help with debugging user issues.

* Add x-request-id to every request by Wauplin in 1518


Three PRs and three commits later, the default timeout did not change in the end: the problem was solved server-side instead.
* Set 30s timeout on downloads (instead of 10s) by Wauplin in 1514
* Set timeout to 60 instead of 30 when downloading files by Wauplin in 1523
* Set timeout to 10s by ydshieh in 1530

Misc

* Rename "configs" dataset card field to "config_names" by polinaeterna in 1491
* update stats by Wauplin (direct commit on main)
* Retry on both ConnectTimeout and ReadTimeout by Wauplin in 1529
* update tip by Wauplin (direct commit on main)
* make repo_info public by Wauplin (direct commit on main)

Significant community contributions

The following contributors have made significant changes to the library over the last release:

* dulayjm
    * added zero shot image classification (1528)

