## Inference
Introduced in the `v0.15` release, the `InferenceClient` received a major update in this one. The client is now reaching a stable point in terms of features. The next releases will focus on adding support for more tasks.
### Async client
Asyncio calls are supported thanks to `AsyncInferenceClient`. Based on `asyncio` and `aiohttp`, it allows you to make efficient concurrent calls to the Inference endpoint of your choice. Every task supported by `InferenceClient` is also supported in its async version. Method inputs, outputs, and logic are strictly the same, except that you must await the coroutines.
```py
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> image = await client.text_to_image("An astronaut riding a horse on the moon.")
```
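To illustrate the concurrency mentioned above, here is a minimal sketch that fires several requests in parallel with `asyncio.gather`; the prompts are made up for illustration:

```py
import asyncio

from huggingface_hub import AsyncInferenceClient


async def main():
    client = AsyncInferenceClient()
    prompts = ["An astronaut riding a horse.", "A cat playing chess."]  # hypothetical prompts
    # Run both text-to-image calls concurrently and wait for all results.
    images = await asyncio.gather(*(client.text_to_image(prompt) for prompt in prompts))
    for i, image in enumerate(images):
        image.save(f"image_{i}.png")


asyncio.run(main())
```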
* Support asyncio with AsyncInferenceClient by @Wauplin in #1524
### Text-generation
Support for the text-generation task has been added. It is focused on fully supporting endpoints running on the [text-generation-inference](https://github.com/huggingface/text-generation-inference) framework. In fact, the code is heavily inspired by TGI's [Python client](https://github.com/huggingface/text-generation-inference/tree/main/clients/python), initially implemented by @OlivierDehaene.
Text generation has 4 modes depending on the `details` (bool) and `stream` (bool) values. By default, a raw string is returned. If `details=True`, more information about the generated tokens is returned. If `stream=True`, generated tokens are returned one by one as soon as the server generates them. For more information, [check out the documentation](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.text_generation).
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

# stream=False, details=False
>>> client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'

# stream=True, details=True
>>> for details in client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True, stream=True):
...     print(details)
TextGenerationStreamResponse(token=Token(id=1425, text='100', logprob=-1.0175781, special=False), generated_text=None, details=None)
...
TextGenerationStreamResponse(token=Token(
    id=25,
    text='.',
    logprob=-0.5703125,
    special=False),
    generated_text='100% open source and built to be easy to use.',
    details=StreamDetails(finish_reason=<FinishReason.Length: 'length'>, generated_tokens=12, seed=None)
)
```
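For completeness, the remaining non-streaming detailed mode (`details=True`, `stream=False`) returns a single response object; a minimal sketch, with illustrative output values:

```py
# details=True, stream=False
>>> output = client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True)
>>> output.generated_text
'100% open source and built to be easy to use.'
>>> output.details.finish_reason
<FinishReason.Length: 'length'>
```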
Of course, the async client also supports text-generation (see [docs](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.AsyncInferenceClient.text_generation)):
```py
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'
```
* prepare for tgi by @Wauplin in #1511
* Support text-generation in InferenceClient by @Wauplin in #1513
### Zero-shot-image-classification
`InferenceClient` now supports zero-shot-image-classification (see [docs](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.zero_shot_image_classification)). Both the sync and async clients support it. It lets you classify an image based on a list of candidate labels passed as input.
```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.zero_shot_image_classification(
...     "https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg",
...     labels=["dog", "cat", "horse"],
... )
[{"label": "dog", "score": 0.956}, ...]
```
Thanks to @dulayjm for your contribution on this task!
* added zero shot image classification by @dulayjm in #1528
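Since both clients support the task, the same call can simply be awaited on the async client; a minimal sketch:

```py
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.zero_shot_image_classification(
...     "https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg",
...     labels=["dog", "cat", "horse"],
... )
[{"label": "dog", "score": 0.956}, ...]
```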
### Other
When using `InferenceClient`'s task methods (`text_to_image`, `text_generation`, `image_classification`, ...), you don't have to pass a model id. By default, the client selects a model recommended for the requested task and runs it on the free public Inference API. This is useful to quickly prototype and test models. In a production-ready setup, we strongly recommend setting the model id/URL manually, as the recommended model may change at any time without prior notice, potentially resulting in different and unexpected outputs in your workflow. Recommended models are the ones used by default on https://hf.co/tasks.
* Fetch inference model for task from API by @Wauplin in #1510
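To pin a model explicitly, pass it either when initializing the client or per call; the model id below is only an example:

```py
>>> from huggingface_hub import InferenceClient

# Pin a model (or an Inference Endpoint URL) for all calls made with this client...
>>> client = InferenceClient(model="stabilityai/stable-diffusion-2-1")
>>> image = client.text_to_image("An astronaut riding a horse on the moon.")

# ...or override the model for a single call.
>>> client = InferenceClient()
>>> image = client.text_to_image("An astronaut riding a horse on the moon.", model="stabilityai/stable-diffusion-2-1")
```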
It is now possible to configure headers and cookies to be sent when initializing the client: `InferenceClient(headers=..., cookies=...)`. All calls made with this client will then use these headers/cookies.
* Custom headers/cookies in InferenceClient by @Wauplin in #1507
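For example (the header and cookie names below are made up):

```py
>>> from huggingface_hub import InferenceClient

# Every call made with this client will send the extra header and cookie.
>>> client = InferenceClient(headers={"X-My-Header": "value"}, cookies={"my-cookie": "value"})
```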
## Commit API
### CommitScheduler
The `CommitScheduler` is a new class that can be used to push commits to the Hub on a regular basis. It watches a folder for changes and creates a commit every 5 minutes if it has detected a file change. One intended use case is to allow regular backups from a Space to a Dataset repository on the Hub. The scheduler is designed to remove the hassle of handling background commits while avoiding empty commits.
```py
>>> from huggingface_hub import CommitScheduler

# Schedule regular uploads every 10 minutes. Remote repo and local folder are created if they don't already exist.
>>> scheduler = CommitScheduler(
...     repo_id="report-translation-feedback",
...     repo_type="dataset",
...     folder_path=feedback_folder,
...     path_in_repo="data",
...     every=10,
... )
```
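From there, anything written to the watched folder gets picked up at the next scheduled run. A minimal sketch of appending records, using the scheduler's lock to avoid writing while an upload is in progress (the file name is hypothetical):

```py
import json
from pathlib import Path

feedback_file = Path(feedback_folder) / "data.json"  # hypothetical file inside the watched folder

# Append one record; the scheduler will commit the updated file on its next run.
with scheduler.lock:
    with feedback_file.open("a") as f:
        f.write(json.dumps({"rating": 5, "comment": "great!"}) + "\n")
```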
Check out [this guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/upload#scheduled-uploads) to understand how to use the `CommitScheduler`. It comes with [a Space](https://huggingface.co/spaces/Wauplin/space_to_dataset_saver) to showcase how to use it in 4 practical examples.
* `CommitScheduler`: upload folder every 5 minutes by @Wauplin in #1494
* Encourage to overwrite CommitScheduler.push_to_hub by @Wauplin in #1506
* FIX Use token by default in CommitScheduler by @Wauplin in #1509
* safer commit scheduler by @Wauplin (direct commit on main)
### HFSummaryWriter (tensorboard)
The Hugging Face Hub offers nice support for TensorBoard data. It automatically detects when TensorBoard traces (such as `tfevents` files) are pushed to the Hub and starts an instance to visualize them. This feature enables quick and transparent collaboration in your team when training models. In fact, more than [42k models](https://huggingface.co/models?library=tensorboard&sort=trending) are already using it!
With the `HFSummaryWriter` you can now take full advantage of the feature for your training, simply by updating a single line of code.
```py
>>> from huggingface_hub import HFSummaryWriter
>>> logger = HFSummaryWriter(repo_id="test_hf_logger", commit_every=15)
```
`HFSummaryWriter` inherits from `SummaryWriter` and acts as a drop-in replacement in your training scripts. The only addition is that every X minutes (e.g. every 15 minutes) it pushes the logs directory to the Hub. Commits happen in the background to avoid blocking the main thread. If an upload crashes, the logs are kept locally and training continues.
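Because it is a drop-in replacement, the usual `SummaryWriter` logging calls work unchanged; a minimal sketch (metric name and values are made up):

```py
>>> from huggingface_hub import HFSummaryWriter
>>> logger = HFSummaryWriter(repo_id="test_hf_logger", commit_every=15)
>>> for step in range(100):
...     # Standard tensorboard API: logs are written locally and pushed to the Hub every 15 minutes.
...     logger.add_scalar("train/loss", 1.0 / (step + 1), global_step=step)
```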
For more information on how to use it, check out this [documentation page](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/tensorboard). Please note that this is still an experimental feature so feedback is very welcome.
* Experimental hf logger by @Wauplin in #1456
### CommitOperationCopy
It is now possible to copy a file within a repo on the Hub. The copy can only happen inside a single repo and only for LFS files. Files can be copied between different revisions. More information [here](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.CommitOperationCopy).
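A minimal sketch of combining it with `create_commit` (the repo id and file paths are made up):

```py
>>> from huggingface_hub import CommitOperationCopy, HfApi

>>> api = HfApi()
>>> api.create_commit(
...     repo_id="my-username/my-repo",  # hypothetical repo
...     operations=[
...         # Copy an LFS file to a new path within the same repo.
...         CommitOperationCopy(src_path_in_repo="model.safetensors", path_in_repo="backup/model.safetensors"),
...     ],
...     commit_message="Copy model weights",
... )
```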
* add CommitOperationCopy by @lhoestq in #1495
* Use CommitOperationCopy in hffs by @Wauplin in #1497
* Batch fetch_lfs_files_to_copy by @lhoestq in #1504
## Breaking changes
`ModelHubMixin` got updated (after a deprecation cycle):
- Arguments must now be passed as keyword arguments instead of positional arguments.
- It is no longer possible to pass `model_id` as `username/repo_name@revision` in `ModelHubMixin`. The revision must be passed as a separate `revision` argument if needed (see the sketch below).
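A minimal sketch of the new calling convention, assuming a hypothetical mixin-based model class `MyModel`:

```py
>>> from huggingface_hub import PyTorchModelHubMixin

>>> class MyModel(PyTorchModelHubMixin):
...     ...

# Before (no longer supported): MyModel.from_pretrained("username/repo_name@my-revision")
# Now: pass the revision as a separate keyword argument.
>>> model = MyModel.from_pretrained("username/repo_name", revision="my-revision")
```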
* Remove deprecated code for v0.16.x by @Wauplin in #1492
## Bug fixes and small improvements
### Doc fixes
* [doc build] Use secrets by @mishig25 in #1501
* Migrate doc files to Markdown by @Wauplin in #1522
* fix doc example by @Wauplin (direct commit on main)
* Update readme and contributing guide by @Wauplin in #1534
### HTTP fixes
An `x-request-id` header is now sent by default with every request made to the Hub. This should help with debugging user issues.
* Add x-request-id to every request by @Wauplin in #1518
Three PRs and three commits later, the default timeout did not change in the end; the problem has been solved server-side instead.
* Set 30s timeout on downloads (instead of 10s) by @Wauplin in #1514
* Set timeout to 60 instead of 30 when downloading files by @Wauplin in #1523
* Set timeout to 10s by @ydshieh in #1530
### Misc
* Rename "configs" dataset card field to "config_names" by @polinaeterna in #1491
* update stats by @Wauplin (direct commit on main)
* Retry on both ConnectTimeout and ReadTimeout by @Wauplin in #1529
* update tip by @Wauplin (direct commit on main)
* make repo_info public by @Wauplin (direct commit on main)
## Significant community contributions
The following contributors have made significant changes to the library over the last release:
* @dulayjm
    * added zero shot image classification (#1528)