Huggingface-hub

Latest version: v0.26.2


0.19.1

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v0.19.0...v0.19.1

Fixes a regression introduced in `0.19.0` that made looping over models with `list_models` fail (fix in PR https://github.com/huggingface/huggingface_hub/pull/1821). The problem came from the fact that we now parse the data returned by the server into Python objects. However, the model card metadata of some models is not valid. This is usually checked by the server, but some models created before we started enforcing correct metadata slipped through. This hot-fix resolves the issue by ignoring the corrupted data, if any.

0.19.0

(Discuss the release [in our Community Tab](https://huggingface.co/spaces/Wauplin/huggingface_hub/discussions/2). Feedback welcome! 🤗)

🚀 Inference Endpoints API

Inference Endpoints provides a secure solution to easily deploy models hosted on the Hub in a production-ready infrastructure managed by Hugging Face. Starting with `huggingface_hub>=0.19.0`, you can manage your Inference Endpoints programmatically. Combined with the `InferenceClient`, this becomes the go-to solution to deploy models and run jobs in production, either sequentially or in batch!

Here is an example of how to get an inference endpoint, wake it up, wait for initialization, run jobs in batch, and pause the endpoint. All of this in a few lines of code! For more details, please check out our [dedicated guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/inference_endpoints).

```python
>>> import asyncio
>>> from huggingface_hub import get_inference_endpoint

# Get endpoint + wait until initialized
>>> endpoint = get_inference_endpoint("batch-endpoint").resume().wait()

# Run inference
>>> async_client = endpoint.async_client
>>> results = await asyncio.gather(*[async_client.text_generation(...) for job in jobs])

# Pause endpoint
>>> endpoint.pause()
```


* Implement API for Inference Endpoints by Wauplin in 1779
* Fix inference endpoints docs by Wauplin in 1785

⏬ Improved download experience

`huggingface_hub` is a library primarily used to transfer (huge!) files to and from the Hugging Face Hub. Our goal is to keep improving the experience for this core part of the library. In this release, we introduce a more robust download mechanism for slow/limited connections, while improving the UX for users with high bandwidth available!

More robust downloads

Getting a connection error in the middle of a download is frustrating. That's why we've implemented a retry mechanism that automatically reconnects if a connection gets closed or a `ReadTimeout` error is raised. The download restarts exactly where it stopped, without having to redownload any bytes.

* Retry on ConnectionError/ReadTimeout when streaming file from server by Wauplin in 1766
* Reset nb_retries if data has been received from the server by Wauplin in 1784

In addition to this, it is possible to configure `huggingface_hub` with higher timeouts, thanks to Shahafgo. This should help get around some issues on slower connections.

* Adding the ability to configure the timeout of get request by Shahafgo in 1720
* Fix a bug to respect the HF_HUB_ETAG_TIMEOUT. by Shahafgo in 1728
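
As a sketch, the timeouts can be raised through environment variables set before the library is imported. `HF_HUB_ETAG_TIMEOUT` is named in the changes above; `HF_HUB_DOWNLOAD_TIMEOUT` is its companion for the download request itself, and the values below are illustrative, not the defaults:

```python
import os

# Both variables are read when `huggingface_hub` is imported, so set them first.
# Values are in seconds.
os.environ["HF_HUB_DOWNLOAD_TIMEOUT"] = "30"  # timeout when downloading a file
os.environ["HF_HUB_ETAG_TIMEOUT"] = "10"      # timeout for the metadata (etag) request
```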

Progress bars while using `hf_transfer`

`hf_transfer` is a Rust-based library focused on improving upload and download speed on machines with high bandwidth available. Once installed (`pip install -U hf_transfer`), it can be used transparently with `huggingface_hub` simply by setting `HF_HUB_ENABLE_HF_TRANSFER=1` as an environment variable. The counterpart of higher performance is the lack of some user-friendly features, such as better error handling or a retry mechanism, meaning it is recommended only for power users. In this release we still ship a new feature to improve UX: progress bars. No need to update any existing code; a simple library upgrade is enough.

* `hf-transfer` progress bar by cbensimon in 1792
* Add support for progress bars in hf_transfer uploads by Wauplin in 1804
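
For reference, a minimal sketch of opting in from Python rather than the shell (the flag must be set before `huggingface_hub` reads its settings, i.e. before importing it):

```python
import os

# `hf_transfer` must be installed separately (`pip install -U hf_transfer`) and the
# flag must be set before importing `huggingface_hub`.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

# from huggingface_hub import snapshot_download
# snapshot_download("gpt2")  # transfers would now go through hf_transfer, with progress bars
```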


📚 Documentation

`huggingface-cli` guide

`huggingface-cli` is the CLI tool shipped with `huggingface_hub`. It recently got some nice improvements, especially commands to download and upload files directly from the terminal. All of this needed a guide, so [here it is](https://huggingface.co/docs/huggingface_hub/main/en/guides/cli)!

* Add CLI guide to documentation by Wauplin in 1797

Environment variables

Environment variables are useful to configure how `huggingface_hub` should work. Historically, we had some inconsistencies in how those variables were named. This is now improved, with a backward-compatible approach. Please check the [package reference](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/environment_variables) for more details. The goal is to propagate those changes to the whole HF ecosystem, making configuration easier for everyone.

* Harmonize environment variables by Wauplin in 1786
* Ensure backward compatibility for HUGGING_FACE_HUB_TOKEN env variable by Wauplin in 1795
* Do not promote `HF_ENDPOINT` environment variable by Wauplin in 1799
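
A minimal sketch of the harmonized scheme (names from the package reference; the values are placeholders, and the legacy `HUGGING_FACE_HUB_TOKEN` keeps working for backward compatibility):

```python
import os

# Harmonized names (values shown are illustrative):
os.environ["HF_HOME"] = os.path.expanduser("~/.cache/huggingface")  # root directory for HF caches
os.environ["HF_TOKEN"] = "hf_***"  # harmonized name; legacy HUGGING_FACE_HUB_TOKEN still honored
```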

Hindi translation

Hindi documentation landed on the Hub thanks to aneeshd27! Check out the Hindi version of the quickstart guide [here](https://huggingface.co/docs/huggingface_hub/main/hi/quick-start).

* Added translation of 3 files as mentioned in issue by aneeshd27 in 1772

Minor docs fixes

* Added `[[autodoc]]` for `ModelStatus` by jamesbraza in 1758
* Expanded docstrings on `post` and `ModelStatus` by jamesbraza in 1740
* Fix document link for manage-cache by liuxueyang in 1774
* Minor doc fixes by pcuenca in 1775

💔 Breaking changes

Legacy `ModelSearchArguments` and `DatasetSearchArguments` have been completely removed from `huggingface_hub`. This shouldn't cause problems as they were already not in use (and unusable in practice).

* Removed GeneralTags, ModelTags and DatasetTags by VictorHugoPilled in 1761

Classes containing details about a repo (`ModelInfo`, `DatasetInfo` and `SpaceInfo`) have been refactored by mariosasko to be more Pythonic and aligned with the other classes in `huggingface_hub`. In particular, those objects are now based on the `dataclasses` module instead of a custom `ReprMixin` class. Every change is meant to be backward compatible, meaning no breaking changes are expected. However, if you detect any inconsistency, please let us know and we will fix it asap.

* Replace `ReprMixin` with dataclasses by mariosasko in 1788
* Fix SpaceInfo initialization + add test by Wauplin in 1802

The legacy `Repository` and `InferenceAPI` classes are now deprecated but will not be removed before the next major release (`v1.0`).
Instead of the git-based `Repository`, we advise using the HTTP-based `HfApi`. Check out [this guide](https://huggingface.co/docs/huggingface_hub/main/en/concepts/git_vs_http) explaining the reasons behind it. For `InferenceAPI`, [we recommend](https://huggingface.co/docs/huggingface_hub/main/en/guides/inference#legacy-inferenceapi-client) switching to `InferenceClient`, which is much more feature-complete and will keep getting improved.

* Deprecate `Repository` class by Wauplin in 1724
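
As an illustration of the migration, here is a minimal sketch of a `Repository`-free push using `HfApi` (the helper name `push_folder` is ours, not part of the library):

```python
from huggingface_hub import HfApi

def push_folder(local_dir: str, repo_id: str) -> str:
    """Hypothetical replacement for `Repository(...).push_to_hub()`:
    no local .git/ clone, a single HTTP-based commit."""
    api = HfApi()
    api.create_repo(repo_id, exist_ok=True)
    return api.upload_folder(
        folder_path=local_dir,
        repo_id=repo_id,
        commit_message="Pushed with HfApi instead of Repository",
    )
```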

⚙️ Miscellaneous improvements, fixes and maintenance

`InferenceClient`

* Adding `InferenceClient.get_recommended_model` by jamesbraza in 1770
* Fix InferenceClient.text_generation when pydantic is not installed by Wauplin in 1793
* Supporting `pydantic<3` by jamesbraza in 1727

`HfFileSystem`

* [hffs] Raise `NotImplementedError` on transaction commits by Wauplin in 1736
* Fix huggingface filesystem repo_type not forwarded by Wauplin in 1791
* Fix `HfFileSystemFile` when init fails + improve error message by Wauplin in 1805

FIPS compliance

* Set usedforsecurity=False in hashlib methods (FIPS compliance) by Wauplin in 1782

Misc fixes

* Fix UnboundLocalError when using commit context manager by hahunavth in 1722
* Fixed improperly configured 'every' leading to test_sync_and_squash_history failure by jamesbraza in 1731
* Testing `WEBHOOK_PAYLOAD_EXAMPLE` deserialization by jamesbraza in 1732
* Keep lock files in a `/locks` folder to prevent rare concurrency issue by beeender in 1659
* Fix Space runtime on static Space by Wauplin in 1754
* Clearer error message on unprocessable entity. by Wauplin in 1755
* Do not warn in ModelHubMixin on missing config file by Wauplin in 1776
* Update SpaceHardware enum by Wauplin in 1798
* change prop name by julien-c in 1803

Internal

* Bump version to 0.19 by Wauplin in 1723
* Make `retry_endpoint` a default for all test by Wauplin in 1725
* Retry test on 502 Bad Gateway by Wauplin in 1737
* Consolidated mypy type ignores in `InferenceClient.post` by jamesbraza in 1742
* fix: remove useless token by rtrompier in 1765
* Fix CI (typing-extensions minimal requirement) by Wauplin in 1781
* remove black formatter to use only ruff by Wauplin in 1783
* Separate test and prod cache (+ ruff formatter) by Wauplin in 1789
* fix 3.8 tensorflow in ci by Wauplin (direct commit on main)


🤗 Significant community contributions

The following contributors have made significant changes to the library over the last release:

* VictorHugoPilled
* Removed GeneralTags, ModelTags and DatasetTags (1761)
* aneeshd27
* Added translation of 3 files as mentioned in issue (1772)

0.18.0

(Discuss the release and provide feedback [in the Community Tab](https://huggingface.co/spaces/Wauplin/huggingface_hub/discussions/1)!)

Collection API 🎉

Collection API is now fully supported in `huggingface_hub`!

A collection is a group of related items on the Hub (models, datasets, Spaces, papers) that are organized together on the same page. Collections are useful for creating your own portfolio, bookmarking content in categories, or presenting a curated list of items you want to share. Check out this [guide](https://huggingface.co/docs/hub/collections) to understand in more detail what collections are and this [guide](https://huggingface.co/docs/huggingface_hub/guides/collections) to learn how to build them programmatically.

Create/get/update/delete collection:

- `get_collection`
- `create_collection`: title, description, namespace, private
- `update_collection_metadata`: title, description, position, private, theme
- `delete_collection`

Add/update/remove item from collection:

- `add_collection_item`: item id, item type, note
- `update_collection_item`: note, position
- `delete_collection_item`

Usage

```py
>>> from huggingface_hub import get_collection
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")
>>> collection.title
'Recent models'
>>> len(collection.items)
37
>>> collection.items[0]
CollectionItem: {
{'_id': '6507f6d5423b46492ee1413e',
'id': 'TheBloke/TigerBot-70B-Chat-GPTQ',
'author': 'TheBloke',
'item_type': 'model',
'lastModified': '2023-09-19T12:55:21.000Z',
(...)
}}
```


```py
>>> from huggingface_hub import add_collection_item, create_collection

# Create collection
>>> collection = create_collection(
...     title="ICCV 2023",
...     description="Portfolio of models, papers and demos I presented at ICCV 2023",
... )

# Add item with a note
>>> add_collection_item(
...     collection_slug=collection.slug,  # e.g. "davanstrien/climate-64f99dc2a5067f6b65531bab"
...     item_id="datasets/climate_fever",
...     item_type="dataset",
...     note="This dataset adopts the FEVER methodology that consists of 1,535 real-world claims regarding climate-change collected on the internet."
... )
```


* Add Collection API by Wauplin in 1687
* Add `url` attribute to Collection class by Wauplin in 1695
* [Fix] Add collections guide to overview page by Wauplin in 1696

📚 Translated documentation

Documentation is now available in both [German](https://huggingface.co/docs/huggingface_hub/main/de/index) and [Korean](https://huggingface.co/docs/huggingface_hub/main/ko/index) thanks to community contributions! This is an important milestone for Hugging Face in its mission to democratize good machine learning.

* 🌐 [i18n-DE] Translate docs to German by martinbrose in 1646
* 🌐 [i18n-KO] Translated README, landing docs to Korean by wonhyeongseo in 1667
* Update i18n template by Wauplin in 1680
* Add German concepts guide by martinbrose in 1686

Preupload files before committing

(**Disclaimer:** this is a power-user feature. It is not expected to be used directly by end users.)

When using `create_commit` (or `upload_file`/`upload_folder`), the internal workflow has 3 main steps:
1. List the files to upload and check if those are regular files (text) or LFS files (binaries or huge files)
2. Upload the LFS files to S3
3. Create a commit on the Hub (upload regular files + reference S3 urls at once). The LFS upload is important to avoid large payloads during the commit call.

In this release, we introduce `preupload_lfs_files` to perform step 2 independently of step 3. This is useful for libraries like `datasets` that generate huge files "on-the-fly" and want to preupload them one by one before making one commit with all the files. For more details, please read this [guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/upload#preupload-lfs-files-before-commit).

* Preupload lfs files before committing by Wauplin in 1699
* Hide `CommitOperationAdd`'s internal attributes by mariosasko in 1716
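
The workflow can be sketched as follows (the helper `commit_with_preupload` is illustrative, not part of the library):

```python
from pathlib import Path
from huggingface_hub import CommitOperationAdd, HfApi

def commit_with_preupload(repo_id: str, paths: list[Path]) -> None:
    """Illustrative helper: pre-upload each LFS file as it becomes available (step 2),
    then reference them all in a single commit (step 3)."""
    api = HfApi()
    operations = []
    for path in paths:
        op = CommitOperationAdd(path_in_repo=path.name, path_or_fileobj=str(path))
        api.preupload_lfs_files(repo_id, additions=[op])  # uploads the bytes to S3 now
        operations.append(op)
    api.create_commit(repo_id, operations=operations, commit_message="Add files one by one")
```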

Miscellaneous improvements

❤️ List repo likers

Similarly to `list_user_likes` (listing all likes of a user), we now introduce `list_repo_likers` to list all likes on a repo - thanks to issamarabi.

```py
>>> from huggingface_hub import list_repo_likers
>>> likers = list_repo_likers("gpt2")
>>> len(likers)
204
>>> likers
[User(username=..., fullname=..., avatar_url=...), ...]
```


* Add list_repo_likers method to HfApi by issamarabi in 1715

Refactored Dataset Card template

Template for the Dataset Card has been updated to be more aligned with the Model Card template.

* Dataset card template overhaul by mariosasko in 1708

QOL improvements

This release also adds a few QOL improvements for users:

* Suggest to check firewall/proxy settings + default to local file by Wauplin in 1670
* debug logs to debug level by Wauplin (direct commit on main)
* Change `TimeoutError` => `asyncio.TimeoutError` by matthewgrossman in 1666
* Handle `refs/convert/parquet` and PR revision correctly in hffs by Wauplin in 1712
* Document hf_transfer more prominently by Wauplin in 1714

Breaking change

A breaking change has been introduced in `CommitOperationAdd` in order to implement `preupload_lfs_files` in a way that is convenient for users. The main change is that `CommitOperationAdd` is no longer a static object but is modified internally by `preupload_lfs_files` and `create_commit`. This means that **you cannot reuse a `CommitOperationAdd` object** once it has been committed to the Hub. If you do so, an explicit exception will be raised. You can still reuse the operation objects if the commit call failed and you retry it. We hope that it will not affect any users, but please open an issue if you encounter any problems.

* Preupload lfs files before committing by Wauplin in 1699

⚙️ Small fixes and maintenance

Docs fixes

* Move repo size limitations to Hub docs by Wauplin in 1660
* Correct typo in upload guide by martinbrose in 1677
* Fix broken tips in login reference by Wauplin in 1688

Misc fixes

* Fixes filtering by tags with list_models and adds test case by martinbrose in 1673
* Add default user-agent to huggingface-cli by Wauplin in 1664
* Automatically retry on create_repo if '409 conflicting op in progress' by Wauplin in 1675
* Fix upload CLI when pushing to Space by Wauplin in 1669
* longer pbar descr, drop D-word by poedator in 1679
* Pin `fsspec` to use default `expand_path` by mariosasko in 1681
* Address failing _check_disk_space() when path doesn't exist yet by martinbrose in 1692
* Handle TGI error when streaming tokens by Wauplin in 1711

Internal

* bump version to `0.18.0.dev0` by Wauplin in 1658
* sudo apt update in CI by Wauplin (direct commit on main)
* fix CI tests by Wauplin (direct commit on main)
* Skip flaky InferenceAPI test by Wauplin (direct commit on main)
* Respect `HTTPError` spec by Wauplin in 1693
* skip flaky test by Wauplin (direct commit on main)
* Fix LFS tests after password auth deprecation by Wauplin in 1713


🤗 Significant community contributions

The following contributors have made significant changes to the library over the last release:

* martinbrose
* Correct typo in upload guide (1677)
* 🌐 [i18n-DE] Translate docs to German (1646)
* Fixes filtering by tags with list_models and adds test case (1673)
* Add German concepts guide (1686)
* Address failing _check_disk_space() when path doesn't exist yet (1692)
* wonhyeongseo
* 🌐 [i18n-KO] Translated README, landing docs to Korean (1667)

0.17.3

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v0.17.2...v0.17.3

Fixes a bug when downloading files to a non-existent directory. In https://github.com/huggingface/huggingface_hub/pull/1590 we introduced a helper that raises a warning if there is not enough disk space to download a file. A bug made the helper raise an exception if the folder didn't exist yet, as reported in https://github.com/huggingface/huggingface_hub/issues/1690. This hot-fix fixes it thanks to https://github.com/huggingface/huggingface_hub/pull/1692, which recursively checks the parent directories if the full path doesn't exist. If it keeps failing (for any `OSError`), we silently ignore the error and keep going: missing the warning is better than breaking downloads for legitimate users.

Check out the [release notes](https://github.com/huggingface/huggingface_hub/releases/tag/v0.17.0) to learn more about the v0.17 release.

0.17.2

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v0.17.1...v0.17.2

Fixes a bug when uploading files to a Space repo using the CLI. The command was trying to create the repo (even if it already existed) and failed because `space_sdk` was not provided in that case. More details in https://github.com/huggingface/huggingface_hub/pull/1669.
Also updates the user-agent when using `huggingface-cli upload`. See https://github.com/huggingface/huggingface_hub/pull/1664.

Check out the [release notes](https://github.com/huggingface/huggingface_hub/releases/tag/v0.17.0) to learn more about the v0.17 release.

0.17.0

InferenceClient

All tasks are now supported! :boom:

Thanks to a massive community effort, all inference tasks are now supported in `InferenceClient`. Newly added tasks are:

* Object detection by dulayjm in 1548
* Text classification by martinbrose in 1606
* Token classification by martinbrose in 1607
* Translation by martinbrose in 1608
* Question answering by martinbrose in 1609
* Table question answering by martinbrose in 1612
* Fill mask by martinbrose in 1613
* Tabular classification by martinbrose in 1614
* Tabular regression by martinbrose in 1615
* Document question answering by martinbrose in 1620
* Visual question answering by martinbrose in 1621
* Zero shot classification by Wauplin in 1644

Documentation, including examples, for each of these tasks can be found [in this table](https://huggingface.co/docs/huggingface_hub/main/en/guides/inference#supported-tasks).

All those methods also support async mode using `AsyncInferenceClient`.
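
For instance, here is a minimal sketch of running several requests concurrently with `AsyncInferenceClient` (the helper name `generate_all` is ours, not part of the library):

```python
import asyncio
from huggingface_hub import AsyncInferenceClient

async def generate_all(prompts: list[str]) -> list[str]:
    # Fire several text-generation requests concurrently and gather the results.
    client = AsyncInferenceClient()
    return await asyncio.gather(*(client.text_generation(p) for p in prompts))
```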

Get InferenceAPI status

Sometimes it is useful to know which models are (or are not) available on the Inference API service. This release introduces two new helpers:

1. `list_deployed_models` aims to help users discover which models are currently deployed, listed by task.
2. `get_model_status` aims to get the status of a specific model. That's useful if you already know which model you want to use.

Those two helpers are only available for the Inference API, not Inference Endpoints (or any other provider).

```py
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

# Discover zero-shot-classification models currently deployed
>>> models = client.list_deployed_models()
>>> models["zero-shot-classification"]
['Narsil/deberta-large-mnli-zero-cls', 'facebook/bart-large-mnli', ...]

# Get status for a specific model
>>> client.get_model_status("bigcode/starcoder")
ModelStatus(loaded=True, state='Loaded', compute_type='gpu', framework='text-generation-inference')
```


* Add get_model_status function by sifisKoen in 1558
* Add list_deployed_models to inference client by martinbrose in 1622

Few fixes

* Send Accept: image/png as header for image tasks by Wauplin in 1567
* FIX `text_to_image` and `image_to_image` parameters by Wauplin in 1582
* Distinguish _bytes_to_dict and _bytes_to_list + fix issues by Wauplin in 1641
* Return whole response from feature extraction endpoint instead of assuming its shape by skulltech in 1648

Download and upload files... from the CLI :fire: :fire: :fire:

This is a long-awaited feature, finally implemented! `huggingface-cli` now offers two new commands to easily transfer files from/to the Hub. The goal is to use them as a replacement for `git clone`, `git pull` and `git push`. Despite being less feature-complete than `git` (no `.git/` folder, no notion of local commits), it offers the flexibility required when working with large repositories.

**Download**


```bash
# Download a single file
>>> huggingface-cli download gpt2 config.json
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json

# Download files to a local directory
>>> huggingface-cli download gpt2 config.json --local-dir=./models/gpt2
./models/gpt2/config.json

# Download a subset of a repo
>>> huggingface-cli download bigcode/the-stack --repo-type=dataset --revision=v1.2 --include="data/python/*" --exclude="*.json" --exclude="*.zip"
Fetching 206 files: 100%|████████████████████████████████████████████| 206/206 [02:31<2:31, ?it/s]
/home/wauplin/.cache/huggingface/hub/datasets--bigcode--the-stack/snapshots/9ca8fa6acdbc8ce920a0cb58adcdafc495818ae7
```


**Upload**


```bash
# Upload single file
huggingface-cli upload my-cool-model model.safetensors

# Upload entire directory
huggingface-cli upload my-cool-model ./models

# Sync local Space with Hub (upload new files except from logs/, delete removed files)
huggingface-cli upload Wauplin/space-example --repo-type=space --exclude="/logs/*" --delete="*" --commit-message="Sync local Space with Hub"
```


**Docs**

For more examples, check out the documentation:
- [`huggingface-cli download`](https://huggingface.co/docs/huggingface_hub/main/en/guides/download#download-from-the-cli)
- [`huggingface-cli upload`](https://huggingface.co/docs/huggingface_hub/main/en/guides/upload#upload-from-the-cli)

* Implemented CLI download functionality by martinbrose in 1617
* Implemented CLI upload functionality by martinbrose in 1618

:rocket: Space API

Some new features have been added to the Space API to:

* request persistent storage for a Space
* set a description to a Space's secrets
* set variables on a Space
* configure your Space (hardware, storage, secrets,...) in a single call when you create or duplicate it

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.create_repo(
...     repo_id=repo_id,
...     repo_type="space",
...     space_sdk="gradio",
...     space_hardware="t4-medium",
...     space_sleep_time="3600",
...     space_storage="large",
...     space_secrets=[{"key": "HF_TOKEN", "value": "hf_api_***"}, ...],
...     space_variables=[{"key": "MODEL_REPO_ID", "value": "user/repo"}, ...],
... )
```


A special thanks to martinbrose, who contributed substantially to these new features.

* Request Persistent Storage by freddyaboulton in 1571
* Support factory reboot when restarting a Space by Wauplin in 1586
* Added support for secret description by martinbrose in 1594
* Added support for space variables by martinbrose in 1592
* Add settings for creating and duplicating spaces by martinbrose in 1625

:books: Documentation

A new section has been added to the [upload guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/upload#tips-and-tricks-for-large-uploads) with tips about how to upload large models and datasets to the Hub and the limits that apply when doing so.

* Tips to upload large models/datasets by Wauplin in 1565
* Add the hard limit of 50GB on LFS files by severo in 1624

:world_map: The documentation organization has been updated to support multiple languages. The community effort has started to translate the docs to non-English speakers. More to come in the coming weeks!

* Add translation guide + update repo structure by Wauplin in 1602
* Fix i18n issue template links by Wauplin in 1627

Breaking change

The behavior of [`InferenceClient.feature_extraction`](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.feature_extraction) has been updated to fix a bug happening with certain models. The shape of the returned array for `transformers` models has changed from `(sequence_length, hidden_size)` to `(1, sequence_length, hidden_size)` which is the breaking change.

* Return whole response from feature extraction endpoint instead of assuming its shape by skulltech in 1648

QOL improvements

**`HfApi` helpers:**

Two new helpers have been added to check if a file or a repo exists on the Hub:

```py
>>> from huggingface_hub import file_exists
>>> file_exists("bigcode/starcoder", "config.json")
True
>>> file_exists("bigcode/starcoder", "not-a-file")
False

>>> from huggingface_hub import repo_exists
>>> repo_exists("bigcode/starcoder")
True
>>> repo_exists("bigcode/not-a-repo")
False
```


* Check if repo or file exists by martinbrose in 1591

Also, `hf_hub_download` and `snapshot_download` are now part of `HfApi` (keeping the same syntax and behavior).

* Add download alias for `hf_hub_download` to `HfApi` by Wauplin in 1580
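
A quick sketch of the method-based usage (the actual calls are commented out since they hit the network; reusing a configured `HfApi` client is the main benefit):

```python
from huggingface_hub import HfApi

api = HfApi()  # a client you configure once (endpoint, token, ...)
# path = api.hf_hub_download("gpt2", "config.json")  # same behavior as the module-level helper
# local_dir = api.snapshot_download("gpt2")          # ditto
```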

**Download improvements:**

1. When a user tries to download a model but the disk is full, a warning is triggered.
2. When a user tries to download a model but an HTTP error happens, we still check locally if the file exists.

* Check local files if (RepoNotFound, GatedRepo, HTTPError) while downloading files by jiamings in 1561
* Implemented check_disk_space function by martinbrose in 1590

Small fixes and maintenance

:gear: Doc fixes

* Fix table by stevhliu in 1577
* Improve docstrings for text generation by osanseviero in 1597
* Fix superfluous-typo by julien-c in 1611
* minor missing paren by julien-c in 1637
* update i18n template by Wauplin (direct commit on main)
* Add documentation for modelcard Metadata. Resolves by sifisKoen in 1448

:gear: Other fixes

* Add `missing_ok` option in `delete_repo` by Wauplin in 1640
* Implement `super_squash_history` in `HfApi` by Wauplin in 1639
* 1546 fix empty metadata on windows by Wauplin in 1547
* Fix tqdm by NielsRogge in 1629
* Fix bug 1634 (drop finishing spaces and EOL) by GBR-613 in 1638

:gear: Internal
* Prepare for 0.17 by Wauplin in 1540
* update mypy version + fix issues + remove deprecatedlist helper by Wauplin in 1628
* mypy traceck by Wauplin (direct commit on main)
* pin pydantic version by Wauplin (direct commit on main)
* Fix ci tests by Wauplin in 1630
* Fix test in contrib CI by Wauplin (direct commit on main)
* skip gated repo test on contrib by Wauplin (direct commit on main)
* skip failing test by Wauplin (direct commit on main)
* Fix fsspec tests in ci by Wauplin in 1635
* FIX windows CI by Wauplin (direct commit on main)
* FIX style issues by pinning black version by Wauplin (direct commit on main)
* forgot test case by Wauplin (direct commit on main)
* shorter is better by Wauplin (direct commit on main)


:hugs: Significant community contributions

The following contributors have made significant changes to the library over the last release:

* dulayjm
* Add object detection to inference client (1548)
* martinbrose
* Added support for secret description (1594)
* Check if repo or file exists (1591)
* Implemented check_disk_space function (1590)
* Added support for space variables (1592)
* Add settings for creating and duplicating spaces (1625)
* Implemented CLI download functionality (1617)
* Implemented CLI upload functionality (1618)
* Add text classification to inference client (1606)
* Add token classification to inference client (1607)
* Add translation to inference client (1608)
* Add question answering to inference client (1609)
* Add table question answering to inference client (1612)
* Add fill mask to inference client (1613)
* Add visual question answering to inference client (1621)
* Add document question answering to InferenceClient (1620)
* Add tabular classification to inference client (1614)
* Add tabular regression to inference client (1615)
* Add list_deployed_models to inference client (1622)
* sifisKoen
* Add get_model_status function (1558) (1559)
* Add documentation for modelcard Metadata. Resolves (1448) (1631)
