## 📂 Upload large folders
Uploading large models or datasets is challenging. We've already written some [tips and tricks](https://huggingface.co/docs/hub/repositories-recommendations) to facilitate the process, but something was still missing. We are now glad to release the `huggingface-cli upload-large-folder` command. Think of it as a "please upload this no matter what, and be quick" command. Unlike `huggingface-cli upload`, this new command is more opinionated and will split the upload into several commits. Multiple workers are started locally to hash, pre-upload, and commit the files in a way that is **resumable**, **resilient to connection errors**, and **optimized against rate limits**. This feature has been stress-tested by the community over the last few months to make it as easy and convenient to use as possible.
Here is how to use it:
```sh
huggingface-cli upload-large-folder <repo-id> <local-path> --repo-type=dataset
```
Every minute, a report is logged with the current status of the files and workers:
```sh
---------- 2024-04-26 16:24:25 (0:00:00) ----------
Files:   hashed 104/104 (22.5G/22.5G) | pre-uploaded: 0/42 (0.0/22.5G) | committed: 58/104 (24.9M/22.5G) | ignored: 0
Workers: hashing: 0 | get upload mode: 0 | pre-uploading: 6 | committing: 0 | waiting: 0
---------------------------------------------------
```
You can also run it from a script:
```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.upload_large_folder(
...     repo_id="HuggingFaceM4/Docmatix",
...     repo_type="dataset",
...     folder_path="/path/to/local/docmatix",
... )
```
For more details about the command options, run:
```sh
huggingface-cli upload-large-folder --help
```
or visit the [upload guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/upload#upload-a-large-folder).
* CLI to upload arbitrary huge folder by Wauplin in #2254
* Reduce number of commits in upload large folder by Wauplin in #2546
* Suggest using upload_large_folder when appropriate by Wauplin in #2547
## ✨ HfApi & CLI improvements
### 🔍 Search API
The search API has been updated. You can now list gated models and datasets, and filter models by their inference status (warm, cold, frozen).
* Add 'gated' search parameter by Wauplin in #2448
* Filter models by inference status by Wauplin in #2517
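For instance, here is a minimal sketch of the new filters (parameter names are taken from the PRs above; treat the exact calls as illustrative):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> # List gated models only, using the new `gated` search parameter
>>> gated_models = api.list_models(gated=True, limit=5)
>>> # List models whose inference status is "warm"
>>> warm_models = api.list_models(inference="warm", limit=5)
```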
More complete support for the `expand[]` parameter:
* Document baseModels and childrenModelCount as expand parameters by Wauplin in #2475
* Better support for trending score by Wauplin in #2513
* Add GGUF as supported expand[] parameter by Wauplin in #2545
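As a sketch, these new fields can be requested like any other `expand[]` value (field names are taken from the PR titles above; the model id is a placeholder):

```py
>>> from huggingface_hub import model_info
>>> # Fetch only the requested metadata fields instead of the full payload
>>> info = model_info("username/my-model", expand=["trendingScore", "gguf", "baseModels"])
```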
### 👤 User API
Organizations are now included when retrieving the user overview:
* List organizations in `get_user_overview` by Wauplin in #2404
`get_user_followers` and `get_user_following` are now paginated. Previously they were not, which caused issues for users with more than 1000 followers.
* Paginate followers and following endpoints by Wauplin in #2506
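A minimal sketch of consuming the paginated results (method name as referenced above; the username is a placeholder):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> # Results are now paginated: iterate over them instead of expecting a full list
>>> for follower in api.get_user_followers("some-username"):
...     print(follower.username)
```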
### 📦 Repo API
Added `auth_check` to easily verify whether a user has access to a repo. It raises `GatedRepoError` if the repo is gated and the user does not have permission, or `RepositoryNotFoundError` if the repo does not exist or is private. If the method does not raise an error, you can assume the user has permission to access the repo.
```python
from huggingface_hub import auth_check
from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError

try:
    auth_check("user/my-cool-model")
except GatedRepoError:
    # Handle gated repository error
    print("You do not have permission to access this gated repository.")
except RepositoryNotFoundError:
    # Handle repository not found error
    print("The repository was not found or you do not have access.")
```
* implemented `auth_check` by cjfghk5697 in #2497
It is now possible to set a repo as gated from a script:
```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.update_repo_settings(repo_id=repo_id, gated="auto")  # Set to "auto", "manual" or False
```
* [Feature] Add `update_repo_settings` function to HfApi 2447 by WizKnight in #2502
### ⚡️ Inference Endpoint API
A few improvements in the `InferenceEndpoint` API: it is now possible to set a `scale_to_zero_timeout` parameter and to configure secrets when creating or updating an Inference Endpoint.
* Add scale_to_zero_timeout parameter to HFApi.create/update_inference_endpoint by hommayushi3 in #2463
* Update endpoint.update signature by Wauplin in #2477
* feat: :sparkles: allow passing secrets to the inference endpoint client by LuisBlanche in #2486
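A minimal sketch of the new options (the endpoint name and secret values are placeholders):

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> # Scale the endpoint to zero after some idle time and inject a secret
>>> endpoint = api.update_inference_endpoint(
...     "my-endpoint-name",  # placeholder endpoint name
...     scale_to_zero_timeout=60,  # idle duration (in minutes) before scaling to zero
...     secrets={"MY_SECRET": "my-secret-value"},  # placeholder secret
... )
```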
### 💾 Serialization
The torch serialization module now supports tensor subclasses.
The library is also now tested with both `torch` 1.x and 2.x to ensure compatibility.
* Making wrapper tensor subclass to work in serialization by jerryzh168 in #2440
* Torch: test on 1.11 and latest versions + explicitly load with `weights_only=True` by Wauplin in #2488
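For context, this is the module behind helpers such as `save_torch_state_dict`; a quick sketch (a plain tensor is used here, but wrapper tensor subclasses now go through the same path):

```py
>>> import torch
>>> from huggingface_hub import save_torch_state_dict
>>> # Plain tensors and wrapper tensor subclasses are now both serializable
>>> state_dict = {"layer.weight": torch.randn(16, 16)}
>>> save_torch_state_dict(state_dict, save_directory="./checkpoint")
```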
## 💔 Breaking changes
Breaking changes:
- The `InferenceClient.conversational` task has been removed in favor of `InferenceClient.chat_completion`. The `ConversationalOutput` data class has also been removed.
- All `InferenceClient` output values are now dataclasses, not dictionaries.
- `list_repo_likers` is now paginated. This means the output is now an iterator instead of a list.
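Minimal migration sketches for these three changes (the model id is a placeholder):

```py
>>> from huggingface_hub import HfApi, InferenceClient

>>> # 1. Use the OpenAI-compatible `chat_completion` instead of `conversational`
>>> client = InferenceClient("HuggingFaceH4/zephyr-7b-beta")  # placeholder model
>>> response = client.chat_completion(
...     messages=[{"role": "user", "content": "What is deep learning?"}],
...     max_tokens=100,
... )
>>> # 2. Outputs are dataclasses: use attribute access instead of dict keys
>>> print(response.choices[0].message.content)

>>> # 3. `list_repo_likers` now returns an iterator, not a list
>>> for liker in HfApi().list_repo_likers("gpt2"):
...     print(liker.username)
```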
Deprecation:
- The `multi_commit: bool` parameter in `upload_folder` is now deprecated, along with `create_commits_on_pr`. It is now recommended to use `upload_large_folder` instead. Though its API and internals are different, the goal is still to be able to upload many files in several commits.
* Prepare for release 0.25 by Wauplin in #2400
* Paginate repo likers endpoint by hanouticelina in #2530
## 🛠️ Small fixes and maintenance
### ⚡️ InferenceClient fixes
Thanks to community feedback, we've been able to improve or fix significant things in both the `InferenceClient` and its async version, `AsyncInferenceClient`. These fixes have mainly focused on the OpenAI-compatible `chat_completion` method and the Inference Endpoints services.
* [Inference] Support `stop` parameter in `text-generation` instead of `stop_sequences` by Wauplin in #2473
* [hot-fix] Handle [DONE] signal from TGI + remove logic for "non-TGI servers" by Wauplin in #2410
* Fix chat completion url for OpenAI compatibility by Wauplin in #2418
* Bug - [InferenceClient] - use proxy set in var env by morgandiverrez in #2421
* Document the difference between model and base_url by Wauplin in #2431
* Fix broken AsyncInferenceClient on [DONE] signal by Wauplin in #2458
* Fix `InferenceClient` for HF Nvidia NIM API by Wauplin in #2482
* Properly close session in `AsyncInferenceClient` by Wauplin in #2496
* Fix unclosed aiohttp.ClientResponse objects by Wauplin in #2528
* Fix resolve chat completion URL by Wauplin in #2540
### 😌 QoL improvements
When uploading a folder, we now validate the README.md file **before** hashing all the files, not after.
This should save precious time when a large upload is doomed to fail because of a corrupted model card.
Also, it is now possible to pass a `--max-workers` argument when uploading a folder from the CLI (see the sketch after the list below).
* huggingface-cli upload - Validate README.md before file hashing by hlky in #2452
* Solved: Need to add the max-workers argument to the huggingface-cli command by devymex in #2500
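For instance, a sketch of the new flag (assuming it applies to the `upload` command, as the PR above suggests; the repo id and path are placeholders):

```sh
huggingface-cli upload my-username/my-model ./my-model --max-workers=8
```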
All custom exceptions raised by `huggingface_hub` are now defined in the `huggingface_hub.errors` module. This should make it easier to import them for your `try/except` statements.
* Define error by cjfghk5697 in #2444
* Define cache errors in errors.py by 010kim in #2470
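For example:

```py
>>> # All custom exceptions now live in a single module
>>> from huggingface_hub.errors import GatedRepoError, HfHubHTTPError, RepositoryNotFoundError
```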
On the same occasion, we've reworked how errors are formatted in `hf_raise_for_status` to print more relevant information to users.
* Refacto error parsing (HfHubHttpError) by Wauplin in #2474
* Raise with more info on 416 invalid range by Wauplin in #2449
Constants in `huggingface_hub` are now accessed through the `constants` module rather than imported individually. This makes it easier to patch their values, for example in a test pipeline (see the sketch after this list).
* Update `constants` import to use module-level access 1172 by WizKnight in #2453
* Update constants imports with module level access 1172 by WizKnight in #2469
* Refactor all constant imports to module-level access by WizKnight in #2489
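A quick sketch of why this matters for tests (`HF_HUB_OFFLINE` is used as an example constant; the patching pattern is the point):

```py
>>> import huggingface_hub.constants
>>> # Patching the module attribute is now picked up by all internal call sites,
>>> # since the library reads `constants.<NAME>` at call time
>>> huggingface_hub.constants.HF_HUB_OFFLINE = True
```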
Other quality of life improvements:
* Warn if user tries to upload a parquet file to a model repo by Wauplin in #2403
* Tag repos using `HFSummaryWriter` with 'hf-summary-writer' by Wauplin in #2398
* Do not raise if branch exists and no write permission by Wauplin in #2426
* expose scan_cache table generation to python by rsxdalv in #2437
* Expose `RepoUrl` info in `CommitInfo` object by Wauplin in #2487
* Add new hardware flavors by apolinario in #2512
* http_backoff retry with SliceFileObj by hlky in #2542
* Add version cli command by 010kim in #2498
### 🐛 fixes
* Fix filelock if flock not supported by Wauplin in #2402
* Fix creating empty commit on PR by Wauplin in #2413
* fix expand in CI by Wauplin (direct commit on main)
* Update quick-start.md by AxHa in #2422
* fix repo-files CLI example by Wauplin in #2428
* Do not raise if chmod fails by Wauplin in #2429
* fix .huggingface to .cache/huggingface in doc by lizzzcai in #2432
* Fix shutil move by Wauplin in #2433
* Correct "login" to "log in" when used as verb by DePasqualeOrg in #2434
* Typo for plural by david4096 in #2439
* fix typo in file download warning message about symlinks by joetam in #2442
* Fix typo double assignment by Wauplin in #2443
* [webhooks server] rely on SPACE_ID to check if app is local or in a Space by Wauplin in #2450
* Fix error message on permission issue by Wauplin in #2465
* Fix: do not erase existing values on update_inference_endpoint by Wauplin in #2476
* fix secrets inference endpoints by LuisBlanche in #2490
* Fix broken link and update translation content by wuchangming in #2501
* fixes: URL fixes by chenglu in #2504
* fix empty siblings by lhoestq in #2503
* Do not fail on `touch()` if `OSError` (to cache non existence of file) by Wauplin in #2505
* Fix 416 requested range not satisfiable by Wauplin in #2511
* Fix race-condition issue when downloading from multiple threads by Wauplin in #2534
* Fixed the issue 2535 Added user followers and following in class User and added test cases for it by Amrit02102004 in #2536
* Exclude Colab Enterprise from Google Colab token retrieval by hanouticelina in #2529
### 🏗️ internal
* skip irrelevant test in CI by Wauplin (direct commit on main)
* Do not use modelId + remove some self.assert by Wauplin in #2405
* Prepare for release 0.25 by Wauplin in #2400
* Merge branch 'main' of github.com:huggingface/huggingface_hub by Wauplin (direct commit on main)
* FIX: Use _RECOMMENDED_MODELS_FOR_VCR in TestResolveURL by Wauplin in #2531
* Implemented https://github.com/huggingface/huggingface_hub/issues/2516 by yuxi-liu-wired in #2532
## Significant community contributions
The following contributors have made significant changes to the library over the last release:
* morgandiverrez
    * Bug - [InferenceClient] - use proxy set in var env (#2421)
* hlky
    * huggingface-cli upload - Validate README.md before file hashing (#2452)
    * http_backoff retry with SliceFileObj (#2542)
* cjfghk5697
    * Define error (#2444)
    * implemented `auth_check` (#2497)
* WizKnight
    * Update `constants` import to use module-level access 1172 (#2453)
    * Update constants imports with module level access 1172 (#2469)
    * Refactor all constant imports to module-level access (#2489)
    * [Feature] Add `update_repo_settings` function to HfApi 2447 (#2502)
* rsxdalv
    * expose scan_cache table generation to python (#2437)
* 010kim
    * Define cache errors in errors.py (#2470)
    * Add version cli command (#2498)
* jerryzh168
    * Making wrapper tensor subclass to work in serialization (#2440)