huggingface_hub

The official Python client for the Hugging Face Hub.

Apache-2.0 License

Downloads: 43.3M | Stars: 1.6K | Committers: 197


huggingface_hub - v0.20.0: Authentication, speed, safetensors metadata, access requests and more.

Published by Wauplin 10 months ago

(Discuss the release in our Community Tab. Feedback welcome! 🤗)

🔐 Authentication

Authentication has been greatly improved in Google Colab. The best way to authenticate in a Colab notebook is to define an HF_TOKEN secret in your personal secrets. When a notebook tries to reach the Hub, a pop-up asks whether you want to share the HF_TOKEN secret with that notebook, as an opt-in mechanism. This way, there is no need to call huggingface_hub.login and copy-paste your token anymore! 🔥🔥🔥

In addition to the Google Colab integration, the login guide has been revisited to focus on security. It is recommended to authenticate either with huggingface_hub.login or the HF_TOKEN environment variable, rather than hardcoding a token in your scripts. Check out the new guide here.

  • Login/authentication enhancements by @Wauplin in #1895
  • Catch SecretNotFoundError in google colab login by @Wauplin in #1912
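The recommended precedence (explicit token, then HF_TOKEN environment variable, then the token cached by login) can be sketched as follows. This is an illustrative stand-in, not the library's actual implementation; the real logic lives in huggingface_hub (get_token), and the cache location shown is an assumption based on the default HF_HOME.

```python
import os
from pathlib import Path

def resolve_token(explicit_token=None):
    # Sketch of the documented precedence: an explicitly passed token wins,
    # then the HF_TOKEN environment variable, then the token file written by
    # huggingface_hub.login(). Illustrative only.
    if explicit_token:
        return explicit_token
    env_token = os.environ.get("HF_TOKEN")
    if env_token:
        return env_token
    cache_dir = Path(os.environ.get("HF_HOME", Path.home() / ".cache" / "huggingface"))
    token_file = cache_dir / "token"  # assumed default location
    if token_file.is_file():
        return token_file.read_text().strip()
    return None
```

With this precedence, a hardcoded token in a script is never needed: the environment variable (or Colab secret) is picked up automatically.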

๐ŸŽ๏ธ Faster HfFileSystem

HfFileSystem is a pythonic, fsspec-compatible file interface to the Hugging Face Hub. Its implementation has been greatly improved to optimize fs.find performance.

Here is a quick benchmark with the bigcode/the-stack-dedup dataset:

                                                             v0.19.4   v0.20.0
hffs.find("datasets/bigcode/the-stack-dedup", detail=False)  46.2s     1.63s
hffs.find("datasets/bigcode/the-stack-dedup", detail=True)   47.3s     24.2s
  • Faster HfFileSystem.find by @mariosasko in #1809
  • Faster HfFileSystem.glob by @lhoestq in #1815
  • Fix common path in _ls_tree by @lhoestq in #1850
  • Remove maxdepth param from HfFileSystem.glob by @mariosasko in #1875
  • [HfFileSystem] Support quoted revisions in path by @lhoestq in #1888
  • Deprecate HfApi.list_files_info by @mariosasko in #1910
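One of the changes above is support for quoted revisions in paths. The idea can be sketched with stdlib-only code; this is a simplified illustration (it assumes namespaced "user/repo" ids), not the library's actual resolution logic, which lives in HfFileSystem.

```python
from urllib.parse import unquote

def split_hf_path(path):
    # Sketch of resolving an HfFileSystem path such as
    # "datasets/bigcode/the-stack-dedup@refs%2Fconvert%2Fparquet/data.txt".
    # Simplified assumption: repo_ids are always namespaced ("user/repo").
    repo_type = "models"
    if path.startswith(("datasets/", "spaces/")):
        repo_type, path = path.split("/", 1)
    parts = path.split("/")
    repo_id, rest = "/".join(parts[:2]), "/".join(parts[2:])
    revision = "main"
    if "@" in repo_id:
        repo_id, quoted = repo_id.split("@", 1)
        # Quoted revision: "refs%2Fconvert%2Fparquet" -> "refs/convert/parquet"
        revision = unquote(quoted)
    return repo_type, repo_id, revision, rest
```

Quoting the revision is what allows refs containing slashes (like refs/convert/parquet) to coexist with the path separator.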

🚪 Access requests API (gated repos)

Models and datasets can be gated to monitor who is accessing the data you share, with optional manual approval of each request. Access requests can now be managed programmatically using HfApi. This is useful, for example, if you have advanced user-screening or compliance requirements, or if you want to condition access to a model on completing a payment flow.

Check out this guide to learn more about gated repos.

>>> from huggingface_hub import list_pending_access_requests, accept_access_request

# List pending requests
>>> requests = list_pending_access_requests("meta-llama/Llama-2-7b")
>>> requests
[
    AccessRequest(
        username='clem',
        fullname='Clem 🤗',
        email='***',
        timestamp=datetime.datetime(2023, 11, 23, 18, 4, 53, 828000, tzinfo=datetime.timezone.utc),
        status='pending',
        fields=None,
    ),
    ...
]

# Accept Clem's request
>>> accept_access_request("meta-llama/Llama-2-7b", "clem")
  • Manage access requests programmatically by @Wauplin in #1905

🔍 Parse Safetensors metadata

Safetensors is a simple, fast, and secure format for saving tensors to a file. Its advantages make it the preferred format for hosting weights on the Hub. Thanks to its specification, the file metadata can be parsed on the fly. HfApi now provides get_safetensors_metadata, a helper to get safetensors metadata from a repo.

# Parse repo with a single weights file
>>> from huggingface_hub import get_safetensors_metadata
>>> metadata = get_safetensors_metadata("bigscience/bloomz-560m")
>>> metadata
SafetensorsRepoMetadata(
    metadata=None,
    sharded=False,
    weight_map={'h.0.input_layernorm.bias': 'model.safetensors', ...},
    files_metadata={'model.safetensors': SafetensorsFileMetadata(...)}
)
>>> metadata.files_metadata["model.safetensors"].metadata
{'format': 'pt'}
  • Parse safetensors metadata by @Wauplin in #1855
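On-the-fly parsing works because the safetensors format puts all metadata in a small JSON header at the start of the file: an 8-byte little-endian length, then that many bytes of JSON. A minimal local sketch of the header parsing (the demo file and its tensor names are made up for illustration; get_safetensors_metadata does the equivalent remotely without downloading the tensor data):

```python
import json
import struct
import tempfile

def read_safetensors_header(path):
    # Per the safetensors spec: an 8-byte little-endian unsigned integer N,
    # followed by N bytes of JSON describing each tensor (dtype, shape,
    # data offsets) plus an optional "__metadata__" dict.
    with open(path, "rb") as f:
        (header_size,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_size))

# Build a tiny demo file locally (hypothetical content) to show the parsing.
header = {
    "__metadata__": {"format": "pt"},
    "weight": {"dtype": "F32", "shape": [2, 2], "data_offsets": [0, 16]},
}
payload = json.dumps(header).encode()
with tempfile.NamedTemporaryFile(suffix=".safetensors", delete=False) as f:
    f.write(struct.pack("<Q", len(payload)) + payload + b"\x00" * 16)
    demo_path = f.name

print(read_safetensors_header(demo_path)["__metadata__"])  # {'format': 'pt'}
```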

Other improvements

List and filter collections

You can now list collections on the Hub, and filter them to return only collections containing a given item or created by a given author.

>>> from huggingface_hub import list_collections
>>> collections = list_collections(item="models/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF", sort="trending", limit=5)
>>> for collection in collections:
...   print(collection.slug)
teknium/quantized-models-6544690bb978e0b0f7328748
AmeerH/function-calling-65560a2565d7a6ef568527af
PostArchitekt/7bz-65479bb8c194936469697d8c
gnomealone/need-to-test-652007226c6ce4cdacf9c233
Crataco/favorite-7b-models-651944072b4fffcb41f8b568
  • add list_collections endpoint, solves #1835 by @ceferisbarov in #1856
  • fix list collections sort values by @Wauplin in #1867
  • Warn about truncation when listing collections by @Wauplin in #1873

Respect .gitignore

upload_folder now respects .gitignore files!

Previously, you could filter which files to upload from a folder using the allow_patterns and ignore_patterns parameters. This can now be done automatically by simply adding a .gitignore file to your repo.

  • Respect .gitignore file in commits by @Wauplin in #1868
  • Remove respect_gitignore parameter by @Wauplin in #1876
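The filtering described above can be sketched with stdlib glob matching. This is a simplified illustration of pattern-based file selection, not huggingface_hub's implementation: real .gitignore semantics also include negation (!) and directory-anchored rules, which are omitted here.

```python
from fnmatch import fnmatch

def filter_upload_paths(paths, ignore_patterns):
    # Keep only paths that match none of the ignore patterns, testing both
    # the full relative path and the bare filename (simplified semantics).
    return [
        p for p in paths
        if not any(
            fnmatch(p, pattern) or fnmatch(p.split("/")[-1], pattern)
            for pattern in ignore_patterns
        )
    ]

files = ["model.safetensors", "logs/run.log", "checkpoints/step-100.bin"]
print(filter_upload_paths(files, ["*.log", "checkpoints/*"]))
# ['model.safetensors']
```

With the new behavior, the patterns come from the repo's .gitignore file instead of being passed explicitly on every call.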

Robust uploads

Uploading LFS files has also become more robust, with a retry mechanism if a transient error happens while uploading to S3.

  • More robust uploads by @Wauplin in #1827
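A retry-on-transient-error mechanism generally looks like the sketch below. This is a generic illustration with exponential backoff; the exact exceptions retried and the delays used inside huggingface_hub may differ.

```python
import time

def with_retries(fn, max_retries=5, base_delay=0.1,
                 retriable=(ConnectionError, TimeoutError)):
    # Retry fn() on transient errors, sleeping exponentially longer between
    # attempts; re-raise once the retry budget is exhausted.
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except retriable:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * 2 ** attempt)

calls = {"n": 0}
def flaky_upload():
    # Simulated S3 upload that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient S3 hiccup")
    return "uploaded"

print(with_retries(flaky_upload))  # "uploaded" after two retries
```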

Target language in InferenceClient.translation

InferenceClient.translation now supports src_lang/tgt_lang for applicable models.

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.translation("My name is Sarah Jessica Parker but you can call me Jessica", model="facebook/mbart-large-50-many-to-many-mmt", src_lang="en_XX", tgt_lang="fr_XX")
"Mon nom est Sarah Jessica Parker mais vous pouvez m'appeler Jessica"
>>> client.translation("My name is Sarah Jessica Parker but you can call me Jessica", model="facebook/mbart-large-50-many-to-many-mmt", src_lang="en_XX", tgt_lang="es_XX")
'Mi nombre es Sarah Jessica Parker pero puedes llamarme Jessica'
  • add language support to translation client, solves #1763 by @ceferisbarov in #1869

Support source in reported EvalResult

EvalResult now supports source_name and source_link to provide a custom source for a reported result.

  • Support source in EvalResult for model cards by @Wauplin in #1874

🛠️ Misc

Fetch all pull request refs with list_repo_refs.

  • Add include_pull_requests to list_repo_refs by @Wauplin in #1822

Filter discussions when listing them with get_repo_discussions.

# List open PRs from "sanchit-gandhi" on model repo "openai/whisper-large-v3"
>>> from huggingface_hub import get_repo_discussions
>>> discussions = get_repo_discussions(
...     repo_id="openai/whisper-large-v3",
...     author="sanchit-gandhi",
...     discussion_type="pull_request",
...     discussion_status="open",
... )
  • ✨ Add filters to HfApi.get_repo_discussions by @SBrandeis in #1845

New field createdAt for ModelInfo, DatasetInfo and SpaceInfo.

  • Add support for createdAt field by @Wauplin in #1816

It is now possible to create an Inference Endpoint running on a custom Docker image (typically a TGI container).

# Start an Inference Endpoint running Zephyr-7b-beta on TGI
>>> from huggingface_hub import create_inference_endpoint
>>> endpoint = create_inference_endpoint(
...     "aws-zephyr-7b-beta-0486",
...     repository="HuggingFaceH4/zephyr-7b-beta",
...     framework="pytorch",
...     task="text-generation",
...     accelerator="gpu",
...     vendor="aws",
...     region="us-east-1",
...     type="protected",
...     instance_size="medium",
...     instance_type="g5.2xlarge",
...     custom_image={
...         "health_route": "/health",
...         "env": {
...             "MAX_BATCH_PREFILL_TOKENS": "2048",
...             "MAX_INPUT_LENGTH": "1024",
...             "MAX_TOTAL_TOKENS": "1512",
...             "MODEL_ID": "/repository"
...         },
...         "url": "ghcr.io/huggingface/text-generation-inference:1.1.0",
...     },
... )
  • Allow create inference endpoint from docker image by @Wauplin in #1861

Upload CLI: create branch when revision does not exist

  • Create branch if missing in hugginface-cli upload by @Wauplin in #1857

🖥️ Environment variables

huggingface_hub.constants.HF_HOME has been made a public constant (see reference).

  • Expose HF_HOME in constants by @Wauplin in #1825

Offline mode is now more consistent. If HF_HUB_OFFLINE is set, any HTTP call to the Hub will fail. The fallback mechanism in snapshot_download has been refactored to align with the hf_hub_download workflow. If offline mode is activated (or a connection error happens) and the files are already in the cache, snapshot_download returns the corresponding snapshot directory.

  • Respect HF_HUB_OFFLINE for every http call by @Wauplin in #1899
  • Improve snapshot_download offline mode by @Wauplin in #1913

The DO_NOT_TRACK environment variable is now respected to deactivate telemetry calls. It is similar to HF_HUB_DISABLE_TELEMETRY, but not specific to Hugging Face.

  • Support DO_NOT_TRACK env variable by @Wauplin in #1920

📚 Documentation

  • Document more list repos behavior by @Wauplin in #1823
  • [i18n-KO] 🌐 Translated git_vs_http.md to Korean by @heuristicwave in #1862

Doc fixes

  • Fixing gated attribute type in docs by @ademait in #1848
  • Update modelcard_template.md by @EziOzoani in #1859
  • fix typo by @pkking in #1864
  • Update references to hub-docs by @mishig25 in #1866
  • Docs: _from_pretrained -> push_to_hub by @tomaarsen in #1871
  • type of conflicting_files of DiscussionWDetails by @ademait in #1847

💔 Breaking changes

The timeout parameter has been removed from list_repo_files, as part of a planned deprecation cycle.

  • Prepare for v0.20.0 by @Wauplin in #1807

Otherwise, no breaking changes are expected in this release. It is worth mentioning that upload_file and upload_folder now return a CommitInfo dataclass instead of a str. These two methods previously returned the URL of the uploaded file or folder on the Hub as a string. However, some information is lost compared to CommitInfo: commit id, commit title, description, author, etc. To keep it backward compatible, the CommitInfo return type inherits from both dataclass and str. The plan is to switch to dataclass-only in release v1.0 (not scheduled yet).

  • Harmonize commit return type by @Wauplin in #1921

Finally, HfFolder is now deprecated in favor of get_token, login and logout. The goal is to push users and integrations toward login/logout (instead of HfFolder.save_token/HfFolder.delete_token), which contain more checks and warning messages. The plan is to remove HfFolder in release v1.0 (not scheduled yet).

  • Login/authentication enhancements by @Wauplin in #1895

Small fixes and maintenance

⚙️ Fixes

  • [FIX] Catch TypeError when parsing card data from ModelInfo by @Wauplin in #1821
  • Limit to pydantic<2.x on python3.8 by @Wauplin in #1828
  • Send user_agent in HEAD calls by @Wauplin in #1854
  • Fix pydantic deprecation warning by @Wauplin in #1837
  • Call are_symlink_supported on commonpath by @Wauplin in #1852
  • Fix IndexError when empty string for credential.helper by @SID262000 in #1860
  • fix credentials by @Wauplin (direct commit on main)
  • Fix git credential parsing regex by @Wauplin in #1870
  • Fix Repository is not a class by @Wauplin in #1879
  • Fix WebhookPayload schema + add WebhooksServer.launch by @Wauplin in #1884
  • Fix PermissionError between volumes by @Wauplin in #1886
  • Fix error handling on HTTP 401 by @Wauplin in #1904
  • Send bearer auth in LFS upload by @Wauplin in #1906
  • Fix to_local_dir on hf_hub_download edge case by @Wauplin in #1919

⚙️ Internal

  • Prepare for v0.20.0 by @Wauplin in #1807
  • (nit) fix fsspec default mode by @Wauplin (direct commit on main)
  • Use ruff formatter in check_static_imports.py by @Wauplin in #1824
  • ruff formatte by @Wauplin (direct commit on main)
  • Check pydantic correct installation by @Wauplin in #1829
  • FIX ?? send ref in LFS endpoint by @Wauplin in #1838
  • Install doc-builder from source by @Wauplin in #1849
  • robustness by @Wauplin (direct commit on main)
  • style by @Wauplin (direct commit on main)
  • fix list_space_author test by @Wauplin (direct commit on main)
  • finally fix robustness? by @Wauplin (direct commit on main)
  • 4 parallel tests in repo CI instead of 8 to improve stability by @Wauplin (direct commit on main)
  • Remove delete_doc_comment.yaml and delete_doc_comment_trigger.yaml from CI by @Wauplin in #1887
  • skip flaky test by @Wauplin (direct commit on main)
  • Rerun flaky tests in CI by @Wauplin in #1914
  • Sentence Transformers test (soon) no longer expected to fail by @tomaarsen in #1918
  • flakyness by @Wauplin (direct commit on main)

Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @ademait
    • Fixing gated attribute type in docs (#1848)
    • type of conflicting_files of DiscussionWDetails (#1847)
  • @ceferisbarov
    • add list_collections endpoint, solves #1835 (#1856)
    • add language support to translation client, solves #1763 (#1869)
  • @SID262000
    • Fix IndexError when empty string for credential.helper (#1860)
  • @heuristicwave
    • ๐ŸŒ [i18n-KO] Translated git_vs_http.md to Korean (#1862)
huggingface_hub - v0.19.4 - Hot-fix: do not fail if pydantic install is corrupted

Published by Wauplin 11 months ago

On Python 3.8, it is fairly easy to get a corrupted pydantic install (more specifically, pydantic 2.x cannot run if tensorflow is installed, because of an incompatible requirement on typing_extensions). Since pydantic is an optional dependency of huggingface_hub, we do not want to crash at huggingface_hub import time if the pydantic install is corrupted. However, this was the case because of how imports were made in huggingface_hub. This hot-fix release fixes the bug. If pydantic is not correctly installed, we only raise a warning and continue as if it was not installed at all.

Related PR: https://github.com/huggingface/huggingface_hub/pull/1829

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.19.3...v0.19.4

huggingface_hub - v0.19.3 - Hot-fix: pin `pydantic<2.0` on Python3.8

Published by Wauplin 11 months ago

Hot-fix release after https://github.com/huggingface/huggingface_hub/pull/1828.

In 0.19.0 we loosened the pydantic requirement to accept both 1.x and 2.x, since huggingface_hub is compatible with both. However, this started to cause issues when installing both huggingface_hub[inference] and tensorflow in a Python 3.8 environment, because pydantic 2.x and tensorflow are not compatible on Python 3.8. Tensorflow depends on typing_extensions<=4.5.0 while pydantic 2.x requires typing_extensions>=4.6, causing an ImportError: cannot import name 'TypeAliasType' from 'typing_extensions' when importing huggingface_hub.

As a side note, tensorflow dropped Python 3.8 support in 2.14.0, so this issue should affect fewer and fewer users over time.

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.19.2...v0.19.3

huggingface_hub - v0.19.2 - Patch: expose HF_HOME in constants

Published by Wauplin 11 months ago

Not a hot-fix.

In https://github.com/huggingface/huggingface_hub/pull/1786 (already released in 0.19.0), we harmonized the environment variables in the HF ecosystem, with the goal of propagating this harmonization to other HF libraries. In that work, we forgot to expose HF_HOME as a constant value that can be reused, especially by transformers or datasets. This release fixes that (see https://github.com/huggingface/huggingface_hub/pull/1825).

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.19.1...v0.19.2

huggingface_hub - v0.19.1 - Hot-fix: ignore TypeError when listing models with corrupted ModelCard

Published by Wauplin 11 months ago

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.19.0...v0.19.1.

Fixes a regression (PR https://github.com/huggingface/huggingface_hub/pull/1821) introduced in 0.19.0 that made looping over models with list_models fail. The problem came from the fact that we now parse the data returned by the server into Python objects. However, for some models the metadata in the model card is not valid. This is usually checked by the server, but some models created before we started enforcing correct metadata are invalid. This hot-fix solves the issue by ignoring the corrupted data, if any.

huggingface_hub - v0.19.0: Inference Endpoints and robustness!

Published by Wauplin 12 months ago

(Discuss the release in our Community Tab. Feedback welcome! 🤗)

🚀 Inference Endpoints API

Inference Endpoints provides a secure solution to easily deploy models hosted on the Hub in a production-ready infrastructure managed by Hugging Face. With huggingface_hub>=0.19.0, you can now manage your Inference Endpoints programmatically. Combined with the InferenceClient, this becomes the go-to solution to deploy models and run jobs in production, either sequentially or in batch!

Here is an example of how to get an inference endpoint, wake it up, wait for initialization, run jobs in batch, and pause the endpoint again, all in a few lines of code! For more details, please check out our dedicated guide.

>>> import asyncio
>>> from huggingface_hub import get_inference_endpoint

# Get endpoint + wait until initialized
>>> endpoint = get_inference_endpoint("batch-endpoint").resume().wait()

# Run inference
>>> async_client = endpoint.async_client
>>> results = await asyncio.gather(*[async_client.text_generation(...) for job in jobs])

# Pause endpoint
>>> endpoint.pause()
  • Implement API for Inference Endpoints by @Wauplin in #1779
  • Fix inference endpoints docs by @Wauplin in #1785

โฌ Improved download experience

huggingface_hub is a library primarily used to transfer (huge!) files with the Hugging Face Hub. Our goal is to keep improving the experience for this core part of the library. In this release, we introduce a more robust download mechanism for slow/limited connections, while improving the UX for users with high bandwidth available!

More robust downloads

Getting a connection error in the middle of a download is frustrating. That's why we've implemented a retry mechanism that automatically reconnects if the connection gets closed or a ReadTimeout error is raised. The download restarts exactly where it stopped, without having to re-download any bytes.

  • Retry on ConnectionError/ReadTimeout when streaming file from server by @Wauplin in #1766
  • Reset nb_retries if data has been received from the server by @Wauplin in #1784
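Restarting "exactly where it stopped" typically relies on HTTP range requests: the client asks the server for the remaining bytes only. A minimal sketch of that idea (illustrative only; resume_range_header is a hypothetical helper, and the real retry logic lives inside huggingface_hub's download helpers):

```python
import os
import tempfile

def resume_range_header(partial_path):
    # Compute an HTTP Range header from the bytes already on disk, so a
    # resumed request fetches only what is missing.
    already = os.path.getsize(partial_path) if os.path.exists(partial_path) else 0
    return {"Range": f"bytes={already}-"} if already else {}

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * 1024)  # pretend 1 KiB was downloaded before the error
    partial = f.name

print(resume_range_header(partial))  # {'Range': 'bytes=1024-'}
```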

In addition, it is now possible to configure huggingface_hub with higher timeouts, thanks to @Shahafgo. This should help get around some issues on slower connections.

  • Adding the ability to configure the timeout of get request by @Shahafgo in #1720
  • Fix a bug to respect the HF_HUB_ETAG_TIMEOUT. by @Shahafgo in #1728

Progress bars while using hf_transfer

hf_transfer is a Rust-based library focused on improving upload and download speeds on machines with high bandwidth available. Once installed (pip install -U hf_transfer), it can be used transparently with huggingface_hub simply by setting the HF_HUB_ENABLE_HF_TRANSFER=1 environment variable. The counterpart of higher performance is the lack of some user-friendly features such as better error handling or a retry mechanism, so it is recommended for power users only. In this release we still ship a new feature to improve UX: progress bars. No need to update any existing code; a simple library upgrade is enough.

  • hf-transfer progress bar by @cbensimon in #1792
  • Add support for progress bars in hf_transfer uploads by @Wauplin in #1804

📚 Documentation

huggingface-cli guide

huggingface-cli is the CLI tool shipped with huggingface_hub. It recently got some nice improvements, especially with commands to download and upload files directly from the terminal. All of this needed a guide, so here it is!

  • Add CLI guide to documentation by @Wauplin in #1797

Environment variables

Environment variables are useful to configure how huggingface_hub should work. Historically, we had some inconsistencies in how those variables were named. This has now been improved, in a backward-compatible way. Please check the package reference for more details. The goal is to propagate those changes to the whole HF ecosystem, making configuration easier for everyone.

  • Harmonize environment variables by @Wauplin in #1786
  • Ensure backward compatibility for HUGGING_FACE_HUB_TOKEN env variable by @Wauplin in #1795
  • Do not promote HF_ENDPOINT environment variable by @Wauplin in #1799

Hindi translation

Hindi documentation landed on the Hub thanks to @aneeshd27! Check out the Hindi version of the quickstart guide here.

  • Added translation of 3 files as mentioned in issue by @aneeshd27 in #1772

Minor docs fixes

  • Added [[autodoc]] for ModelStatus by @jamesbraza in #1758
  • Expanded docstrings on post and ModelStatus by @jamesbraza in #1740
  • Fix document link for manage-cache by @liuxueyang in #1774
  • Minor doc fixes by @pcuenca in #1775

💔 Breaking changes

The legacy ModelSearchArguments and DatasetSearchArguments have been completely removed from huggingface_hub. This shouldn't cause problems as they were already unused (and unusable in practice).

  • Removed GeneralTags, ModelTags and DatasetTags by @VictorHugoPilled in #1761

Classes containing details about a repo (ModelInfo, DatasetInfo and SpaceInfo) have been refactored by @mariosasko to be more Pythonic and aligned with the other classes in huggingface_hub. In particular, those objects are now based on the dataclass module instead of a custom ReprMixin class. Every change is meant to be backward compatible, meaning no breaking changes are expected. However, if you detect any inconsistency, please let us know and we will fix it asap.

  • Replace ReprMixin with dataclasses by @mariosasko in #1788
  • Fix SpaceInfo initialization + add test by @Wauplin in #1802

The legacy Repository and InferenceAPI classes are now deprecated, but will not be removed before the next major release (v1.0).
Instead of the git-based Repository, we advise using the HTTP-based HfApi. Check out this guide explaining the reasons behind it. For InferenceAPI, we recommend switching to InferenceClient, which is much more feature-complete and will keep getting improved.

  • Deprecate Repository class by @Wauplin in #1724

⚙️ Miscellaneous improvements, fixes and maintenance

InferenceClient

  • Adding InferenceClient.get_recommended_model by @jamesbraza in #1770
  • Fix InferenceClient.text_generation when pydantic is not installed by @Wauplin in #1793
  • Supporting pydantic<3 by @jamesbraza in #1727

HfFileSystem

  • [hffs] Raise NotImplementedError on transaction commits by @Wauplin in #1736
  • Fix huggingface filesystem repo_type not forwarded by @Wauplin in #1791
  • Fix HfFileSystemFile when init fails + improve error message by @Wauplin in #1805

FIPS compliance

  • Set usedforsecurity=False in hashlib methods (FIPS compliance) by @Wauplin in #1782

Misc fixes

  • Fix UnboundLocalError when using commit context manager by @hahunavth in #1722
  • Fixed improperly configured 'every' leading to test_sync_and_squash_history failure by @jamesbraza in #1731
  • Testing WEBHOOK_PAYLOAD_EXAMPLE deserialization by @jamesbraza in #1732
  • Keep lock files in a /locks folder to prevent rare concurrency issue by @beeender in #1659
  • Fix Space runtime on static Space by @Wauplin in #1754
  • Clearer error message on unprocessable entity. by @Wauplin in #1755
  • Do not warn in ModelHubMixin on missing config file by @Wauplin in #1776
  • Update SpaceHardware enum by @Wauplin in #1798
  • change prop name by @julien-c in #1803

Internal

  • Bump version to 0.19 by @Wauplin in #1723
  • Make @retry_endpoint a default for all test by @Wauplin in #1725
  • Retry test on 502 Bad Gateway by @Wauplin in #1737
  • Consolidated mypy type ignores in InferenceClient.post by @jamesbraza in #1742
  • fix: remove useless token by @rtrompier in #1765
  • Fix CI (typing-extensions minimal requirement) by @Wauplin in #1781
  • remove black formatter to use only ruff by @Wauplin in #1783
  • Separate test and prod cache (+ ruff formatter) by @Wauplin in #1789
  • fix 3.8 tensorflow in ci by @Wauplin (direct commit on main)

🤗 Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @VictorHugoPilled
    • Removed GeneralTags, ModelTags and DatasetTags (#1761)
  • @aneeshd27
    • Added translation of 3 files as mentioned in issue (#1772)
huggingface_hub - v0.18.0: Collection API, translated documentation and more!

Published by Wauplin about 1 year ago

Collection API 🎉

Collection API is now fully supported in huggingface_hub!

A collection is a group of related items on the Hub (models, datasets, Spaces, papers) that are organized together on the same page. Collections are useful for creating your own portfolio, bookmarking content in categories, or presenting a curated list of items you want to share. Check out this guide to understand in more detail what collections are and this guide to learn how to build them programmatically.

Create/get/update/delete collection:

  • get_collection
  • create_collection: title, description, namespace, private
  • update_collection_metadata: title, description, position, private, theme
  • delete_collection

Add/update/remove item from collection:

  • add_collection_item: item id, item type, note
  • update_collection_item: note, position
  • delete_collection_item

Usage

>>> from huggingface_hub import get_collection
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")
>>> collection.title
'Recent models'
>>> len(collection.items)
37
>>> collection.items[0]
CollectionItem: {
    {'_id': '6507f6d5423b46492ee1413e',
    'id': 'TheBloke/TigerBot-70B-Chat-GPTQ',
    'author': 'TheBloke',
    'item_type': 'model',
    'lastModified': '2023-09-19T12:55:21.000Z',
    (...)
}}
>>> from huggingface_hub import create_collection, add_collection_item

# Create collection
>>> collection = create_collection(
...     title="ICCV 2023",
...     description="Portfolio of models, papers and demos I presented at ICCV 2023",
... )

# Add item with a note
>>> add_collection_item(
...     collection_slug=collection.slug,  # e.g. "davanstrien/climate-64f99dc2a5067f6b65531bab"
...     item_id="datasets/climate_fever",
...     item_type="dataset",
...     note="This dataset adopts the FEVER methodology that consists of 1,535 real-world claims regarding climate-change collected on the internet."
... )
  • Add Collection API by @Wauplin in #1687
  • Add url attribute to Collection class by @Wauplin in #1695
  • [Fix] Add collections guide to overview page by @Wauplin in #1696

📚 Translated documentation

Documentation is now available in both German and Korean thanks to community contributions! This is an important milestone for Hugging Face in its mission to democratize good machine learning.

  • ๐ŸŒ [i18n-DE] Translate docs to German by @martinbrose in #1646
  • ๐ŸŒ [i18n-KO] Translated README, landing docs to Korean by @wonhyeongseo in #1667
  • Update i18n template by @Wauplin in #1680
  • Add German concepts guide by @martinbrose in #1686

Preupload files before committing

(Disclaimer: this is a power-user usage. It is not expected to be used directly by end users.)

When using create_commit (or upload_file/upload_folder), the internal workflow has 3 main steps:

  1. List the files to upload and check whether they are regular files (text) or LFS files (binaries or huge files)
  2. Upload the LFS files to S3
  3. Create a commit on the Hub (upload regular files and reference the S3 URLs at once). Uploading LFS files separately first is important to avoid large payloads during the commit call.

In this release, we introduce preupload_lfs_files to perform step 2 independently of step 3. This is useful for libraries like datasets that generate huge files on the fly and want to pre-upload them one by one before making a single commit with all the files. For more details, please read this guide.

  • Preupload lfs files before committing by @Wauplin in #1699
  • Hide CommitOperationAdd's internal attributes by @mariosasko in #1716

Miscellaneous improvements

โค๏ธ List repo likers

Similarly to list_user_likes (listing all likes of a user), we now introduce list_repo_likers to list all likers of a repo, thanks to @issamarabi.

>>> from huggingface_hub import list_repo_likers
>>> likers = list_repo_likers("gpt2")
>>> len(likers)
204
>>> likers
[User(username=..., fullname=..., avatar_url=...), ...]
  • Add list_repo_likers method to HfApi by @issamarabi in #1715

Refactored Dataset Card template

The Dataset Card template has been updated to be more aligned with the Model Card template.

  • Dataset card template overhaul by @mariosasko in #1708

QOL improvements

This release also adds a few QOL improvements for users:

  • Suggest to check firewall/proxy settings + default to local file by @Wauplin in #1670
  • debug logs to debug level by @Wauplin (direct commit on main)
  • Change TimeoutError => asyncio.TimeoutError by @matthewgrossman in #1666
  • Handle refs/convert/parquet and PR revision correctly in hffs by @Wauplin in #1712
  • Document hf_transfer more prominently by @Wauplin in #1714

Breaking change

A breaking change has been introduced in CommitOperationAdd in order to implement preupload_lfs_files in a way that is convenient for users. The main change is that CommitOperationAdd is no longer a static object but is modified internally by preupload_lfs_files and create_commit. This means that you cannot reuse a CommitOperationAdd object once it has been committed to the Hub. If you do, an explicit exception is raised. We hope this will not affect any users, but please open an issue if you encounter any problem.

  • Preupload lfs files before committing by @Wauplin in #1699
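The "cannot reuse after commit" behavior can be illustrated with a small stand-in class. This is not huggingface_hub code, just a sketch of the guard pattern described above (the class and method names here are invented for the demo):

```python
class OneShotUpload:
    # Illustrative stand-in: an operation object that refuses to be
    # committed twice, raising an explicit exception on reuse.
    def __init__(self, path_in_repo):
        self.path_in_repo = path_in_repo
        self._committed = False

    def mark_committed(self):
        if self._committed:
            raise ValueError(
                f"Operation for '{self.path_in_repo}' was already committed; "
                "create a fresh operation instead of reusing this one."
            )
        self._committed = True

op = OneShotUpload("weights.bin")
op.mark_committed()          # first commit: fine
try:
    op.mark_committed()      # reuse: explicit exception, as described
except ValueError as e:
    print("rejected:", e)
```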

⚙️ Small fixes and maintenance

Docs fixes

  • Move repo size limitations to Hub docs by @Wauplin in #1660
  • Correct typo in upload guide by @martinbrose in #1677
  • Fix broken tips in login reference by @Wauplin in #1688

Misc fixes

  • Fixes filtering by tags with list_models and adds test case by @martinbrose in #1673
  • Add default user-agent to huggingface-cli by @Wauplin in #1664
  • Automatically retry on create_repo if '409 conflicting op in progress' by @Wauplin in #1675
  • Fix upload CLI when pushing to Space by @Wauplin in #1669
  • longer pbar descr, drop D-word by @poedator in #1679
  • Pin fsspec to use default expand_path by @mariosasko in #1681
  • Address failing _check_disk_space() when path doesn't exist yet by @martinbrose in #1692
  • Handle TGI error when streaming tokens by @Wauplin in #1711

Internal

  • bump version to 0.18.0.dev0 by @Wauplin in #1658
  • sudo apt update in CI by @Wauplin (direct commit on main)
  • fix CI tests by @Wauplin (direct commit on main)
  • Skip flaky InferenceAPI test by @Wauplin (direct commit on main)
  • Respect HTTPError spec by @Wauplin in #1693
  • skip flaky test by @Wauplin (direct commit on main)
  • Fix LFS tests after password auth deprecation by @Wauplin in #1713

🤗 Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @martinbrose
    • Correct typo in upload guide (#1677)
    • ๐ŸŒ [i18n-DE] Translate docs to German (#1646)
    • Fixes filtering by tags with list_models and adds test case (#1673)
    • Add German concepts guide (#1686)
    • Address failing _check_disk_space() when path doesn't exist yet (#1692)
  • @wonhyeongseo
    • ๐ŸŒ [i18n-KO] Translated README, landing docs to Korean (#1667)
huggingface_hub - v0.17.3 - Hot-fix: ignore errors when checking available disk space

Published by Wauplin about 1 year ago

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.17.2...v0.17.3

Fixing a bug when downloading files to a non-existent directory. In https://github.com/huggingface/huggingface_hub/pull/1590 we introduced a helper that raises a warning if there is not enough disk space to download a file. A bug made the helper raise an exception if the folder didn't exist yet, as reported in https://github.com/huggingface/huggingface_hub/issues/1690. This hot-fix resolves it thanks to https://github.com/huggingface/huggingface_hub/pull/1692, which recursively checks the parent directories if the full path doesn't exist. If the check still fails (for any OSError), the error is silently ignored and the download proceeds: missing the warning is less harmful than breaking downloads for legitimate users.
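The recursive parent check can be sketched as follows (a simplified illustration of the approach, not the library's exact code):

```python
import os
import shutil
import warnings


def check_disk_space(expected_size: int, target_dir: str) -> None:
    """Warn if there is not enough free space to download `expected_size` bytes.

    If `target_dir` does not exist yet, walk up to the closest existing parent
    directory and check the free space there instead. Any OSError is silently
    ignored: a missing warning is preferable to breaking a legitimate download.
    """
    path = os.path.abspath(target_dir)
    # Walk up until we find a directory that actually exists.
    while not os.path.isdir(path):
        parent = os.path.dirname(path)
        if parent == path:  # reached the filesystem root
            return
        path = parent
    try:
        free = shutil.disk_usage(path).free
    except OSError:
        return  # best-effort check: never fail the download because of it
    if free < expected_size:
        warnings.warn(
            f"Not enough free disk space to download the file "
            f"({expected_size} bytes needed, {free} bytes available)."
        )
```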

Check out these release notes to learn more about the v0.17 release.

huggingface_hub - v0.17.2 - Hot-fix: make `huggingface-cli upload` work with Spaces

Published by Wauplin about 1 year ago

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.17.1...v0.17.2

Fixing a bug when uploading files to a Space repo using the CLI. The command was trying to create the repo (even if it already existed) and failed because space_sdk was not provided in that case. More details in https://github.com/huggingface/huggingface_hub/pull/1669.
Also updated the user-agent when using huggingface-cli upload. See https://github.com/huggingface/huggingface_hub/pull/1664.

Check out these release notes to learn more about the v0.17 release.

huggingface_hub - v0.17.0: Inference, CLI and Space API

Published by Wauplin about 1 year ago

InferenceClient

All tasks are now supported! ๐Ÿ’ฅ

Thanks to a massive community effort, all inference tasks are now supported in InferenceClient. Newly added tasks are:

  • Object detection by @dulayjm in #1548
  • Text classification by @martinbrose in #1606
  • Token classification by @martinbrose in #1607
  • Translation by @martinbrose in #1608
  • Question answering by @martinbrose in #1609
  • Table question answering by @martinbrose in #1612
  • Fill mask by @martinbrose in #1613
  • Tabular classification by @martinbrose in #1614
  • Tabular regression by @martinbrose in #1615
  • Document question answering by @martinbrose in #1620
  • Visual question answering by @martinbrose in #1621
  • Zero shot classification by @Wauplin in #1644

Documentation, including examples, for each of these tasks can be found in this table.

All those methods also support async mode using AsyncInferenceClient.

Get InferenceAPI status

Sometimes it is useful to know which models are currently available on the Inference API service. This release introduces two new helpers:

  1. list_deployed_models aims to help users discover which models are currently deployed, listed by task.
  2. get_model_status aims to get the status of a specific model. That's useful if you already know which model you want to use.

Those two helpers are only available for the Inference API, not Inference Endpoints (or any other provider).

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

# Discover zero-shot-classification models currently deployed 
>>> models = client.list_deployed_models()
>>> models["zero-shot-classification"]
['Narsil/deberta-large-mnli-zero-cls', 'facebook/bart-large-mnli', ...]

# Get status for a specific model
>>> client.get_model_status("bigcode/starcoder")
ModelStatus(loaded=True, state='Loaded', compute_type='gpu', framework='text-generation-inference')
  • Add get_model_status function by @sifisKoen in #1558
  • Add list_deployed_models to inference client by @martinbrose in #1622

Few fixes

  • Send Accept: image/png as header for image tasks by @Wauplin in #1567
  • FIX text_to_image and image_to_image parameters by @Wauplin in #1582
  • Distinguish _bytes_to_dict and _bytes_to_list + fix issues by @Wauplin in #1641
  • Return whole response from feature extraction endpoint instead of assuming its shape by @skulltech in #1648

Download and upload files... from the CLI ๐Ÿ”ฅ ๐Ÿ”ฅ ๐Ÿ”ฅ

This is a long-awaited feature, finally implemented! huggingface-cli now offers two new commands to easily transfer files to and from the Hub. The goal is to use them as a replacement for git clone, git pull and git push. Despite being less feature-complete than git (no .git/ folder, no notion of local commits), they offer the flexibility required when working with large repositories.

Download

# Download a single file
>>> huggingface-cli download gpt2 config.json
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json

# Download files to a local directory
>>> huggingface-cli download gpt2 config.json --local-dir=./models/gpt2
./models/gpt2/config.json

# Download a subset of a repo
>>> huggingface-cli download bigcode/the-stack --repo-type=dataset --revision=v1.2 --include="data/python/*" --exclude="*.json" --exclude="*.zip"
Fetching 206 files:   100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 206/206 [02:31<2:31, ?it/s]
/home/wauplin/.cache/huggingface/hub/datasets--bigcode--the-stack/snapshots/9ca8fa6acdbc8ce920a0cb58adcdafc495818ae7

Upload

# Upload single file
huggingface-cli upload my-cool-model model.safetensors

# Upload entire directory
huggingface-cli upload my-cool-model ./models

# Sync local Space with Hub (upload new files except from logs/, delete removed files)
huggingface-cli upload Wauplin/space-example --repo-type=space --exclude="/logs/*" --delete="*" --commit-message="Sync local Space with Hub"

Docs

For more examples, check out the documentation:

  • Implemented CLI download functionality by @martinbrose in #1617
  • Implemented CLI upload functionality by @martinbrose in #1618

๐Ÿš€ Space API

Some new features have been added to the Space API to:

  • request persistent storage for a Space
  • set a description to a Space's secrets
  • set variables on a Space
  • configure your Space (hardware, storage, secrets,...) in a single call when you create or duplicate it
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.create_repo(
...     repo_id=repo_id,
...     repo_type="space",
...     space_sdk="gradio",
...     space_hardware="t4-medium",
...     space_sleep_time="3600",
...     space_storage="large",
...     space_secrets=[{"key": "HF_TOKEN", "value": "hf_api_***"}, ...],
...     space_variables=[{"key": "MODEL_REPO_ID", "value": "user/repo"}, ...],
... )

A special thanks to @martinbrose, who contributed significantly to these new features.

  • Request Persistent Storage by @freddyaboulton in #1571
  • Support factory reboot when restarting a Space by @Wauplin in #1586
  • Added support for secret description by @martinbrose in #1594
  • Added support for space variables by @martinbrose in #1592
  • Add settings for creating and duplicating spaces by @martinbrose in #1625

๐Ÿ“š Documentation

A new section has been added to the upload guide with tips on how to upload large models and datasets to the Hub, and the limits that apply when doing so.

  • Tips to upload large models/datasets by @Wauplin in #1565
  • Add the hard limit of 50GB on LFS files by @severo in #1624

๐Ÿ—บ๏ธ The documentation organization has been updated to support multiple languages. The community effort has started to translate the docs to non-English speakers. More to come in the coming weeks!

  • Add translation guide + update repo structure by @Wauplin in #1602
  • Fix i18n issue template links by @Wauplin in #1627

Breaking change

The behavior of InferenceClient.feature_extraction has been updated to fix a bug happening with certain models. The shape of the returned array for transformers models has changed from (sequence_length, hidden_size) to (1, sequence_length, hidden_size) which is the breaking change.

  • Return whole response from feature extraction endpoint instead of assuming its shape by @skulltech in #1648

QOL improvements

HfApi helpers:

Two new helpers have been added to check if a file or a repo exists on the Hub:

>>> from huggingface_hub import file_exists
>>> file_exists("bigcode/starcoder", "config.json")
True
>>> file_exists("bigcode/starcoder", "not-a-file")
False

>>> from huggingface_hub import repo_exists
>>> repo_exists("bigcode/starcoder")
True
>>> repo_exists("bigcode/not-a-repo")
False
  • Check if repo or file exists by @martinbrose in #1591

Also, hf_hub_download and snapshot_download are now part of HfApi (keeping the same syntax and behavior).

  • Add download alias for hf_hub_download to HfApi by @Wauplin in #1580

Download improvements:

  1. When a user tries to download a model but the disk is full, a warning is triggered.
  2. When a user tries to download a model but an HTTP error happens, we still check locally whether the file exists.
  • Check local files if (RepoNotFound, GatedRepo, HTTPError) while downloading files by @jiamings in #1561
  • Implemented check_disk_space function by @martinbrose in #1590
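The fallback described in point 2 can be sketched with a hypothetical helper (`load_with_offline_fallback` and `fetch` are illustrative names, not huggingface_hub APIs):

```python
import os
from typing import Callable


def load_with_offline_fallback(fetch: Callable[[], str], local_path: str) -> str:
    """Try to fetch a fresh copy; on any network/HTTP error, fall back to a
    previously downloaded local file if one exists. If nothing is cached
    locally, re-raise the original error.
    """
    try:
        return fetch()
    except Exception:
        if os.path.isfile(local_path):
            return local_path  # serve the cached file instead of failing
        raise
```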

Small fixes and maintenance

โš™๏ธ Doc fixes

  • Fix table by @stevhliu in #1577
  • Improve docstrings for text generation by @osanseviero in #1597
  • Fix superfluous-typo by @julien-c in #1611
  • minor missing paren by @julien-c in #1637
  • update i18n template by @Wauplin (direct commit on main)
  • Add documentation for modelcard Metadata. Resolves by @sifisKoen in #1448

โš™๏ธ Other fixes

  • Add missing_ok option in delete_repo by @Wauplin in #1640
  • Implement super_squash_history in HfApi by @Wauplin in #1639
  • 1546 fix empty metadata on windows by @Wauplin in #1547
  • Fix tqdm by @NielsRogge in #1629
  • Fix bug #1634 (drop finishing spaces and EOL) by @GBR-613 in #1638

โš™๏ธ Internal

  • Prepare for 0.17 by @Wauplin in #1540
  • update mypy version + fix issues + remove deprecatedlist helper by @Wauplin in #1628
  • mypy traceck by @Wauplin (direct commit on main)
  • pin pydantic version by @Wauplin (direct commit on main)
  • Fix ci tests by @Wauplin in #1630
  • Fix test in contrib CI by @Wauplin (direct commit on main)
  • skip gated repo test on contrib by @Wauplin (direct commit on main)
  • skip failing test by @Wauplin (direct commit on main)
  • Fix fsspec tests in ci by @Wauplin in #1635
  • FIX windows CI by @Wauplin (direct commit on main)
  • FIX style issues by pinning black version by @Wauplin (direct commit on main)
  • forgot test case by @Wauplin (direct commit on main)
  • shorter is better by @Wauplin (direct commit on main)

๐Ÿค— Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @dulayjm
    • Add object detection to inference client (#1548)
  • @martinbrose
    • Added support for secret description (#1594)
    • Check if repo or file exists (#1591)
    • Implemented check_disk_space function (#1590)
    • Added support for space variables (#1592)
    • Add settings for creating and duplicating spaces (#1625)
    • Implemented CLI download functionality (#1617)
    • Implemented CLI upload functionality (#1618)
    • Add text classification to inference client (#1606)
    • Add token classification to inference client (#1607)
    • Add translation to inference client (#1608)
    • Add question answering to inference client (#1609)
    • Add table question answering to inference client (#1612)
    • Add fill mask to inference client (#1613)
    • Add visual question answering to inference client (#1621)
    • Add document question answering to InferenceClient (#1620)
    • Add tabular classification to inference client (#1614)
    • Add tabular regression to inference client (#1615)
    • Add list_deployed_models to inference client (#1622)
  • @sifisKoen
    • Add get_model_status function (#1558) (#1559)
    • Add documentation for modelcard Metadata. Resolves (#1448) (#1631)
huggingface_hub - v0.16.4 - Hot-fix: Do not share request.Session between processes

Published by Wauplin over 1 year ago

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.16.3...v0.16.4

Hotfix to avoid sharing requests.Session between processes. More information in https://github.com/huggingface/huggingface_hub/pull/1545. Internally, we create a Session object per thread to benefit from the HTTPSConnectionPool (i.e. we do not reopen a connection between calls). Due to an implementation bug, the Session object from the main thread was shared if the main process was forked. The shared Session got corrupted in the child process, leading to random ConnectionErrors on rare occasions.
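The fix can be sketched as a session cache keyed by process and thread id (an illustration of the idea, using a stand-in class so the sketch stays dependency-free; the real code stores requests.Session objects):

```python
import os
import threading


class _Session:
    """Stand-in for requests.Session in this sketch."""


_sessions: dict = {}
_lock = threading.Lock()


def get_session() -> _Session:
    """Return one session per (process, thread). After a fork, os.getpid()
    changes, so the child process transparently gets fresh sessions instead
    of reusing (and corrupting) the parent's.
    """
    key = (os.getpid(), threading.get_ident())
    with _lock:
        if key not in _sessions:
            _sessions[key] = _Session()
        return _sessions[key]
```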

Check out these release notes to learn more about the v0.16 release.

huggingface_hub - v0.16.3: Hotfix - More verbose ConnectionError

Published by Wauplin over 1 year ago

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.16.2...v0.16.3

Hotfix to print the request ID if any RequestException happens. This helps the team debug users' problems. The request ID is a generated UUID, unique to each HTTP call made to the Hub.

Check out these release notes to learn more about the v0.16 release.

huggingface_hub - v0.16.2: Inference, CommitScheduler and Tensorboard

Published by Wauplin over 1 year ago

Inference

Introduced in the v0.15 release, the InferenceClient got a big update in this one. The client is now reaching a stable point in terms of features. The next updates will focus on adding support for new tasks.

Async client

Asyncio calls are supported thanks to AsyncInferenceClient. Based on asyncio and aiohttp, it allows you to make efficient concurrent calls to the inference endpoint of your choice. Every task supported by InferenceClient is supported in its async version. Method inputs, outputs, and logic are strictly the same, except that you must await the coroutine.

>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()

>>> image = await client.text_to_image("An astronaut riding a horse on the moon.")
  • Support asyncio with AsyncInferenceClient by @Wauplin in #1524

Text-generation

Support for text-generation task has been added. It is focused on fully supporting endpoints running on the text-generation-inference framework. In fact, the code is heavily inspired by TGI's Python client initially implemented by @OlivierDehaene.

Text generation has 4 modes depending on the details (bool) and stream (bool) values. By default, a raw string is returned. If details=True, more information about the generated tokens is returned. If stream=True, generated tokens are returned one by one as soon as the server generates them. For more information, check out the documentation.

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

# stream=False, details=False
>>> client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'

# stream=True, details=True
>>> for details in client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True, stream=True):
>>>     print(details)
TextGenerationStreamResponse(token=Token(id=1425, text='100', logprob=-1.0175781, special=False), generated_text=None, details=None)
...
TextGenerationStreamResponse(token=Token(
    id=25,
    text='.',
    logprob=-0.5703125,
    special=False),
    generated_text='100% open source and built to be easy to use.',
    details=StreamDetails(finish_reason=<FinishReason.Length: 'length'>, generated_tokens=12, seed=None)
)

Of course, the async client also supports text-generation (see docs):

>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'
  • prepare for tgi by @Wauplin in #1511
  • Support text-generation in InferenceClient by @Wauplin in #1513

Zero-shot-image-classification

InferenceClient now supports zero-shot-image-classification (see docs). Both the sync and async clients support it. It lets you classify an image based on a list of labels passed as input.

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.zero_shot_image_classification(
...     "https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg",
...     labels=["dog", "cat", "horse"],
... )
[{"label": "dog", "score": 0.956}, ...]

Thanks to @dulayjm for your contribution on this task!

  • added zero shot image classification by @dulayjm in #1528

Other

When using InferenceClient's task methods (text_to_image, text_generation, image_classification,...) you don't have to pass a model id. By default, the client selects a model recommended for the task and runs it on the free public Inference API. This is useful to quickly prototype and test models. In a production-ready setup, we strongly recommend setting the model id/URL manually, as the recommended model may change at any time without prior notice, potentially leading to different and unexpected results in your workflow. Recommended models are the ones used by default on https://hf.co/tasks.

  • Fetch inference model for task from API by @Wauplin in #1510

It is now possible to configure headers and cookies to be sent when initializing the client: InferenceClient(headers=..., cookies=...). All calls made with this client will then use these headers/cookies.

  • Custom headers/cookies in InferenceClient by @Wauplin in #1507

Commit API

CommitScheduler

The CommitScheduler is a new class that can be used to regularly push commits to the Hub. It watches a folder for changes and creates a commit every 5 minutes if it detects any. One intended use case is regular backups from a Space to a Dataset repository on the Hub. The scheduler is designed to remove the hassle of handling background commits while avoiding empty ones.

>>> from huggingface_hub import CommitScheduler

# Schedule regular uploads every 10 minutes. Remote repo and local folder are created if they don't already exist.
>>> scheduler = CommitScheduler(
...     repo_id="report-translation-feedback",
...     repo_type="dataset",
...     folder_path=feedback_folder,
...     path_in_repo="data",
...     every=10,
... )

Check out this guide to understand how to use the CommitScheduler. It comes with a Space to showcase how to use it in 4 practical examples.

  • CommitScheduler: upload folder every 5 minutes by @Wauplin in #1494
  • Encourage to overwrite CommitScheduler.push_to_hub by @Wauplin in #1506
  • FIX Use token by default in CommitScheduler by @Wauplin in #1509
  • safer commit scheduler by @Wauplin (direct commit on main)
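The "skip empty commits" behavior can be approximated with a last-modified snapshot of the watched folder — an illustrative sketch, not the actual CommitScheduler internals:

```python
from pathlib import Path
from typing import Dict, List


def snapshot(folder: Path) -> Dict[str, float]:
    """Record the last-modified time of every file under `folder`."""
    return {
        str(p.relative_to(folder)): p.stat().st_mtime
        for p in folder.rglob("*")
        if p.is_file()
    }


def changed_files(before: Dict[str, float], after: Dict[str, float]) -> List[str]:
    """Files that are new or modified between two snapshots: the kind of check
    a scheduler can run every N minutes to decide whether to commit at all."""
    return sorted(
        path for path, mtime in after.items() if before.get(path) != mtime
    )
```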

HFSummaryWriter (tensorboard)

The Hugging Face Hub offers nice support for Tensorboard data. It automatically detects when TensorBoard traces (such as tfevents) are pushed to the Hub and starts an instance to visualize them. This feature enables quick and transparent collaboration within your team when training models. In fact, more than 42k models already use it!

With the HFSummaryWriter you can now take full advantage of the feature for your training, simply by updating a single line of code.

>>> from huggingface_hub import HFSummaryWriter
>>> logger = HFSummaryWriter(repo_id="test_hf_logger", commit_every=15)

HFSummaryWriter inherits from SummaryWriter and acts as a drop-in replacement in your training scripts. The only addition is that every X minutes (e.g. 15 minutes) it will push the logs directory to the Hub. Commit happens in the background to avoid blocking the main thread. If the upload crashes, the logs are kept locally and the training continues.

For more information on how to use it, check out this documentation page. Please note that this is still an experimental feature so feedback is very welcome.

  • Experimental hf logger by @Wauplin in #1456
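The "never interrupt training" behavior described above can be sketched like this (illustrative only; HFSummaryWriter's real implementation differs):

```python
import threading
from typing import Callable, List

errors: List[BaseException] = []  # failed pushes are recorded, never re-raised


def push_in_background(push_fn: Callable[[], None]) -> threading.Thread:
    """Run `push_fn` (e.g. an upload of the logs directory) in a daemon
    thread. Exceptions are swallowed so a failed push never interrupts the
    training loop: the logs stay on disk and training continues.
    """
    def _target() -> None:
        try:
            push_fn()
        except Exception as exc:  # keep logs locally, keep training
            errors.append(exc)

    thread = threading.Thread(target=_target, daemon=True)
    thread.start()
    return thread
```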

CommitOperationCopy

It is now possible to copy a file within a repo on the Hub. Copies only work within a single repo and for LFS files, but files can be copied between different revisions. More information here.

  • add CommitOperationCopy by @lhoestq in #1495
  • Use CommitOperationCopy in hffs by @Wauplin in #1497
  • Batch fetch_lfs_files_to_copy by @lhoestq in #1504

Breaking changes

ModelHubMixin got updated (after a deprecation cycle):

  • Force the use of kwargs instead of passing everything as positional args
  • It is no longer possible to pass model_id as username/repo_name@revision in ModelHubMixin. The revision must be passed as a separate revision argument if needed.
  • Remove deprecated code for v0.16.x by @Wauplin in #1492

Bug fixes and small improvements

Doc fixes

  • [doc build] Use secrets by @mishig25 in #1501
  • Migrate doc files to Markdown by @Wauplin in #1522
  • fix doc example by @Wauplin (direct commit on main)
  • Update readme and contributing guide by @Wauplin in #1534

HTTP fixes

An x-request-id header is now sent by default with every request made to the Hub. This should help with debugging user issues.

  • Add x-request-id to every request by @Wauplin in #1518
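Such a header can be generated like this (a sketch; `build_hub_headers` is an assumed name, not a huggingface_hub function):

```python
import uuid
from typing import Dict, Optional


def build_hub_headers(base: Optional[Dict[str, str]] = None) -> Dict[str, str]:
    """Attach a unique request ID to outgoing headers so that a failing call
    can be correlated with server-side logs.
    """
    headers = dict(base or {})
    headers["x-request-id"] = uuid.uuid4().hex  # 32 hex chars, unique per call
    return headers
```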

Three PRs and three commits later, the default timeout did not change in the end; the problem was solved server-side instead.

  • Set 30s timeout on downloads (instead of 10s) by @Wauplin in #1514
  • Set timeout to 60 instead of 30 when downloading files by @Wauplin in #1523
  • Set timeout to 10s by @ydshieh in #1530

Misc

  • Rename "configs" dataset card field to "config_names" by @polinaeterna in #1491
  • update stats by @Wauplin (direct commit on main)
  • Retry on both ConnectTimeout and ReadTimeout by @Wauplin in #1529
  • update tip by @Wauplin (direct commit on main)
  • make repo_info public by @Wauplin (direct commit on main)

Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @dulayjm
    • added zero shot image classification (#1528)
huggingface_hub - v0.15.1: InferenceClient and background uploads!

Published by Wauplin over 1 year ago

InferenceClient

We introduce InferenceClient, a new client to run inference on the Hub. The objective is to:

  • support both InferenceAPI and Inference Endpoints services in a single client.
  • offer a nice interface with:
    • 1 method per task (e.g. summary = client.summarization("this is a long text"))
    • 1 default model per task (i.e. easy to prototype)
    • explicit and documented parameters
    • convenient binary inputs (from url, path, file-like object,...)
  • be flexible and support custom requests if needed

Check out the Inference guide to get a complete overview.

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()

>>> image = client.text_to_image("An astronaut riding a horse on the moon.")
>>> image.save("astronaut.png")

>>> client.image_classification("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg")
[{'score': 0.9779096841812134, 'label': 'Blenheim spaniel'}, ...]

The short-term goal is to add support for more tasks (here is the current list), especially text-generation, and to handle asyncio calls. The mid-term goal is to deprecate and replace InferenceAPI.

  • Enhanced InferenceClient by @Wauplin in #1474

Non-blocking uploads

It is now possible to run HfApi calls in the background! The goal is to make it easier to upload files periodically without blocking the main thread during training. This was previously possible with Repository and is now available for HTTP-based methods like upload_file, upload_folder and create_commit. If run_as_future=True is passed:

  • the job is queued in a background thread. Only 1 worker is spawned to ensure no race condition. The goal is NOT to speed up a process by parallelizing concurrent calls to the Hub.
  • a Future object is returned to check the job status
  • main thread is not interrupted, even if an exception occurs during the upload

In addition to this parameter, a run_as_future(...) method is available to queue any other calls to the Hub. More details in this guide.

>>> from huggingface_hub import HfApi

>>> api = HfApi()
>>> api.upload_file(...)  # takes Xs
# URL to upload file

>>> future = api.upload_file(..., run_as_future=True) # instant
>>> future.result() # wait until complete
# URL to upload file
  • Run HfApi methods in the background (run_as_future) by @Wauplin in #1458
  • fix docs for run_as_future by @Wauplin (direct commit on main)
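The queuing mechanism can be sketched with a single-worker executor (an illustration of the idea, not huggingface_hub's implementation):

```python
from concurrent.futures import Future, ThreadPoolExecutor
from typing import Any, Callable

# A single worker guarantees that jobs run in submission order, so two queued
# commits cannot race each other. The point is to unblock the main thread,
# not to parallelize calls to the Hub.
_executor = ThreadPoolExecutor(max_workers=1)


def run_as_future(fn: Callable[..., Any], *args: Any, **kwargs: Any) -> Future:
    """Queue `fn` in the background and return a Future to check its status."""
    return _executor.submit(fn, *args, **kwargs)
```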

Breaking changes

Some (announced) breaking changes have been introduced:

  • list_models, list_datasets and list_spaces return an iterable instead of a list (lazy-loading of paginated results)
  • The parameter cardData in list_datasets has been removed in favor of the parameter full.

Both changes had a deprecation cycle for a few releases now.

  • Remove deprecated code + adapt tests by @Wauplin in #1450

Bugfixes and small improvements

Token permission

New parameters in login() :

  • new_session : skip login if new_session=False and user is already logged in
  • write_permission : write permission is required (login fails otherwise)

Also added a new HfApi().get_token_permission() method that returns "read" or "write" (or None if not logged in).

  • Add new_session, write_permission args by @aliabid94 in #1476

List files with details

New parameter to get more details when listing files: list_repo_files(..., expand=True).
The API call is slower, but lastCommit and security fields are returned as well.

  • Add expand parameter to list_repo_files by @Wauplin in #1451

Docs fixes

  • Resolve broken link to 'filesystem' by @tomaarsen in #1461
  • Fix broken link in docs to hf_file_system guide by @albertvillanova in #1469
  • Remove hffs from docs by @albertvillanova in #1468

Misc

  • Fix consistency check when downloading a file by @Wauplin in #1449
  • Fix discussion URL on datasets and spaces by @Wauplin in #1465
  • FIX user agent not passed in snapshot_download by @Wauplin in #1478
  • Avoid ImportError when importing WebhooksServer and Gradio is not installed by @mariosasko in #1482
  • add utf8 encoding when opening files for windows by @abidlabs in #1484
  • Fix incorrect syntax in _deprecation.py warning message for _deprecate_list_output() by @x11kjm in #1485
  • Update _hf_folder.py by @SimonKitSangChu in #1487
  • fix pause_and_restart test by @Wauplin (direct commit on main)
  • Support image-to-image task in InferenceApi by @Wauplin in #1489
huggingface_hub - v0.14.1: patch release

Published by Wauplin over 1 year ago

Fixed an issue reported in diffusers impacting users downloading files from outside of the Hub. The expected download size now takes into account potential compression in HTTP requests.

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.14.0...v0.14.1

HfFileSystem: interact with the Hub through the Filesystem API

We introduce HfFileSystem, a pythonic filesystem interface compatible with fsspec. Built on top of HfApi, it offers typical filesystem operations like cp, mv, ls, du, glob, get_file and put_file.

>>> from huggingface_hub import HfFileSystem
>>> fs = HfFileSystem()

# List all files in a directory
>>> fs.ls("datasets/myself/my-dataset/data", detail=False)
['datasets/myself/my-dataset/data/train.csv', 'datasets/myself/my-dataset/data/test.csv']

>>> train_data = fs.read_text("datasets/myself/my-dataset/data/train.csv")

Its biggest advantage is that it provides ready-to-use integrations with popular libraries like Pandas, DuckDB and Zarr.

import pandas as pd

# Read a remote CSV file into a dataframe
df = pd.read_csv("hf://datasets/my-username/my-dataset-repo/train.csv")

# Write a dataframe to a remote CSV file
df.to_csv("hf://datasets/my-username/my-dataset-repo/test.csv")

For a more detailed overview, please have a look at this guide.

  • Transfer the hffs code to hfh by @mariosasko in #1420
  • Hffs misc improvements by @mariosasko in #1433

Webhook Server

WebhooksServer lets you implement, debug and deploy webhook endpoints on the Hub without any overhead. Creating a new endpoint is as easy as decorating a Python function.

# app.py
from huggingface_hub import webhook_endpoint, WebhookPayload

@webhook_endpoint
async def trigger_training(payload: WebhookPayload) -> None:
    if payload.repo.type == "dataset" and payload.event.action == "update":
        # Trigger a training job if a dataset is updated
        ...

For more details, check out this twitter thread or the documentation guide.

Note that this feature is experimental, which means the API/behavior might change without prior notice. A warning is displayed when using it. As it is experimental, we would love to get feedback!

  • [Feat] Webhook server by @Wauplin in #1410

Some upload QOL improvements

Faster upload with hf_transfer

Integration with a Rust-based library to upload large files in chunks and concurrently. Expect a 3x speed-up if your bandwidth allows it!

  • feat: add hf_transfer upload by @McPatate in #1395

Upload in multiple commits

Uploading large folders at once can be frustrating if an error occurs while committing (e.g. a connection error). It is now possible to upload a folder in multiple (smaller) commits. If a commit fails, you can re-run the script to resume the upload. Commits are pushed to a dedicated PR; once completed, the PR is merged into the main branch, resulting in a single commit in your git history.

upload_folder(
    folder_path="local/checkpoints",
    repo_id="username/my-dataset",
    repo_type="dataset",
    multi_commits=True, # resumable multi-upload
    multi_commits_verbose=True,
)

Note that this feature is also experimental, meaning its behavior might be updated in the future.

  • New endpoint: create_commits_on_pr by @Wauplin in #1375
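The chunking idea behind multi_commits can be sketched as follows (`files_per_commit` is an assumed illustrative knob; the real feature decides chunk sizes itself):

```python
from typing import List, Sequence


def plan_commits(paths: Sequence[str], files_per_commit: int) -> List[List[str]]:
    """Split a large upload into several smaller commits. If one commit fails,
    only the remaining chunks need to be retried on the next run — a
    simplified sketch of the resumable strategy described above.
    """
    return [
        list(paths[i : i + files_per_commit])
        for i in range(0, len(paths), files_per_commit)
    ]
```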

Upload validation

Some more pre-validation is now done before committing files to the Hub: the .git folder (if any) is ignored in upload_folder, and invalid paths fail early.

  • Fix path_in_repo validation when committing files by @Wauplin in #1382
  • Raise issue if trying to upload .git/ folder + ignore .git/ folder in upload_folder by @Wauplin in #1408

Keep-alive connections between requests

Internal update to reuse the same HTTP session across huggingface_hub. The goal is to keep the connection open between multiple calls to the Hub, which ultimately saves a lot of time. For instance, updating metadata in a README became 40% faster, while listing all models from the Hub is 60% faster. This has no impact on atomic calls (e.g. a single standalone GET call).

  • Keep-alive connection between requests by @Wauplin in #1394
  • Accept backend_factory to configure Sessions by @Wauplin in #1442

Custom sleep time for Spaces

It is now possible to programmatically set a custom sleep time on your upgraded Space. After X seconds of inactivity, your Space will go to sleep to save you some $$$.

from huggingface_hub import set_space_sleep_time

# Put your Space to sleep after 1h of inactivity
set_space_sleep_time(repo_id=repo_id, sleep_time=3600)
  • [Feat] Add sleep_time for Spaces by @Wauplin in #1438

Breaking change

  • fsspec has been added as a main dependency. It's a lightweight Python library required for HfFileSystem.

No other breaking change expected in this release.

Bugfixes & small improvements

File-related

A lot of effort has been invested in making huggingface_hub's cache system more robust, especially when working with symlinks on Windows. We hope everything is fixed by now.

  • Fix relative symlinks in cache by @Wauplin in #1390
  • Hotfix - use relative symlinks whenever possible by @Wauplin in #1399
  • [hot-fix] Malicious repo can overwrite any file on disk by @Wauplin in #1429
  • Fix symlinks on different volumes on Windows by @Wauplin in #1437
  • [FIX] bug "Invalid cross-device link" error when using snapshot_download to local_dir with no symlink by @thaiminhpv in #1439
  • Raise after download if file size is not consistent by @Wauplin in #1403
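
The relative-symlink logic from the fixes above boils down to: use a relative target when source and destination live on the same volume, and fall back to an absolute target otherwise (the cross-volume case is only possible on Windows). A simplified sketch, with a helper name of our own choosing:

```python
import os

def symlink_target(src, dst):
    """Return the target to store in a symlink dst -> src: relative when
    both paths are on the same drive/volume, absolute otherwise."""
    src, dst = os.path.abspath(src), os.path.abspath(dst)
    same_drive = os.path.splitdrive(src)[0] == os.path.splitdrive(dst)[0]
    if same_drive:
        # e.g. snapshots/main/model.bin -> ../../blobs/<etag>
        return os.path.relpath(src, start=os.path.dirname(dst))
    return src  # cross-volume link (Windows): keep the absolute path
```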

ETag-related

After a server-side configuration issue, huggingface_hub has been made more robust and future-proof when parsing the ETags returned by the Hub.

  • Update file_download.py by @Wauplin in #1406
  • ๐Ÿงน Use HUGGINGFACE_HEADER_X_LINKED_ETAG const by @julien-c in #1405
  • Normalize both possible variants of the Etag to remove potentially invalid path elements by @dwforbes in #1428
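
The normalization mentioned in #1428 can be sketched as stripping the weak-validator prefix and the surrounding quotes from the header value, so the ETag can safely be used in a file path. This is a simplified version; check the library source for the exact behavior.

```python
def normalize_etag(etag):
    """Normalize an ETag header value: strip the weak-validator prefix
    (W/) and the surrounding double quotes."""
    if etag is None:
        return None
    if etag.startswith("W/"):
        etag = etag[2:]
    return etag.strip('"')
```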

Documentation-related

  • Docs about how to hide progress bars by @Wauplin in #1416
  • [docs] Update docstring for repo_id in push_to_hub by @tomaarsen in #1436

Misc

  • Prepare for 0.14 by @Wauplin in #1381
  • Add force_download to snapshot_download by @Wauplin in #1391
  • Model card template: Move model usage instructions out of Bias section by @NimaBoscarino in #1400
  • typo by @Wauplin (direct commit on main)
  • Log as warning when waiting for ongoing commands by @Wauplin in #1415
  • Fix: notebook_login() does not update UI on Databricks by @fwetdb in #1414
  • Passing the headers to hf_transfer download. by @Narsil in #1444

Internal stuff

  • Fix CI by @Wauplin in #1392
  • PR should not fail if codecov is bad by @Wauplin (direct commit on main)
  • remove cov check in PR by @Wauplin (direct commit on main)
  • Fix restart space test by @Wauplin (direct commit on main)
  • fix move repo test by @Wauplin (direct commit on main)
huggingface_hub - Security patch v0.13.4

Published by Wauplin over 1 year ago

Security patch to fix a vulnerability in huggingface_hub. In some cases, downloading a file with hf_hub_download or snapshot_download could lead to overwriting any file on a Windows machine. With this fix, only files in the cache directory (or a user-defined directory) can be updated/overwritten.

  • Malicious repo can overwrite any file on disk #1429 @Wauplin

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.13.3...v0.13.4
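
The class of fix involved can be illustrated with a generic path-containment check (our own sketch, not the actual patch): resolve the final path and make sure it stays under the allowed directory before writing anything to disk.

```python
import os

def check_inside(base_dir, untrusted_relpath):
    """Raise if the resolved path escapes base_dir (e.g. via '..' tricks
    or symlinks); otherwise return the safe absolute path."""
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, untrusted_relpath))
    if os.path.commonpath([base, target]) != base:
        raise ValueError(f"Path {untrusted_relpath!r} escapes {base_dir!r}")
    return target
```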

huggingface_hub - Patch release v0.13.3

Published by Wauplin over 1 year ago

Patch to fix symlinks in the cache directory. Relative paths are used by default whenever possible. Absolute paths are used only on Windows when creating a symlink between two paths that are not on the same volume. This hot-fix reverts the logic to what it was in huggingface_hub<=0.12, given the issues that have been reported after the 0.13.2 release (https://github.com/huggingface/huggingface_hub/issues/1398, https://github.com/huggingface/diffusers/issues/2729 and https://github.com/huggingface/transformers/pull/22228).

Hotfix - use relative symlinks whenever possible https://github.com/huggingface/huggingface_hub/pull/1399 @Wauplin

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.13.2...v0.13.3

huggingface_hub - Patch release v0.13.2

Published by Wauplin over 1 year ago

Patch to fix symlinks in the cache directory. All symlinks are now absolute paths.

  • Fix relative symlinks in cache #1390 @Wauplin

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.13.1...v0.13.2

huggingface_hub - Patch release v0.13.1

Published by Wauplin over 1 year ago

Patch to fix upload_folder when passing path_in_repo=".", which was an unintended breaking change compared to 0.12.1. More validation has also been added around the path_in_repo argument to improve UX.

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.13.0...v0.13.1
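
The extra validation can be sketched as a small normalizer (a hypothetical helper, not the library's actual function): treat "." and "./" as the repo root, and reject absolute or parent-escaping paths with a clear error.

```python
def normalize_path_in_repo(path_in_repo):
    """Hypothetical sketch: '.' and './' mean the repo root; absolute
    paths and '..' segments are rejected early with a clear message."""
    if path_in_repo in (None, "", ".", "./"):
        return ""
    if path_in_repo.startswith("/"):
        raise ValueError("path_in_repo must be relative to the repo root")
    if ".." in path_in_repo.split("/"):
        raise ValueError("path_in_repo must not contain '..'")
    return path_in_repo.removeprefix("./")
```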