The official Python client for the Hugging Face Hub.
APACHE-2.0 License
Published by Wauplin 10 months ago
(Discuss the release in our Community Tab. Feedback welcome! 🤗)
Authentication has been greatly improved in Google Colab. The best way to authenticate in a Colab notebook is to define an `HF_TOKEN` secret in your personal secrets. When a notebook tries to reach the Hub, a pop-up asks whether you want to share the `HF_TOKEN` secret with this notebook, as an opt-in mechanism. This way, there is no need to call `huggingface_hub.login` and copy-paste your token anymore! 🔥🔥🔥
In addition to the Google Colab integration, the login guide has been revisited to focus on security. It is recommended to authenticate either with `huggingface_hub.login` or the `HF_TOKEN` environment variable, rather than hardcoding a token in your scripts. Check out the new guide here.
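The recommended pattern can be sketched in a few lines. This is a minimal illustration of reading the token from the environment; note that `huggingface_hub.get_token` also falls back to the token saved by `login()`, which this sketch does not.

```python
import os

def get_hf_token():
    """Return the token from the HF_TOKEN environment variable, if set.

    Minimal sketch of the recommended pattern: keep the token out of your
    scripts and let the environment (or Colab secrets) provide it.
    """
    return os.environ.get("HF_TOKEN")

# If no token is found in the environment, fall back to an interactive login:
#   from huggingface_hub import login
#   login()  # prompts for a token instead of hardcoding it
```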
- `SecretNotFoundError` in Google Colab login by @Wauplin in #1912

HfFileSystem
`HfFileSystem` is a pythonic, fsspec-compatible file interface to the Hugging Face Hub. The implementation has been greatly improved to optimize `fs.find` performance.
Here is a quick benchmark with the bigcode/the-stack-dedup dataset:
| | v0.19.4 | v0.20.0 |
| --- | --- | --- |
| `hffs.find("datasets/bigcode/the-stack-dedup", detail=False)` | 46.2s | 1.63s |
| `hffs.find("datasets/bigcode/the-stack-dedup", detail=True)` | 47.3s | 24.2s |
- `HfFileSystem.find` by @mariosasko in #1809
- `HfFileSystem.glob` by @lhoestq in #1815
- `_ls_tree` by @lhoestq in #1850
- `maxdepth` param from `HfFileSystem.glob` by @mariosasko in #1875
- `HfApi.list_files_info` by @mariosasko in #1910

Models and datasets can be gated to monitor who's accessing the data you are sharing. You can also restrict access by manually approving requests. Access requests can now be managed programmatically using `HfApi`. This can be useful, for example, if you have advanced user screening requirements (e.g. for compliance) or if you want to condition access to a model on completing a payment flow.
Check out this guide to learn more about gated repos.
>>> from huggingface_hub import list_pending_access_requests, accept_access_request
# List pending requests
>>> requests = list_pending_access_requests("meta-llama/Llama-2-7b")
>>> requests[0]
[
AccessRequest(
username='clem',
fullname='Clem 🤗',
email='***',
timestamp=datetime.datetime(2023, 11, 23, 18, 4, 53, 828000, tzinfo=datetime.timezone.utc),
status='pending',
fields=None,
),
...
]
# Accept Clem's request
>>> accept_access_request("meta-llama/Llama-2-7b", "clem")
Safetensors is a simple, fast, and secure format for saving tensors in a file. Its advantages make it the preferred format for hosting weights on the Hub. Thanks to its specification, file metadata can be parsed on the fly. `HfApi` now provides `get_safetensors_metadata`, a helper to get safetensors metadata from a repo.
# Parse a repo with a single weights file
>>> from huggingface_hub import get_safetensors_metadata
>>> metadata = get_safetensors_metadata("bigscience/bloomz-560m")
>>> metadata
SafetensorsRepoMetadata(
metadata=None,
sharded=False,
weight_map={'h.0.input_layernorm.bias': 'model.safetensors', ...},
files_metadata={'model.safetensors': SafetensorsFileMetadata(...)}
)
>>> metadata.files_metadata["model.safetensors"].metadata
{'format': 'pt'}
You can now list collections on the Hub. You can filter them to return only collections containing a given item, or created by a given author.
>>> from huggingface_hub import list_collections
>>> collections = list_collections(item="models/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF", sort="trending", limit=5)
>>> for collection in collections:
... print(collection.slug)
teknium/quantized-models-6544690bb978e0b0f7328748
AmeerH/function-calling-65560a2565d7a6ef568527af
PostArchitekt/7bz-65479bb8c194936469697d8c
gnomealone/need-to-test-652007226c6ce4cdacf9c233
Crataco/favorite-7b-models-651944072b4fffcb41f8b568
.gitignore

`upload_folder` now respects `.gitignore` files!
Previously, it was possible to filter which files of a folder should be uploaded using the `allow_patterns` and `ignore_patterns` parameters. This can now be done automatically by simply creating a `.gitignore` file in your repo.
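To illustrate the kind of pattern matching involved (a simplified sketch, not `huggingface_hub`'s actual implementation — real `.gitignore` semantics also cover negation with `!`, anchoring, and directory-only rules), filtering files against ignore patterns can look like this:

```python
from fnmatch import fnmatch

def filter_ignored(paths, ignore_patterns):
    """Keep only the paths that match none of the ignore patterns.

    A pattern excludes both exact matches and anything nested under a
    matching directory (e.g. 'logs' also excludes 'logs/run1.txt').
    """
    return [
        p for p in paths
        if not any(fnmatch(p, pat) or fnmatch(p, f"{pat}/*") for pat in ignore_patterns)
    ]

files = ["model.safetensors", "logs/run1.txt", "config.json"]
print(filter_ignored(files, ["logs", "*.tmp"]))  # -> ['model.safetensors', 'config.json']
```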
- `.gitignore` file in commits by @Wauplin in #1868

Uploading LFS files has also gotten more robust, with a retry mechanism if a transient error happens while uploading to S3.
InferenceClient.translation

`InferenceClient.translation` now supports `src_lang`/`tgt_lang` for applicable models.
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.translation("My name is Sarah Jessica Parker but you can call me Jessica", model="facebook/mbart-large-50-many-to-many-mmt", src_lang="en_XX", tgt_lang="fr_XX")
"Mon nom est Sarah Jessica Parker mais vous pouvez m'appeler Jessica"
>>> client.translation("My name is Sarah Jessica Parker but you can call me Jessica", model="facebook/mbart-large-50-many-to-many-mmt", src_lang="en_XX", tgt_lang="es_XX")
'Mi nombre es Sarah Jessica Parker pero puedes llamarme Jessica'
EvalResult

`EvalResult` now supports `source_name` and `source_link` to provide a custom source for a reported result.
Fetch all pull request refs with `list_repo_refs`.

- `include_pull_requests` to `list_repo_refs` by @Wauplin in #1822

Filter discussions when listing them with `get_repo_discussions`.
# List opened PR from "sanchit-gandhi" on model repo "openai/whisper-large-v3"
>>> from huggingface_hub import get_repo_discussions
>>> discussions = get_repo_discussions(
... repo_id="openai/whisper-large-v3",
... author="sanchit-gandhi",
... discussion_type="pull_request",
... discussion_status="open",
... )
New field `createdAt` for `ModelInfo`, `DatasetInfo` and `SpaceInfo`.
It's now possible to create an inference endpoint running on a custom docker image (typically: a TGI container).
# Start an Inference Endpoint running Zephyr-7b-beta on TGI
>>> from huggingface_hub import create_inference_endpoint
>>> endpoint = create_inference_endpoint(
... "aws-zephyr-7b-beta-0486",
... repository="HuggingFaceH4/zephyr-7b-beta",
... framework="pytorch",
... task="text-generation",
... accelerator="gpu",
... vendor="aws",
... region="us-east-1",
... type="protected",
... instance_size="medium",
... instance_type="g5.2xlarge",
... custom_image={
... "health_route": "/health",
... "env": {
... "MAX_BATCH_PREFILL_TOKENS": "2048",
... "MAX_INPUT_LENGTH": "1024",
... "MAX_TOTAL_TOKENS": "1512",
... "MODEL_ID": "/repository"
... },
... "url": "ghcr.io/huggingface/text-generation-inference:1.1.0",
... },
... )
Upload CLI: create branch when revision does not exist
`huggingface_hub.constants.HF_HOME` has been made a public constant (see reference).

- `HF_HOME` in constants by @Wauplin in #1825

Offline mode has gotten more consistent. If `HF_HUB_OFFLINE` is set, any HTTP call to the Hub will fail. The fallback mechanism in `snapshot_download` has been refactored to align with the `hf_hub_download` workflow. If offline mode is activated (or a connection error happens) and the files are already in the cache, `snapshot_download` returns the corresponding snapshot directory.
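The fallback flow can be sketched as follows. This is an illustration of the logic, not library code: `fetch_snapshot` and `cached_dir` are hypothetical stand-ins for a Hub call and a local cache lookup.

```python
import os

def snapshot_with_offline_fallback(fetch_snapshot, cached_dir):
    """Sketch of the offline fallback: try the Hub unless offline mode is
    on, then fall back to the cached snapshot directory if one exists."""
    if os.environ.get("HF_HUB_OFFLINE") != "1":
        try:
            return fetch_snapshot()
        except ConnectionError:
            pass  # fall through to the cache
    if cached_dir is not None:
        return cached_dir  # files already cached: return the snapshot directory
    raise FileNotFoundError("Offline (or connection error) and files are not cached.")
```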
- `snapshot_download` offline mode by @Wauplin in #1913

The `DO_NOT_TRACK` environment variable is now respected to deactivate telemetry calls. It is similar to `HF_HUB_DISABLE_TELEMETRY`, but not specific to Hugging Face.
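The opt-out check can be sketched like this. This is an illustration, not `huggingface_hub`'s code; in particular, the exact set of truthy values the library accepts may differ.

```python
import os

def telemetry_enabled():
    """Telemetry is disabled if either the HF-specific variable or the
    generic DO_NOT_TRACK variable is set to a truthy value."""
    for var in ("HF_HUB_DISABLE_TELEMETRY", "DO_NOT_TRACK"):
        if os.environ.get(var, "").lower() in ("1", "on", "yes", "true"):
            return False
    return True
```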
- `git_vs_http.md` to Korean by @heuristicwave in #1862
- `gated` attribute type in docs by @ademait in #1848

The `timeout` parameter has been removed from `list_repo_files`, as part of a planned deprecation cycle.
Otherwise, breaking changes should not be expected in this release. Worth mentioning: `upload_file` and `upload_folder` now return a `CommitInfo` dataclass instead of a `str`. These two methods previously returned the URL of the uploaded file or folder on the Hub as a string. However, some information is lost compared to `CommitInfo`: commit id, commit title, description, author, etc. To keep backward compatibility, the return type `CommitInfo` inherits from both `dataclass` and `str`. The plan is to switch to a `dataclass`-only return type in release `v1.0` (not scheduled yet).
Finally, `HfFolder` is now deprecated in favor of `get_token`, `login` and `logout`. The goal is to push users and integrations towards `login`/`logout` (instead of `HfFolder.save_token`/`HfFolder.delete_token`), which contain more checks and warning messages. The plan is to get rid of `HfFolder` in release `v1.0` (not scheduled yet).
- `pydantic<2.x` on Python 3.8 by @Wauplin in #1828
- `user_agent` in HEAD calls by @Wauplin in #1854
- `WebhookPayload` schema + add `WebhooksServer.launch` by @Wauplin in #1884

The following contributors have made significant changes to the library over the last release:
- `gated` attribute type in docs (#1848)
- `git_vs_http.md` to Korean (#1862)

Published by Wauplin 11 months ago
On Python 3.8, it is fairly easy to get a corrupted install of pydantic (more specifically, pydantic 2.x cannot run if tensorflow is installed, because of an incompatible requirement on `typing_extensions`). Since `pydantic` is an optional dependency of `huggingface_hub`, we do not want to crash at `huggingface_hub` import time if the pydantic install is corrupted. However, this was the case because of how imports are made in `huggingface_hub`. This hot-fix release fixes the bug: if pydantic is not correctly installed, we only raise a warning and continue as if it was not installed at all.
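A defensive optional-dependency import along these lines (an illustrative sketch, not the actual patch in #1829) catches *any* exception rather than just `ModuleNotFoundError`, since a corrupted install can fail in other ways:

```python
import importlib
import warnings

def import_optional(name):
    """Import an optional dependency, downgrading any import-time failure
    to a warning so that importing the main package still succeeds."""
    try:
        return importlib.import_module(name)
    except Exception as err:  # broad on purpose: the install may be corrupted
        warnings.warn(f"Optional dependency {name!r} is unusable: {err}")
        return None
```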
Related PR: https://github.com/huggingface/huggingface_hub/pull/1829
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.19.3...v0.19.4
Published by Wauplin 11 months ago
Hot-fix release after https://github.com/huggingface/huggingface_hub/pull/1828.
In `0.19.0` we loosened the pydantic requirement to accept both 1.x and 2.x, since `huggingface_hub` is compatible with both. However, this started to cause issues when installing both `huggingface_hub[inference]` and `tensorflow` in a Python 3.8 environment. The problem comes from the fact that on Python 3.8, pydantic 2.x and tensorflow are not compatible: tensorflow depends on `typing_extensions<=4.5.0` while pydantic 2.x requires `typing_extensions>=4.6`. This causes an `ImportError: cannot import name 'TypeAliasType' from 'typing_extensions'` when importing `huggingface_hub`.
As a side note, tensorflow support for Python 3.8 has been dropped since 2.14.0, so this issue should affect fewer and fewer users over time.
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.19.2...v0.19.3
Published by Wauplin 11 months ago
Not a hot-fix.
In https://github.com/huggingface/huggingface_hub/pull/1786 (already released in `0.19.0`), we harmonized the environment variables across the HF ecosystem, with the goal of propagating this harmonization to other HF libraries. In that work, we forgot to expose `HF_HOME` as a constant value that can be reused, especially by `transformers` or `datasets`. This release fixes it (see https://github.com/huggingface/huggingface_hub/pull/1825).
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.19.1...v0.19.2
Published by Wauplin 11 months ago
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.19.0...v0.19.1.
Fixes a regression bug (PR https://github.com/huggingface/huggingface_hub/pull/1821) introduced in `0.19.0` that made looping over models with `list_models` fail. The problem came from the fact that we now parse the data returned by the server into Python objects. However, for some models the metadata in the model card is not valid. This is usually checked by the server, but models created before we started to enforce correct metadata can be invalid. This hot-fix fixes the issue by ignoring the corrupted data, if any.
Published by Wauplin 12 months ago
(Discuss the release in our Community Tab. Feedback welcome! 🤗)
Inference Endpoints provides a secure solution to easily deploy models hosted on the Hub in a production-ready infrastructure managed by Hugging Face. With `huggingface_hub>=0.19.0`, you can now manage your Inference Endpoints programmatically. Combined with the `InferenceClient`, this becomes the go-to solution to deploy models and run jobs in production, either sequentially or in batch!

Here is an example of how to get an inference endpoint, wake it up, wait for initialization, run jobs in batch, and pause the endpoint again, all in a few lines of code! For more details, please check out our dedicated guide.
>>> import asyncio
>>> from huggingface_hub import get_inference_endpoint
# Get endpoint + wait until initialized
>>> endpoint = get_inference_endpoint("batch-endpoint").resume().wait()
# Run inference
>>> async_client = endpoint.async_client
>>> results = await asyncio.gather(*[async_client.text_generation(...) for job in jobs])
# Pause endpoint
>>> endpoint.pause()
`huggingface_hub` is a library primarily used to transfer (huge!) files to and from the Hugging Face Hub. Our goal is to keep improving the experience on this core part of the library. In this release, we introduce a more robust download mechanism for slow or limited connections, while improving the UX for users with high bandwidth available!

Getting a connection error in the middle of a download is frustrating. That's why we've implemented a retry mechanism that automatically reconnects if a connection gets closed or a `ReadTimeout` error is raised. The download restarts exactly where it stopped, without having to re-download any bytes.
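The resume-where-it-stopped idea can be sketched as follows. This is an illustration, not `huggingface_hub`'s code: `fetch_from(start)` is a hypothetical helper that streams bytes from offset `start` (like an HTTP Range request) and may raise mid-transfer.

```python
def download_with_resume(fetch_from, total_size):
    """Retry-and-resume sketch: on a dropped connection, retry from the
    exact byte where the stream stopped instead of starting over."""
    buf = bytearray()
    while len(buf) < total_size:
        try:
            buf.extend(fetch_from(len(buf)))
        except ConnectionError:
            continue  # reconnect and resume; already-downloaded bytes are kept
    return bytes(buf)
```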
In addition to this, it is possible to configure `huggingface_hub` with higher timeouts, thanks to @Shahafgo. This should help get around some issues on slower connections.
hf_transfer

`hf_transfer` is a Rust-based library focused on improving upload and download speed on machines with high bandwidth available. Once installed (`pip install -U hf_transfer`), it can be used transparently with `huggingface_hub` simply by setting the `HF_HUB_ENABLE_HF_TRANSFER=1` environment variable. The counterpart of higher performance is the lack of some user-friendly features, such as better error handling or a retry mechanism, meaning it is recommended only for power users. In this release we still ship a new feature to improve the UX: progress bars. No need to update any existing code; a simple library upgrade is enough.
- `hf-transfer` progress bar by @cbensimon in #1792

huggingface-cli guide

`huggingface-cli` is the CLI tool shipped with `huggingface_hub`. It recently got some nice improvements, especially with commands to download and upload files directly from the terminal. All of this needed a guide, so here it is!
Environment variables are useful to configure how `huggingface_hub` should work. Historically, we had some inconsistencies in how those variables were named. This is now improved, with a backward-compatible approach. Please check the package reference for more details. The goal is to propagate those changes to the whole HF ecosystem, making configuration easier for everyone.

- `HF_ENDPOINT` environment variable by @Wauplin in #1799

Hindi documentation landed on the Hub thanks to @aneeshd27! Check out the Hindi version of the quickstart guide here.
- `[[autodoc]]` for `ModelStatus` by @jamesbraza in #1758
- `post` and `ModelStatus` by @jamesbraza in #1740

Legacy `ModelSearchArguments` and `DatasetSearchArguments` have been completely removed from `huggingface_hub`. This shouldn't cause problems, as they were already not in use (and unusable in practice).
Classes containing details about a repo (`ModelInfo`, `DatasetInfo` and `SpaceInfo`) have been refactored by @mariosasko to be more Pythonic and aligned with the other classes in `huggingface_hub`. In particular, those objects are now based on the `dataclass` module instead of a custom `ReprMixin` class. Every change is meant to be backward compatible, meaning no breaking changes are expected. However, if you detect any inconsistency, please let us know and we will fix it asap.

- `ReprMixin` with dataclasses by @mariosasko in #1788

The legacy `Repository` and `InferenceAPI` classes are now deprecated but will not be removed before the next major release (`v1.0`).
Instead of the git-based `Repository`, we advise using the HTTP-based `HfApi`. Check out this guide explaining the reasons behind it. For `InferenceAPI`, we recommend switching to `InferenceClient`, which is much more feature-complete and will keep getting improved.
- `Repository` class by @Wauplin in #1724

InferenceClient

- `InferenceClient.get_recommended_model` by @jamesbraza in #1770
- `pydantic<3` by @jamesbraza in #1727

HfFileSystem

- `NotImplementedError` on transaction commits by @Wauplin in #1736
- `HfFileSystemFile` when init fails + improve error message by @Wauplin in #1805
- `WEBHOOK_PAYLOAD_EXAMPLE` deserialization by @jamesbraza in #1732
- `/locks` folder to prevent rare concurrency issue by @beeender in #1659
- `@retry_endpoint` a default for all tests by @Wauplin in #1725
- `InferenceClient.post` by @jamesbraza in #1742

The following contributors have made significant changes to the library over the last release:
Published by Wauplin about 1 year ago
Collection API is now fully supported in huggingface_hub
!
A collection is a group of related items on the Hub (models, datasets, Spaces, papers) that are organized together on the same page. Collections are useful for creating your own portfolio, bookmarking content in categories, or presenting a curated list of items you want to share. Check out this guide to understand in more detail what collections are and this guide to learn how to build them programmatically.
- `get_collection`
- `create_collection`: title, description, namespace, private
- `update_collection_metadata`: title, description, position, private, theme
- `delete_collection`
- `add_collection_item`: item id, item type, note
- `update_collection_item`: note, position
- `delete_collection_item`
>>> from huggingface_hub import get_collection
>>> collection = get_collection("TheBloke/recent-models-64f9a55bb3115b4f513ec026")
>>> collection.title
'Recent models'
>>> len(collection.items)
37
>>> collection.items[0]
CollectionItem: {
{'_id': '6507f6d5423b46492ee1413e',
'id': 'TheBloke/TigerBot-70B-Chat-GPTQ',
'author': 'TheBloke',
'item_type': 'model',
'lastModified': '2023-09-19T12:55:21.000Z',
(...)
}}
>>> from huggingface_hub import create_collection, add_collection_item
# Create collection
>>> collection = create_collection(
... title="ICCV 2023",
... description="Portfolio of models, papers and demos I presented at ICCV 2023",
... )
# Add item with a note
>>> add_collection_item(
... collection_slug=collection.slug, # e.g. "davanstrien/climate-64f99dc2a5067f6b65531bab"
... item_id="datasets/climate_fever",
... item_type="dataset",
... note="This dataset adopts the FEVER methodology that consists of 1,535 real-world claims regarding climate-change collected on the internet."
... )
- `url` attribute to Collection class by @Wauplin in #1695

Documentation is now available in both German and Korean thanks to community contributions! This is an important milestone for Hugging Face in its mission to democratize good machine learning.
(Disclaimer: this is a power-user feature. It is not expected to be used directly by end users.)
When using `create_commit` (or `upload_file`/`upload_folder`), the internal workflow has 3 main steps: determine which files must be uploaded as LFS files, upload the LFS files, then create the commit on the Hub.
In this release, we introduce `preupload_lfs_files` to perform step 2 independently of step 3. This is useful for libraries like `datasets` that generate huge files "on-the-fly" and want to preupload them one by one before making one commit with all the files. For more details, please read this guide.
- `CommitOperationAdd`'s internal attributes by @mariosasko in #1716

Similarly to `list_user_likes` (listing all likes of a user), we now introduce `list_repo_likers` to list all likers of a repo, thanks to @issamarabi.
>>> from huggingface_hub import list_repo_likers
>>> likers = list_repo_likers("gpt2")
>>> len(likers)
204
>>> likers
[User(username=..., fullname=..., avatar_url=...), ...]
Template for the Dataset Card has been updated to be more aligned with the Model Card template.
This release also adds a few QOL improvements for users:
- `TimeoutError` => `asyncio.TimeoutError` by @matthewgrossman in #1666
- `refs/convert/parquet` and PR revision correctly in hffs by @Wauplin in #1712

A breaking change has been introduced in `CommitOperationAdd` in order to implement `preupload_lfs_files` in a way that is convenient for users. The main change is that `CommitOperationAdd` is no longer a static object but is modified internally by `preupload_lfs_files` and `create_commit`. This means that you cannot reuse a `CommitOperationAdd` object once it has been committed to the Hub. If you do so, an explicit exception will be raised. We hope this will not affect any users, but please open an issue if you encounter any problems.
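The single-use guard can be illustrated with a small sketch. `OneShotCommitOperation` is a hypothetical class for illustration, not `huggingface_hub`'s `CommitOperationAdd` itself:

```python
class OneShotCommitOperation:
    """Once the operation has been committed, any attempt to reuse it
    raises an explicit error instead of silently producing a bad commit."""

    def __init__(self, path_in_repo):
        self.path_in_repo = path_in_repo
        self._is_committed = False

    def mark_as_committed(self):
        if self._is_committed:
            raise ValueError(
                f"Operation on {self.path_in_repo!r} has already been committed; "
                "create a fresh operation instead of reusing this one."
            )
        self._is_committed = True
```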
- `fsspec` to use default `expand_path` by @mariosasko in #1681
- `0.18.0.dev0` by @Wauplin in #1658
- `HTTPError` spec by @Wauplin in #1693

The following contributors have made significant changes to the library over the last release:
Published by Wauplin about 1 year ago
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.17.2...v0.17.3
Fixing a bug when downloading files to a non-existent directory. In https://github.com/huggingface/huggingface_hub/pull/1590 we introduced a helper that raises a warning if there is not enough disk space to download a file. A bug made the helper raise an exception if the folder doesn't exist yet, as reported in https://github.com/huggingface/huggingface_hub/issues/1690. This hot-fix fixes it thanks to https://github.com/huggingface/huggingface_hub/pull/1692, which recursively checks the parent directories if the full path doesn't exist. If it keeps failing (for any `OSError`), we silently ignore the error and keep going: breaking downloads for legitimate users would be worse than missing a warning.
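The parent-walking idea can be sketched as follows (an illustration of the approach described above, not the patch itself):

```python
import os
import shutil

def free_disk_space(path):
    """Walk up to the closest existing parent directory before querying
    free space, so a not-yet-created download directory does not raise.
    Returns the free bytes at that location."""
    path = os.path.abspath(path)
    while not os.path.exists(path):
        parent = os.path.dirname(path)
        if parent == path:  # reached the filesystem root
            break
        path = parent
    return shutil.disk_usage(path).free
```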
Check out those release notes to learn more about the v0.17 release.
Published by Wauplin about 1 year ago
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.17.1...v0.17.2
Fixing a bug when uploading files to a Space repo using the CLI. The command was trying to create a repo (even if it already exists) and was failing because `space_sdk` was not found in that case. More details in https://github.com/huggingface/huggingface_hub/pull/1669.
Also updated the user-agent when using huggingface-cli upload
. See https://github.com/huggingface/huggingface_hub/pull/1664.
Check out those release notes to learn more about the v0.17 release.
Published by Wauplin about 1 year ago
Thanks to a massive community effort, all inference tasks are now supported in InferenceClient
. Newly added tasks are:
Documentation, including examples, for each of these tasks can be found in this table.
All those methods also support async mode using AsyncInferenceClient
.
Sometimes knowing which models are available or not on the Inference API service can be useful. This release introduces two new helpers:

- `list_deployed_models` aims to help users discover which models are currently deployed, listed by task.
- `get_model_status` aims to get the status of a specific model. That's useful if you already know which model you want to use.

Those two helpers are only available for the Inference API, not Inference Endpoints (or any other provider).
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
# Discover zero-shot-classification models currently deployed
>>> models = client.list_deployed_models()
>>> models["zero-shot-classification"]
['Narsil/deberta-large-mnli-zero-cls', 'facebook/bart-large-mnli', ...]
# Get status for a specific model
>>> client.get_model_status("bigcode/starcoder")
ModelStatus(loaded=True, state='Loaded', compute_type='gpu', framework='text-generation-inference')
- `text_to_image` and `image_to_image` parameters by @Wauplin in #1582

This is a long-awaited feature, finally implemented! `huggingface-cli` now offers two new commands to easily transfer files from/to the Hub. The goal is to use them as a replacement for `git clone`, `git pull` and `git push`. Despite being less feature-complete than `git` (no `.git/` folder, no notion of local commits), it offers the flexibility required when working with large repositories.
Download
# Download a single file
>>> huggingface-cli download gpt2 config.json
/home/wauplin/.cache/huggingface/hub/models--gpt2/snapshots/11c5a3d5811f50298f278a704980280950aedb10/config.json
# Download files to a local directory
>>> huggingface-cli download gpt2 config.json --local-dir=./models/gpt2
./models/gpt2/config.json
# Download a subset of a repo
>>> huggingface-cli download bigcode/the-stack --repo-type=dataset --revision=v1.2 --include="data/python/*" --exclude="*.json" --exclude="*.zip"
Fetching 206 files: 100%|████████████████████████████| 206/206 [02:31<2:31, ?it/s]
/home/wauplin/.cache/huggingface/hub/datasets--bigcode--the-stack/snapshots/9ca8fa6acdbc8ce920a0cb58adcdafc495818ae7
Upload
# Upload single file
huggingface-cli upload my-cool-model model.safetensors
# Upload entire directory
huggingface-cli upload my-cool-model ./models
# Sync local Space with Hub (upload new files except from logs/, delete removed files)
huggingface-cli upload Wauplin/space-example --repo-type=space --exclude="/logs/*" --delete="*" --commit-message="Sync local Space with Hub"
Docs
For more examples, check out the documentation:
Some new features have been added to the Space API:
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.create_repo(
... repo_id=repo_id,
... repo_type="space",
... space_sdk="gradio",
... space_hardware="t4-medium",
... space_sleep_time="3600",
... space_storage="large",
...     space_secrets=[{"key": "HF_TOKEN", "value": "hf_api_***"}, ...],
...     space_variables=[{"key": "MODEL_REPO_ID", "value": "user/repo"}, ...],
... )
A special thanks to @martinbrose, who largely contributed to these new features.
A new section has been added to the upload guide with some tips about how to upload large models and datasets to the Hub, and what the limits are when doing so.
🗺️ The documentation organization has been updated to support multiple languages. The community effort has started to translate the docs for non-English speakers. More to come in the coming weeks!
The behavior of `InferenceClient.feature_extraction` has been updated to fix a bug happening with certain models. The shape of the returned array for `transformers` models has changed from `(sequence_length, hidden_size)` to `(1, sequence_length, hidden_size)`, which is the breaking change.
`HfApi` helpers:
Two new helpers have been added to check if a file or a repo exists on the Hub:
>>> from huggingface_hub import file_exists
>>> file_exists("bigcode/starcoder", "config.json")
True
>>> file_exists("bigcode/starcoder", "not-a-file")
False
>>> from huggingface_hub import repo_exists
>>> repo_exists("bigcode/starcoder")
True
>>> repo_exists("bigcode/not-a-repo")
False
Also, `hf_hub_download` and `snapshot_download` are now part of `HfApi` (keeping the same syntax and behavior).

- `hf_hub_download` to `HfApi` by @Wauplin in #1580

Download improvements:
- `missing_ok` option in `delete_repo` by @Wauplin in #1640
- `super_squash_history` in `HfApi` by @Wauplin in #1639

The following contributors have made significant changes to the library over the last release:
Published by Wauplin over 1 year ago
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.16.3...v0.16.4
Hotfix to avoid sharing `requests.Session` between processes. More information in https://github.com/huggingface/huggingface_hub/pull/1545. Internally, we create a Session object per thread to benefit from the `HTTPSConnectionPool` (i.e. connections are not reopened between calls). Due to an implementation bug, the Session object from the main thread was shared if the main process was forked. The shared Session got corrupted in the forked process, leading to random `ConnectionError`s on rare occasions.
Check out these release notes to learn more about the v0.16 release.
Published by Wauplin over 1 year ago
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.16.2...v0.16.3
Hotfix to print the request ID if any `RequestException` happens. This is useful to help the team debug users' problems. The request ID is a generated UUID, unique for each HTTP call made to the Hub.
Check out these release notes to learn more about the v0.16 release.
Published by Wauplin over 1 year ago
Introduced in the `v0.15` release, the `InferenceClient` got a big update in this one. The client is now reaching a stable point in terms of features. The next updates will be focused on continuing to add support for new tasks.
Asyncio calls are supported thanks to `AsyncInferenceClient`. Based on `asyncio` and `aiohttp`, it allows you to make efficient concurrent calls to the inference endpoint of your choice. Every task supported by `InferenceClient` is supported in its async version. Method inputs, outputs, and logic are strictly the same, except that you must await the coroutine.
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> image = await client.text_to_image("An astronaut riding a horse on the moon.")
Support for text-generation task has been added. It is focused on fully supporting endpoints running on the text-generation-inference framework. In fact, the code is heavily inspired by TGI's Python client initially implemented by @OlivierDehaene.
Text generation has 4 modes, depending on the `details` (bool) and `stream` (bool) values. By default, a raw string is returned. If `details=True`, more information about the generated tokens is returned. If `stream=True`, generated tokens are returned one by one as soon as the server generates them. For more information, check out the documentation.
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
# stream=False, details=False
>>> client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'
# stream=True, details=True
>>> for details in client.text_generation("The huggingface_hub library is ", max_new_tokens=12, details=True, stream=True):
>>> print(details)
TextGenerationStreamResponse(token=Token(id=1425, text='100', logprob=-1.0175781, special=False), generated_text=None, details=None)
...
TextGenerationStreamResponse(token=Token(
id=25,
text='.',
logprob=-0.5703125,
special=False),
generated_text='100% open source and built to be easy to use.',
details=StreamDetails(finish_reason=<FinishReason.Length: 'length'>, generated_tokens=12, seed=None)
)
Of course, the async client also supports text-generation (see docs):
>>> from huggingface_hub import AsyncInferenceClient
>>> client = AsyncInferenceClient()
>>> await client.text_generation("The huggingface_hub library is ", max_new_tokens=12)
'100% open source and built to be easy to use.'
`InferenceClient` now supports zero-shot-image-classification (see docs). Both sync and async clients support it. It allows you to classify an image based on a list of labels passed as input.
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.zero_shot_image_classification(
... "https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg",
... labels=["dog", "cat", "horse"],
... )
[{"label": "dog", "score": 0.956}, ...]
Thanks to @dulayjm for your contribution on this task!
When using `InferenceClient`'s task methods (`text_to_image`, `text_generation`, `image_classification`, ...) you don't have to pass a model id. By default, the client selects a model recommended for the selected task and runs it on the free public Inference API. This is useful to quickly prototype and test models. In a production-ready setup, we strongly recommend setting the model id/URL manually, as the recommended model may change at any time without prior notice, potentially leading to different and unexpected results in your workflow. Recommended models are the ones used by default on https://hf.co/tasks.
It is now possible to configure headers and cookies to be sent when initializing the client: `InferenceClient(headers=..., cookies=...)`. All calls made with this client will then use these headers/cookies.
The `CommitScheduler` is a new class that can be used to push commits to the Hub regularly. It watches changes in a folder and creates a commit every 5 minutes if it detects a file change. One intended use case is to allow regular backups from a Space to a Dataset repository on the Hub. The scheduler is designed to remove the hassle of handling background commits while avoiding empty commits.
>>> from huggingface_hub import CommitScheduler
# Schedule regular uploads every 10 minutes. Remote repo and local folder are created if they don't already exist.
>>> scheduler = CommitScheduler(
... repo_id="report-translation-feedback",
... repo_type="dataset",
... folder_path=feedback_folder,
... path_in_repo="data",
... every=10,
... )
Check out this guide to understand how to use the CommitScheduler
. It comes with a Space to showcase how to use it in 4 practical examples.
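Conceptually, the scheduler is a background loop that snapshots the folder and commits only when something actually changed. Below is a minimal stand-alone sketch of that idea (illustrative only, not the actual CommitScheduler implementation; `FolderWatcher` is a hypothetical name):

```python
import threading
import time
from pathlib import Path

class FolderWatcher:
    """Toy sketch of a CommitScheduler-like loop: poll a folder and fire
    a callback only when files actually changed (avoids empty commits)."""

    def __init__(self, folder, on_change, every_seconds=300):
        self.folder = Path(folder)
        self.on_change = on_change  # e.g. a function that pushes a commit
        self.every_seconds = every_seconds
        self._last_seen = {}

    def _snapshot(self):
        # Map every file to its last-modified time.
        return {p: p.stat().st_mtime for p in self.folder.rglob("*") if p.is_file()}

    def poll_once(self):
        # Fire the callback only if the folder content changed since last poll.
        snapshot = self._snapshot()
        if snapshot != self._last_seen:
            self._last_seen = snapshot
            self.on_change(sorted(snapshot))
            return True
        return False

    def start(self):
        # Run the polling loop in a daemon thread so it never blocks the caller.
        def loop():
            while True:
                time.sleep(self.every_seconds)
                self.poll_once()
        threading.Thread(target=loop, daemon=True).start()
```

The real class additionally handles authentication, retries and partial uploads; see the guide above for the supported API.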
CommitScheduler
: upload folder every 5 minutes by @Wauplin in #1494
The Hugging Face Hub offers nice support for TensorBoard data. It automatically detects when TensorBoard traces (such as tfevents
) are pushed to the Hub and starts an instance to visualize them. This feature enables quick and transparent collaboration in your team when training models. In fact, more than 42k models are already using this feature!
With the HFSummaryWriter
you can now take full advantage of the feature for your training, simply by updating a single line of code.
>>> from huggingface_hub import HFSummaryWriter
>>> logger = HFSummaryWriter(repo_id="test_hf_logger", commit_every=15)
HFSummaryWriter
inherits from SummaryWriter
and acts as a drop-in replacement in your training scripts. The only addition is that every X minutes (e.g. 15 minutes) it pushes the logs directory to the Hub. Commits happen in the background to avoid blocking the main thread. If an upload crashes, the logs are kept locally and the training continues.
For more information on how to use it, check out this documentation page. Please note that this is still an experimental feature so feedback is very welcome.
It is now possible to copy a file in a repo on the Hub. The copy can only happen within a repo and only for LFS files. Files can be copied between different revisions. More information here.
ModelHubMixin
got updated (after a deprecation cycle):
model_id
as username/repo_name@revision
in ModelHubMixin
. Revision must be passed as a separate revision
argument if needed.
An x-request-id
header is sent by default for every request made to the Hub. This should make it easier to debug user issues.
3 PRs and 3 commits later, the default timeout did not change in the end. The problem has been solved server-side instead.
The following contributors have made significant changes to the library over the last release:
Published by Wauplin over 1 year ago
We introduce InferenceClient
, a new client to run inference on the Hub. The objective is to:
summary = client.summarization("this is a long text")
)
Check out the Inference guide to get a complete overview.
>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> image = client.text_to_image("An astronaut riding a horse on the moon.")
>>> image.save("astronaut.png")
>>> client.image_classification("https://upload.wikimedia.org/wikipedia/commons/thumb/4/43/Cute_dog.jpg/320px-Cute_dog.jpg")
[{'score': 0.9779096841812134, 'label': 'Blenheim spaniel'}, ...]
The short-term goal is to add support for more tasks (here is the current list), especially text-generation, and to handle asyncio
calls. The mid-term goal is to deprecate and replace InferenceAPI
.
InferenceClient
by @Wauplin in #1474
It is now possible to run HfApi calls in the background! The goal is to make it easier to upload files periodically without blocking the main thread during training. This was previously possible when using Repository
but is now available for HTTP-based methods like upload_file
, upload_folder
and create_commit
. If run_as_future=True
is passed:
Future
object is returned to check the job status
In addition to this parameter, a run_as_future(...) method is available to queue any other calls to the Hub. More details in this guide.
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> api.upload_file(...) # takes Xs
# URL to upload file
>>> future = api.upload_file(..., run_as_future=True) # instant
>>> future.result() # wait until complete
# URL to upload file
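The run_as_future pattern maps directly onto Python's standard concurrent.futures machinery. A minimal sketch of the idea (illustrative only; `upload` and `upload_as_future` are hypothetical stand-ins, not huggingface_hub functions):

```python
from concurrent.futures import Future, ThreadPoolExecutor

# One background worker: queued jobs run sequentially, in submission order.
_executor = ThreadPoolExecutor(max_workers=1)

def upload(name: str) -> str:
    # Stand-in for a slow HTTP call such as upload_file.
    return f"https://hf.co/{name}"

def upload_as_future(name: str) -> Future:
    # Returns immediately; the job runs on the background worker thread.
    return _executor.submit(upload, name)

future = upload_as_future("my-model")
result = future.result()  # blocks until the background job completes
```

Using a single worker preserves the order of queued calls, which matters when later commits depend on earlier ones.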
HfApi
methods in the background (run_as_future
) by @Wauplin in #1458
Some (announced) breaking changes have been introduced:
list_models
, list_datasets
and list_spaces
return an iterable instead of a list (lazy-loading of paginated results)
cardData
in list_datasets
has been removed in favor of the parameter full
.
Both changes have had a deprecation cycle for a few releases.
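Returning an iterable means results are paginated lazily: a page is fetched only when iteration reaches it. A toy sketch of the pattern (not the huggingface_hub implementation; `paginate` and the fake page source are hypothetical):

```python
def paginate(fetch_page):
    """Yield items one by one, requesting each page only when needed."""
    page = 0
    while True:
        items = fetch_page(page)
        if not items:
            return  # an empty page means we reached the end
        yield from items
        page += 1

# Fake "API" serving two pages of results, then an empty page.
PAGES = [["model-a", "model-b"], ["model-c"], []]

models = paginate(lambda i: PAGES[i])
first = next(models)  # only the first page has been "downloaded" so far
```

This is why the results can no longer be indexed like a list: wrap the iterable in `list(...)` if you need the old behavior.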
New parameters in login()
:
new_session
: skip login if new_session=False and user is already logged in
write_permission
: write permission is required (login fails otherwise)
Also added a new HfApi().get_token_permission()
method that returns "read"
or "write"
(or None
if not logged in).
New parameter to get more details when listing files: list_repo_files(..., expand=True)
.
API call is slower but lastCommit
and security
fields are returned as well.
ImportError
when importing WebhooksServer
and Gradio is not installed by @mariosasko in #1482
_deprecation.py
warning message for _deprecate_list_output()
by @x11kjm in #1485Published by Wauplin over 1 year ago
Fixed an issue reported in diffusers
impacting users downloading files from outside of the Hub. Expected download size now takes into account potential compression in the HTTP requests.
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.14.0...v0.14.1
Published by Wauplin over 1 year ago
We introduce HfFileSystem, a pythonic filesystem interface compatible with fsspec
. Built on top of HfApi
, it offers typical filesystem operations like cp
, mv
, ls
, du
, glob
, get_file
and put_file
.
>>> from huggingface_hub import HfFileSystem
>>> fs = HfFileSystem()
# List all files in a directory
>>> fs.ls("datasets/myself/my-dataset/data", detail=False)
['datasets/myself/my-dataset/data/train.csv', 'datasets/myself/my-dataset/data/test.csv']
>>> train_data = fs.read_text("datasets/myself/my-dataset/data/train.csv")
Its biggest advantage is that it provides ready-to-use integrations with popular libraries like Pandas, DuckDB and Zarr.
import pandas as pd
# Read a remote CSV file into a dataframe
df = pd.read_csv("hf://datasets/my-username/my-dataset-repo/train.csv")
# Write a dataframe to a remote CSV file
df.to_csv("hf://datasets/my-username/my-dataset-repo/test.csv")
For a more detailed overview, please have a look at this guide.
hffs
code to hfh
by @mariosasko in #1420
WebhooksServer
allows you to implement, debug and deploy webhook endpoints on the Hub without any overhead. Creating a new endpoint is as easy as decorating a Python function.
# app.py
from huggingface_hub import webhook_endpoint, WebhookPayload
@webhook_endpoint
async def trigger_training(payload: WebhookPayload) -> None:
if payload.repo.type == "dataset" and payload.event.action == "update":
# Trigger a training job if a dataset is updated
...
For more details, check out this Twitter thread or the documentation guide.
Note that this feature is experimental, which means the API/behavior might change without prior notice. A warning is displayed to the user when using it. As it is experimental, we would love to get feedback!
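At its core, such an endpoint is ordinary payload inspection. Here is a self-contained sketch of the filtering logic from the example above (the dataclasses are simplified stand-ins for WebhookPayload, not the real huggingface_hub types):

```python
from dataclasses import dataclass

@dataclass
class Repo:
    type: str   # "model", "dataset" or "space"
    name: str

@dataclass
class Event:
    action: str  # e.g. "create", "update", "delete"

@dataclass
class Payload:
    repo: Repo
    event: Event

def should_trigger_training(payload: Payload) -> bool:
    # Same condition as in the webhook example: react only to dataset updates.
    return payload.repo.type == "dataset" and payload.event.action == "update"
```

Keeping the decision in a small pure function like this makes the endpoint easy to unit-test without spinning up a server.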
hf_transfer
Integration with a Rust-based library to upload large files in chunks and concurrently. Expect a 3x speed-up if your bandwidth allows it!
hf_transfer
upload by @McPatate in #1395
Uploading large folders at once can be annoying if an error occurs while committing (e.g. a connection error). It is now possible to upload a folder in multiple (smaller) commits. If a commit fails, you can re-run the script and resume the upload. Commits are pushed to a dedicated PR. Once completed, the PR is merged into the main
branch, resulting in a single commit in your git history.
from huggingface_hub import upload_folder

upload_folder(
folder_path="local/checkpoints",
repo_id="username/my-dataset",
repo_type="dataset",
multi_commits=True, # resumable multi-upload
multi_commits_verbose=True,
)
Note that this feature is also experimental, meaning its behavior might be updated in the future.
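The resume logic can be pictured as chunking plus a persistent record of finished chunks. A toy sketch under that assumption (all names hypothetical, not the actual multi-commit implementation):

```python
def push_in_chunks(files, commit, done, chunk_size=2):
    """Push `files` as several small commits, skipping chunks already done.

    `commit` performs one commit; `done` is a set of finished chunk indices
    (persisted between runs), so re-running after a failure resumes the
    upload instead of starting from scratch.
    """
    chunks = [files[i:i + chunk_size] for i in range(0, len(files), chunk_size)]
    for index, chunk in enumerate(chunks):
        if index in done:
            continue  # this chunk was already pushed in a previous run
        commit(chunk)
        done.add(index)

commits = []
done = set()
push_in_chunks(["a", "b", "c", "d", "e"], commits.append, done)
# commits is now [["a", "b"], ["c", "d"], ["e"]]
```

In the real feature, the "record" lives in the commits already pushed to the dedicated PR, so a re-run can detect what was uploaded.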
create_commits_on_pr
by @Wauplin in #1375
Some more pre-validation is done before committing files to the Hub. The .git
folder is ignored in upload_folder
(if any) + fail early in case of invalid paths.
path_in_repo
validation when committing files by @Wauplin in #1382
.git/
folder + ignore .git/
folder in upload_folder
by @Wauplin in #1408
Internal update to reuse the same HTTP session across huggingface_hub
. The goal is to keep the connection open when doing multiple calls to the Hub which ultimately saves a lot of time. For instance, updating metadata in a README became 40% faster while listing all models from the Hub is 60% faster. This has no impact for atomic calls (e.g. 1 standalone GET call).
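The underlying pattern is simply "create the session once, hand out the same object everywhere". A sketch with a dummy session class (huggingface_hub itself reuses a requests.Session; `Session` and `get_session` below are illustrative):

```python
import functools

class Session:
    """Dummy stand-in for requests.Session: owns a keep-alive connection pool."""
    def get(self, url: str) -> str:
        return f"GET {url}"

@functools.lru_cache(maxsize=None)
def get_session() -> Session:
    # Built on first use, then returned unchanged on every later call,
    # so the underlying TCP connection can stay open between requests.
    return Session()
```

Every caller does `get_session().get(...)`, and repeated calls share one connection pool instead of repeating the TCP/TLS handshake, which is where the 40-60% speed-ups come from.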
It is now possible to programmatically set a custom sleep time on your upgraded Space. After X seconds of inactivity, your Space will go to sleep to save you some $$$.
from huggingface_hub import set_space_sleep_time
# Put your Space to sleep after 1h of inactivity
set_space_sleep_time(repo_id=repo_id, sleep_time=3600)
sleep_time
for Spaces by @Wauplin in #1438fsspec
has been added as a main dependency. It's a lightweight Python library required for HfFileSystem
.No other breaking change expected in this release.
A lot of effort has been invested in making huggingface_hub
's cache system more robust, especially when working with symlinks on Windows. Hopefully everything is fixed by now.
After a server-side configuration issue, we made huggingface_hub
more robust when fetching the Hub's ETags, to be more future-proof.
HUGGINGFACE_HEADER_X_LINKED_ETAG
const by @julien-c in #1405Published by Wauplin over 1 year ago
Security patch to fix a vulnerability in huggingface_hub
. In some cases, downloading a file with hf_hub_download
or snapshot_download
could lead to overwriting any file on a Windows machine. With this fix, only files in the cache directory (or a user-defined directory) can be updated/overwritten.
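Path-traversal fixes like this typically boil down to a containment check before writing. An illustrative sketch of such a check (not the actual patch; `is_within` is a hypothetical helper):

```python
from pathlib import Path

def is_within(base_dir: str, target: str) -> bool:
    """Return True only if `target` resolves to a location inside `base_dir`.

    Resolving first defeats `..` tricks such as `cache/../../etc/passwd`.
    """
    base = Path(base_dir).resolve()
    resolved = Path(base_dir, target).resolve()
    return resolved == base or base in resolved.parents
```

Rejecting any write whose resolved destination escapes the cache directory is what confines the vulnerability described above.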
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.13.3...v0.13.4
Published by Wauplin over 1 year ago
Patch to fix symlinks in the cache directory. Relative paths are used by default whenever possible. Absolute paths are used only on Windows when creating a symlink between two paths that are not on the same volume. This hot-fix reverts the logic to what it was in huggingface_hub<=0.12
given the issues that have been reported after the 0.13.2
release (https://github.com/huggingface/huggingface_hub/issues/1398, https://github.com/huggingface/diffusers/issues/2729 and https://github.com/huggingface/transformers/pull/22228).
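The relative-vs-absolute decision can be sketched as follows (illustrative only; `symlink_target` is a hypothetical helper, not the actual hot-fix code):

```python
import os

def symlink_target(src: str, dst_dir: str) -> str:
    """Pick the target to store in a symlink created inside `dst_dir`.

    Prefer a relative path; fall back to an absolute one only when source
    and destination are on different Windows drives, where a relative
    path cannot exist.
    """
    if os.path.splitdrive(src)[0] != os.path.splitdrive(dst_dir)[0]:
        return os.path.abspath(src)
    return os.path.relpath(src, start=dst_dir)

target = symlink_target("/cache/blobs/abc", "/cache/snapshots/main")
# On POSIX this yields '../../blobs/abc'
```

Relative targets survive moving or mounting the cache directory elsewhere, which is why they are preferred whenever possible.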
Hotfix - use relative symlinks whenever possible https://github.com/huggingface/huggingface_hub/pull/1399 @Wauplin
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.13.2...v0.13.3
Published by Wauplin over 1 year ago
Patch to fix symlinks in the cache directory. All symlinks are now absolute paths.
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.13.1...v0.13.2
Published by Wauplin over 1 year ago
Patch to fix upload_folder
when passing path_in_repo="."
. That was a breaking change compared to 0.12.1
. Also added more validation around the path_in_repo
attribute to improve UX.
path_in_repo
validation when committing files by @Wauplin in https://github.com/huggingface/huggingface_hub/pull/1382
Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.13.0...v0.13.1