huggingface_hub

The official Python client for the Hugging Face Hub.

Apache-2.0 License

Downloads
43.3M
Stars
1.6K
Committers
197
huggingface_hub - [v0.24.3] Fix InferenceClient base_url for OpenAI compatibility

Published by Wauplin 3 months ago

This release fixes a bug in the chat completion URL to follow the OpenAI standard (https://github.com/huggingface/huggingface_hub/pull/2418). InferenceClient now works with URLs ending with /, /v1 and /v1/chat/completions.

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.24.2...v0.24.3

huggingface_hub - [v0.24.2] Fix create empty commit PR should not fail

Published by Wauplin 3 months ago

See https://github.com/huggingface/huggingface_hub/pull/2413 for more details.
Creating an empty commit on a PR was failing due to a revision parameter being quoted twice. This patch release fixes it.

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.24.1...v0.24.2

huggingface_hub - [v0.24.1] Handle [DONE] signal from TGI + remove logic for "non-TGI servers"

Published by Wauplin 3 months ago

This release fixes two things: the [DONE] signal sent by TGI is now handled properly, and the legacy logic for "non-TGI servers" has been removed.

See https://github.com/huggingface/huggingface_hub/pull/2410 for more details.

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.24.0...v0.24.1

huggingface_hub - v0.24.0: Inference, serialization and optimizations

Published by Wauplin 3 months ago

⚡️ OpenAI-compatible inference client!

The InferenceClient's chat completion API is now fully compliant with the OpenAI client, making it a drop-in replacement in your scripts:

- from openai import OpenAI
+ from huggingface_hub import InferenceClient

- client = OpenAI(
+ client = InferenceClient(
    base_url=...,
    api_key=...,
)


output = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Count to 10"},
    ],
    stream=True,
    max_tokens=1024,
)

for chunk in output:
    print(chunk.choices[0].delta.content)

Why switch to InferenceClient if you already use OpenAI? Because it's better integrated with HF services, such as the Serverless Inference API and Dedicated Endpoints. Check out the more detailed answer in this HF Post.

For more details about OpenAI compatibility, check out this guide's section.

  • True OpenAI drop-in replacement by InferenceClient by @Wauplin in #2384
  • Promote chat_completion in inference guide by @Wauplin in #2366

(other) InferenceClient improvements

Some new parameters have been added to the InferenceClient, following the latest changes in our Inference API:

  • prompt_name, truncate and normalize in feature_extraction
  • model_id and response_format in chat_completion
  • adapter_id in text_generation
  • hypothesis_template and multi_labels in zero_shot_classification

Of course, all of those changes are also available in the AsyncInferenceClient async equivalent 🤗
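
For instance, the new feature_extraction parameters can be combined like this (a minimal sketch; the model name is illustrative, and prompt_name/truncate/normalize are only honored by servers that support them, e.g. TEI):

>>> from huggingface_hub import InferenceClient

>>> client = InferenceClient()
>>> embedding = client.feature_extraction(
...     "Hello world",
...     model="intfloat/multilingual-e5-large",  # illustrative model
...     prompt_name="query",  # use the server-side prompt named "query", if defined
...     truncate=True,        # truncate inputs that exceed the model's maximum length
...     normalize=True,       # L2-normalize the returned embedding
... )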

  • Support truncate and normalize in InferenceClient by @Wauplin in #2270
  • Add prompt_name to feature-extraction + update types by @Wauplin in #2363
  • Send model_id in ChatCompletion request by @Wauplin in #2302
  • improve client.zero_shot_classification() by @MoritzLaurer in #2340
  • [InferenceClient] Add support for adapter_id (text-generation) and response_format (chat-completion) by @Wauplin in #2383

Added helpers for TGI servers:

  • get_endpoint_info to get information about an endpoint (running model, framework, etc.). Only available on TGI/TEI-powered models.
  • health_check to check the health status of the server. Only available on TGI/TEI-powered models and only for Inference Endpoints or local deployments. For the serverless Inference API, it's better to use get_model_status.
  • Support /info and /health routes by @Wauplin in #2269
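
A quick sketch of these two helpers (the model and the returned fields are illustrative):

>>> from huggingface_hub import InferenceClient

>>> client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")
>>> client.get_endpoint_info()
{'model_id': 'meta-llama/Meta-Llama-3-70B-Instruct', 'framework': 'text-generation-inference', ...}

# For a local TGI/TEI deployment or an Inference Endpoint:
>>> InferenceClient("http://localhost:8080").health_check()
True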

Other fixes:

  • image_to_text output type has been fixed
  • use the wait-for-model header to avoid being rate limited while the model is not loaded
  • add proxies support
  • Fix InferenceClient.image_to_text output value by @Wauplin in #2285
  • Fix always None in text_generation output by @Wauplin in #2316
  • Add wait-for-model header when sending request to Inference API by @Wauplin in #2318
  • Add proxy support on async client by @noech373 in #2350
  • Remove jinja tips + fix typo in chat completion docstring by @Wauplin in #2368

💾 Serialization

The serialization module introduced in v0.22.x has been improved to become the preferred way to serialize a torch model to disk. It handles sharding and safe serialization (using safetensors) out of the box, with subtleties to work with shared layers. This logic was previously scattered across libraries like transformers, diffusers, accelerate and safetensors. The goal of centralizing it in huggingface_hub is to allow any external library to safely benefit from the same naming conventions, making it easier for end users to manage.

>>> from huggingface_hub import save_torch_model
>>> model = ... # A PyTorch model

# Save state dict to "path/to/folder". The model will be split into shards of 5GB each and saved as safetensors.
>>> save_torch_model(model, "path/to/folder")

# Or save the state dict manually
>>> from huggingface_hub import save_torch_state_dict
>>> save_torch_state_dict(model.state_dict(), "path/to/folder") 

More details in the serialization package reference.

  • Serialization: support saving torch state dict to disk by @Wauplin in #2314
  • Handle shared layers in save_torch_state_dict + add save_torch_model by @Wauplin in #2373

Some helpers related to serialization have been made public for reuse in external libraries:

  • get_torch_storage_id
  • get_torch_storage_size
  • Support max_shard_size as string in split_state_dict_into_shards_factory by @SunMarc in #2286
  • Make get_torch_storage_id public by @Wauplin in #2304
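
A minimal sketch of what these two helpers return; detecting tensors that share the same underlying storage is the typical use case:

>>> import torch
>>> from huggingface_hub import get_torch_storage_id, get_torch_storage_size

>>> base = torch.zeros(10, 10)
>>> view = base[:5]  # a view sharing the same underlying storage

# Tensors backed by the same storage get the same id (this is how shared layers are detected)
>>> get_torch_storage_id(base) == get_torch_storage_id(view)
True
>>> get_torch_storage_size(base)  # size of the underlying storage, in bytes
400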

📁 HfFileSystem

The HfFileSystem has been improved to optimize calls, especially when listing files from a repo. This is especially useful for large datasets like HuggingFaceFW/fineweb for faster processing and reducing risk of being rate limited.

  • [HfFileSystem] Less /paths-info calls by @lhoestq in #2271
  • Update token type definition and arg description in hf_file_system.py by @lappemic in #2278
  • [HfFileSystem] Faster fs.walk() by @lhoestq in #2346
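
For example, listing files from the dataset mentioned above now triggers far fewer HTTP calls (a sketch; any repo path works the same way):

>>> from huggingface_hub import HfFileSystem

>>> fs = HfFileSystem()
>>> files = fs.ls("datasets/HuggingFaceFW/fineweb", detail=False)  # fewer /paths-info calls
>>> for root, dirs, filenames in fs.walk("datasets/HuggingFaceFW/fineweb"):  # faster fs.walk()
...     ...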

Thanks to @lappemic, HfFileSystem methods are now properly documented. Check it out here!

  • Document more HfFilesyStem Methods by @lappemic in #2380

✨ HfApi & CLI improvements

Commit API

A new mechanism has been introduced to prevent empty commits when no changes are detected. It is enabled by default in upload_file, upload_folder, create_commit and the huggingface-cli upload command. There is no way to force an empty commit.

  • Prevent empty commits if files did not change by @Wauplin in #2389
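
A hypothetical illustration of the new behavior (repo_id is a placeholder):

>>> from huggingface_hub import upload_file

# First call creates a commit...
>>> upload_file(path_or_fileobj=b"hello", path_in_repo="hello.txt", repo_id="username/my-repo")
# ...the second one is skipped, since the file content did not change
>>> upload_file(path_or_fileobj=b"hello", path_in_repo="hello.txt", repo_id="username/my-repo")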

Resource groups

Resource Groups allow organizations administrators to group related repositories together, and manage access to those repos. It is now possible to specify a resource group ID when creating a repo:

from huggingface_hub import create_repo

create_repo("my-secret-repo", private=True, resource_group_id="66670e5163145ca562cb1988")
  • Support resource_group_id in create_repo by @Wauplin in #2324

Webhooks API

Webhooks allow you to listen for new changes on specific repos or on all repos belonging to a particular set of users/organizations (not just your repos, but any repo). With the Webhooks API, you can create, enable, disable, delete, update, and list webhooks from a script!

from huggingface_hub import create_webhook

# Example: Creating a webhook
webhook = create_webhook(
    url="https://webhook.site/your-custom-url",
    watched=[{"type": "user", "name": "your-username"}, {"type": "org", "name": "your-org-name"}],
    domains=["repo", "discussion"],
    secret="your-secret"
)

  • [wip] Implement webhooks API by @lappemic in #2209
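
The other management helpers follow the same pattern; a short sketch (the webhook id is illustrative):

from huggingface_hub import list_webhooks, disable_webhook, delete_webhook

# List all webhooks registered for the authenticated user
for webhook in list_webhooks():
    print(webhook.id, webhook.url, webhook.disabled)

# Temporarily pause a webhook, or remove it entirely
disable_webhook("654bbbc16f2ec14d77f109cc")
delete_webhook("654bbbc16f2ec14d77f109cc")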

Search API

The search API has been slightly improved. It is now possible to:

  • filter datasets by tags
  • filter which attributes should be returned in model_info/list_models (and similarly for datasets/Spaces). For example, you can ask the server to return downloadsAllTime for all models.

>>> from huggingface_hub import list_models

>>> for model in list_models(library="transformers", expand="downloadsAllTime", sort="downloads", limit=5):
...     print(model.id, model.downloads_all_time)
MIT/ast-finetuned-audioset-10-10-0.4593 1676502301
sentence-transformers/all-MiniLM-L12-v2 115588145
sentence-transformers/all-MiniLM-L6-v2 250790748
google-bert/bert-base-uncased 1476913254
openai/clip-vit-large-patch14 590557280

  • Support filtering datasets by tags by @Wauplin in #2266
  • Support expand parameter in xxx_info and list_xxxs (model/dataset/Space) by @Wauplin in #2333
  • Add InferenceStatus to ExpandModelProperty_T by @Wauplin in #2388
  • Do not mention gitalyUid in expand parameter by @Wauplin in #2395
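
As for the first point, filtering datasets by tags looks like this (the tag value is illustrative):

>>> from huggingface_hub import list_datasets

>>> for dataset in list_datasets(tags=["task_categories:text-generation"], limit=5):
...     print(dataset.id)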

CLI

It is now possible to delete files from a repo using the command line:

Delete a folder:

>>> huggingface-cli repo-files Wauplin/my-cool-model delete folder/  
Files correctly deleted from repo. Commit: https://huggingface.co/Wauplin/my-cool-mo...

Use Unix-style wildcards to delete sets of files:

>>> huggingface-cli repo-files Wauplin/my-cool-model delete *.txt folder/*.bin 
Files correctly deleted from repo. Commit: https://huggingface.co/Wauplin/my-cool-mo...

  • fix/issue 2090 : Add a repo_files command, with recursive deletion. by @OlivierKessler01 in #2280

ModelHubMixin

The ModelHubMixin, which allows quick integration of external libraries with the Hub, has been updated to fix some existing bugs and to ease its use. Learn how to integrate your library from this guide.

  • Don't override 'config' in model_kwargs by @alexander-soare in #2274
  • Support custom kwargs for model card in save_pretrained by @qubvel in #2310
  • ModelHubMixin: Fix attributes lost in inheritance by @Wauplin in #2305
  • Fix ModelHubMixin coders by @gorold in #2291
  • Hot-fix: do not share tags between ModelHubMixin siblings by @Wauplin in #2394
  • Fix: correctly encode/decode config in ModelHubMixin if custom coders by @Wauplin in #2337

🌐 📚 Documentation

Efforts from the Korean-speaking community continued to translate guides and package references to KO! Check out the result here.

  • 🌐 [i18n-KO] Translated package_reference/cards.md to Korean by @usr-bin-ksh in #2204
  • 🌐 [i18n-KO] Translated package_reference/community.md to Korean by @seoulsky-field in #2183
  • 🌐 [i18n-KO] Translated guides/collections.md to Korean by @usr-bin-ksh in #2192
  • 🌐 [i18n-KO] Translated guides/integrations.md to Korean by @cjfghk5697 in #2256
  • 🌐 [i18n-KO] Translated package_reference/environment_variables.md to Korean by @jungnerd in #2311
  • 🌐 [i18n-KO] Translated package_reference/webhooks_server.md to Korean by @fabxoe in #2344
  • 🌐 [i18n-KO] Translated guides/manage-cache.md to Korean by @cjfghk5697 in #2347

French documentation is also being updated, thanks to @JibrilEl!

  • [i18n-FR] Translated "Integrations" to french (sub PR of 1900) by @JibrilEl in #2329

A very nice illustration has been made by @severo to explain how hf:// URLs work with the HfFileSystem object. Check it out here!

  • add a diagram about hf:// URLs by @severo in #2358

💔 Breaking changes

A few breaking changes have been introduced:

  • ModelFilter and DatasetFilter are completely removed. You can now pass arguments directly to list_models and list_datasets. This removes one level of complexity for the same result.
  • organization and name have been removed from update_repo_visibility. Please use a proper repo_id instead. This makes the method consistent with all other HfApi methods (see the migration sketch below).

These breaking changes have been announced with a regular deprecation cycle.

  • Bump to 0.24 + remove deprecated code by @Wauplin in #2287
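
Migrating update_repo_visibility calls is straightforward (repo names are placeholders):

from huggingface_hub import update_repo_visibility

# Before (removed): update_repo_visibility(organization="my-org", name="my-repo", private=True)
# Now: pass a single repo_id, consistent with the rest of HfApi
update_repo_visibility(repo_id="my-org/my-repo", private=True)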

The legacy_cache_layout parameter (in hf_hub_download/snapshot_download) as well as the cached_download, filename_to_url and url_to_filename helpers are now deprecated and will be removed in huggingface_hub==0.26.x. The proper way to download files is to use the current cache system with hf_hub_download/snapshot_download, which has been in place for 2 years already.

  • Deprecate legacy_cache_layout parameter in hf_hub_download by @Wauplin in #2317

Small fixes and maintenance

⚙️ fixes

  • Add comment to _send_telemetry_in_thread explaining it should not be removed by @freddyaboulton in #2264
  • feat: endpoints rename instances doc by @co42 in #2282
  • Fix FileNotFoundError in gitignore creation by @Wauplin in #2288
  • Close aiohttp client on error by @Wauplin in #2294
  • fix create_inference_endpoint by @nbroad1881 in #2292
  • Support custom_image in update_inference_endpoint by @Wauplin in #2306
  • Fix Repository if whoami call doesn't return an email by @Wauplin in #2320
  • print actual error message when failing to load a submodule by @kallewoof in #2342
  • Do not raise on .resume() if Inference Endpoint is already running by @Wauplin in #2335
  • Fix permission issue when downloading on root dir by @Wauplin in #2367
  • docs: update port for local doc preview in docs/README.md by @lappemic in #2382
  • Fix token=False not respected in file download by @Wauplin in #2386
  • Use extended path on Windows when downloading to local dir by @mlinke-ai in #2378
  • Add a default timeout for filelock by @edevil in #2391
  • Fix list_accepted_access_requests if grant user manually by @Wauplin in #2392
  • fix: Handle single return value. by @28Smiles in #2396

⚙️ internal

  • Fix test: rename to open-llm-leaderboard + some cleaning by @Wauplin in #2295
  • Fix windows tests (git security update) by @Wauplin in #2296
  • Print correct webhook url when running in Spaces by @Wauplin in #2298
  • changed from --local_dir to --local-dir by @rao-pathangi in #2303
  • Update download badges in README by @Wauplin in #2309
  • Fix progress bar not always closed in file_download.py by @Wauplin in #2308
  • Make raises sections consistent in docstrings by @Wauplin in #2313
  • feat(ci): add trufflehog secrets detection by @McPatate in #2321
  • fix(ci): remove unnecessary permissions by @McPatate in #2322
  • Update _errors.py by @qgallouedec in #2354
  • Update ruff in CI by @Wauplin in #2365
  • Removing shebangs from files which are not supposed to be executable by @jpodivin in #2345
  • safetensors[torch] by @qgallouedec in #2371

Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @usr-bin-ksh
    • 🌐 [i18n-KO] Translated package_reference/cards.md to Korean (#2204)
    • 🌐 [i18n-KO] Translated guides/collections.md to Korean (#2192)
  • @seoulsky-field
    • 🌐 [i18n-KO] Translated package_reference/community.md to Korean (#2183)
  • @lappemic
    • Update token type definition and arg description in hf_file_system.py (#2278)
    • [wip] Implement webhooks API (#2209)
    • docs: update port for local doc preview in docs/README.md (#2382)
    • Document more HfFilesyStem Methods (#2380)
  • @rao-pathangi
    • changed from --local_dir to --local-dir (#2303)
  • @OlivierKessler01
    • fix/issue 2090 : Add a repo_files command, with recursive deletion. (#2280)
  • @qubvel
    • Support custom kwargs for model card in save_pretrained (#2310)
  • @gorold
    • Fix ModelHubMixin coders (#2291)
  • @cjfghk5697
    • 🌐 [i18n-KO] Translated guides/integrations.md to Korean (#2256)
    • 🌐 [i18n-KO] Translated guides/manage-cache.md to Korean (#2347)
  • @kallewoof
    • print actual error message when failing to load a submodule (#2342)
  • @jungnerd
    • 🌐 [i18n-KO] Translated package_reference/environment_variables.md to Korean (#2311)
  • @fabxoe
    • 🌐 [i18n-KO] Translated package_reference/webhooks_server.md to Korean (#2344)
  • @JibrilEl
    • [i18n-FR] Translated "Integrations" to french (sub PR of 1900) (#2329)
  • @noech373
    • Add proxy support on async client (#2350)
  • @jpodivin
    • Removing shebangs from files which are not supposed to be executable (#2345)
  • @mlinke-ai
    • Use extended path on Windows when downloading to local dir (#2378)
  • @edevil
    • Add a default timeout for filelock (#2391)

huggingface_hub - [v0.23.4] Patch: fix encoders issues in ModelHubMixin

Published by Wauplin 4 months ago

This patch release fixes encoder issues in ModelHubMixin.

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.23.3...v0.23.4

huggingface_hub - [v0.23.3] Patch: fix details not returned in `InferenceClient.text_generation`

Published by Wauplin 5 months ago

Release 0.23.0 introduced a breaking change in InferenceClient.text_generation: when details=True is passed, the details attribute in the output was always None. This patch release fixes it. See https://github.com/huggingface/huggingface_hub/pull/2316 for more details.

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.23.2...v0.23.3

huggingface_hub - [v0.23.2] Patch: support string values for max_shard_size

split_state_dict_into_shards_factory now accepts string values for max_shard_size (e.g. "5MB"), in addition to integer values. Related PR: https://github.com/huggingface/huggingface_hub/pull/2286.

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.23.1...v0.23.2

huggingface_hub - v0.23.0: LLMs with tools, seamless downloads, and much more!

Published by Wauplin 6 months ago

📁 Seamless download to local dir

The 0.23.0 release comes with a big revamp of the download process, especially when downloading to a local directory. Previously, the process still involved the cache directory and symlinks, which led to misconceptions and a suboptimal user experience. The new workflow involves a .cache/huggingface/ folder, similar to the .git/ one, that keeps track of download progress. The main features are:

  • no symlinks
  • no local copy
  • don't re-download when not necessary
  • same behavior on both Unix and Windows
  • unrelated to cache-system

Example to download q4 GGUF file for microsoft/Phi-3-mini-4k-instruct-gguf:

# Download the q4 GGUF file from microsoft/Phi-3-mini-4k-instruct-gguf
huggingface-cli download microsoft/Phi-3-mini-4k-instruct-gguf Phi-3-mini-4k-instruct-q4.gguf --local-dir=data/phi3

With this addition, interrupted downloads are now resumable! This applies to downloads both in local and cache directories, which should greatly improve UX for users with slow or unreliable connections. Accordingly, the resume_download parameter is now deprecated (it is no longer relevant).
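
For reference, a Python equivalent of the CLI command above:

>>> from huggingface_hub import hf_hub_download

>>> hf_hub_download(
...     repo_id="microsoft/Phi-3-mini-4k-instruct-gguf",
...     filename="Phi-3-mini-4k-instruct-q4.gguf",
...     local_dir="data/phi3",
... )
'data/phi3/Phi-3-mini-4k-instruct-q4.gguf'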

  • Revamp download to local dir process by @Wauplin in #2223
  • Rename .huggingface/ folder to .cache/huggingface/ by @Wauplin in #2262

💡 Grammar and Tools in InferenceClient

It is now possible to provide a list of tools when chatting with a model using the InferenceClient! This major improvement has been made possible thanks to TGI, which handles them natively.

>>> from huggingface_hub import InferenceClient

# Ask for weather in the next days using tools
>>> client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")
>>> messages = [
...     {"role": "system", "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."},
...     {"role": "user", "content": "What's the weather like the next 3 days in San Francisco, CA?"},
... ]
>>> tools = [
...     {
...         "type": "function",
...         "function": {
...             "name": "get_current_weather",
...             "description": "Get the current weather",
...             "parameters": {
...                 "type": "object",
...                 "properties": {
...                     "location": {
...                         "type": "string",
...                         "description": "The city and state, e.g. San Francisco, CA",
...                     },
...                     "format": {
...                         "type": "string",
...                         "enum": ["celsius", "fahrenheit"],
...                         "description": "The temperature unit to use. Infer this from the users location.",
...                     },
...                 },
...                 "required": ["location", "format"],
...             },
...         },
...     },
...     ...
... ]
>>> response = client.chat_completion(
...     model="meta-llama/Meta-Llama-3-70B-Instruct",
...     messages=messages,
...     tools=tools,
...     tool_choice="auto",
...     max_tokens=500,
... )
>>> response.choices[0].message.tool_calls[0].function
ChatCompletionOutputFunctionDefinition(
    arguments={
        'location': 'San Francisco, CA',
        'format': 'fahrenheit',
        'num_days': 3
    },
    name='get_n_day_weather_forecast',
    description=None
)

It is also possible to provide grammar rules to the text_generation task. This ensures that the output follows a precise JSON Schema specification or matches a regular expression. For more details about it, check out the Guidance guide from Text-Generation-Inference docs.
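
A minimal sketch, assuming a TGI-backed model and TGI's Guidance payload format (the model, regex and output shown are illustrative):

>>> from huggingface_hub import InferenceClient

>>> client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")
>>> client.text_generation(
...     "What is Google's public DNS IP address? Answer with the IP only.",
...     max_new_tokens=16,
...     grammar={"type": "regex", "value": r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}"},
... )
'8.8.8.8'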

  • Add support for Grammar/Tools + TGI-based specs in InferenceClient by @Wauplin in #2237

⚙️ Other

The documentation now mentions the chat-completion task more, instead of conversational.

  • Add chat_completion and remove conversational from Inference guide by @Wauplin in #2215

chat-completion now relies on server-side rendering in all cases, including when the model is transformers-backed. Previously, this was only the case for TGI-backed models; templates were rendered client-side otherwise.

  • Render chat-template server-side for transformers-backed models by @Wauplin in #2258

Improved logic to determine whether a model is served via TGI or transformers.

  • Raise error in chat completion when unprocessable by @Wauplin in #2257

  • Document more chat_completion by @Wauplin in #2260

🌐 📚 Korean community is on fire!

The PseudoLab team is a non-profit dedicated to making AI more accessible to the Korean-speaking community. In the past few weeks, their team of contributors managed to translate (almost) the entire huggingface_hub documentation. Huge shout-out for the coordination on this task! Documentation can be accessed here.

  • 🌐 [i18n-KO] Translated guides/webhooks_server.md to Korean by @nuatmochoi in #2145
  • 🌐 [i18n-KO] Translated reference/login.md to Korean by @SeungAhSon in #2151
  • 🌐 [i18n-KO] Translated package_reference/tensorboard.md to Korean by @fabxoe in #2173
  • 🌐 [i18n-KO] Translated package_reference/inference_client.md to Korean by @cjfghk5697 in #2178
  • 🌐 [i18n-KO] Translated reference/inference_endpoints.md to Korean by @harheem in #2180
  • 🌐 [i18n-KO] Translated package_reference/file_download.md to Korean by @seoyoung-3060 in #2184
  • 🌐 [i18n-KO] Translated package_reference/cache.md to Korean by @nuatmochoi in #2191
  • 🌐 [i18n-KO] Translated package_reference/collections.md to Korean by @boyunJang in #2214
  • 🌐 [i18n-KO] Translated package_reference/inference_types.md to Korean by @fabxoe in #2171
  • 🌐 [i18n-KO] Translated guides/upload.md to Korean by @junejae in #2139
  • 🌐 [i18n-KO] Translated reference/repository.md to Korean by @junejae in #2189
  • 🌐 [i18n-KO] Translated package_reference/space_runtime.md to Korean by @boyunJang in #2213
  • 🌐 [i18n-KO] Translated guides/repository.md to Korean by @cjfghk5697 in #2124
  • 🌐 [i18n-KO] Translated guides/model_cards.md to Korean" by @SeungAhSon in #2128
  • 🌐 [i18n-KO] Translated guides/community.md to Korean by @seoulsky-field in #2126
  • 🌐 [i18n-KO] Translated guides/cli.md to Korean by @harheem in #2131
  • 🌐 [i18n-KO] Translated guides/search.md to Korean by @seoyoung-3060 in #2134
  • 🌐 [i18n-KO] Translated guides/inference.md to Korean by @boyunJang in #2130
  • 🌐 [i18n-KO] Translated guides/manage-spaces.md to Korean by @boyunJang in #2220
  • 🌐 [i18n-KO] Translating guides/hf_file_system.md to Korean by @heuristicwave in #2146
  • 🌐 [i18n-KO] Translated package_reference/hf_api.md to Korean by @fabxoe in #2165
  • 🌐 [i18n-KO] Translated package_reference/mixins.md to Korean by @fabxoe in #2166
  • 🌐 [i18n-KO] Translated guides/inference_endpoints.md to Korean by @usr-bin-ksh in #2164
  • 🌐 [i18n-KO] Translated package_reference/utilities.md to Korean by @cjfghk5697 in #2196
  • fix ko docs by @Wauplin (direct commit on main)
  • 🌐 [i18n-KO] Translated package_reference/serialization.md to Korean by @seoyoung-3060 in #2233
  • 🌐 [i18n-KO] Translated package_reference/hf_file_system.md to Korean by @SeungAhSon in #2174

🛠️ Misc improvements

User API

@bilgehanertan added support for 2 new routes:

  • get_user_overview to retrieve high-level information about a user: username, avatar, number of models/datasets/Spaces, number of likes and upvotes, number of interactions in discussions, etc.
  • User API endpoints by @bilgehanertan in #2147
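
A short sketch (the username is a placeholder and the attribute names are assumptions based on the description above):

>>> from huggingface_hub import get_user_overview

>>> overview = get_user_overview("julien-c")
>>> overview.num_models, overview.num_likes
(100, 250)  # illustrative values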

CLI tag

@bilgehanertan added a new command to the CLI to handle tags. It is now possible to:

  • tag a repo

>>> huggingface-cli tag Wauplin/my-cool-model v1.0
You are about to create tag v1.0 on model Wauplin/my-cool-model
Tag v1.0 created on Wauplin/my-cool-model

  • retrieve the list of tags for a repo

>>> huggingface-cli tag Wauplin/gradio-space-ci -l --repo-type space
Tags for space Wauplin/gradio-space-ci:
0.2.2
0.2.1
0.2.0
0.1.2
0.0.2
0.0.1

  • delete a tag on a repo

>>> huggingface-cli tag -d Wauplin/my-cool-model v1.0
You are about to delete tag v1.0 on model Wauplin/my-cool-model
Proceed? [Y/n] y
Tag v1.0 deleted on Wauplin/my-cool-model

For more details, check out the CLI guide.

  • CLI Tag Functionality by @bilgehanertan in #2172

🧩 ModelHubMixin

The ModelHubMixin got a set of nice improvements to generate model cards and to handle custom data types in the config.json file. More info in the integration guide.

  • ModelHubMixin: more metadata + arbitrary config types + proper guide by @Wauplin in #2230
  • Fix ModelHubMixin when class is a dataclass by @Wauplin in #2159
  • Do not document private attributes of ModelHubMixin by @Wauplin in #2216
  • Add support for pipeline_tag in ModelHubMixin by @Wauplin in #2228

⚙️ Other

In a shared environment, it is now possible to set a custom token path via the HF_TOKEN_PATH environment variable, so that each user of the cluster has their own access token.

  • Support HF_TOKEN_PATH as environment variable by @Wauplin in #2185
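
For example, on a shared cluster each user can point to their own token file (path illustrative):

export HF_TOKEN_PATH="$HOME/.my-project/hf_token"
huggingface-cli login  # the token is now stored at (and read from) that path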

Thanks to @Y4suyuki and @lappemic, most custom errors defined in huggingface_hub are now aggregated in the same module. This makes it very easy to import them with from huggingface_hub.errors import ....

  • Define errors in errors.py by @Y4suyuki in #2170
  • Define errors in errors file by @lappemic in #2202

Fixed HFSummaryWriter (a class to seamlessly log tensorboard events to the Hub) to work with either the tensorboardX or the torch.utils implementation, depending on the user's setup.

  • Import SummaryWriter from either tensorboardX or torch.utils by @Wauplin in #2205

Listing files with HfFileSystem has been drastically sped up, thanks to @awgr. The values returned from the cache are no longer deep-copied, which was unfortunately the most time-consuming part of the process. If you want to modify values returned by HfFileSystem, you now need to copy them beforehand. This is expected to be a very limited drawback.

  • fix: performance of _ls_tree by @awgr in #2103

Progress bars in huggingface_hub got some flexibility!
It is now possible to provide a name to a tqdm bar (similar to logging.getLogger) and to enable/disable only some progress bars. More details in this guide.

>>> from huggingface_hub.utils import tqdm, disable_progress_bars
>>> disable_progress_bars("peft.foo")

# No progress bars for `peft.foo.bar`
>>> for _ in tqdm(range(5), name="peft.foo.bar"):
...     pass

# But progress bars are still shown for `peft`
>>> for _ in tqdm(range(5), name="peft"):
...     pass
100%|█████████████████| 5/5 [00:00<00:00, 117817.53it/s]

  • Implement hierarchical progress bar control in huggingface_hub by @lappemic in #2217

💔 Breaking changes

--local-dir-use-symlink and --resume-download

As part of the download process revamp, some breaking changes have been introduced. However, we believe the benefits outweigh the cost of the change. Breaking changes include:

  • a .cache/huggingface/ folder is now present at the root of the local dir. It only contains file locks, metadata and partially downloaded files. If you need to, you can safely delete this folder without corrupting the data inside the root folder. However, you should expect a longer recovery time if you re-run your download command.
  • --local-dir-use-symlink is not used anymore and will be ignored. It is no longer possible to symlink your local dir to the cache directory. Thanks to the .cache/huggingface/ folder, it shouldn't be needed anyway.
  • --resume-download has been deprecated and will be ignored. Resuming failed downloads is now activated by default all the time. If you need to force a new download, use --force-download.

Inference Types

As part of #2237 (Grammar and Tools support), we've updated the return values of InferenceClient.chat_completion and InferenceClient.text_generation to exactly match TGI's output. The attributes of the returned objects did not change, but the class definitions themselves did. Expect errors if you've previously used from huggingface_hub import TextGenerationOutput in your code. This is however not the common usage, since those objects are instantiated by huggingface_hub directly.

Expected breaking changes

Some other breaking changes were expected (and announced since 0.19.x):

  • list_files_info is definitively removed in favor of get_paths_info and list_repo_tree
  • WebhookServer.run is definitively removed in favor of WebhookServer.launch
  • api_endpoint in ModelHubMixin push_to_hub's method is definitively removed in favor of the HF_ENDPOINT environment variable

Check #2156 for more details.

Small fixes and maintenance

⚙️ fixes

  • Fix HF_ENDPOINT not handled correctly by @Wauplin in #2155
  • Fix proxy if dynamic endpoint by @Wauplin (direct commit on main)
  • Update the note message when logging in to make it easier to understand and clearer by @lh0x00 in #2163
  • Fix URL when uploading to proxy by @Wauplin in #2167
  • Fix SafeTensorsInfo initialization by @Wauplin in #2190
  • Doc cli download timeout by @zioalex in #2198
  • Fix Typos in CONTRIBUTION.md and Formatting in README.md by @lappemic in #2201
  • change default model card by @Wauplin (direct commit on main)
  • Add returns documentation for save_pretrained by @alexander-soare in #2226
  • Update cli.md by @QuinnPiers in #2242
  • add warning tip that list_deployed_models only searches over cache by @MoritzLaurer in #2241
  • Respect default timeouts in hf_file_system by @Wauplin in #2253
  • Update harmonized token param desc and type def by @lappemic in #2252
  • Better document download attribute by @Wauplin in #2250
  • Correctly check inference endpoint is ready by @Wauplin in #2229
  • Add support for updatedRefs in WebhookPayload by @Wauplin in #2169

⚙️ internal

  • prepare for 0.23 by @Wauplin in #2156
  • lint by @Wauplin (direct commit on main)
  • quick fix by @Wauplin (direct commit on main)
  • Fix CI (inference tests, dataset viewer user, mypy) by @Wauplin in #2208
  • link by @Wauplin (direct commit on main)
  • Fix circular imports in eager mode? by @Wauplin in #2211
  • Drop generic from InferenceAPI framework list by @Wauplin in #2240
  • Remove test sort by acsending likes by @Wauplin in #2243
  • Delete legacy tests in TestHfHubDownloadRelativePaths + implicit delete folder is ok by @Wauplin in #2259
  • small doc clarification by @julien-c in #2261

Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @lappemic
    • Fix Typos in CONTRIBUTION.md and Formatting in README.md (#2201)
    • Define errors in errors file (#2202)
    • [wip] Implement hierarchical progress bar control in huggingface_hub (#2217)
    • Update harmonized token param desc and type def (#2252)
  • @bilgehanertan
    • User API endpoints (#2147)
    • CLI Tag Functionality (#2172)
  • @cjfghk5697
    • 🌐 [i18n-KO] Translated guides/repository.md to Korean (#2124)
    • 🌐 [i18n-KO] Translated package_reference/inference_client.md to Korean (#2178)
    • 🌐 [i18n-KO] Translated package_reference/utilities.md to Korean (#2196)
  • @SeungAhSon
    • 🌐 [i18n-KO] Translated guides/model_cards.md to Korean" (#2128)
    • 🌐 [i18n-KO] Translated reference/login.md to Korean (#2151)
    • 🌐 [i18n-KO] Translated package_reference/hf_file_system.md to Korean (#2174)
  • @seoulsky-field
    • 🌐 [i18n-KO] Translated guides/community.md to Korean (#2126)
  • @Y4suyuki
    • Define errors in errors.py (#2170)
  • @harheem
    • 🌐 [i18n-KO] Translated guides/cli.md to Korean (#2131)
    • 🌐 [i18n-KO] Translated reference/inference_endpoints.md to Korean (#2180)
  • @seoyoung-3060
    • 🌐 [i18n-KO] Translated guides/search.md to Korean (#2134)
    • 🌐 [i18n-KO] Translated package_reference/file_download.md to Korean (#2184)
    • 🌐 [i18n-KO] Translated package_reference/serialization.md to Korean (#2233)
  • @boyunJang
    • 🌐 [i18n-KO] Translated guides/inference.md to Korean (#2130)
    • 🌐 [i18n-KO] Translated package_reference/collections.md to Korean (#2214)
    • 🌐 [i18n-KO] Translated package_reference/space_runtime.md to Korean (#2213)
    • 🌐 [i18n-KO] Translated guides/manage-spaces.md to Korean (#2220)
  • @nuatmochoi
    • 🌐 [i18n-KO] Translated guides/webhooks_server.md to Korean (#2145)
    • 🌐 [i18n-KO] Translated package_reference/cache.md to Korean (#2191)
  • @fabxoe
    • 🌐 [i18n-KO] Translated package_reference/tensorboard.md to Korean (#2173)
    • 🌐 [i18n-KO] Translated package_reference/inference_types.md to Korean (#2171)
    • 🌐 [i18n-KO] Translated package_reference/hf_api.md to Korean (#2165)
    • 🌐 [i18n-KO] Translated package_reference/mixins.md to Korean (#2166)
  • @junejae
    • 🌐 [i18n-KO] Translated guides/upload.md to Korean (#2139)
    • 🌐 [i18n-KO] Translated reference/repository.md to Korean (#2189)
  • @heuristicwave
    • 🌐 [i18n-KO] Translating guides/hf_file_system.md to Korean (#2146)
  • @usr-bin-ksh
    • 🌐 [i18n-KO] Translated guides/inference_endpoints.md to Korean (#2164)

huggingface_hub - [v0.22.1] Hot-fix: correctly handle dataclasses in ModelHubMixin

Published by Wauplin 7 months ago

Fixed a bug breaking the SetFit integration.

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.22.0...v0.22.1

huggingface_hub - v0.22.0: Chat completion, inference types and hub mixins!

Published by Wauplin 7 months ago

Discuss the release in our Community Tab. Feedback is welcome!! 🤗

✨ InferenceClient

Support for inference tools continues to improve in huggingface_hub. On the menu in this release? A new chat_completion API and fully typed inputs/outputs!

Chat-completion API!

A long-awaited API has just landed in huggingface_hub! InferenceClient.chat_completion follows most of OpenAI's API, making it much easier to integrate with existing tools.

Technically speaking, it uses the same backend as the text-generation task but requires a preprocessing step to format the list of messages into a single text prompt. The chat template is rendered server-side when models are powered by TGI, which is the case for most LLMs: Llama, Zephyr, Mistral, Gemma, etc. Otherwise, the templating happens client-side, which requires the minijinja package to be installed. We are actively working on bridging this gap, aiming to render all templates server-side in the future.

>>> from huggingface_hub import InferenceClient
>>> messages = [{"role": "user", "content": "What is the capital of France?"}]
>>> client = InferenceClient("HuggingFaceH4/zephyr-7b-beta")

# Standard (non-streamed) completion
>>> client.chat_completion(messages, max_tokens=100)
ChatCompletionOutput(
    choices=[
        ChatCompletionOutputChoice(
            finish_reason='eos_token',
            index=0,
            message=ChatCompletionOutputChoiceMessage(
                content='The capital of France is Paris. The official name of the city is "Ville de Paris" (City of Paris) and the name of the country\'s governing body, which is located in Paris, is "La République française" (The French Republic). \nI hope that helps! Let me know if you need any further information.'
            )
        )
    ],
    created=1710498360
)

# Stream new tokens one by one
>>> for token in client.chat_completion(messages, max_tokens=10, stream=True):
...     print(token)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content='The', role='assistant'), index=0, finish_reason=None)], created=1710498504)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content=' capital', role='assistant'), index=0, finish_reason=None)], created=1710498504)
(...)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content=' may', role='assistant'), index=0, finish_reason=None)], created=1710498504)
ChatCompletionStreamOutput(choices=[ChatCompletionStreamOutputChoice(delta=ChatCompletionStreamOutputDelta(content=None, role=None), index=0, finish_reason='length')], created=1710498504)

Inference types

We are currently working towards more consistency in task definitions across the Hugging Face ecosystem. This is no easy job, but a major milestone has recently been achieved! All inputs and outputs of the main ML tasks are now fully specified as JSON Schema objects. This is the first brick needed to have consistent expectations when running inference across our stack: transformers (Python), transformers.js (Typescript), Inference API (Python), Inference Endpoints (Python), Text Generation Inference (Rust), Text Embeddings Inference (Rust), InferenceClient (Python), Inference.js (Typescript), etc.

Integrating those definitions will require more work but huggingface_hub is one of the first tools to integrate them. As a start, all InferenceClient return values are now typed dataclasses. Furthermore, typed dataclasses have been generated for all tasks' inputs and outputs. This means you can now integrate them in your own library to ensure consistency with the Hugging Face ecosystem. Specifications are open-source (see here) meaning anyone can access and contribute to them. Python's generated classes are documented here.

Here is a short example showcasing the new output types:

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> client.object_detection("people.jpg")
[
    ObjectDetectionOutputElement(
        score=0.9486683011054993,
        label='person',
        box=ObjectDetectionBoundingBox(xmin=59, ymin=39, xmax=420, ymax=510)
    ),
...
]

Note that those dataclasses are backward-compatible with the dict-based interface that was previously in use. In the example above, both ObjectDetectionBoundingBox(...).xmin and ObjectDetectionBoundingBox(...)["xmin"] are correct, even though the former should be the preferred solution from now on.

  • Generate inference types + start using output types by @Wauplin in #2036
  • Add = None at optional parameters by @LysandreJik in #2095
  • Fix inference types shared between tasks by @Wauplin in #2125

🧩 ModelHubMixin

ModelHubMixin is an object that can be used as a parent class for the objects in your library in order to provide built-in serialization methods to upload and download pretrained models from the Hub. This mixin is adapted into a PyTorchModelHubMixin that can serialize and deserialize any PyTorch model. The 0.22 release brings its share of improvements to these classes:

  1. Better support of init values. If you instantiate a model with some custom arguments, the values will be automatically stored in a config.json file and restored when reloading the model from pretrained weights. This should unlock integrations with external libraries in a much smoother way.
  2. Library authors integrating the hub mixin can now define custom metadata for their library: library name, tags, docs URL and repo URL. These are to be defined only once when integrating the library. Any model pushed to the Hub using the library will then be easily discoverable thanks to those tags.
  3. A base modelcard is generated for each saved model. This modelcard includes default tags (e.g. model_hub_mixin) and custom tags from the library (see 2.). You can extend/modify this modelcard by overwriting the generate_model_card method.

>>> import torch
>>> import torch.nn as nn
>>> from huggingface_hub import PyTorchModelHubMixin


# Define your Pytorch model exactly the same way you are used to
>>> class MyModel(
...         nn.Module,
...         PyTorchModelHubMixin, # multiple inheritance
...         library_name="keras-nlp",
...         tags=["keras"],
...         repo_url="https://github.com/keras-team/keras-nlp",
...         docs_url="https://keras.io/keras_nlp/",
...         # ^ optional metadata to generate model card
...     ):
...     def __init__(self, hidden_size: int = 512, vocab_size: int = 30000, output_size: int = 4):
...         super().__init__()
...         self.param = nn.Parameter(torch.rand(hidden_size, vocab_size))
...         self.linear = nn.Linear(output_size, vocab_size)

...     def forward(self, x):
...         return self.linear(x + self.param)

# 1. Create model
>>> model = MyModel(hidden_size=128)

# Config is automatically created based on input + default values
>>> model._hub_mixin_config
{"hidden_size": 128, "vocab_size": 30000, "output_size": 4}

# 2. (optional) Save model to local directory
>>> model.save_pretrained("path/to/my-awesome-model")

# 3. Push model weights to the Hub
>>> model.push_to_hub("my-awesome-model")

# 4. Initialize model from the Hub => config has been preserved
>>> model = MyModel.from_pretrained("username/my-awesome-model")
>>> model._hub_mixin_config
{"hidden_size": 128, "vocab_size": 30000, "output_size": 4}

# Model card has been correctly populated
>>> from huggingface_hub import ModelCard
>>> card = ModelCard.load("username/my-awesome-model")
>>> card.data.tags
["keras", "pytorch_model_hub_mixin", "model_hub_mixin"]
>>> card.data.library_name
"keras-nlp"

For more details on how to integrate these classes, check out the integration guide.

  • Fix ModelHubMixin: pass config when __init__ accepts **kwargs by @Wauplin in #2058
  • [PyTorchModelHubMixin] Fix saving model with shared tensors by @NielsRogge in #2086
  • Correctly inject config in PytorchModelHubMixin by @Wauplin in #2079
  • Fix passing kwargs in PytorchHubMixin by @Wauplin in #2093
  • Generate modelcard in ModelHubMixin by @Wauplin in #2080
  • Fix ModelHubMixin: save config only if doesn't exist by @Wauplin in #2105
  • Fix ModelHubMixin - kwargs should be passed correctly when reloading by @Wauplin in #2099
  • Fix ModelHubMixin when kwargs and config are both passed by @Wauplin in #2138
  • ModelHubMixin overwrite config if preexistant by @Wauplin in #2142

🛠️ Misc improvements

HfFileSystem download speed was limited by some internal logic in fsspec. We've now updated the get_file and read implementations to improve their download speed to a level similar to hf_hub_download.

  • Fast download in hf file system by @Wauplin in #2143
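
For instance, fetching a single file through the filesystem interface (paths are illustrative) now reaches speeds similar to hf_hub_download:

>>> from huggingface_hub import HfFileSystem

>>> fs = HfFileSystem()
>>> fs.get_file("gpt2/model.safetensors", "model.safetensors")  # fast, chunked download
>>> config = fs.open("gpt2/config.json").read()                 # faster read() as well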

We are aiming to move all errors raised by huggingface_hub into a single module, huggingface_hub.errors, to ease the developer experience. This work has been started as a community contribution from @Y4suyuki.

  • Start defining custom errors in one place by @Y4suyuki in #2122

The HfApi class now accepts a headers parameter that is then passed to every HTTP call made to the Hub.

  • Allow passing custom headers to HfApi by @Wauplin in #2098
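
A minimal sketch (the header name and value are placeholders):

>>> from huggingface_hub import HfApi

>>> api = HfApi(headers={"X-My-Header": "my-value"})
>>> api.model_info("gpt2")  # the custom header is sent with this call (and every other one)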

📚 More documentation in Korean!

  • [i18n-KO] Translated package_reference/overview.md to Korean by @jungnerd in #2113

💔 Breaking changes

  • The new types returned by InferenceClient methods should be backward compatible, especially for accessing values either as attributes (.my_field) or as items (i.e. ["my_field"]). However, dataclasses and dicts do not always behave exactly the same, so you might notice some breaking changes. Those breaking changes should be very limited.

  • ModelHubMixin internals changed quite a bit, breaking some use cases. We don't think those use cases were in use, and changing them should really benefit 99% of integrations. If you witness any inconsistency or error in your integration, please let us know and we will do our best to mitigate the problem. One of the biggest changes is that the config values are no longer attached to the mixin instance as instance.config but as instance._hub_mixin_config. The .config attribute was mistakenly introduced in 0.20.x, so we hope it has not been used much yet.

  • huggingface_hub.file_download.http_user_agent has been removed in favor of the officially documented huggingface_hub.utils.build_hf_headers. It had been deprecated since 0.18.x.

Small fixes and maintenance

⚙️ CI optimization

The CI pipeline has been greatly improved, especially thanks to efforts from @bmuskalla. Most tests now pass in under 3 minutes, against 8 to 10 minutes previously. Some long-running tests have been greatly simplified, and all tests now run in parallel with python-xdist, thanks to a complete decorrelation between them.

We are now also using the great uv installer instead of pip in our CI, which saves around 30-40s per pipeline.

  • More optimized tests by @Wauplin in #2054
  • Enable python-xdist on all tests by @bmuskalla in #2059
  • do not list all models by @Wauplin in #2061
  • update ruff by @Wauplin in #2071
  • Use uv in CI to speed-up requirements install by @Wauplin in #2072

⚙️ fixes

  • Fix Space variable when updatedAt is missing by @Wauplin in #2050
  • Fix tests involving temp directory on macOS by @bmuskalla in #2052
  • fix glob no magic by @lhoestq in #2056
  • Point out that the token must have write scope by @bmuskalla in #2053
  • Fix commonpath in read-only filesystem by @stevelaskaridis in #2073
  • rm unnecessary early makedirs by @poedator in #2092
  • Fix unhandled filelock issue by @Wauplin in #2108
  • Handle .DS_Store files in _scan_cache_repos by @sealad886 in #2112
  • Fix REPO_API_REGEX by @Wauplin in #2119
  • Fix uploading to HF proxy by @Wauplin in #2120
  • Fix --delete in huggingface-cli upload command by @Wauplin in #2129
  • Explicitly fail on Keras3 by @Wauplin in #2107
  • Fix serverless naming by @Wauplin in #2137

⚙️ internal

  • tag as 0.22.0.dev + remove deprecated code by @Wauplin in #2049
  • Some cleaning by @Wauplin in #2070
  • Fix test test_delete_branch_on_missing_branch_fails by @Wauplin in #2088

Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @Y4suyuki
    • Start defining custom errors in one place (#2122)
  • @bmuskalla
    • Enable python-xdist on all tests (#2059)

huggingface_hub - [v0.21.4] Hot-fix: Fix saving model with shared tensors

Published by Wauplin 8 months ago

Release v0.21 introduced a breaking change that made it impossible to save a PytorchModelHubMixin-based model with shared tensors. This has been fixed in https://github.com/huggingface/huggingface_hub/pull/2086.

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.21.3...v0.21.4

huggingface_hub - v0.21.0: Dataclasses everywhere, FileSystem, PyTorch Hub Mixin, serialization and more!

Discuss the release in our Community Tab. Feedback is welcome!! 🤗

🖇️ Dataclasses everywhere!

All objects returned by the HfApi client are now dataclasses!

In the past, objects were either dataclasses, typed dictionaries, non-typed dictionaries, or even basic classes. This is now all harmonized, with the goal of improving the developer experience.

Kudos to the community for implementing and testing the whole harmonization process. Thanks again for the contributions!

  • Use dataclasses for all objects returned by HfApi #1911 by @Ahmedniz1 in #1974
  • Updating HfApi objects to use dataclass by @Ahmedniz1 in #1988
  • Dataclasses for objects returned hf api by @NouamaneELGueddarii in #1993

💾 FileSystem

The HfFileSystem class implements the fsspec interface to allow loading and writing files with a filesystem-like interface. The interface is heavily used by the datasets library, and this release further improves the efficiency and robustness of the integration.

  • Pass revision in path to AbstractBufferedFile init by @albertvillanova in #1948
  • [HfFileSystem] Fix rm on branch by @lhoestq in #1957
  • Retry fetching data on 502 error in HfFileSystem by @mariosasko in #1981
  • Add HfFileSystemStreamFile by @lhoestq in #1967
  • [HfFileSystem] Copy non lfs files by @lhoestq in #1996
  • Add HfFileSystem.url method by @mariosasko in #2027

🧩 Pytorch Hub Mixin

The PyTorchModelHubMixin class lets you upload ANY PyTorch model to the Hub in a few lines of code. More precisely, it is a class that can be inherited in any nn.Module class to add the from_pretrained, save_pretrained and push_to_hub helpers to your class. It handles serialization and deserialization of weights and configs for you and enables download counts on the Hub.

With this release, we've fixed 2 pain points holding users back from using this mixin:

  1. Configs are now better handled. The mixin automatically detects if the base class defines a config, saves it on the Hub and then injects it at load time, either as a dictionary or a dataclass depending on the base class's expectations.
  2. Weights are now saved as .safetensors files instead of pytorch pickles for safety reasons. Loading from previous pytorch pickles is still supported but we are moving toward completely deprecating them (in a mid to long term plan).
  • Better config support in ModelHubMixin by @Wauplin in #2001
  • Use safetensors by default for PyTorchModelHubMixin by @bmuskalla in #2033

✨ InferenceClient improvements

The audio-to-audio task is now supported by the InferenceClient!

>>> from huggingface_hub import InferenceClient
>>> client = InferenceClient()
>>> audio_output = client.audio_to_audio("audio.flac")
>>> for i, item in enumerate(audio_output):
...     with open(f"output_{i}.flac", "wb") as f:
...         f.write(item["blob"])

  • Added audio to audio in inference client by @Ahmedniz1 in #2020

Also fixed a few things:

  • Fix intolerance for new field in TGI stream response: 'index' by @danielpcox in #2006
  • Fix optional model in tabular tasks by @Wauplin in #2018
  • Added best_of to non-TGI ignored parameters by @dopc in #1949

📤 Model serialization

With the aim of harmonizing repo structures and file serialization on the Hub, we added a new serialization module with a first helper, split_state_dict_into_shards, that takes a state dict and splits it into shards. The implementation is mostly taken from transformers and aims to be reused by other libraries in the ecosystem. It seamlessly supports torch, tensorflow and numpy weights, and can be easily extended to other frameworks.

This is a first step in the harmonization process and more loading/saving helpers will be added soon.

  • Framework-agnostic split_state_dict_into_shards helper by @Wauplin in #1938
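
A sketch of the torch-specific variant built on top of this helper (tensor shapes, the shard size and the default filename pattern shown are illustrative):

>>> import torch
>>> from huggingface_hub import split_torch_state_dict_into_shards

>>> state_dict = {"layer.weight": torch.rand(1024, 1024), "layer.bias": torch.rand(1024)}
>>> split = split_torch_state_dict_into_shards(state_dict, max_shard_size=5_000_000_000)
>>> split.is_sharded  # everything fits in a single 5GB shard here
False
>>> list(split.filename_to_tensors)  # mapping of output files to the tensors they contain
['model.safetensors']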

📚 Documentation

🌐 Translations

The community is actively working to translate the huggingface_hub documentation into other languages. We now have docs available in Simplified Chinese (here) and in French (here) to help democratize good machine learning!

  • [i18n-CN] Translated some files to simplified Chinese #1915 by @2404589803 in #1916
  • Update .github workflow to build cn docs on PRs by @Wauplin in #1931
  • [i18n-FR] Translated files in french and reviewed them by @JibrilEl in #2024

Docs misc

  • Document base_model in modelcard metadata by @Wauplin in #1936
  • Update the documentation of add_collection_item by @FremyCompany in #1958
  • Docs[i18n-en]: added pkgx as an installation method to the docs by @michaelessiet in #1955
  • Added hf_transfer extra into setup.py and docs/ by @jamesbraza in #1970
  • Documenting CLI default for download --repo-type by @jamesbraza in #1986
  • Update repository.md by @xmichaelmason in #2010

Docs fixes

  • Fix URL in get_safetensors_metadata docstring by @Wauplin in #1951
  • Fix grammar by @Anthonyg5005 in #2003
  • Fix doc by @jordane95 in #2013
  • typo fix by @Decryptu in #2035

🛠️ Misc improvements

Creating a commit with an invalid README will fail early instead of uploading all LFS files before failing to commit.

  • Fail early on invalid metadata by @Wauplin in #1934

Added a revision_exists helper, working similarly to repo_exists and file_exists:

>>> from huggingface_hub import revision_exists
>>> revision_exists("google/gemma-7b", "float16")
True
>>> revision_exists("google/gemma-7b", "not-a-revision")
False

  • Add revision_exists helper by @Wauplin in #2042

InferenceEndpoint.wait(...) now raises an error if the endpoint is in a failed state.

  • raise on failed inference endpoint by @Wauplin in #1935

Improved progress bar when downloading a file

  • improve http_get by @Wauplin in #1954

Other stuff:

  • added will not echo message to the login token message by @vtrenton in #1925
  • Raise if repo is disabled by @Wauplin in #1965
  • Fix timezone in datetime parsing by @Wauplin in #1982
  • retry on any 5xx on upload by @Wauplin in #2026

💔 Breaking changes

  • Classes ModelFilter and DatasetFilter are deprecated when listing models and datasets, in favor of a simpler API that lets you pass the parameters directly to list_models and list_datasets.

>>> from huggingface_hub import list_models, ModelFilter

# use
>>> list_models(language="zh")
# instead of 
>>> list_models(filter=ModelFilter(language="zh"))

Cleaner, right? ModelFilter and DatasetFilter will still be supported until the v0.24 release.

  • Deprecate ModelFilter/DatasetFilter by @druvdub in #2028
  • List models tweaks by @julien-c in #2044
  • In the inference client, ModelStatus.compute_type is not a string anymore but a dictionary with more detailed information (instance type + number of replicas). This breaking change reflects a server-side update.
  • Fix ModelStatus compute type by @Wauplin in #2047

Small fixes and maintenance

⚙️ fixes

  • Make GitRefs backward comp by @Wauplin in #1960
  • Fix pagination when listing discussions by @Wauplin in #1962
  • Fix inconsistent warnings.warn in repocard.py by @Wauplin in #1980
  • fix: actual error won't be raised while force_download=True by @scruel in #1983
  • Fix download from private renamed repo by @Wauplin in #1999
  • Disable tqdm progress bar if no TTY attached by @mssalvatore in #2000
  • Deprecate legacy parameters in update_repo_visibility by @Wauplin in #2014
  • Fix getting widget_data from model_info by @Wauplin in #2041

⚙️ internal

  • prepare for 0.21.0 by @Wauplin in #1928
  • Remove PRODUCTION_TOKEN by @Wauplin in #1937
  • Add reminder for model card consistency by @Wauplin in #1979
  • Finished migration from setup.cfg to pyproject.toml by @jamesbraza in #1971
  • Newer pre-commit by @jamesbraza in #1987
  • Removed now unnecessary setup.cfg path variable by @jamesbraza in #1990
  • Added toml-sort tool by @jamesbraza in #1972
  • update name of dummy dataset user by @Wauplin in #2019

Significant community contributions

The following contributors have made significant changes to the library over the last release:

  • @2404589803
    • [i18n-CN] Translated some files to simplified Chinese #1915 (#1916)
  • @jamesbraza
    • Added hf_transfer extra into setup.py and docs/ (#1970)
    • Finished migration from setup.cfg to pyproject.toml (#1971)
    • Documenting CLI default for download --repo-type (#1986)
    • Newer pre-commit (#1987)
    • Removed now unnecessary setup.cfg path variable (#1990)
    • Added toml-sort tool (#1972)
  • @Ahmedniz1
    • Use dataclasses for all objects returned by HfApi #1911 (#1974)
    • Updating HfApi objects to use dataclass (#1988)
    • Added audio to audio in inference client (#2020)
  • @druvdub
    • Deprecate ModelFilter/DatasetFilter (#2028)
  • @JibrilEl
    • [i18n-FR] Translated files in french and reviewed them (#2024)
  • @bmuskalla
    • Use safetensors by default for PyTorchModelHubMixin (#2033)

huggingface_hub - 0.20.3 hot-fix: Fix HfFolder login when env variable not set

Published by Wauplin 9 months ago

This patch release fixes an issue when retrieving the locally saved token with huggingface_hub.HfFolder.get_token. For the record, this is a "planned to be deprecated" method, in favor of huggingface_hub.get_token, which is more robust and versatile. The issue came from a breaking change introduced in https://github.com/huggingface/huggingface_hub/pull/1895, meaning only 0.20.x is affected.

For more details, please refer to https://github.com/huggingface/huggingface_hub/pull/1966.

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.20.2...v0.20.3

huggingface_hub - 0.20.2 hot-fix: Fix concurrency issues in google colab login

Published by Wauplin 10 months ago

A concurrency issue when using userdata.get to retrieve the HF_TOKEN secret led to deadlocks when downloading files in parallel. This hot-fix release fixes the issue by using a global lock before trying to get the token from the secrets vault. More details in https://github.com/huggingface/huggingface_hub/pull/1953.

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.20.1...v0.20.2

huggingface_hub - 0.20.1: hot-fix Fix circular import

Published by Wauplin 10 months ago

This hot-fix release fixes a circular import error happening when importing the login or logout helpers from huggingface_hub.

Related PR: https://github.com/huggingface/huggingface_hub/pull/1930

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.20.0...v0.20.1