serve

Serve, optimize and scale PyTorch models in production

APACHE-2.0 License


serve - TorchServe v0.2.0 Release Notes (Beta)

Published by maaquib about 4 years ago

This is the release of TorchServe v0.2.0

Highlights:

  • Kubernetes Support - TorchServe deployment in Kubernetes using Helm Charts and a Persistent Volume
  • Prometheus Metrics - Added Prometheus as the default metrics framework
  • Requirements.txt Support - Added support for specifying model-specific dependencies as a requirements file within a mar archive; cleaned up unused parameters and added relevant ones for torch-model-archiver
  • PyTorch Scripted Models Support - Added scripted model versions to the model zoo, along with testing for scripted models
  • Default Handler Refactor (breaking changes) - The default handlers have been refactored for code reuse and enhanced post-processing support. More details in the Backwards Incompatible Changes section below
  • Windows Support - Added support for TorchServe on Windows Subsystem for Linux
  • AWS CloudFormation Support - Added support for multi-node Auto Scaling Group deployment behind an Elastic Load Balancer, using Elastic File System as the backing store
  • Benchmark and Testing Enhancements - Added models to benchmark and sanity tests, support for batch-processing throughput in benchmarking, and Docker support for JMeter and Apache Benchmark tests
  • Regression Suite Enhancements - Added new Postman-based test cases for APIs and pytest-based intrusive test cases
  • Docker Improvements - Consolidated the dev and codebuild Dockerfiles
  • Install and Build Script Streamlining - Unified install scripts and added code coverage and sanity scripts
  • Python Linting - More exhaustive Python linting checks across TorchServe and Model Archiver
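Since a mar archive is a zip file, the requirements-file support above amounts to bundling a requirements.txt alongside the model artifacts. Below is a minimal illustrative sketch of that packaging, not the exact layout or manifest schema `torch-model-archiver` produces; the file names and manifest fields are assumptions:

```python
import json
import os
import tempfile
import zipfile

def package_mar(mar_path, model_file, requirements_file, manifest):
    # A .mar file is a zip archive; the internal layout used here
    # (MAR-INF/MANIFEST.json, a top-level requirements.txt) is an
    # assumption for illustration only.
    with zipfile.ZipFile(mar_path, "w") as mar:
        mar.write(model_file, arcname=os.path.basename(model_file))
        mar.write(requirements_file, arcname="requirements.txt")
        mar.writestr("MAR-INF/MANIFEST.json", json.dumps(manifest))

workdir = tempfile.mkdtemp()
model = os.path.join(workdir, "model.pt")
reqs = os.path.join(workdir, "requirements.txt")
with open(model, "wb") as f:
    f.write(b"\x00")  # placeholder weights
with open(reqs, "w") as f:
    f.write("pillow\n")  # hypothetical model-specific dependency

mar_path = os.path.join(workdir, "demo.mar")
package_mar(mar_path, model, reqs,
            {"model": {"modelName": "demo", "modelVersion": "1.0"}})

with zipfile.ZipFile(mar_path) as mar:
    print(sorted(mar.namelist()))
# → ['MAR-INF/MANIFEST.json', 'model.pt', 'requirements.txt']
```

In the real workflow, `torch-model-archiver` builds this archive for you and TorchServe installs the listed dependencies when the model is registered.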

Backwards Incompatible Changes

  • Default Handler Refactor:
    • The default handlers have been refactored for code reuse and enhanced post-processing support. The output format for some of the following examples/models has been enhanced to include additional details such as score/class probability.
    • The following default handlers have been equipped with batch support. Because of batch support, the resnet_152_batch example is no longer a custom handler example.
      • image_classifier
      • object_detector
      • image_segmenter
    • The index_to_name.json file used for the class-to-name mapping has been standardized across vision- and text-related default handlers
    • Refactoring and code reuse have reduced boilerplate code in all of the serve/examples.
    • Custom handler documentation has been restructured and enhanced to cover the different ways to build simple or complex custom handlers
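The standardized mapping and batch support above can be sketched in a few lines. This is an illustrative stand-in, assuming index_to_name.json maps stringified class indices to labels (the key format is an assumption based on the ImageNet-style examples), not the handlers' actual post-processing code:

```python
import json

# Hypothetical index_to_name.json content; the real file is packaged
# inside the .mar archive and loaded by the default handler.
INDEX_TO_NAME = json.loads('{"0": "cat", "1": "dog", "2": "bird"}')

def map_class_names(batch_of_indices):
    # With batch support, post-processing receives one prediction per
    # request in the batch and must return a list of the same length.
    return [INDEX_TO_NAME.get(str(i), str(i)) for i in batch_of_indices]

print(map_class_names([2, 0, 1]))  # → ['bird', 'cat', 'dog']
```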

Other PRs since v0.1.1

Bug Fixes:

  • Fixed NameError in default image_classifier handler #489
  • Fixed timeout errors during build #420 and unit tests #493
  • Fixed an error when loading on CPU a model that was saved on GPU #444
  • Fixed Snapshot not being emitted after unregistering model with no workers #491
  • Made the Inference API description conformant to OpenAPI #372
  • Removed duplicate snapshot server property #318
  • Fixed tag for latest CPU version in README #452
  • Added check for no objects detected in object detector #447
  • Fixed incorrect set up of default workers per model #513
  • Fixed model-archiver to accept handler name or handler_name:entry_pnt_func combinations #472

Others

  • Netty dependencies update #487
  • Updates to install documentation and contribution guidelines #527

Platform Support

Ubuntu 16.04, Ubuntu 18.04, MacOS 10.14+, Windows Subsystem for Linux (Windows Server 2019, WSLv1, Ubuntu 18.04)

Getting Started with TorchServe

You can get started at https://pytorch.org/serve/ with installation instructions, tutorials, and docs.
If you have questions, post them in the PyTorch discussion forums using the ‘deployment’ tag, or file an issue on GitHub with a way to reproduce.

serve - TorchServe v0.1.1 Release Notes (Experimental)

Published by mycpuorg over 4 years ago

This is the release of TorchServe v0.1.1

Highlights:

  • HuggingFace BERT Example - Support for HuggingFace Models demonstrated with examples under examples/ directory.
  • Waveglow Example - Support for Nvidia Waveglow model demonstrated with examples under examples/ directory.
  • Model Zoo - Model Zoo with model archives created from popular pre-trained models from PyTorch Model Zoo
  • AWS Cloud Formation Support - Support added for spinning up TorchServe Model Server on an EC2 instance via the convenience of AWS Cloud Formation Template.
  • Snakeviz Profiler - Support for profiling TorchServe Python execution via the snakeviz profiler for detailed execution-time reporting.
  • Docker improvements - Docker image size optimization, detailed docs for running docker.
  • Regression Test Suite - Detailed regression test suite allowing comprehensive tests for all supported REST APIs. Automating these tests enables faster regression detection.
  • Detailed Unit Test Reporting - Detailed breakdown of Unit Test Reports from gradle build system.
  • Installation Process Streamlining - Easier user onboarding with detailed documentation for installation
  • Documentation Clean up - Refactored documentation with clear instructions
  • GPU Device Assignment - Object Detection Model now correctly runs on multiple GPU devices
  • Model Store Clean-up - Clean up Model store of all artifacts for a deleted model

Other PRs since v0.1.0

Bug Fixes:

  • Fixed incorrect version number reporting #360
  • Validation for correct port range 0-65535 #304
  • Gradle build failures for new Gradle version-6.4 #352
  • Standardized the "Model version not found." response for all applicable APIs with HTTP status code 404 #282
  • The --model-store should point to a user-relative directory. #248
  • Corrected query parameter name in OpenApi description for registration api. #328
  • psutil install de-duplication #329
  • Maven tests should output only errors and not info / stack traces #326
  • Fixed installation issues for Python VirtualEnv #341
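The port-range validation in #304 boils down to a simple bounds check. A minimal sketch of the idea (the function name is illustrative, not TorchServe's actual implementation):

```python
def is_valid_port(value):
    """Return True if value parses as a TCP port in the range 0-65535."""
    try:
        port = int(value)
    except (TypeError, ValueError):
        return False
    return 0 <= port <= 65535

# Ports outside 0-65535, or non-numeric values, are rejected.
print([is_valid_port(p) for p in ("8080", "65535", "65536", "-1", "abc")])
# → [True, True, False, False, False]
```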

Documentation

  • Using GPU in Docker #205

Others

  • Github Issue templates #273

Platform Support

Ubuntu 16.04, Ubuntu 18.04, MacOS 10.14+

Getting Started with TorchServe

You can get started at pytorch.org/serve with installation instructions, tutorials, and docs.
If you have questions, post them in the PyTorch discussion forums using the ‘deployment’ tag, or file an issue on GitHub with a way to reproduce.

serve - TorchServe v0.1.0

Published by mycpuorg over 4 years ago

TorchServe (Experimental) v0.1.0 Release Notes

This is the first release of TorchServe (Experimental), a new open-source model serving framework under the PyTorch project (RFC #27610).

Highlights

  • Clean APIs - Support for an Inference API for predictions and a Management API for managing the model server.

  • Secure Deployment - Includes HTTPS support for secure deployment.

  • Robust model management capabilities - Allows full configuration of models, versions, and individual worker threads via command line interface, config file, or run-time API.

  • Model archival - Provides tooling to perform a ‘model archive’: a process of packaging a model, parameters, and supporting files into a single, persistent artifact. Using a simple command-line interface, you can package and export everything you need for serving a PyTorch model in a single ‘.mar’ file. This ‘.mar’ file can be shared and reused.

  • Built-in model handlers - Support for model handlers covering the most common use cases (image classification, object detection, text classification, image segmentation). TorchServe also supports custom handlers.

  • Logging and Metrics - Support for robust logging and real-time metrics to monitor inference service and endpoints, performance, resource utilization, and errors. You can also generate custom logs and define custom metrics.

  • Model Management - Support for management of multiple models or multiple versions of the same model at the same time. You can use model versions to roll back to earlier versions or route traffic to different versions for A/B testing.

  • Prebuilt Images - Ready-to-go Dockerfiles and Docker images for deploying TorchServe on CPU- and NVIDIA GPU-based environments.
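The custom handlers mentioned above implement a handle(data, context) entry point that receives a (possibly batched) list of requests and returns one response per request. A minimal self-contained sketch; the echo logic and field names are illustrative assumptions, and real handlers typically extend the ts.torch_handler base classes and load a model in initialize():

```python
class EchoHandler:
    """Illustrative custom-handler skeleton, not TorchServe's actual
    base class; real handlers usually subclass BaseHandler."""

    def __init__(self):
        self.initialized = False

    def initialize(self, context):
        # In a real handler, context carries system properties such as
        # the model directory and assigned device; model loading would
        # happen here.
        self.initialized = True

    def handle(self, data, context):
        if not self.initialized:
            self.initialize(context)
        # Return one response per request in the batched input.
        return [{"echo": row.get("body")} for row in data]

handler = EchoHandler()
print(handler.handle([{"body": "hello"}, {"body": "world"}], context=None))
# → [{'echo': 'hello'}, {'echo': 'world'}]
```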

Platform Support

Ubuntu 16.04, Ubuntu 18.04, MacOS 10.14+

Known Issues

  • The default object detection handler only works on cuda:0 device on GPU machines #104
  • For torchtext based models, the sentencepiece dependency fails for MacOS with python 3.8 #232

Getting Started with TorchServe

  • You can get started at pytorch.org/serve with installation instructions, tutorials, and docs.
  • If you have questions, post them in the PyTorch discussion forums using the ‘deployment’ tag, or file an issue on GitHub with a way to reproduce.