# TorchServe (Experimental) v0.1.0 Release Notes
This is the first release of TorchServe (Experimental), a new open-source model serving framework under the PyTorch project ([RFC 27610](https://github.com/pytorch/pytorch/issues/27610)).
## Highlights
+ **Clean APIs** - Support for an [Inference API](https://github.com/pytorch/serve/blob/master/docs/inference_api.md) for predictions and a [Management API](https://github.com/pytorch/serve/blob/master/docs/management_api.md) for managing the model server.
+ **Secure Deployment** - Includes HTTPS support for secure deployment.
+ **Robust model management capabilities** - Allows full configuration of models, versions, and individual worker threads via a command-line interface, config file, or run-time API.
+ **Model archival** - Provides tooling to create a model archive, a process of packaging a model, its parameters, and supporting files into a single, persistent artifact. Using a simple command-line interface, you can package and export everything you need to serve a PyTorch model in a single `.mar` file. This `.mar` file can be shared and reused. Learn more [here](https://github.com/pytorch/serve/tree/master/model-archiver).
+ **Built-in model handlers** - Support for [model handlers](https://github.com/pytorch/serve/tree/master/model-archiver#handler) covering the most common use cases (image classification, object detection, text classification, image segmentation). TorchServe also supports [custom handlers](https://github.com/pytorch/serve/blob/master/docs/custom_service.md).
+ **Logging and Metrics** - Support for robust [logging](https://github.com/pytorch/serve/blob/master/docs/logging.md) and real-time [metrics](https://github.com/pytorch/serve/blob/master/docs/metrics.md) to monitor inference service and endpoints, performance, resource utilization, and errors. You can also generate custom logs and define [custom metrics](https://github.com/pytorch/serve/blob/master/docs/metrics.md#custom-metrics-api).
+ **Model Management** - Support for [management of multiple models](https://github.com/pytorch/serve/blob/master/docs/server.md#serving-multiple-models-with-torchserve) or multiple versions of the same model at the same time. You can use model versions to roll back to earlier versions or route traffic to different versions for A/B testing.
+ **Prebuilt Images** - Ready-to-go Dockerfiles and Docker images for deploying TorchServe in CPU and NVIDIA GPU environments. The latest Dockerfiles and images can be found [here](https://hub.docker.com/r/pytorch/torchserve/).
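The HTTPS support mentioned under "Secure Deployment" is configured through the server's `config.properties` file. A minimal sketch, with placeholder ports and certificate paths; consult the configuration docs for the authoritative key names:

```properties
# Bind the Inference and Management APIs to HTTPS endpoints.
inference_address=https://127.0.0.1:8443
management_address=https://127.0.0.1:8444

# Option 1: a PEM certificate and private key.
ssl_cert_file=mycert.pem
ssl_private_key_file=mykey.pem

# Option 2 (alternative): a Java keystore.
# keystore=keystore.p12
# keystore_pass=changeit
# keystore_type=PKCS12
```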
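As a sketch of the archive-and-serve workflow the highlights describe, the commands below package a model and query the Inference API. The model, file names, and paths are illustrative placeholders; the flags follow the linked model-archiver docs:

```shell
# Package a trained model, its weights, and a built-in handler
# into a single .mar artifact.
torch-model-archiver --model-name densenet161 \
  --version 1.0 \
  --model-file model.py \
  --serialized-file densenet161.pth \
  --extra-files index_to_name.json \
  --handler image_classifier

# Start TorchServe with the archive placed in a local model store.
mkdir -p model_store && mv densenet161.mar model_store/
torchserve --start --model-store model_store --models densenet161=densenet161.mar

# Run a prediction through the Inference API (default port 8080).
curl http://127.0.0.1:8080/predictions/densenet161 -T kitten.jpg
```

The `.mar` file produced by the first step is the shareable artifact described under "Model archival"; the model store directory can hold any number of such archives.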
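The Management API highlighted above can register, scale, and unregister models on a running server without a restart. A hedged sketch, assuming a server on the default management port 8081 and a hypothetical `densenet161` archive already in the model store:

```shell
# Register a model archive at runtime.
curl -X POST "http://127.0.0.1:8081/models?url=densenet161.mar"

# Scale up the worker processes serving this model.
curl -X PUT "http://127.0.0.1:8081/models/densenet161?min_worker=2"

# List all registered models, then inspect one in detail.
curl http://127.0.0.1:8081/models
curl http://127.0.0.1:8081/models/densenet161

# Unregister the model when it is no longer needed.
curl -X DELETE http://127.0.0.1:8081/models/densenet161
```

Registering two archives of the same model with different versions enables the rollback and A/B-routing scenarios mentioned under "Model Management"; see the linked Management API docs for the version-specific endpoints.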
## Platform Support
- Ubuntu 16.04, Ubuntu 18.04, macOS 10.14+
## Known Issues
+ The default object detection handler only works on the `cuda:0` device on GPU machines ([#104](https://github.com/pytorch/serve/issues/104)).
+ For torchtext-based models, the SentencePiece dependency fails on macOS with Python 3.8 ([#232](https://github.com/pytorch/serve/issues/232)).
## Getting Started with TorchServe
+ Get started at [pytorch.org/serve](https://pytorch.org/serve/) with installation instructions, tutorials, and docs.
+ If you have questions, post them in the [PyTorch discussion forums](https://discuss.pytorch.org/c/deployment/) using the 'deployment' tag, or file an issue on [GitHub](https://github.com/pytorch/serve) with steps to reproduce.