BentoML

0.7.4

* Added support for [fastText](https://fasttext.cc/) models, contributed by GCHQResearcher83493 (see the sketch after this list)
* Fixed Windows compatibility when packaging models, contributed by codeslord
* Added a benchmark using a TensorFlow-based BERT model
* Fixed an issue with pip-installing a BentoService saved bundle under the new pip release `pip==20.1`
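
For reference, packaging a fastText model with the 0.7-era BentoService API might look like the following minimal sketch; the `FasttextModelArtifact` import path and the JSON payload shape are illustrative assumptions rather than confirmed API details:

```python
import bentoml
from bentoml.artifact import FasttextModelArtifact  # assumed artifact class name
from bentoml.handlers import JsonHandler

@bentoml.env(auto_pip_dependencies=True)
@bentoml.artifacts([FasttextModelArtifact('model')])
class TextClassifier(bentoml.BentoService):

    @bentoml.api(JsonHandler)
    def predict(self, parsed_json):
        # fastText's predict() returns (labels, probabilities) for the input text
        return self.artifacts.model.predict(parsed_json['text'])
```

Packing and saving then follow the usual flow: `svc = TextClassifier()`, `svc.pack('model', trained_fasttext_model)`, `svc.save()`.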

Documentation:
* AWS ECS deployment guide: https://docs.bentoml.org/en/latest/deployment/aws_ecs.html
* Heroku deployment guide: https://docs.bentoml.org/en/latest/deployment/heroku.html
* Knative deployment guide: https://docs.bentoml.org/en/latest/deployment/knative.html

0.7.3

Improvements:

* Added `--timeout` option to SageMaker deployment creation command
* Fixed an issue caused by the new grpcio PyPI release when deploying to AWS Lambda


Documentation:

* Revamped the Core Concept walk-through documentation
* Added notes on using micro-batching and deploying YataiService

0.7.2

Introducing 2 Major New Features
* Adaptive micro-batching mode in API server
* Web UI for model and deployment management

Adaptive Micro Batching

Adaptive micro-batching is a technique used in advanced serving systems, where incoming prediction requests are grouped into small batches for inference. With version 0.7.2, we've implemented a micro-batching mode for the API server, and all existing BentoServices can benefit from it simply by enabling it via the `--enable-microbatch` flag, or via the `BENTOML_ENABLE_MICROBATCH` environment variable when running the API server docker image:

```bash
$ bentoml serve-gunicorn IrisClassifier:latest --enable-microbatch
```


```bash
$ docker run -p 5000:5000 -e BENTOML_ENABLE_MICROBATCH=True iris-classifier:latest
```


Currently, the micro-batch mode is only effective for DataframeHandler, JsonHandler, and TensorflowTensorHandler. We are working on ImageHandler support, along with a few new handler types, for the next release.
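
To see micro-batching at work, it is enough to fire many concurrent requests at a served model and let the server group them. Below is a minimal client sketch, assuming the IrisClassifier example above exposes a `/predict` DataframeHandler endpoint on port 5000 (both are assumptions based on the commands shown earlier):

```python
from concurrent.futures import ThreadPoolExecutor

import requests

def predict(row):
    # DataframeHandler accepts JSON that pandas can parse into a DataFrame;
    # each call sends a single-row request that the server may batch.
    resp = requests.post("http://127.0.0.1:5000/predict", json=[row])
    return resp.json()

rows = [[5.1, 3.5, 1.4, 0.2]] * 100
with ThreadPoolExecutor(max_workers=16) as pool:
    # Requests arriving within the batching window are grouped server-side
    # into larger inference batches, improving throughput.
    results = list(pool.map(predict, rows))
```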

Model Management Web UI

BentoML has a standalone component, YataiService, that handles model storage and deployment via gRPC calls. By default, BentoML launches a local YataiService instance when imported. This local instance saves BentoService files to the `~/bentoml/repository/` directory and other metadata to `~/bentoml/storage.db`.

In release 0.7.x, we introduced a new CLI command for running YataiService as a standalone service that can be shared by multiple BentoML clients. This makes it easy to share, use, and discover models and serving deployments created by others on your team.

To play with the YataiService gRPC & web server, run one of the following commands:

```bash
$ bentoml yatai-service-start
```

```bash
$ docker run -v ~/bentoml:/bentoml -p 3000:3000 -p 50051:50051 bentoml/yatai-service:0.7.2 --db-url=sqlite:///bentoml/storage.db --repo-base-url=/bentoml/repository
```


For team settings, we recommend using a remote database instance and cloud storage such as S3 for the repository. For example:

```bash
$ docker run -p 3000:3000 -p 50051:50051 \
    -e AWS_SECRET_ACCESS_KEY=... -e AWS_ACCESS_KEY_ID=... \
    bentoml/yatai-service:0.7.2 \
    --db-url postgresql://scott:tiger@localhost:5432/bentomldb \
    --repo-base-url s3://my-bentoml-repo/
```


<img width="1288" alt="yatai-service-web-ui-repository" src="https://user-images.githubusercontent.com/489344/79260404-13e67380-7e43-11ea-9911-293297dd08c7.png">
<img width="962" alt="yatai-service-web-ui-repository-detail" src="https://user-images.githubusercontent.com/489344/79260408-1648cd80-7e43-11ea-9367-26fc3372dbca.png">

Documentation Updates

* Added a new section working through all the main concepts and best practices of using BentoML; we recommend it as a must-read for new BentoML users
* BentoML Core Concepts: https://docs.bentoml.org/en/latest/concepts.html#core-concepts

---
Versions 0.7.0 and 0.7.1 are not recommended due to an issue with including the Benchmark directory in their PyPI distributions. Other than that, they are identical to version 0.7.2.

0.6.3

New Features:
* Automatically discover all pip dependencies via `env(auto_pip_dependencies=True)` (see the snippet after this list)
* CLI command auto-completion support
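
As a quick illustration, automatic dependency discovery is a one-line change on the service definition; the service body below is purely illustrative:

```python
import bentoml
from bentoml.handlers import DataframeHandler

# auto_pip_dependencies=True tells BentoML to infer which pip packages to
# bundle from the modules the service imports, instead of relying on a
# hand-maintained requirements list.
@bentoml.env(auto_pip_dependencies=True)
class MyService(bentoml.BentoService):

    @bentoml.api(DataframeHandler)
    def predict(self, df):
        # Any third-party package imported here (e.g. pandas) is picked up
        # automatically at save time.
        return df.sum(axis=1)
```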

Beta Features:
Contact us via [Slack](https://join.slack.com/t/bentoml/shared_invite/enQtNjcyMTY3MjE4NTgzLTU3ZDc1MWM5MzQxMWQxMzJiNTc1MTJmMzYzMTYwMjQ0OGEwNDFmZDkzYWQxNzgxYWNhNjAxZjk4MzI4OGY1Yjg) for early access and documentation related to these features.
* Adaptive micro-batching in BentoML API server, including performance tracing and benchmark
* Standalone YataiService gRPC server for model management and deployment

Improvements & Bug fixes:
* Improved end-to-end tests, covering entire BentoML workflow
* Fixed issues with using YataiService with PostgreSQL databases as storage
* `bentoml delete` command now supports deleting multiple BentoService at once, see `bentoml delete --help`

0.6.2

Improvements:
* [ISSUE-505] Make "application/json" the default Content-Type in DataframeHandler #507
* CLI improvement - add bento service as a column in the deployment list #514
* SageMaker deployment - fix error reading AWS user role info #510, by HenryDashwood
* BentoML CLI improvements #520, #519
* Add handler configs to the BentoServiceMetadata proto and the bentoml.yml file #517
* Add support for listing by labels #521

Bug fixes:
* [ISSUE-512] Fix appending saved path to sys.path when loading BentoService #518
* Lambda deployment - ensure the requirements dir is on PYTHONPATH #508
* SageMaker deployment delete - fix error when the endpoint is already deleted #522

0.6.1

* Bugfix: the `bentoml serve-gunicorn` command was broken in 0.6.0, which also broke the API server docker container. This is a minor release that fixes the issue: https://github.com/bentoml/BentoML/issues/499
