BentoML

Latest version: v1.2.18


0.6.0

The biggest change in release 0.6.0 is the revamped BentoML CLI, which introduces new model/deployment management commands and a new syntax for running inference from the CLI.

1. New commands for managing your model repository:

> bentoml list

BENTO_SERVICE CREATED_AT APIS ARTIFACTS
IrisClassifier:20200123004254_CB6865 2020-01-23 08:43 predict::DataframeHandler model::SklearnModelArtifact
IrisClassifier:20200122010013_E0292E 2020-01-22 09:00 predict::DataframeHandler clf::PickleArtifact

> bentoml get IrisClassifier

> bentoml get IrisClassifier:20200123004254_CB6865

> bentoml get IrisClassifier:latest



2. Added support for referencing saved BentoServices by `name:version` tag instead of `{saved_path}`. Example commands:


> bentoml serve {saved_path}

> bentoml serve IrisClassifier:latest

> bentoml serve IrisClassifier:20200123004254_CB6865

0.5.8

* Fixed an issue with the API server docker image build, where updating conda to a newly released version caused the build to fail
* Documentation updates
* Removed the option to configure API endpoint output format by setting the HTTP header

0.5.7

* SageMaker model serving deployment improvements:
  * Added `num_of_gunicorn_workers_per_instance` deployment option
  * Gunicorn worker count can now be set automatically based on the host CPU count
  * Improved testing for SageMaker model serving deployment
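The release notes don't state the sizing formula BentoML uses; a common heuristic from the gunicorn documentation is `(2 * cores) + 1`, and a sketch of CPU-based auto-sizing under that assumption looks like:

```python
import multiprocessing

def default_worker_count(cpu_count=None):
    """Return a gunicorn worker count derived from the host CPU count,
    using the common (2 * cores) + 1 heuristic."""
    if cpu_count is None:
        cpu_count = multiprocessing.cpu_count()
    return 2 * cpu_count + 1

# On a 4-core host this heuristic yields 9 workers.
print(default_worker_count(4))
```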

0.5.6

Minor bug fixes:

* AWS Lambda deployment - fix default namespace validation error (452)
* Tensorflow SavedModel artifact - use concrete_function instead of input auto-reshape (451)

0.5.5

* Minor bug fixes for AWS Lambda deployment creation and error handling

0.5.4

* Prometheus metrics improvements in API server
  * Metrics now work in multi-process mode when running with gunicorn
  * Added 3 default metrics to BentoAPIServer:
    * request latency
    * request count total, labeled by status code
    * request gauge for monitoring concurrent prediction requests
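Conceptually, those three defaults are a latency histogram, a per-status counter, and an in-flight gauge. A stdlib-only sketch of what each one records per request (the class and method names are illustrative; the real server uses the Prometheus client library):

```python
import time
from collections import defaultdict

class RequestMetrics:
    """Illustrative stand-in for the three default API-server metrics."""

    def __init__(self):
        self.latencies = []                      # request latency samples
        self.count_by_status = defaultdict(int)  # total requests per HTTP status
        self.in_progress = 0                     # gauge of concurrent requests

    def observe(self, handler):
        """Run `handler`, recording latency, status count, and concurrency."""
        self.in_progress += 1
        start = time.perf_counter()
        try:
            status = handler()
        finally:
            self.in_progress -= 1
            self.latencies.append(time.perf_counter() - start)
        self.count_by_status[status] += 1
        return status

metrics = RequestMetrics()
metrics.observe(lambda: 200)  # a successful prediction request
metrics.observe(lambda: 500)  # a failed one
print(metrics.count_by_status[200], metrics.count_by_status[500])
```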

* New Tensorflow TensorHandler!
  * Receive tf.Tensor data within your API function for your Tensorflow and Keras models
  * TfSavedModel can now automatically transform the input tensor shape based on the tf.Function input signature

* Largely improved error handling in the API server
  * Proper HTTP error codes and messages
  * Prevents user error details from leaking to the client side
  * Introduced exception classes for users to use when customizing BentoML handlers
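The notes don't name the new exception classes; a hypothetical sketch of the pattern they describe (custom exceptions carrying an HTTP status, with internal details kept out of the client response) might look like:

```python
class BentoMLException(Exception):
    """Base class: maps an error to an HTTP status and a safe message."""
    status_code = 500
    client_message = "Internal Server Error"  # never exposes internal details

class BadInput(BentoMLException):
    """Raised by a handler when the request payload is malformed."""
    status_code = 400
    client_message = "Bad request input"

def to_http_response(exc):
    """Convert an exception into (status, body) for the client.

    Unknown exceptions fall back to a generic 500 so user error
    details are not leaked to the client side.
    """
    if isinstance(exc, BentoMLException):
        return exc.status_code, exc.client_message
    return 500, "Internal Server Error"

print(to_http_response(BadInput("df missing column 'petal_width'")))
print(to_http_response(ValueError("internal detail")))
```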

* Deployment guide on Google Cloud Run

* Deployment guide on AWS Fargate ECS

* AWS Lambda deployment improvements

