BentoML

Latest version: v1.3.14


0.8.5

Not secure
Bug fixes

* Fixed API server showing a blank index page 977 975
* Fixed failure to package pip-installed dependencies in some edge cases 978 979

0.8.4

Not secure
What's New

Breaking Change: JsonInput migrating to batch API 860,953

We are officially changing JsonInput to use the batch-oriented syntax. As of this release (0.8.4), all input adapters in BentoML have migrated to this design. The main difference is that the input parameter of the user-defined API function is now a list of JSONSerializable objects (Dict, List, Integer, Float, Str) instead of a single JSONSerializable object, and the expected return value of the user-defined API function is an Iterable of exactly the same length. This makes it possible for API endpoints using the JsonInput adapter to take advantage of BentoML's adaptive micro-batching capability.

Here is an example of how JsonInput (formerly JsonHandler) used to work:

python
@bentoml.api(input=LegacyJsonInput())
def predict(self, parsed_json):
    results = self.artifacts.classifier([parsed_json['text']])
    return results[0]


And here is an example with the new JsonInput class:
python
@bentoml.api(input=JsonInput())
def predict(self, parsed_json_list):
    texts = [j['text'] for j in parsed_json_list]
    return self.artifacts.classifier(texts)


The old non-batching JsonInput is still available to help with the transition. Simply use `from bentoml.adapters import LegacyJsonInput as JsonInput` to replace the JsonInput or JsonHandler in code written before BentoML 0.8.4. `LegacyJsonInput` behaves exactly the same as JsonInput did in previous releases. We will keep supporting it until BentoML version 1.0.
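
During the transition, the only change needed in existing service code is the import, e.g.:

python
# Keeps the pre-0.8.4 single-object behavior while migrating to the new batch API
from bentoml.adapters import LegacyJsonInput as JsonInput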

Custom Web UI support in API Server (839)

Custom web UI can be added to your API server now! Here is an example project: https://github.com/bentoml/gallery/tree/master/scikit-learn/iris-classifier

![bentoml custom web ui](https://raw.githubusercontent.com/bentoml/gallery/master/scikit-learn/iris-classifier/webui.png)

Add your web frontend project directory to your BentoService class and BentoML will automatically bundle all the web UI files and host them when starting the API server:
python
@env(auto_pip_dependencies=True)
@artifacts([SklearnModelArtifact('model')])
@web_static_content('./static')
class IrisClassifier(BentoService):

    @api(input=DataframeInput())
    def predict(self, df):
        return self.artifacts.model.predict(df)


Artifact packing & loading workflow 911, 921, 949

We have refactored the Artifact API, which brings more flexibility to how users package their trained models with BentoML's API.

The most noticeable change is that users can now separate the model training job from BentoML model serving development: the Artifact API can save a trained model from a training job and load it later when creating the BentoService class for model serving. For example:

Step 1: Model training:
python
from sklearn import svm
from sklearn import datasets

from bentoml.artifact import SklearnModelArtifact

if __name__ == "__main__":
    # Load training data
    iris = datasets.load_iris()
    X, y = iris.data, iris.target

    # Model training
    clf = svm.SVC(gamma='scale')
    clf.fit(X, y)

    # Save just the trained model with SklearnModelArtifact to a specific directory
    btml_model_artifact = SklearnModelArtifact('model')
    btml_model_artifact.pack(clf)
    btml_model_artifact.save('/tmp/temp_bentoml_artifact')


Step 2: Build BentoService class with the saved artifact:
python
from bentoml import env, artifacts, api, BentoService
from bentoml.adapters import DataframeInput
from bentoml.artifact import SklearnModelArtifact

@env(auto_pip_dependencies=True)
@artifacts([SklearnModelArtifact('model')])
class IrisClassifier(BentoService):

    @api(input=DataframeInput())
    def predict(self, df):
        # Optional pre-processing, post-processing code goes here
        return self.artifacts.model.predict(df)

if __name__ == "__main__":
    # Create an IrisClassifier service instance
    iris_classifier_service = IrisClassifier()

    # Load the previously saved artifact
    iris_classifier_service.artifacts.get('model').load('/tmp/temp_bentoml_artifact')

    saved_path = iris_classifier_service.save()


This workflow makes developing and debugging BentoService code a lot easier: users no longer need to retrain their model every time they change something in the BentoService class definition and want to try it out.

* Note that the old BentoService class method `pack` is deprecated as of this release 915

Add `bentoml containerize` command (847,884,941)
bash
$ bentoml containerize --help
Usage: bentoml containerize [OPTIONS] BENTO

  Containerizes given Bento into a ready-to-use Docker image.

Options:
  -p, --push
  -t, --tag TEXT  Optional image tag. If not specified, Bento will
                  generate one from the name of the Bento.

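For example, building an image from a saved Bento and tagging it (the tag name here is just an illustration):

bash
$ bentoml containerize IrisClassifier:latest -t iris-classifier:latest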

Support multiple images in the same request (828)

A new input adapter class `MultiImageInput` (https://docs.bentoml.org/en/latest/api/adapters.html#multiimageinput) has been added. It is designed for prediction services that require multiple image files as their input:

python
import bentoml
from bentoml import BentoService
from bentoml.adapters import MultiImageInput


class MyService(BentoService):

    @bentoml.api(input=MultiImageInput(input_names=('imageX', 'imageY')))
    def predict(self, image_groups):
        for image_group in image_groups:
            image_array_x = image_group['imageX']
            image_array_y = image_group['imageY']
            ...


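To call such an endpoint over HTTP, the request needs to carry both image files. A sketch with curl, assuming the adapter reads multipart form fields named after `input_names`:

bash
curl -X POST http://localhost:5000/predict \
  -F "imageX=@image_x.png" \
  -F "imageY=@image_y.png"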

Add FileInput adapter (734)

A new input adapter class `FileInput` has been added for handling arbitrary binary files as the input to your prediction service: https://github.com/bentoml/BentoML/blob/v0.8.4/bentoml/adapters/file_input.py#L33
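
Below is a minimal sketch of a service using it, assuming the API function receives a list of file-like objects (consistent with the batch-oriented adapter design above) and returns one result per input; the service name and logic are made up for illustration:

python
from bentoml import BentoService, api
from bentoml.adapters import FileInput


class FileSizeService(BentoService):

    @api(input=FileInput())
    def predict(self, file_streams):
        # One result per input file; each element exposes a read() method
        return [len(f.read()) for f in file_streams]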

Added Ngrok support (917)

Expose your local development model API server over a public URL endpoint, using Ngrok under the hood. To try it out, simply add the `--run-with-ngrok` flag to your `bentoml serve` CLI command, e.g.:

bash
bentoml serve IrisClassifier:latest --run-with-ngrok


Add support for CoreML (939)

Serving CoreML models on macOS is now supported! Users can also convert models trained with other frameworks to the CoreML format for better performance on macOS platforms. Here's an example of serving a PyTorch model with CoreML and BentoML:

python
import torch
from torch import nn


class PytorchModel(nn.Module):
    def __init__(self):
        super().__init__()

        self.linear = nn.Linear(5, 1, bias=False)
        torch.nn.init.ones_(self.linear.weight)

    def forward(self, x):
        x = self.linear(x)

        return x

------

import numpy
import pandas as pd

import coremltools as ct
from coremltools.models import MLModel  # pylint: disable=import-error

import bentoml
from bentoml.adapters import DataframeInput
from bentoml.artifact import CoreMLModelArtifact

# Sample input used for tracing below; the original example takes test_df from
# the tests.integration.test_pytorch_model_artifact module
test_df = pd.DataFrame([[1.0, 2.0, 3.0, 4.0, 5.0]])


@bentoml.env(auto_pip_dependencies=True)
@bentoml.artifacts([CoreMLModelArtifact('model')])
class CoreMLClassifier(bentoml.BentoService):
    @bentoml.api(input=DataframeInput())
    def predict(self, df: pd.DataFrame) -> float:
        model: MLModel = self.artifacts.model
        input_data = df.to_numpy().astype(numpy.float32)
        output = model.predict({"input": input_data})
        return next(iter(output.values())).item()


def convert_pytorch_to_coreml(pytorch_model: PytorchModel) -> ct.models.MLModel:
    """CoreML is not for training ML models but rather for converting pretrained models
    and running them on Apple devices. Therefore, in this example we convert the
    pretrained PytorchModel from the tests.integration.test_pytorch_model_artifact
    module into a CoreML model."""
    pytorch_model.eval()
    traced_pytorch_model = torch.jit.trace(pytorch_model, torch.Tensor(test_df.values))
    model: MLModel = ct.convert(
        traced_pytorch_model, inputs=[ct.TensorType(name="input", shape=test_df.shape)]
    )
    return model


------

if __name__ == '__main__':
    svc = CoreMLClassifier()
    pytorch_model = PytorchModel()
    model = convert_pytorch_to_coreml(pytorch_model)
    svc.pack('model', model)
    svc.save()


Breaking Change: Remove CLI --with-conda option 898

Running inference jobs within an automatically generated conda environment seemed like a good idea at first, but we realized it introduces more problems than it solves. We are removing this option and encourage users to use docker for running inference jobs instead.
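
For example, instead of `--with-conda`, a saved Bento can be containerized with the new `bentoml containerize` command described above and run with docker (the image name below is just an illustration):

bash
bentoml containerize IrisClassifier:latest -t iris-classifier
docker run -p 5000:5000 iris-classifier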

Improvements:
* 966, 968 Faster `save` by improving python local module parsing code
* 878, 879 Faster `import bentoml` with lazy module loader
* 872 Add BentoService API name validation
* 887 Set a smaller page limit for `bentoml list`
* 916 Do not cache pip requirements in Dockerfile
* 918 Improve error handling when micro batching service is unavailable
* 925 Artifact refactoring: set_dependencies method
* 932 Add warning for SavedBundle Python version mismatch
* 904 JsonInput now ignores the content-type header when handling AWS Lambda events
* 951 Add openjdk to H2O artifact default conda dependencies
* 958 Fix typo in cli default argument help message

Bug fixes:

* 864 Fix decode headers with latin1
* 867 Fix DataFrameInput passing NaN values over HTTP JSON request
* 869 Change the default mb_max_latency value to avoid flaky micro-batching initialization
* 897 Fix yatai web client import
* 907 Fix CORS option in AWS Lambda SAM config
* 922 Fix lambda deployment when using AWS assumed-role ARN
* 959 Fix `RecursionError: maximum recursion depth exceeded` when saving BentoService bundle
* 969 Fix error in CLI command `bentoml --version`

Internal & Testing

* 870 Add docs for using BentoML's built-in benchmark client
* 855, 871, 877 Add integration tests for dockerized BentoML API server workflow
* 876, 937 Add integration test for Tensorflow SavedModel artifact
* 951 H2O artifact integration test
* 939 CoreML artifact integration test
* 865 Add makefile for BentoML developers
* 868 API Server "/feedback" endpoint refactor
* 908 BentoService base class refactoring and docstring improvements
* 909 Refactor API Server startup
* 910 Refactor API server performance tracing
* 906 Fix yatai web ui startup script
* 875 Increase micro batching server test coverage
* 935 Fix list deployments error response

Community Announcements:

We have enabled the __GitHub Discussions__ feature (https://github.com/bentoml/BentoML/discussions) 🎉

This will be a new place for community members to connect, ask questions, and share anything related to model serving and BentoML.

Contributors

Thank you, everyone, for contributing to this amazing release loaded with new features and improvements! bojiang joshuacwnewton guy4261 Sharathmk99 co42 jackyzha0 Korusuke akainth015 omrihar yubozhao

0.8.3

Not secure
* Fix: 500 Error without message when micro-batch enabled 857
* Fix: port conflict with --debug flag 858
* Fix: permission issue while building docker image for BentoService created under Windows OS 851

0.8.2

Not secure
What's New?

* Support Debian-slim docker images for containerizing model server, 822 by jackyzha0. Users can choose it by specifying the docker base image:
python
@env(
    auto_pip_dependencies=True,
    docker_base_image="bentoml/model-server:0.8.2-slim-py37"
)


* New `bentoml retrieve` command for downloading saved bundle from remote YataiService model registry, 810 by iancoffey
bash
bentoml retrieve ModelServe:20200610145522_D08399 --target_dir /tmp/modelserve


* Added `--print-location` option to `bentoml get` command to print the saved path, 825 by jackyzha0
bash
$ bentoml get IrisClassifier:20200625114130_F3480B --print-location
/Users/chaoyu/bentoml/repository/IrisClassifier/20200625114130_F3480B


* Support Dataframe input JSON format orient parameter. DataframeInput now supports all pandas JSON orient options: records, columns, values, split, index. 809 815, by bojiang

For example, with `orient="records"`:
python
@api(input=DataframeInput(orient="records"))
def predict(self, df):
    ...

The API endpoint will expect HTTP requests with a JSON payload in the following format:
json
[{"col 1":"a","col 2":"b"},{"col 1":"c","col 2":"d"}]

Or with `orient="index"`:
json
'{"row 1":{"col 1":"a","col 2":"b"},"row 2":{"col 1":"c","col 2":"d"}}'

See pandas's documentation on the orient option of to_json/from_json function for more detail: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_json.html
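
For reference, a payload in the `records` format above can be produced directly from a pandas DataFrame:

python
import pandas as pd

df = pd.DataFrame({"col 1": ["a", "c"], "col 2": ["b", "d"]})

# Prints: [{"col 1":"a","col 2":"b"},{"col 1":"c","col 2":"d"}]
print(df.to_json(orient="records"))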

* Support Azure Functions deployment (beta). A new fully automated cloud deployment option that BentoML provides in addition to AWS SageMaker and AWS Lambda. See usage documentation here: https://docs.bentoml.org/en/latest/deployment/azure_functions.html


* ModelServer API Swagger schema improvements including the ability to specify example HTTP request, 807 by Korusuke
* Add prediction logging when deploying with AWS Lambda, 790 by jackyzha0
* Artifact string name validation, 817 by AlexDut
* Fixed micro batching parameters (max latency and max batch size) not being applied, 818 by bojiang
* Fixed issue with handling CSV file input by following RFC4180. 814 by bojiang
* Fixed TfTensorOutput casting floats as ints 813, in 823 by bojiang

Announcements:

* The BentoML team has created a new [mailing list](https://groups.google.com/forum/#!forum/bentoml) for future announcements and community-related discussions. Join now [here](https://groups.google.com/forum/#!forum/bentoml)!
* For those interested in contributing to BentoML, there are new [contributing docs](https://github.com/bentoml/BentoML/blob/master/CONTRIBUTING.md); be sure to check them out.
* We are starting a bi-weekly community meeting for community members to demo new features they are building, discuss the roadmap and gather feedback, etc. More details will be announced soon.

0.8.1

Not secure
What's New?

* Service API Input/Output adapter 783 784 789, by bojiang
* A new API for defining service input and output data types and configs
* The new `InputAdapter` is essentially the `API Handler` concept in BentoML prior to the 0.8.x release
* The old `API Handler` syntax is being deprecated; it will continue to be supported until version 1.0
* The main motivation for this change is to enable us to build features such as new API output types (such as file/image as service output), add gRPC support, better OpenAPI support, and more performance optimizations in online serving down the line

* Model server docker image build improvements 761
* Reduced docker build time by using a pre-built BentoML model server docker image as the base image
* Removed the dependency on `apt-get` and `conda` from the custom docker base image
* Added alpine based docker image for model server deployment

* Improved Image Input handling:
* Add micro-batching support for ImageInput (former ImageHandler) 717, by bojiang
* Add support for using a list of images as input from CLI prediction run 731, by bojiang
* In the new Input Adapter API introduced in 0.8.0, the `LegacyImageInput` is identical to the previous `ImageHandler`
* The new `ImageInput` works only for single image input, unlike the old `ImageHandler`
* For users using the old `ImageHandler`, we recommend migrating to the new `ImageInput` if it is only used to handle single image input
* For users using `ImageHandler` for multiple-image input, wait until `MultiImageInput` is added, which will be a separate input adapter type

* Added CORS support for AWS Lambda serving 752, by omrihar
* Added JsonArtifact for storing configuration and JsonSerializable data 746, by lemontheme

Bug Fixes & Improvements:
* Fixed SageMaker deployment `ModuleNotFoundError` due to wrong gevent version 785 by flosincapite
* Fixed SpacyModelArtifact not exposed in `bentoml.artifacts` 782, by docteurZ
* Fixed errors when inheriting handler 767, by bojiang
* Removed `future` statements for py2 support, 756, by jjmachan
* Fixed bundled_pip_dependencies installation on AWS Lambda deployment 794
* Removed `aws.region` config, use AWS CLI's own config instead 740
* Fixed SageMaker deployment CLI: delete deployment with namespace specified 741
* Removed `pandas` from BentoML dependencies list, it is only required when using DataframeInput 738


Internal, CI, Testing:
* Added docs watch script for Linux 781, by akainth015
* Improved build bash scripts 774, by akainth015, flosincapite
* Fixed YataiService end-to-end tests 773
* Added PyTorch integration tests 762, by jjmachan
* Added ONNX integration tests 726, by yubozhao
* Added linter and formatting check to Travis CI
* Codebase cleanup, reorganized deployment and repository module 768 769 771


Announcements:

* The BentoML team is planning to start a bi-weekly community meeting to demo new features, discuss the roadmap and gather feedback. Join the BentoML slack channel for more details: [click to join BentoML slack](https://join.slack.com/t/bentoml/shared_invite/enQtNjcyMTY3MjE4NTgzLTU3ZDc1MWM5MzQxMWQxMzJiNTc1MTJmMzYzMTYwMjQ0OGEwNDFmZDkzYWQxNzgxYWNhNjAxZjk4MzI4OGY1Yjg).
* There were a few issues with the PyPI release `0.8.0` that made it unusable. The newer `0.8.1` release has those issues fixed. Please do not use version `0.8.0`.

0.7.8

Not secure
What's New?
* ONNX model support with onnxruntime backend. More example notebooks and tutorials are coming soon!
* Added Python 3.8 support

Documentation:
* BentoML API Server architecture overview https://docs.bentoml.org/en/latest/guides/micro_batching.html
* Deploying YataiService behind Nginx https://docs.bentoml.org/en/latest/guides/yatai_service.html

Internal:
* [benchmark] moved benchmark notebooks to a separate repo: https://github.com/bentoml/benchmark
* [CI] Enabled Linting style check test on Travis CI, contributed by kautukkundan
* [CI] Fixed all existing linting errors in bentoml and tests module, contributed by kautukkundan
* [CI] Enabled Python 3.8 on Travis CI

Announcements:
* There will be breaking changes in the coming 0.8.0 release, around ImageHandler, custom Handler and custom Artifacts. If you're using those features in production, please reach out.
* Help us promote BentoML on [Twitter bentomlai](https://twitter.com/bentomlai) and [Linkedin Page](https://www.linkedin.com/company/bentoml/)!
* Be sure to join the BentoML slack channel for roadmap discussions and development updates, [click to join BentoML slack](https://join.slack.com/t/bentoml/shared_invite/enQtNjcyMTY3MjE4NTgzLTU3ZDc1MWM5MzQxMWQxMzJiNTc1MTJmMzYzMTYwMjQ0OGEwNDFmZDkzYWQxNzgxYWNhNjAxZjk4MzI4OGY1Yjg).
