BentoML

Latest version: v1.3.14


0.10.1

Bug Fix

This is a minor release containing one bug fix for issue #1318, where the docker build process for the BentoML API model server was broken due to an error in the init shell script. The issue has been fixed in #1319 and is included in this new release.

Our integration tests did not catch this issue because, in the development and CI/test environments, we bundle the "dirty" local BentoML installation in the generated dockerfile, whereas the production release of BentoML uses the BentoML installed from PyPI. The issue in #1318 was an edge case that can be triggered only when using the released version of BentoML and the published docker image. We are investigating ways to run all our integration tests against a preview release before making a final release, as part of our QA process, which should help prevent this type of bug from getting into final releases in the future.

0.10.0

New Features & Improvements

* Improved Model Management APIs #1126 #1241 by yubozhao

Python APIs for model management:

```python
from bentoml.yatai.client import get_yatai_client

bento_service.save()  # save and register the bento service locally

# push a saved bento service to a remote yatai service
yc = get_yatai_client('http://staging.yatai.mycompany.com:50050')
yc.repository.push(
    f'{bento_service.name}:{bento_service.version}',
)

# pull a bento service from a remote yatai server and register it locally
yc = get_yatai_client('http://staging.yatai.mycompany.com:50050')
yc.repository.pull(
    'bento_name:version',
)

# delete in local yatai
yatai_client = get_yatai_client()
yatai_client.repository.delete('name:version')

# delete in batch by labels
yatai_client = get_yatai_client()
yatai_client.prune(labels='cicd=failed, framework In (sklearn, xgboost)')

# get bento service metadata
yatai_client.repository.get('bento_name:version', yatai_url='http://staging.yatai.mycompany.com:50050')

# list bento services by label
yatai_client.repository.list(labels='label_key In (value1, value2), label_key2 Exists', yatai_url='http://staging.yatai.mycompany.com:50050')
```

New CLI commands for model management:

Push a local bento service to a remote yatai service:

```bash
$ bentoml push bento_service_name:version --yatai-url http://staging.yatai.mycompany.com:50050
```

Added a `--yatai-url` option for the following CLI commands to interact with a remote yatai service directly:

```
bentoml get
bentoml list
bentoml delete
bentoml retrieve
bentoml run
bentoml serve
bentoml serve-gunicorn
bentoml info
bentoml containerize
bentoml open-api-spec
```
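
For example, listing the bentos stored on a remote yatai service:

```bash
$ bentoml list --yatai-url http://staging.yatai.mycompany.com:50050
```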


* Model Metadata API #1179, shoutout to jackyzha0 for designing and building this feature!
Ability to save additional metadata for any artifact type, e.g.:

```python
model_metadata = {
    'k1': 'v1',
    'job_id': 'ABC',
    'score': 0.84,
    'datasets': ['A', 'B'],
}
svc.pack("model", test_model, metadata=model_metadata)

svc.save_to_dir(str(tmpdir))
loaded_service = bentoml.load(str(tmpdir))
print(loaded_service.artifacts.get('model').metadata)
```


* Improved TensorFlow support, by bojiang
* Make the packed model behave the same as after the model was saved and loaded again #1231
* TfTensorOutput: raise TypeError when micro-batch is enabled #1251
* Opt auto casting of TfSavedModelArtifact & clearer feedback
* Improve KerasModelArtifact to work with tf2 #1295
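
As a quick illustration of the improved Keras support, here is a minimal sketch of packaging a tf.keras model (the service name, adapter choice, and pack/save calls are illustrative assumptions, not from the release notes):

```python
import bentoml
from bentoml.adapters import TfTensorInput
from bentoml.frameworks.keras import KerasModelArtifact

@bentoml.env(infer_pip_packages=True)
@bentoml.artifacts([KerasModelArtifact('model')])
class KerasClassifier(bentoml.BentoService):

    # TfTensorInput requires batch=True (see the 0.9.0 notes below)
    @bentoml.api(input=TfTensorInput(), batch=True)
    def predict(self, tensor):
        return self.artifacts.model.predict(tensor)

# assuming `keras_model` is a trained tf.keras model:
# svc = KerasClassifier()
# svc.pack('model', keras_model)
# svc.save()
```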

* Automated AWS EC2 deployment #1160, a massive 3800+ line PR by mayurnewase
* Create an auto-scaling endpoint on AWS EC2 with just one command; see the documentation here: https://docs.bentoml.org/en/latest/deployment/aws_ec2.html
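
As a rough sketch of what that one command looks like (the deployment name, bento tag, and `-b` flag are our assumptions; see the linked documentation for the exact invocation):

```bash
$ bentoml ec2 deploy my-ec2-deployment -b IrisClassifier:20201202154246_C8DC0A
```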

* Add MXNet Gluon support #1264 by liusy182
* Enable input & output data capture in Sagemaker deployment #1189 by j-hartshorn
* Faster docker image rebuild when only model artifacts are updated #1199
* Support URL location prefix in yatai-service gRPC/Web server #1063 #1184
* Support relative path for showing Swagger UI page in the model server #1207
* Add onnxruntime gpu as supported backend #1213
* Add option to disable swagger UI #1244 by liusy182
* Add label and artifact metadata display to yatai web ui #1249
![](https://user-images.githubusercontent.com/670949/99325418-2dadc600-282b-11eb-963c-0ae2728d812f.png)
* Make bentoml module executable #1274

```bash
python -m bentoml <subcommand>
```

* Allow setting micro batching parameters from CLI #1282 by jsemric

```bash
bentoml serve-gunicorn --enable-microbatch --mb-max-latency 3333 --mb-max-batch-size 3333 IrisClassifier:20201202154246_C8DC0A
```


Bug fixes
* Allow deleting a bento that was previously deleted with the same name and version #1211
* Construct docker API client from env #1233
* Pin down SQLAlchemy version #1238
* Avoid potential TypeError in batching server #1252
* Fix inference API docstring override by default #1302

Documentation
* Add examples of queries with requests for adapters #1202
* Update import paths to reflect the fastai2 -> fastai rename #1227
* Add model artifact metadata information to the core concepts page #1259
* Update adapters.rst to include new input adapters #1269
* Update quickstart guide #1262
* Docs for gluon support #1271
* Fix curl commands for posting files in input adapter docstrings #1307

Internal, CI, and Tests
* Fix installing bundled pip dependencies in Azure and Sagemaker deployments #1214 (affects BentoML developers only)
* Add integration test for FastText #1221
* Add integration test for spaCy #1236
* Add integration test for models using tf native API #1245
* Add tests for run_api_server_docker_container microbatch #1247
* Add integration test for LightGBM #1243
* Update Yatai web UI node dependencies version #1256
* Add integration test for bento management #1263
* Add yatai server integration tests to GitHub CI #1265
* Update e2e yatai service tests #1266
* Include additional information for EC2 test #1270
* Refactor CI for TensorFlow2 #1277
* Make TensorFlow integration tests run faster #1278
* Fix overridden protobuf version in CI #1286
* Add integration test for tf1 #1285
* Refactor yatai service integration test #1290
* Refactor saved bundle loader #1291
* Fix flaky yatai service integration tests #1298
* Refine KerasModelArtifact & its integration test #1295
* Improve API server integration tests #1299
* Add integration tests for ragged_tensor #1303

Announcements
* We have started using the [Github Projects](https://github.com/bentoml/BentoML/projects) feature to track roadmap items for BentoML; you can find it here: https://github.com/bentoml/BentoML/projects/1
* We are hiring senior engineers and a lead developer advocate to join our team; let us know if you or someone you know might be interested 👉 contact@bentoml.ai
* Apologies for the long wait between the 0.9 and 0.10 releases; we are getting back to our bi-weekly release schedule now! We need help documenting new features, writing release notes, and QA-ing new releases before they go out, so let us know if you'd be interested in helping out!

Thank you everyone for contributing to this release! j-hartshorn withsmilo yubozhao bojiang changhw01 mayurnewase telescopic jackyzha0 pncnmnp kishore-ganesh rhbian liusy182 awalvie cathy-kim jsemric 🎉🎉🎉

0.9.2

Bug fixes

* Fixed retrieving BentoService from S3/MinIO-based storage #1174 (https://github.com/bentoml/BentoML/pull/1175)
* Fixed an issue when using the inference API function's optional parameter `tasks` / `task` #1171

0.9.1


0.9.0

What's New

TL;DR:
* New input/output adapter design that lets users choose between batch and non-batch implementations
* Sped up the API model server docker image build time
* Changed the recommended import path of artifact classes; artifact classes should now be imported from `bentoml.frameworks.*`
* Improved Python pip package management
* Huggingface/Transformers support!!
* Manage packaged models with the labels API
* Support GCS (Google Cloud Storage) as the model storage backend in YataiService
* Current roadmap for feedback: https://github.com/bentoml/BentoML/discussions/1128


New Input/Output adapter design

A massive refactoring of BentoML's inference API and input/output adapter design, led by bojiang with help from akainth015.

**BREAKING CHANGE:** API definition now requires declaring if it is a batch API or non-batch API:

```python
from typing import List

from bentoml import env, artifacts, api, BentoService
from bentoml.adapters import JsonInput
from bentoml.frameworks.sklearn import SklearnModelArtifact
from bentoml.types import JsonSerializable  # type annotations are optional

@env(infer_pip_packages=True)
@artifacts([SklearnModelArtifact('classifier')])
class MyPredictionService(BentoService):

    @api(input=JsonInput(), batch=True)
    def predict_batch(self, parsed_json_list: List[JsonSerializable]):
        results = self.artifacts.classifier([j['text'] for j in parsed_json_list])
        return results

    @api(input=JsonInput())  # batch=False by default
    def predict_non_batch(self, parsed_json: JsonSerializable):
        results = self.artifacts.classifier([parsed_json['text']])
        return results[0]
```


For APIs with `batch=True`, the user-defined API function is required to process a list of input items at a time and return a list of results of the same length. By contrast, `api` defaults to `batch=False`, which processes one input item at a time. Implementing a batch API allows your workload to benefit from BentoML's adaptive micro-batching mechanism when serving online traffic, and also speeds up offline batch inference jobs. We recommend using `batch=True` if performance and throughput are a concern. Non-batch APIs are usually easier to implement, good for quick POCs and simple use cases, and suited to deploying on serverless platforms such as AWS Lambda, Azure Functions, and Google KNative.

Read more about this change and example usage here: https://docs.bentoml.org/en/latest/api/adapters.html

**BREAKING CHANGE:** For `DataframeInput` and `TfTensorInput` users, it is now required to add `batch=True`.

`DataframeInput` and `TfTensorInput` are special input types that only support accepting a batch of inputs at a time.
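
For illustration, a minimal sketch of a DataFrame-based API under the new requirement (the service, artifact, and model names here are illustrative, not from the release notes):

```python
import pandas as pd

from bentoml import env, artifacts, api, BentoService
from bentoml.adapters import DataframeInput
from bentoml.frameworks.sklearn import SklearnModelArtifact

@env(infer_pip_packages=True)
@artifacts([SklearnModelArtifact('model')])
class DataframeService(BentoService):

    # batch=True is now mandatory for DataframeInput
    @api(input=DataframeInput(), batch=True)
    def predict(self, df: pd.DataFrame):
        # receives a batch of rows as a single DataFrame
        return self.artifacts.model.predict(df)
```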

Input data validation while handling batch input

When the API function receives a list of inputs, it is now possible to reject a subset of the input data and return an error code to the client if the input data is invalid or malformed. Users can do this via the `InferenceTask.discard` API; here's an example:

```python
from typing import List

from bentoml import env, artifacts, api, BentoService
from bentoml.adapters import JsonInput
from bentoml.frameworks.sklearn import SklearnModelArtifact
from bentoml.types import JsonSerializable, InferenceTask  # type annotations are optional

@env(infer_pip_packages=True)
@artifacts([SklearnModelArtifact('classifier')])
class MyPredictionService(BentoService):

    @api(input=JsonInput(), batch=True)
    def predict_batch(self, parsed_json_list: List[JsonSerializable], tasks: List[InferenceTask]):
        model_input = []
        for json, task in zip(parsed_json_list, tasks):
            if "text" in json:
                model_input.append(json['text'])
            else:
                task.discard(http_status=400, err_msg="input json must contain `text` field")

        results = self.artifacts.classifier(model_input)

        return results
```


The number of tasks discarded plus the length of the returned results array should equal the length of the input list; this allows BentoML to match the results back to the tasks that have not been discarded. For example, if 10 inputs arrive and 2 tasks are discarded, the API function should return 8 results.

The new design also allows fine-grained control of the HTTP response, CLI inference job output, etc., e.g.:

```python
import bentoml
from bentoml.adapters import JsonInput
from bentoml.types import JsonSerializable, InferenceTask, InferenceResult, InferenceError  # type annotations are optional

class MyService(bentoml.BentoService):

    @bentoml.api(input=JsonInput(), batch=False)
    def predict(self, parsed_json: JsonSerializable, task: InferenceTask) -> InferenceResult:
        if task.http_headers['Accept'] == "application/json":
            predictions = self.artifacts.model.predict([parsed_json])
            return InferenceResult(
                data=predictions[0],
                http_status=200,
                http_headers={"Content-Type": "application/json"},
            )
        else:
            return InferenceError(err_msg="application/json output only", http_status=400)
```


Or when `batch=True`:

```python
from typing import List

import bentoml
from bentoml.adapters import JsonInput
from bentoml.types import JsonSerializable, InferenceTask, InferenceResult, InferenceError  # type annotations are optional

class MyService(bentoml.BentoService):

    @bentoml.api(input=JsonInput(), batch=True)
    def predict(self, parsed_json_list: List[JsonSerializable], tasks: List[InferenceTask]) -> List[InferenceResult]:
        rv = []
        predictions = self.artifacts.model.predict(parsed_json_list)
        for task, prediction in zip(tasks, predictions):
            if task.http_headers['Accept'] == "application/json":
                rv.append(InferenceResult(
                    data=prediction,
                    http_status=200,
                    http_headers={"Content-Type": "application/json"},
                ))
            else:
                rv.append(InferenceError(err_msg="application/json output only", http_status=400))
                # alternatively: task.discard(err_msg="application/json output only", http_status=400)
        return rv
```


Other adapter changes:

* Added 3 base adapters for implementing advanced adapters: FileInput, StringInput, MultiFileInput

* Implementing new adapters that support micro-batching is a lot easier now: https://github.com/bentoml/BentoML/blob/v0.9.0.pre/bentoml/adapters/base_input.py

* Per inference task prediction log #1089

* More adapters now support launching batch inference jobs from the BentoML CLI `run` command; see the API reference for detailed examples: https://docs.bentoml.org/en/latest/api/adapters.html

Docker Build Improvements

* Optimize docker image build time (#1081), kudos to ZeyadYasser!!
* Per Python minor version base image to speed up image building #1101 #1096, thanks gregd33!!
* Add "latest" tag to all user-facing docker base images (#1046)

Improved pip package management

Setting pip install options in BentoService `env` specification

As suggested in https://github.com/bentoml/BentoML/issues/1036#issuecomment-682179282. Thanks danield137 for suggesting the `pip_extra_index_url` option!

```python
@env(
    auto_pip_dependencies=True,
    pip_index_url='my_pypi_host_url',
    pip_trusted_host='my_pypi_host_url',
    pip_extra_index_url='extra_pypi_index_url',
)
@artifacts([SklearnModelArtifact('model')])
class IrisClassifier(BentoService):
    ...
```


**BREAKING CHANGE:** Due to this change, we have removed the previous docker build args PIP_INDEX_URL and PIP_TRUSTED_HOST, as they could conflict with settings in the base image #1036


* Support passing a conda environment.yml file to `env`, as suggested in #725 (https://github.com/bentoml/BentoML/issues/725); see the sketch after this list

* When a version is not specified in the pip_packages list, it is pinned to the version found in the current Python session. The same is now done for packages added from adapter and artifact classes

* Support specifying package requirement ranges now, e.g.:

```python
@env(pip_packages=["abc==1.3", "foo>1.2,<=1.4"])
```

It can be any pip version requirement specifier: https://pip.pypa.io/en/stable/reference/pip_install/#requirement-specifiers

* Renamed `pip_dependencies` to `pip_packages` and `auto_pip_dependencies` to `infer_pip_packages`; the old API still works but will eventually be deprecated.
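
A minimal sketch of the conda environment.yml support mentioned above (the `conda_env_yml_file` parameter name is our assumption about this release's `env` API; check the docs for the exact name):

```python
from bentoml import env, artifacts, BentoService
from bentoml.frameworks.sklearn import SklearnModelArtifact

# assumption: conda_env_yml_file points at a conda environment.yml file
# whose dependencies get installed into the model server environment
@env(conda_env_yml_file='./environment.yml')
@artifacts([SklearnModelArtifact('model')])
class IrisClassifier(BentoService):
    ...
```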

GCS support in YataiService

Added Google Cloud Storage (GCS) support in YataiService as the storage backend. This is an alternative to AWS S3, MinIO, or the POSIX file system. #1017 - thank you Korusuke and PrabhanshuAttri for creating the GCS support!
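
A sketch of pointing YataiService at a GCS bucket, assuming the same `--repo-base-url` flag used for S3-backed storage also accepts a `gs://` URI (our assumption; see the YataiService docs):

```bash
# assumption: the bento repository base URL can be a GCS bucket
$ bentoml yatai-service-start --repo-base-url gs://my-bento-bucket/
```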

YataiService Labels API for model management

Manage packaged models in YataiService with the labels API, implemented in #1064

1. Add labels via `BentoService.save`:

```python
svc = MyBentoService()
svc.save(labels={'my_key': 'my_value', 'test': 'passed'})
```

2. Added label queries for CLI commands (a sketch follows this list):
* `bentoml get BENTO_NAME`, `bentoml list`, `bentoml deployment list`, `bentoml lambda list`, `bentoml sagemaker list`, `bentoml azure-functions list`

* Label queries support the `=`, `!=`, `In`, `NotIn`, `Exists`, `DoesNotExist` operators
  - e.g. `key1=value1, key2!=value2, env In (prod, staging), Key Exists, Another_key DoesNotExist`
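
For instance, filtering bentos by label from the CLI (assuming the query string is passed via a `--labels` option, which is our reading of this release's CLI changes):

```bash
$ bentoml list --labels "env In (prod, staging), framework Exists"
```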

*Simple key/value label selector*
<img width="1329" alt="Screen Shot 2020-09-03 at 5 38 21 PM" src="https://user-images.githubusercontent.com/670949/92186634-4867c580-ee0c-11ea-8dc8-55c28d6a5130.png">

*Use Exists operator*
<img width="1123" alt="Screen Shot 2020-09-03 at 5 40 57 PM" src="https://user-images.githubusercontent.com/670949/92186755-a3012180-ee0c-11ea-8f68-cf30e95ba482.png">

*Use DoesNotExist operator*
<img width="1327" alt="Screen Shot 2020-09-03 at 5 41 41 PM" src="https://user-images.githubusercontent.com/670949/92186785-bc09d280-ee0c-11ea-9465-a10a8411612a.png">

*Use In operator*
<img width="1348" alt="Screen Shot 2020-09-03 at 5 48 42 PM" src="https://user-images.githubusercontent.com/670949/92187108-b6f95300-ee0d-11ea-9744-45ed182d3ab1.png">

*Use multiple label query*
<img width="1356" alt="Screen Shot 2020-09-03 at 7 07 23 PM" src="https://user-images.githubusercontent.com/670949/92191498-caf68200-ee18-11ea-9679-9f4ea06a5484.png">

3. Roadmap - add web UI for filtering and searching with labels API

New framework support: Huggingface/Transformers

#1090 #1094 - thanks vedashree29296 for contributing this!

Usage & docs: https://docs.bentoml.org/en/stable/frameworks.html#transformers
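
For reference, a minimal sketch of packaging a Transformers model, based on our reading of the linked docs (the artifact name, model choice, and pip packages are illustrative):

```python
import bentoml
from bentoml.adapters import JsonInput
from bentoml.frameworks.transformers import TransformersModelArtifact

@bentoml.env(pip_packages=["transformers", "torch"])
@bentoml.artifacts([TransformersModelArtifact("gptModel")])
class TransformerService(bentoml.BentoService):

    @bentoml.api(input=JsonInput(), batch=False)
    def predict(self, parsed_json):
        src_text = parsed_json.get("text")
        model = self.artifacts.gptModel.get("model")
        tokenizer = self.artifacts.gptModel.get("tokenizer")
        input_ids = tokenizer.encode(src_text, return_tensors="pt")
        output = model.generate(input_ids, max_length=50)
        return tokenizer.decode(output[0], skip_special_tokens=True)

# packing, assuming a gpt2 model from the transformers hub:
# from transformers import AutoModelWithLMHead, AutoTokenizer
# svc = TransformerService()
# svc.pack("gptModel", {
#     "model": AutoModelWithLMHead.from_pretrained("gpt2"),
#     "tokenizer": AutoTokenizer.from_pretrained("gpt2"),
# })
```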


Bug Fixes:

* Fixed #1030 - bentoml serve fails when packaged on Windows and deployed on Linux #1044
* Handle missing region during SageMaker deployment updates #1049

Internal & Testing:

* Re-organize artifacts related modules #1082, #1085
* Refactoring & improvements around dependency management #1084, #1086
* [TEST/CI] Add tests covering XgboostModelArtifact (#1079)
* [TEST/CI] Fix AWS moto related unit tests (#1077)
* Lock SQLAlchemy-utils version (#1078)

Contributors of 0.9.0 release

Thank you all for contributing to this release!! danield137 ericmand ssakhavi aviaviavi dinakar29 umihui vedashree29296 joerg84 gregd33 mayurnewase narennadig akainth015 yubozhao bojiang

0.8.6

What's New

Yatai service helm chart for Kubernetes deployment [#945](https://github.com/bentoml/BentoML/pull/945) by jackyzha0

The helm chart offers a convenient way to deploy YataiService to a Kubernetes cluster:

```bash
# Download the BentoML source
$ git clone https://github.com/bentoml/BentoML.git
$ cd BentoML

# 1. Install an ingress controller if your cluster doesn't already have one; the Yatai helm chart installs nginx-ingress by default:
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx && helm dependencies build helm/YataiService

# 2. Install the YataiService helm chart to the Kubernetes cluster:
$ helm install -f helm/YataiService/values/postgres.yaml yatai-service YataiService

# 3. To uninstall the YataiService from your cluster:
$ helm uninstall yatai-service
```

jackyzha0 also added a great tutorial about YataiService helm chart deployment. You can find the guide at https://docs.bentoml.org/en/latest/guides/helm.html

[Experimental] AnnotatedImageInput adapter for image plus additional JSON data [#973](https://github.com/bentoml/BentoML/pull/973) by ecrows

The AnnotatedImageInput adapter is designed for the common use case of image input accompanied by additional information, such as object detection bounding boxes or segmentation masks, for prediction. This new adapter significantly improves the developer experience over the previous workaround solution.

**Warning:** Input adapters are currently under refactoring [1002](https://github.com/bentoml/BentoML/issues/1002), we may change the API for AnnotatedImageInput in future releases.

```python
import bentoml
from bentoml.adapters import AnnotatedImageInput
from bentoml.artifact import TensorflowSavedModelArtifact

CLASS_NAMES = ['cat', 'dog']

@bentoml.artifacts([TensorflowSavedModelArtifact('classifier')])
class PetClassification(bentoml.BentoService):

    @bentoml.api(input=AnnotatedImageInput)
    def predict(self, image, annotations):
        cropped_pets = some_pet_finder(image, annotations)
        results = self.artifacts.classifier.predict(cropped_pets)
        return [CLASS_NAMES[r] for r in results]
```


Making a request using `curl`

```bash
$ curl -F image=@image.png -F annotations=@annotations.json http://localhost:5000/predict
```


You can find the current API reference at https://docs.bentoml.org/en/latest/api/adapters.html#annotatedimageinput

Improvements:

* [#992](https://github.com/bentoml/BentoML/pull/992) Make the prediction and feedback loggers log to console by default - jackyzha0
* [#952](https://github.com/bentoml/BentoML/pull/952) Add a tutorial for deploying BentoService to Azure SQL Server to the documentation - yashika51

Bug Fixes:

* [#987](https://github.com/bentoml/BentoML/pull/987) & [#991](https://github.com/bentoml/BentoML/pull/945) Better AWS IAM role handling for SageMaker deployments - dinakar29
* [#995](https://github.com/bentoml/BentoML/pull/995) Fix an edge case encountering RecursionError when running the gunicorn server with `--enable-microbatch` on macOS - bojiang
* [#1012](https://github.com/bentoml/BentoML/pull/1012) Fix ruamel.yaml missing issue when using containerized BentoService with conda - parano

Internal & Testing:

* [#983](https://github.com/bentoml/BentoML/pull/983) Move CI tests to GitHub Actions

Contributors:

Thank you, everyone, for contributing to this exciting release!

bojiang jackyzha0 ecrows dinakar29 yashika51 akainth015
