BentoML

0.12.1

Detailed Changelog: https://github.com/bentoml/BentoML/compare/v0.12.0...v0.12.1

PaddlePaddle Support

We are thrilled to announce that BentoML now fully supports the [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) framework from Baidu. Users can easily serve their own models created with Paddle via [Paddle Inference](https://paddle-inference.readthedocs.io/en/latest/index.html) and serve pre-trained models from [PaddleHub](https://github.com/PaddlePaddle/PaddleHub), which contains over 300 production-grade pre-trained models.

Tutorial notebooks for using BentoML with PaddlePaddle:
* Paddle Inference: https://github.com/bentoml/gallery/blob/master/paddlepaddle/LinearRegression/LinearRegression.ipynb
* PaddleHub: https://github.com/bentoml/gallery/blob/master/paddlehub/image-segmentation/image-segmentation.ipynb

See the announcement and release note from PaddleHub: https://github.com/PaddlePaddle/PaddleHub/releases/tag/v2.1.0

Thank you @cqvu and @deehrlic for contributing this feature to BentoML.

Bug fixes
* 1532 Fix zipkin module not found exception
* 1557 Fix aiohttp import issue on Windows
* 1566 Fix bundle load in docker when using the requirement_txt_file env parameter

0.12.0

Detailed Changelog: https://github.com/bentoml/BentoML/compare/v0.11.0...v0.12.0

New Features

- **Breaking Change:** Default Model Worker count is set to **one** 1454
- Please use the `--worker` CLI argument to specify the number of workers for your deployment (see the sketch below)
- For heavy production workloads, we recommend experimenting with different worker counts and benchmarking your BentoML API server on your target hardware to better understand model server performance
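For illustration, a minimal sketch of overriding the new default (the `IrisClassifier:latest` tag is a placeholder, and the exact flag spelling can vary between versions; `bentoml serve-gunicorn --help` lists the supported options):

```bash
# Serve a saved bundle with an explicit worker count instead of the default of 1.
# The --workers spelling is an assumption here; verify with `bentoml serve-gunicorn --help`.
bentoml serve-gunicorn IrisClassifier:latest --workers 2
```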

- **Breaking Change:** Micro-batching layer (Marshal Server) is now enabled by default 1498
- For Inference APIs defined with `batch=True`, this enables micro-batching behavior when serving. Users can disable it with the `--disable-microbatch` flag (see the sketch below)
- For Inference APIs with `batch=False`, API requests are now queued in Marshal and then forwarded to the model backend server
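Likewise, a minimal sketch of opting out of micro-batching when serving (the bundle tag is a placeholder; flag availability depends on the serve command and version):

```bash
# Serve with the micro-batching (Marshal) layer turned off
bentoml serve-gunicorn IrisClassifier:latest --disable-microbatch
```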

- **New:** Use non-root user in BentoML's API server docker image

- **New:** API/CLI for bulk delete of BentoML bundles in Yatai 1313

- Easier dependency management for PyPI and conda
- Support all pip install options via a user-provided `requirements.txt` file
- **Breaking Change:** when `requirements_txt_file` option is in use, other pip package options will be ignored
- `conda_override_channels` option for using an explicit conda channel for conda dependencies (see the sketch below): https://docs.bentoml.org/en/latest/concepts.html#conda-packages
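A minimal sketch of the conda options described in the linked concepts guide; the channel and package names are placeholders, and the `conda_channels`/`conda_dependencies` parameter names are assumptions here:

```python
import bentoml

@bentoml.env(
    conda_channels=['conda-forge'],   # assumed parameter name; the explicit channel to use
    conda_dependencies=['lightgbm'],  # assumed parameter name; packages resolved via conda
    conda_override_channels=True,     # resolve only from the channel(s) listed above
)
class MyService(bentoml.BentoService):
    ...
```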

---


- Better support for pip install options and remote Python dependencies 1421

1. Let BentoML do it for you:

```python
@bentoml.env(infer_pip_packages=True)
```


2. Use the existing `pip_packages` API to specify a list of dependencies:

```python
@bentoml.env(
    pip_packages=[
        'scikit-learn',
        'pandas @ https://github.com/pypa/pip/archive/1.3.1.zip',
    ]
)
```


3. Use a `requirements.txt` file to specify all dependencies:

```python
@bentoml.env(requirements_txt_file='./requirements.txt')
```


In the `./requirements.txt` file, all pip install options can be used:

```
# These requirements were autogenerated by pipenv
# To regenerate from the project's Pipfile, run:
#
#    pipenv lock --requirements
#

-i https://pypi.org/simple

scikit-learn==0.20.3
aws-sam-cli==0.33.1
psycopg2-binary
azure-cli
bentoml
pandas @ https://github.com/pypa/pip/archive/1.3.1.zip

https://[username[:password]@]pypi.company.com/simple
https://user:he%2F%2Fo@pypi.company.com

git+https://myvcs.com/some_dependency@sometag#egg=SomeDependency
```


- API/CLI for bulk delete 1313

CLI command for delete:

```bash
# Delete all saved Bento with a specific name
bentoml delete --name IrisClassifier
bentoml delete --name IrisClassifier -y  # delete without confirming with the user
bentoml delete --name IrisClassifier --yatai-url=yatai.mycompany.com  # delete in remote Yatai

# Delete all saved Bento matching the given label selectors
bentoml delete --labels "env=dev"
bentoml delete --labels "env=dev, user=foobar"
bentoml delete --labels "key1=value1, key2!=value2, key3 In (value3, value3a), key4 DoesNotExist"

# Delete multiple saved Bento by their name:version tag
bentoml delete --tag "IrisClassifier:v1, MyService:v3, FooBar:20200103_Lkj81a"

# Delete all
bentoml delete --all
```


Yatai Client Python API:

```python
yc = get_yatai_client()  # local Yatai
yc = get_yatai_client('remote.yatai.com:50051')  # remote Yatai

yc.repository.delete(prune, labels, bento_tag, bento_name, bento_version, require_confirm)
"""
Params:
    prune: boolean, set True to delete all Bento services
    bento_tag: Bento tag
    labels: string, label selector to filter Bento services to delete
    bento_name: string
    bento_version: string
    require_confirm: boolean, ask the user to confirm interactively in the CLI
"""
```


- 1334 Customize route of an API endpoint

```python
@env(infer_pip_packages=True)
@artifacts([...])
class MyPredictionService(BentoService):

    @api(route="/my_url_route/foo/bar", batch=True, input=DataframeInput())
    def predict(self, df):
        # instead of "/predict", the URL for this API endpoint will be "/my_url_route/foo/bar"
        ...
```


- 1416 Support custom authentication header in Yatai gRPC server
- 1284 Add health check endpoint to Yatai web server
- 1409 Fix Postgres disconnect issue with Yatai server

0.11.0

New Features

Detailed Changelog: https://github.com/bentoml/BentoML/compare/v0.10.1...v0.11.0

Interactively start and stop Model API Server during development

A new API was introduced in 0.11.0 for users to start and test an API server while developing their BentoService class:
```python
service = MyPredictionService()
service.pack("model", model)

# Start an API model server in the background
service.start_dev_server(port=5000)

# Send a test request to the server, or open the URL in a browser
requests.post(f'http://localhost:5000/predict', data=review, headers=headers)

# Stop the dev server
service.stop_dev_server()

# Modify code and repeat ♻️
```


Here's an [example notebook](https://github.com/bentoml/gallery/blob/48552c6f569ff1d8a18b1445c641cbaff4d6f8e8/tensorflow/imdb/imdb_text_classification.ipynb) showcasing this new feature.


More PyTorch Ecosystem Integrations

* PyTorch JIT traced model support 1293
* PyTorch Lightning support 1293
* Detectron2 support 1272

Logging is fully customizable now!

Users can now use a single YAML file to customize BentoML's logging behavior, including the prediction and feedback logs.

https://docs.bentoml.org/en/latest/guides/logging.html

Two new configs are also introduced for quickly turning on/off console logging and file logging:

https://github.com/bentoml/BentoML/blob/v0.11.0/bentoml/configuration/default_bentoml.cfg#L29

```
[logging]
console_logging_enabled = true
file_logging_enabled = true
```

If you are not sure how these configs work, here's a new guide to how BentoML's configuration works: https://docs.bentoml.org/en/latest/guides/configuration.html

More model management APIs

All model management CLI commands and Yatai client Python APIs now support the `yatai_url` parameter, making it easy to interact with a remote YataiService and centrally manage all your BentoML packaged ML models (see the sketch after the screenshot below):

<img width="946" alt="Screen Shot 2021-01-13 at 10 39 49 PM" src="https://user-images.githubusercontent.com/489344/104553880-400c6c00-55f0-11eb-9d13-023c9c84d6a6.png">
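As a quick illustration (the address, label selector, and version tag below are placeholders), the same repository calls used against the local store can be pointed at a remote YataiService:

```python
from bentoml.yatai.client import get_yatai_client

# Connect to a remote YataiService instead of the local model store
yc = get_yatai_client('remote.yatai.mycompany.com:50051')

# The usual repository operations now run against the remote service
yc.repository.list(labels='env=prod')
yc.repository.get('IrisClassifier:20210113_ABC123')
```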

Support bundling zipimport modules 1261

Bundling `zipmodules` with BentoML is now possible with this newly added API:
```python
@bentoml.env(zipimport_archives=['nested_zipmodule.zip'])
@bentoml.artifacts([SklearnModelArtifact('model')])
class IrisClassifier(bentoml.BentoService):
    ...
```

BentoML also manages the `sys.path` when loading a saved BentoService with zipimport archives, making sure the zip modules can be imported in user code.
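A small sketch of the loading side (the bundle path is a placeholder): because the saved bundle records its `zipimport_archives`, the archive is importable again after loading:

```python
import bentoml

# Load a saved bundle that was built with zipimport_archives=['nested_zipmodule.zip'];
# BentoML adds the bundled zip archive to sys.path so the service code's own
# `import nested_zipmodule` statements resolve when the bundle is loaded.
svc = bentoml.load('./saved_bundle/IrisClassifier')  # placeholder path
```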


Announcements

Monthly Community Meeting

Thank you again to everyone who came to the first community meeting this week! If you have not been added to the community meeting calendar yet, make sure to join it here: https://github.com/bentoml/BentoML/discussions/1396

Hiring

The BentoML team is hiring for multiple Software Engineer roles to help build the future of this open-source project and the business behind it. We are looking for someone with experience in one of the following areas: ML infrastructure, backend systems, data engineering, SRE, full-stack, and technical writing. Feel free to pass along the message to anyone you know who might be interested; we'd really appreciate it!

0.10.1

Bug Fix

This is a minor release containing one bug fix for issue 1318, where the docker build process for the BentoML API model server was broken due to an error in the init shell script. The issue has been fixed in 1319 and is included in this release.

Our integration tests did not catch this issue because, in the development and CI/test environments, we bundle the "dirty" BentoML installation into the generated Dockerfile, whereas the production release of BentoML uses the version installed from PyPI. The issue in 1318 was an edge case that could only be triggered with the released version of BentoML and the published docker image. We are investigating ways to run all our integration tests against a preview release before publishing a final release, as part of our QA process, which should help prevent this type of bug from reaching future releases.

0.10.0

New Features & Improvements

* Improved Model Management APIs 1126 1241 by yubozhao
Python APIs for model management:
```python
from bentoml.yatai.client import get_yatai_client

bento_service.save()  # Save and register the bento service locally

# Push to save bento service to remote yatai service
yc = get_yatai_client('http://staging.yatai.mycompany.com:50050')
yc.repository.push(
    f'{bento_service.name}:{bento_service.version}',
)

# Pull bento service from remote yatai server and register locally
yc = get_yatai_client('http://staging.yatai.mycompany.com:50050')
yc.repository.pull(
    'bento_name:version',
)

# Delete in local yatai
yatai_client = get_yatai_client()
yatai_client.repository.delete('name:version')

# Delete in batch by labels
yatai_client = get_yatai_client()
yatai_client.prune(labels='cicd=failed, framework In (sklearn, xgboost)')

# Get bento service metadata
yatai_client.repository.get('bento_name:version', yatai_url='http://staging.yatai.mycompany.com:50050')

# List bento services by label
yatai_client.repository.list(labels='label_key In (value1, value2), label_key2 Exists', yatai_url='http://staging.yatai.mycompany.com:50050')
```

New CLI commands for model management:
Push local bento service to remote yatai service:

```bash
$ bentoml push bento_service_name:version --yatai-url http://staging.yatai.mycompany.com:50050
```

Added `--yatai-url` option for the following CLI commands to interact with remote yatai service directly:

```
bentoml get
bentoml list
bentoml delete
bentoml retrieve
bentoml run
bentoml serve
bentoml serve-gunicorn
bentoml info
bentoml containerize
bentoml open-api-spec
```


* Model Metadata API 1179 shoutout to jackyzha0 for designing and building this feature!
Ability to save additional metadata for any artifact type, e.g.:
```python
model_metadata = {
    'k1': 'v1',
    'job_id': 'ABC',
    'score': 0.84,
    'datasets': ['A', 'B'],
}
svc.pack("model", test_model, metadata=model_metadata)

svc.save_to_dir(str(tmpdir))
loaded_service = bentoml.load(str(tmpdir))
print(loaded_service.artifacts.get('model').metadata)
```


* Improved TensorFlow support, by bojiang
* Make the packed model behave the same as a model that has been saved and loaded again 1231
* TfTensorOutput raises TypeError when micro-batching is enabled 1251
* Opt auto casting of TfSavedModelArtifact & clearer feedback
* Improve KerasModelArtifact to work with tf2 1295

* Automated AWS EC2 deployment 1160, a massive 3800+ line PR by mayurnewase
* Create an auto-scaling endpoint on AWS EC2 with just one command; see the documentation here: https://docs.bentoml.org/en/latest/deployment/aws_ec2.html

* Add MXNet Gluon support 1264 by liusy182
* Enable input & output data capture in Sagemaker deployment 1189 by j-hartshorn
* Faster docker image rebuild when only model artifacts are updated 1199
* Support URL location prefix in yatai-service gRPC/Web server 1063 1184
* Support relative path for showing Swagger UI page in the model server 1207
* Add onnxruntime gpu as supported backend 1213
* Add option to disable swagger UI 1244 by liusy182
* Add label and artifact metadata display to yatai web ui 1249
![](https://user-images.githubusercontent.com/670949/99325418-2dadc600-282b-11eb-963c-0ae2728d812f.png)
* Make bentoml module executable 1274

```
python -m bentoml <subcommand>
```

* Allow setting micro batching parameters from CLI 1282 by jsemric

```
bentoml serve-gunicorn --enable-microbatch --mb-max-latency 3333 --mb-max-batch-size 3333 IrisClassifier:20201202154246_C8DC0A
```


Bug fixes
* Allow deleting bento that was previously deleted with the same name and version 1211
* Construct docker API client from env 1233
* Pin-down SqlAlchemy version 1238
* Avoid potential TypeError in batching server 1252
* Fix inference API docstring override by default 1302

Documentation
* Add examples of queries with requests for adapters 1202
* Update import paths to reflect fastai2->fastai rename 1227
* Add model artifact metadata information to the core concept page 1259
* Update adapters.rst to include new input adapters 1269
* Update quickstart guide 1262
* Docs for gluon support 1271
* Fix CURL commands for posting files in input adapters doc string 1307

Internal, CI, and Tests
* Fix installing bundled pip dependencies in Azure and Sagemaker deployments 1214 (affects bentoml developers only)
* Add Integration test for Fasttext 1221
* Add integration test for spaCy 1236
* Add integration test for models using tf native API 1245
* Add tests for run_api_server_docker_container microbatch 1247
* Add integration test for LightGBM 1243
* Update Yatai web ui node dependencies version 1256
* Add integration test for bento management 1263
* Add yatai server integration tests to Github CI 1265
* Update e2e yatai service tests 1266
* Include additional information for EC2 test 1270
* Refactor CI for TensorFlow2 1277
* Make tensorflow integration tests run faster 1278
* Fix overridden protobuf version in CI 1286
* Add integration test for tf1 1285
* Refactor yatai service integration test 1290
* Refactor Saved Bundle Loader 1291
* Fix flaky yatai service integration tests 1298
* Refine KerasModelArtifact & its integration test 1295
* Improve API server integration tests 1299
* Add integration tests for ragged_tensor 1303

Announcements
* We have started using [Github Projects](https://github.com/bentoml/BentoML/projects) feature to track roadmap items for BentoML, you can find it here: https://github.com/bentoml/BentoML/projects/1
* We are hiring senior engineers and a lead developer advocate to join our team, let us know if you or someone you know might be interested 👉 contact@bentoml.ai
* Apologies for the long wait between the 0.9 and 0.10 releases; we are getting back to our bi-weekly release schedule now! We need help with documenting new features, writing release notes, and QA-ing new releases before they go out. Let us know if you'd be interested in helping out!

Thank you everyone for contributing to this release! j-hartshorn withsmilo yubozhao bojiang changhw01 mayurnewase telescopic jackyzha0 pncnmnp kishore-ganesh rhbian liusy182 awalvie cathy-kim jsemric 🎉🎉🎉

0.9.2

Bug fixes

* Fixed retrieving BentoService from S3/MinIO based storage 1174 https://github.com/bentoml/BentoML/pull/1175
* Fixed an issue when using inference API function optional parameter `tasks` / `task` 1171

