BentoML

Latest version: v1.3.14


1.0.0a2

Not secure
This is a preview release for BentoML 1.0. Check out the quick start guide at https://docs.bentoml.org/en/latest/quickstart.html and the documentation at http://docs.bentoml.org

0.13.1

Not secure
Detailed Changelog: https://github.com/bentoml/BentoML/compare/v0.13.0...v0.13.1

Overview

BentoML 0.13.1 is a minor release containing mostly bug fixes and internal changes.

Changelog

* feat: SLO - API server max latency (1583)
* feat: Save OpenAPI Spec Json in BentoML bundle (1686)
* fix: BentoService loading user-provided env.yml file in runtime (1695)
* fix: BentoArtifact initialize with parameter issue (1696)
* fix: Use $BENTOML_PORT as Dockerfile default port (1706)
* fix: Fix missing s3_endpoint_url (1708)
* fix: Wrap request in sagemaker model_server (1716)

* refactor: Add deprecation warnings for deployment CLI commands (1718)
* refactor: Replace DI framework (1697)
* ci: PaddlePaddle integration test (1739)

0.13.0

Not secure
Detailed Changelog: https://github.com/bentoml/BentoML/compare/v0.12.1...v0.13.0

Overview

BentoML 0.13.0 is here! It's a release packed with lots of new features and important bug fixes. We encourage all users to upgrade.

❤️ Contributors

Thanks to aarnphm, andrewsi-z, larme, gregd33, bojiang, ssheng, henrywu2019, yubozhao, jack1902, illy, sencenan, parano, soeque1, elia-secchi, Shumpei-Kikuta, StevenReitsma, dsherry, AnvithaGadagi, and joaquincabezas for the contributions!

📢 Breaking Changes

* Configuration revamp
* The `bentoml config` CLI command has been fully deprecated in this release
* A new config system was introduced for configuring the BentoML API server, Yatai, tracing, and more (1543, 1595, 1615, 1667)
* Documentation: https://docs.bentoml.org/en/latest/guides/configuration.html
* Add --do-not-track CLI option and environment variable (1534)

* Deprecated the `--enable-microbatch` flag
* Use the `api(batch=True|False)` option to choose between a micro-batch enabled API and a non-batch API; see the sketch after this list
* For an API defined in batch mode that needs to serve online traffic without batching behavior, use `--mb-max-batch-size=1` instead
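
A minimal sketch of the two modes, assuming a scikit-learn artifact and JSON input; the service name, artifact, and handler bodies are illustrative, not from the release notes:

```python
import bentoml
from bentoml.adapters import JsonInput
from bentoml.frameworks.sklearn import SklearnModelArtifact

@bentoml.env(infer_pip_packages=True)
@bentoml.artifacts([SklearnModelArtifact('model')])
class MyService(bentoml.BentoService):

    # Micro-batch enabled: BentoML groups concurrent requests, so the
    # handler receives a list of inputs and must return a list of outputs
    @bentoml.api(input=JsonInput(), batch=True)
    def predict_batch(self, parsed_json_list):
        return self.artifacts.model.predict(
            [j['features'] for j in parsed_json_list]
        ).tolist()

    # Non-batch: one request in, one result out
    @bentoml.api(input=JsonInput(), batch=False)
    def predict(self, parsed_json):
        return self.artifacts.model.predict([parsed_json['features']])[0]
```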

🎉 New Features

* GPU Support
* GPU serving guide https://docs.bentoml.org/en/latest/guides/gpu_serving.html
* Added docker base image optimized for GPU serving (1653)

* Add support for EvalML (1603)

* Add support for ONNX-MLIR model (1545)

* Add full CORS support for bento API server (1576)

* Monitoring with Prometheus guide
* https://docs.bentoml.org/en/latest/guides/monitoring.html

* Optimize BentoML import delay (1608)

* Support upload/download for Yatai backed by local file system storage (1586)


🐞 Bug Fixes and Other Changes

* Add `ensure_ascii` option in JsonOutput (1578, 1580); see the sketch at the end of this list

* Fix StringInput with batch=True API (1581)

* Fix docs.json link in API server UI (1633)

* Fix uploading to remote path (1601)

* Fix label missing after uploading Bento to remote Yatai (1598)

* Fixes /metrics endpoints with serve-gunicorn (1666)

* Upgrade conda to 4.9.2 in default docker base image (1525)

* Internal:
* Add locking mechanism to yatai server (1567)
* refactor: YataiService Store Abstraction (1541)
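
A minimal sketch of the new `ensure_ascii` option mentioned above; the service name and echo handler are illustrative assumptions:

```python
import bentoml
from bentoml.adapters import JsonInput, JsonOutput

@bentoml.env(infer_pip_packages=True)
class EchoService(bentoml.BentoService):

    # With ensure_ascii=False, non-ASCII characters (e.g. "héllo") are
    # returned as-is in the JSON response instead of \uXXXX escapes
    @bentoml.api(input=JsonInput(), output=JsonOutput(ensure_ascii=False), batch=False)
    def echo(self, parsed_json):
        return parsed_json
```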

0.12.1

Not secure
Detailed Changelog: https://github.com/bentoml/BentoML/compare/v0.12.0...v0.12.1

PaddlePaddle Support

We are thrilled to announce that BentoML now fully supports the [PaddlePaddle](https://github.com/PaddlePaddle/Paddle) framework from Baidu. Users can easily serve their own models created with Paddle via [Paddle Inference](https://paddle-inference.readthedocs.io/en/latest/index.html) and serve pre-trained models from [PaddleHub](https://github.com/PaddlePaddle/PaddleHub), which contains over 300 production-grade pre-trained models.
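
As a taste of the integration, here is a minimal sketch; the artifact import path follows BentoML's framework modules, and the inference call is a placeholder (the tutorial notebooks below show the authoritative usage):

```python
import bentoml
from bentoml.adapters import DataframeInput
from bentoml.frameworks.paddle import PaddlePaddleModelArtifact

@bentoml.env(infer_pip_packages=True)
@bentoml.artifacts([PaddlePaddleModelArtifact('model')])
class PaddleLinearRegression(bentoml.BentoService):

    @bentoml.api(input=DataframeInput(), batch=True)
    def predict(self, df):
        # Placeholder: the exact call into the loaded Paddle predictor
        # depends on how the model was saved; see the notebooks below
        return self.artifacts.model.predict(df)
```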

Tutorial notebooks for using BentoML with PaddlePaddle:
* Paddle Inference: https://github.com/bentoml/gallery/blob/master/paddlepaddle/LinearRegression/LinearRegression.ipynb
* PaddleHub: https://github.com/bentoml/gallery/blob/master/paddlehub/image-segmentation/image-segmentation.ipynb

See the announcement and release note from PaddleHub: https://github.com/PaddlePaddle/PaddleHub/releases/tag/v2.1.0

Thank you cqvu and deehrlic for contributing this feature to BentoML.

Bug fixes
* 1532 Fix Zipkin module not found exception
* 1557 Fix aiohttp import issue on Windows
* 1566 Fix bundle load in docker when using the requirement_txt_file env parameter

0.12.0

Not secure
Detailed Changelog: https://github.com/bentoml/BentoML/compare/v0.11.0...v0.12.0

New Features

- **Breaking Change:** Default model worker count is set to **one** 1454
- Please use the `--workers` CLI argument to specify the number of workers for your deployment
- For heavy production workloads, we recommend experimenting with different worker counts and benchmarking your BentoML API server on your target hardware to better understand the model server's performance

- **Breaking Change:** Micro-batching layer (Marshal Server) is now enabled by default 1498
- For Inference APIs defined with `batch=True`, this enables micro-batching behavior when serving. Users can disable it with the `--disable-microbatch` flag
- For Inference APIs with `batch=False`, API requests are now queued in Marshal and then forwarded to the model backend server

- **New:** Use non-root user in BentoML's API server docker image

- **New:** API/CLI for bulk delete of BentoML bundle in Yatai 1313

- Easier dependency management for PyPI and conda
- Support all pip install options via a user-provided `requirements.txt` file
- **Breaking Change:** when `requirements_txt_file` option is in use, other pip package options will be ignored
- `conda_override_channels` option for using explicit conda channel for conda dependencies: https://docs.bentoml.org/en/latest/concepts.html#conda-packages

---


- Better support for pip install options and remote python dependencies 1421

1. Let BentoML do it for you:

```python
@bentoml.env(infer_pip_packages=True)
```


2. Use the existing `pip_packages` API to specify a list of dependencies:

```python
@bentoml.env(
    pip_packages=[
        'scikit-learn',
        'pandas @ https://github.com/pypa/pip/archive/1.3.1.zip',
    ]
)
```


3. Use a `requirements.txt` file to specify all dependencies:

```python
@bentoml.env(requirements_txt_file='./requirements.txt')
```


In the `./requirements.txt` file, all pip install options can be used:

```
# These requirements were autogenerated by pipenv
# To regenerate from the project's Pipfile, run:
#
#    pipenv lock --requirements
#

-i https://pypi.org/simple

scikit-learn==0.20.3
aws-sam-cli==0.33.1
psycopg2-binary
azure-cli
bentoml
pandas @ https://github.com/pypa/pip/archive/1.3.1.zip

https://[username[:password]@]pypi.company.com/simple
https://user:he%2F%2Fo@pypi.company.com

git+https://myvcs.com/some_dependency@sometag#egg=SomeDependency
```


- API/CLI for bulk delete 1313

CLI command for delete:

```bash
# Delete all saved Bentos with a specific name
bentoml delete --name IrisClassifier
bentoml delete --name IrisClassifier -y  # delete without confirming with the user
bentoml delete --name IrisClassifier --yatai-url=yatai.mycompany.com  # delete in a remote Yatai

# Delete all saved Bentos with specific labels
bentoml delete --labels "env=dev"
bentoml delete --labels "env=dev, user=foobar"
bentoml delete --labels "key1=value1, key2!=value2, key3 In (value3, value3a), key4 DoesNotExist"

# Delete multiple saved Bentos by their name:version tag
bentoml delete --tag "IrisClassifier:v1, MyService:v3, FooBar:20200103_Lkj81a"

# Delete all
bentoml delete --all
```


Yatai Client Python API:

```python
from bentoml.yatai.client import get_yatai_client

yc = get_yatai_client()  # local Yatai
yc = get_yatai_client('remote.yatai.com:50051')  # remote Yatai

yc.repository.delete(prune, labels, bento_tag, bento_name, bento_version, require_confirm)

"""
Params:
    prune: boolean, set True to delete all Bento services
    bento_tag: string, Bento tag
    labels: string, label selector to filter Bento services to delete
    bento_name: string
    bento_version: string
    require_confirm: boolean, require the user to confirm interactively in the CLI
"""
```


- 1334 Customize route of an API endpoint

```python
from bentoml import env, artifacts, api, BentoService
from bentoml.adapters import DataframeInput

@env(infer_pip_packages=True)
@artifacts([...])
class MyPredictionService(BentoService):

    @api(route="/my_url_route/foo/bar", batch=True, input=DataframeInput())
    def predict(self, df):
        # instead of "/predict", the URL for this API endpoint will be "/my_url_route/foo/bar"
        ...
```

- 1416 Support custom authentication header in Yatai gRPC server
- 1284 Add health check endpoint to Yatai web server
- 1409 Fix Postgres disconnect issue with Yatai server

0.11.0

Not secure
New Features

Detailed Changelog: https://github.com/bentoml/BentoML/compare/v0.10.1...v0.11.0

Interactively start and stop Model API Server during development

A new API was introduced in 0.11.0 for users to start and test an API server while developing their BentoService class:
```python
import requests

service = MyPredictionService()
service.pack("model", model)

# Start an API model server in the background
service.start_dev_server(port=5000)

# Send a test request to the server, or open the URL in a browser
requests.post(f'http://localhost:5000/predict', data=review, headers=headers)

# Stop the dev server
service.stop_dev_server()

# Modify code and repeat ♻️
```


Here's an [example notebook](https://github.com/bentoml/gallery/blob/48552c6f569ff1d8a18b1445c641cbaff4d6f8e8/tensorflow/imdb/imdb_text_classification.ipynb) showcasing this new feature.


More PyTorch Ecosystem Integrations

* PyTorch JIT traced model support 1293
* PyTorch Lightning support 1293
* Detectron2 support 1272
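
A minimal sketch of packaging a JIT traced (TorchScript) model with the PyTorch artifact; the import path follows the 0.x framework modules, and the model, names, and handler details are illustrative assumptions:

```python
import torch
import bentoml
from bentoml.adapters import JsonInput
from bentoml.frameworks.pytorch import PytorchModelArtifact

@bentoml.env(pip_packages=['torch'])
@bentoml.artifacts([PytorchModelArtifact('net')])
class TorchScriptService(bentoml.BentoService):

    @bentoml.api(input=JsonInput(), batch=False)
    def predict(self, parsed_json):
        tensor = torch.tensor(parsed_json['data'], dtype=torch.float32)
        with torch.no_grad():
            return self.artifacts.net(tensor).tolist()

# Usage (illustrative):
# traced = torch.jit.trace(model, example_input)
# svc = TorchScriptService()
# svc.pack('net', traced)
# saved_path = svc.save()
```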

Logging is fully customizable now!

Users can now use a single YAML file to customize the logging behavior in BentoML, including the prediction logs and feedback logs.

https://docs.bentoml.org/en/latest/guides/logging.html

Two new configs are also introduced for quickly turning on/off console logging and file logging:

https://github.com/bentoml/BentoML/blob/v0.11.0/bentoml/configuration/default_bentoml.cfg#L29

```
[logging]
console_logging_enabled = true
file_logging_enabled = true
```

If you are not sure how this config works, here's a new guide to BentoML's configuration: https://docs.bentoml.org/en/latest/guides/configuration.html

More model management APIs

All model management CLI commands and Yatai client Python APIs now support the `yatai_url` parameter, making it easy to interact with a remote YataiService and centrally manage all your BentoML-packaged ML models:

![Model management CLI commands with the yatai_url option](https://user-images.githubusercontent.com/489344/104553880-400c6c00-55f0-11eb-9d13-023c9c84d6a6.png)
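
A minimal sketch of pointing the Python client at a remote Yatai; the import path and the `repository.list` call follow the later 0.x client API and should be treated as assumptions, and the host address is a placeholder:

```python
from bentoml.yatai.client import get_yatai_client

# Connect to a remote YataiService instead of the local default store
yc = get_yatai_client('remote.yatai.mycompany.com:50051')

# For example, list the Bentos stored on that server
# (the exact call shape is an assumption based on the repository API)
bentos = yc.repository.list(labels='env=dev')
```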

Support bundling zipimport modules 1261

Bundling zip modules with BentoML is now possible with this newly added API:
```python
@bentoml.env(zipimport_archives=['nested_zipmodule.zip'])
@bentoml.artifacts([SklearnModelArtifact('model')])
class IrisClassifier(bentoml.BentoService):
    ...
```

BentoML also manages the `sys.path` when loading a saved BentoService with zipimport archives, making sure the zip modules can be imported in user code.
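
A hedged illustration of what this means in practice when loading such a bundle (the path is a placeholder):

```python
from bentoml import load

# Loading the saved bundle puts nested_zipmodule.zip on sys.path ...
svc = load('/path/to/saved/IrisClassifier')

# ... so the archived module can be imported in user code as usual
import nested_zipmodule
```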


Announcements

Monthly Community Meeting

Thank you again for everyone coming to the first community meeting this week! If you are not invited to the community meeting calendar yet, make sure to join it here: https://github.com/bentoml/BentoML/discussions/1396

Hiring

The BentoML team is hiring for multiple Software Engineer roles to help build the future of this open-source project and the business behind it. We are looking for candidates with experience in one of the following areas: ML infrastructure, backend systems, data engineering, SRE, full-stack, and technical writing. Feel free to pass the message along to anyone you know who might be interested; we'd really appreciate it!
