BentoML

Latest version: v1.4.7


1.0.3

🍱 The BentoML v1.0.3 release brings a number of performance and feature improvements.

- Improved Runner IO performance by enhancing the underlying serialization and deserialization, especially in models with large input and output sizes. Our image input benchmark showed a 100% throughput improvement.
- v1.0.2 🐌
![image](https://user-images.githubusercontent.com/861225/183492460-0cf31ca0-e4bd-4852-bcac-8dd23343eaa5.png)

- v1.0.3 💨
![image](https://user-images.githubusercontent.com/861225/183492584-7d12fbe6-cc67-4828-a002-755942bc5f21.png)


- Added support for specifying URLs to exclude from tracing.
- Added support for custom components in OpenAPI generation.
![image](https://user-images.githubusercontent.com/861225/183492667-877a6a5f-cea9-43d0-a4fd-52bf02fd2a8d.png)
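The tracing exclusion above is driven by the tracing section of the BentoML configuration file; a minimal sketch (the exact key names and URLs here are our assumption, so check the configuration docs for your version):

```yaml
# bentoml_configuration.yaml (illustrative; key names are an assumption)
tracing:
  type: jaeger
  excluded_urls:
    - /healthz
    - /metrics
```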

🙌 We continue to receive great engagement and support from the BentoML community.

- Shout out to [Ben Kessler](https://github.com/udevnl) for helping benchmark performance.
- Shout out to [Jiew Peng Lim](https://github.com/jiewpeng) for adding the support for configuring URLs to exclude from tracing.
- Shout out to [Susana Bouchardet](https://github.com/sbouchardet) for adding support for the JSON IO Descriptor to return an empty response body.
- Thanks to [Keming](https://github.com/kemingy) and [mplk](https://github.com/mplk) for contributing their first PRs in BentoML.

What's Changed
* chore(deps): bump actions/setup-node from 2 to 3 by dependabot in https://github.com/bentoml/BentoML/pull/2846
* fix: extend --cache-from consumption to python tuple by anwang2009 in https://github.com/bentoml/BentoML/pull/2847
* feat: add support for excluding urls from tracing by jiewpeng in https://github.com/bentoml/BentoML/pull/2843
* docs: update notice about buildkit by aarnphm in https://github.com/bentoml/BentoML/pull/2837
* chore: add CODEOWNERS by aarnphm in https://github.com/bentoml/BentoML/pull/2842
* doc(frameworks): tensorflow by bojiang in https://github.com/bentoml/BentoML/pull/2718
* feat: add support for specifying urls to exclude from tracing as a list by jiewpeng in https://github.com/bentoml/BentoML/pull/2851
* fix(configuration): merging global runner config to runner specific config by jjmachan in https://github.com/bentoml/BentoML/pull/2849
* fix: Setting status code and cookies by ssheng in https://github.com/bentoml/BentoML/pull/2854
* chore: README typo by kemingy in https://github.com/bentoml/BentoML/pull/2859
* chore: gallery links to `bentoml/examples` by aarnphm in https://github.com/bentoml/BentoML/pull/2858
* fix(runner): use pickle instead for multi payload parameters by aarnphm in https://github.com/bentoml/BentoML/pull/2857
* doc(framework): pytorch guide by bojiang in https://github.com/bentoml/BentoML/pull/2735
* docs: add missing `output` to Runner docs by mplk in https://github.com/bentoml/BentoML/pull/2868
* chore: fix push and load interop by aarnphm in https://github.com/bentoml/BentoML/pull/2863
* fix: Usage stats by ssheng in https://github.com/bentoml/BentoML/pull/2876
* fix: `JSON(IODescriptor[JSONType]).to_http_response` returns empty body when the response is `None`. by sbouchardet in https://github.com/bentoml/BentoML/pull/2874
* chore: Address comments in the 2874 by ssheng in https://github.com/bentoml/BentoML/pull/2877
* fix: debugger breaks on circus process by aarnphm in https://github.com/bentoml/BentoML/pull/2875
* feat: support custom components for OpenAPI generation by aarnphm in https://github.com/bentoml/BentoML/pull/2845

New Contributors
* anwang2009 made their first contribution in https://github.com/bentoml/BentoML/pull/2847
* jiewpeng made their first contribution in https://github.com/bentoml/BentoML/pull/2843
* kemingy made their first contribution in https://github.com/bentoml/BentoML/pull/2859
* mplk made their first contribution in https://github.com/bentoml/BentoML/pull/2868
* sbouchardet made their first contribution in https://github.com/bentoml/BentoML/pull/2874

**Full Changelog**: https://github.com/bentoml/BentoML/compare/v1.0.2...v1.0.3

1.0.2

🍱 We have just released BentoML v1.0.2 with a number of features and bug fixes requested by the community.

- Added support for custom model versions, e.g. `bentoml.tensorflow.save_model("model_name:1.2.4", model)`.
- Fixed PyTorch Runner payload serialization issue due to tensor not on CPU.

```text
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first
```


- Fixed Transformers GPU device assignment due to kwargs handling.
- Fixed excessive Runner thread spawning issue under high load.
- Fixed PyTorch Runner inference error due to saving tensor during inference mode.

```text
RuntimeError: Inference tensors cannot be saved for backward. To work around you can make a clone to get a normal tensor and use it in autograd.
```


- Fixed Keras Runner error when the input has only a single element.
- Deprecated the `validate_json` option in the JSON IO descriptor; instead, specify validation logic natively in the [Pydantic model](https://github.com/bentoml/BentoML/blob/main/examples/pydantic_validation/service.py#L30).
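The idea is that validation lives in the model definition itself rather than behind a flag. As a framework-agnostic sketch using only the standard library (a stand-in for the Pydantic model BentoML recommends; the class and field names are illustrative):

```python
from dataclasses import dataclass

# Stand-in for a Pydantic model: validation is part of the model itself,
# not a separate validate_json option. Field names are illustrative.
@dataclass
class IrisFeatures:
    sepal_len: float
    sepal_width: float

    def __post_init__(self) -> None:
        if self.sepal_len <= 0 or self.sepal_width <= 0:
            raise ValueError("measurements must be positive")
```

With BentoML's JSON descriptor, the equivalent role is played by a Pydantic model passed to the descriptor, as in the linked example service.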

🎨 We added an [examples](https://github.com/bentoml/BentoML/tree/main/examples) directory, where you will find interesting sample projects demonstrating various applications of BentoML. We welcome your contribution if you have a project idea you would like to share with the community.

💡 We continue to update the documentation on every release to help our users unlock the full power of BentoML.

- Did you know BentoML service supports [mounting and calling runners from custom FastAPI and Flask apps](https://docs.bentoml.org/en/latest/guides/server.html)?
- Did you know IO descriptor supports [input and output validation](https://docs.bentoml.org/en/latest/concepts/service.html#schema-and-validation) of schema, shape, and data types?

What's Changed
* chore: remove all `--pre` from documentation by aarnphm in https://github.com/bentoml/BentoML/pull/2738
* chore(framework): onnx guide minor improvements by larme in https://github.com/bentoml/BentoML/pull/2744
* fix(framework): fix how pytorch DataContainer convert GPU tensor by larme in https://github.com/bentoml/BentoML/pull/2739
* doc: add missing variable by robsonpeixoto in https://github.com/bentoml/BentoML/pull/2752
* chore(deps): `cattrs>=22.1.0` in setup.cfg by sugatoray in https://github.com/bentoml/BentoML/pull/2758
* fix(transformers): kwargs and migrate to framework tests by ssheng in https://github.com/bentoml/BentoML/pull/2761
* chore: add type hint for run and async_run by aarnphm in https://github.com/bentoml/BentoML/pull/2760
* docs: fix typo in SECURITY.md by parano in https://github.com/bentoml/BentoML/pull/2766
* chore: use pypa/build as PEP517 backend by aarnphm in https://github.com/bentoml/BentoML/pull/2680
* chore(e2e): capture log output by aarnphm in https://github.com/bentoml/BentoML/pull/2767
* chore: more robust prometheus directory ensuring by bojiang in https://github.com/bentoml/BentoML/pull/2526
* doc(framework): add scikit-learn section to ONNX documentation by larme in https://github.com/bentoml/BentoML/pull/2764
* chore: clean up dependencies by sauyon in https://github.com/bentoml/BentoML/pull/2769
* docs: misc docs reorganize and cleanups by parano in https://github.com/bentoml/BentoML/pull/2768
* fix(io descriptors): finish removing init_http_response by sauyon in https://github.com/bentoml/BentoML/pull/2774
* chore: fix typo by aarnphm in https://github.com/bentoml/BentoML/pull/2776
* feat(model): allow custom model versions by sauyon in https://github.com/bentoml/BentoML/pull/2775
* chore: add watchfiles as bentoml dependency by aarnphm in https://github.com/bentoml/BentoML/pull/2777
* doc(framework): keras guide by larme in https://github.com/bentoml/BentoML/pull/2741
* docs: Update service schema and validation by ssheng in https://github.com/bentoml/BentoML/pull/2778
* doc(frameworks): fix pip package syntax by larme in https://github.com/bentoml/BentoML/pull/2782
* fix(runner): thread limiter doesn't take effect by bojiang in https://github.com/bentoml/BentoML/pull/2781
* feat: add additional env var configuring num of threads in Runner by parano in https://github.com/bentoml/BentoML/pull/2786
* fix(templates): sharing variables at template level by aarnphm in https://github.com/bentoml/BentoML/pull/2796
* bug: fix JSON io_descriptor validate_json option by parano in https://github.com/bentoml/BentoML/pull/2803
* chore: improve error message when failed importing user service code by parano in https://github.com/bentoml/BentoML/pull/2806
* chore: automatic cache action version update and remove stale bot by aarnphm in https://github.com/bentoml/BentoML/pull/2798
* chore(deps): bump actions/checkout from 2 to 3 by dependabot in https://github.com/bentoml/BentoML/pull/2810
* chore(deps): bump codecov/codecov-action from 2 to 3 by dependabot in https://github.com/bentoml/BentoML/pull/2811
* chore(deps): bump github/codeql-action from 1 to 2 by dependabot in https://github.com/bentoml/BentoML/pull/2813
* chore(deps): bump actions/cache from 2 to 3 by dependabot in https://github.com/bentoml/BentoML/pull/2812
* chore(deps): bump actions/setup-python from 2 to 4 by dependabot in https://github.com/bentoml/BentoML/pull/2814
* fix(datacontainer): pytorch to_payload should disable gradient by aarnphm in https://github.com/bentoml/BentoML/pull/2821
* fix(framework): fix keras single input edge case by larme in https://github.com/bentoml/BentoML/pull/2822
* fix(framework): keras GPU handling by larme in https://github.com/bentoml/BentoML/pull/2824
* docs: update custom bentoserver guide by parano in https://github.com/bentoml/BentoML/pull/2809
* fix(runner): bind limiter to runner_ref instead by bojiang in https://github.com/bentoml/BentoML/pull/2826
* fix(pytorch): inference_mode context is thead local by bojiang in https://github.com/bentoml/BentoML/pull/2828
* fix: address multiple tags for containerize by aarnphm in https://github.com/bentoml/BentoML/pull/2797
* chore: Add gallery projects under examples by ssheng in https://github.com/bentoml/BentoML/pull/2833
* chore: running formatter on examples folder by aarnphm in https://github.com/bentoml/BentoML/pull/2834
* docs: update security auth middleware by g0nz4rth in https://github.com/bentoml/BentoML/pull/2835
* fix(io_descriptor): DataFrame columns check by alizia in https://github.com/bentoml/BentoML/pull/2836
* fix: examples directory structure by ssheng in https://github.com/bentoml/BentoML/pull/2839
* revert: "fix: address multiple tags for containerize (2797)" by ssheng in https://github.com/bentoml/BentoML/pull/2840

New Contributors
* robsonpeixoto made their first contribution in https://github.com/bentoml/BentoML/pull/2752
* sugatoray made their first contribution in https://github.com/bentoml/BentoML/pull/2758
* g0nz4rth made their first contribution in https://github.com/bentoml/BentoML/pull/2835
* alizia made their first contribution in https://github.com/bentoml/BentoML/pull/2836

**Full Changelog**: https://github.com/bentoml/BentoML/compare/v1.0.0...v1.0.1

1.0.0

🍱 The wait is over. [BentoML](https://github.com/bentoml/BentoML) has officially released v1.0.0. We are excited to share with you the notable feature improvements.

- Introduced BentoML Runner, an abstraction for parallel model inference. It allows the compute-intensive model inference step to scale separately from the transformation and business logic. The Runner is easily instantiated and invoked; behind the scenes, BentoML optimizes for micro-batching and fans out inference if needed. Learn more about [using runners](https://docs.bentoml.org/en/latest/concepts/runner.html).
- Redesigned how models are saved, moved, and loaded with BentoML. We introduced new primitives which allow users to call a `save_model()` method which saves the model in the most optimal way based on the recommended practices of the ML framework. The model is then stored in a flexible local repository where users can use “import” and “export” functionality to push and pull “finalized” models from remote locations like S3. Bentos can be built locally or remotely with these models. Once built, [Yatai](https://github.com/bentoml/Yatai) or [bentoctl](https://github.com/bentoml/bentoctl) can easily deploy to the cloud service of your choice. Learn more about [preparing models](https://docs.bentoml.org/en/latest/concepts/model.html) and [building bentos](https://docs.bentoml.org/en/latest/concepts/bento.html).
- Enhanced micro-batching capability. With the new runner abstraction, batching is even more powerful: when incoming data is spread across different transformation processes, the runner fans in requests and batches multiple inputs into a single inference call. Most ML frameworks implement some form of vectorization which improves performance on multiple inputs at once. Our adaptive batching not only batches inputs as they are received, but also regresses the latency of the last several groups of inputs in order to optimize the batch size and latency windows.
- Improved reproducibility of the model by recording and locking the dependent library versions. We use the versions to package the correct dependencies so that the environment in which the model runs in production is identical to the environment it was trained in. All direct and transitive dependencies are recorded and deployed with the model when running in production. In our 1.0 version we now support Conda as well as several different ways to customize your pip packages when “building your Bento”. Learn more about [building bentos](https://docs.bentoml.org/en/latest/concepts/bento.html).
- Simplified Docker image creation during containerization to generate the right image for you depending on the features that you’ve decided to implement in your service. For example, if your runner specifies that it can run on a GPU, we will automatically choose the right Nvidia docker image as a base when containerizing your service. If needed, we also provide the flexibility to customize your docker image as well. Learn more about [containerization](https://docs.bentoml.org/en/latest/guides/containerization.html).
- Improved input and output validation with native type validation rules. Numpy and Pandas DataFrame can specify a static shape or even dynamically infer schema by providing sample data. The Pydantic schema that is produced per endpoint also integrates with our Swagger UI so that each endpoint is better documented for sharing. Learn more about [service APIs and IO Descriptors](https://docs.bentoml.org/en/latest/concepts/service.html#service-apis).
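The batch-size regression described above can be illustrated with a conceptual stdlib sketch (not BentoML's implementation; the function names are ours): fit latency as a linear function of batch size from recent observations, then pick the largest batch size whose predicted latency stays within a budget.

```python
# Conceptual sketch of adaptive batching's latency regression --
# not BentoML's actual implementation.
def fit_line(samples):
    """Least-squares fit of latency = a * batch_size + b from (size, latency) pairs."""
    n = len(samples)
    mean_x = sum(s for s, _ in samples) / n
    mean_y = sum(t for _, t in samples) / n
    var = sum((s - mean_x) ** 2 for s, _ in samples)
    cov = sum((s - mean_x) * (t - mean_y) for s, t in samples)
    a = cov / var if var else 0.0
    return a, mean_y - a * mean_x

def best_batch_size(samples, latency_budget_ms, max_size=100):
    """Largest batch size whose predicted latency stays within the budget."""
    a, b = fit_line(samples)
    best = 1
    for size in range(1, max_size + 1):
        if a * size + b <= latency_budget_ms:
            best = size
    return best

# Recent (batch_size, latency_ms) observations: ~2 ms fixed cost + ~1 ms per item.
history = [(1, 3.0), (2, 4.1), (4, 6.0), (8, 9.9)]
print(best_batch_size(history, latency_budget_ms=10.0))  # prints 8
```

The real implementation also has to account for the arrival-rate window, but the core trade-off, larger batches amortize fixed cost at the price of latency, is the same.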

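The dependency locking described above is configured in `bentofile.yaml`; a minimal sketch (package names and versions are illustrative):

```yaml
# bentofile.yaml (excerpt) -- package names are illustrative
python:
  packages:
    - scikit-learn==1.1.2
    - pandas
  lock_packages: true   # record exact transitive versions at build time
conda:
  channels:
    - conda-forge
  dependencies:
    - libgomp
```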
⚠️ BentoML v1.0.0 is backward incompatible with v0.13.1. If you wish to stay on the v0.13.1 LTS version, please lock the dependency with `bentoml==0.13.1`. We have also prepared a [migration guide](https://docs.bentoml.org/en/latest/guides/migration.html) from v0.13.1 to v1.0.0 to help with your project migration. We are committed to supporting the v0.13-LTS versions with critical bug fixes and security patches.

🎉 After years of seeing hundreds of model serving use cases, we are proud to present the official release of BentoML 1.0. We could not have done it without the growth and support of our community.

1.0.0rc3

We have just released BentoML `1.0.0rc3` with a number of highly anticipated features and improvements. Check it out with the following command!

```bash
$ pip install -U bentoml --pre
```


⚠️ BentoML will release the official `1.0.0` version next week and remove the need to use the `--pre` flag to install BentoML versions after `1.0.0`. If you wish to stay on the `0.13.1` LTS version, please lock the dependency with `bentoml==0.13.1`.

- Added support for framework runners in the following ML frameworks.
- [fast.ai](https://www.fast.ai/)
- [CatBoost](https://catboost.ai/)
- [ONNX](https://onnx.ai/)
- Added support for Huggingface Transformers custom pipelines.
- Fixed a logging issue causing the api_server and runners to not generate error logs.
- Optimized Tensorflow inference procedure.
- Improved resource request configuration for runners.
- Resource requests can now be configured in the BentoML configuration. If unspecified, runners will be scheduled to best utilize the available system resources.

```yaml
runners:
  resources:
    cpu: 8.0
    nvidia.com/gpu: 4.0
```


- Updated the API for custom runners to declare the types of supported resources.

```python
import bentoml

class MyRunnable(bentoml.Runnable):
    SUPPORTS_CPU_MULTI_THREADING = True  # deprecates SUPPORT_CPU_MULTI_THREADING
    SUPPORTED_RESOURCES = ("nvidia.com/gpu", "cpu")  # deprecates SUPPORT_NVIDIA_GPU
    ...

my_runner = bentoml.Runner(
    MyRunnable,
    runnable_init_params={"foo": foo, "bar": bar},
    name="custom_runner_name",
    ...
)
```


- Deprecated the API for specifying resources from the framework `to_runner()` and custom Runner APIs. For better flexibility at runtime, it is recommended to specify resources through configuration.

What's Changed
* fix(dependencies): require pyyaml>=5 by sauyon in https://github.com/bentoml/BentoML/pull/2626
* refactor(server): merge contexts; add yatai headers by bojiang in https://github.com/bentoml/BentoML/pull/2621
* chore(pylint): update pylint configuration by sauyon in https://github.com/bentoml/BentoML/pull/2627
* fix: Transformers NVIDIA_VISIBLE_DEVICES value type casting by ssheng in https://github.com/bentoml/BentoML/pull/2624
* fix: Server silently crash without logging exceptions by ssheng in https://github.com/bentoml/BentoML/pull/2635
* fix(framework): some GPU related fixes by larme in https://github.com/bentoml/BentoML/pull/2637
* tests: minor e2e test cleanup by sauyon in https://github.com/bentoml/BentoML/pull/2643
* docs: Add model in bentoml.pytorch.save_model() pytorch integration example by AlexandreNap in https://github.com/bentoml/BentoML/pull/2644
* chore(ci): always enable actions on PR by sauyon in https://github.com/bentoml/BentoML/pull/2646
* chore: updates ci by aarnphm in https://github.com/bentoml/BentoML/pull/2650
* fix(docker): templates bash heredoc should pass `-ex` by aarnphm in https://github.com/bentoml/BentoML/pull/2651
* feat: CatBoost integration by yetone in https://github.com/bentoml/BentoML/pull/2615
* feat: FastAI by aarnphm in https://github.com/bentoml/BentoML/pull/2571
* feat: Support Transformers custom pipeline by ssheng in https://github.com/bentoml/BentoML/pull/2640
* feat(framework): onnx support by larme in https://github.com/bentoml/BentoML/pull/2629
* chore(tensorflow): optimize inference procedure by bojiang in https://github.com/bentoml/BentoML/pull/2567
* fix(runner): validate runner names by sauyon in https://github.com/bentoml/BentoML/pull/2588
* fix(runner): lowercase runner names and add tests by sauyon in https://github.com/bentoml/BentoML/pull/2656
* style: github naming by aarnphm in https://github.com/bentoml/BentoML/pull/2659
* tests(framework): add new framework tests by sauyon in https://github.com/bentoml/BentoML/pull/2660
* docs: missing code annotation by jjmachan in https://github.com/bentoml/BentoML/pull/2654
* perf(templates): cache python installation via conda by aarnphm in https://github.com/bentoml/BentoML/pull/2662
* fix(ci): destroy the runner after init_local by bojiang in https://github.com/bentoml/BentoML/pull/2665
* fix(conda): python installation order by aarnphm in https://github.com/bentoml/BentoML/pull/2668
* fix(tensorflow): casting error on kwargs by bojiang in https://github.com/bentoml/BentoML/pull/2664
* feat(runner): implement resource configuration by sauyon in https://github.com/bentoml/BentoML/pull/2632

New Contributors
* AlexandreNap made their first contribution in https://github.com/bentoml/BentoML/pull/2644

**Full Changelog**: https://github.com/bentoml/BentoML/compare/v1.0.0-rc2...v1.0.0-rc3

1.0.0rc2

We have just released BentoML 1.0.0rc2 with an exciting lineup of improvements. Check it out with the following command!

```bash
$ pip install -U bentoml --pre
```


- Standardized logging configuration and improved logging performance.
- If imported as a library, BentoML will no longer configure logging explicitly and will respect the logging configuration of the importing Python process. To customize BentoML logging as a library, configurations can be added for the `bentoml` logger.

```yaml
formatters:
  ...
handlers:
  ...
loggers:
  ...
  bentoml:
    handlers: [...]
    level: INFO
    ...
```


- If started as a server, BentoML will continue to configure logging format and output to `stdout` at `INFO` level. All third party libraries will be configured to log at the `WARNING` level.
- **Added LightGBM framework support.**
- Updated the model and bento creation timestamps in the CLI display to use the local timezone for a better user experience, while timestamps in metadata remain in the UTC timezone.
- Improved the reliability of bento build with advanced options including `base_image` and `dockerfile_template`.
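The timestamp behavior above (UTC in metadata, local time in the CLI) can be illustrated with plain Python; this is a generic sketch, not BentoML code:

```python
from datetime import datetime, timezone

# Metadata keeps UTC; a CLI renders the same instant in the local timezone.
stored = datetime(2022, 6, 24, 12, 0, tzinfo=timezone.utc)  # as kept in metadata
displayed = stored.astimezone()                              # local-timezone view

print(stored.isoformat())     # always UTC: 2022-06-24T12:00:00+00:00
print(displayed.isoformat())  # varies with the machine's timezone
```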

Besides all the exciting product work, we have also started a blog at [modelserving.com](https://modelserving.com/), where we share lessons learned from building BentoML and supporting the MLOps community. Check out our latest post, *Breaking up with Flask & FastAPI: Why ML model serving requires a specialized framework*, and share your thoughts with us on our [LinkedIn post](https://www.linkedin.com/posts/activity-6943273635138740224-6jm0/).

Lastly, a big shoutout to Mike Kuhlen for adding the LightGBM framework support. 🥂

What's Changed
* feat(cli): output times in the local timezone by sauyon in https://github.com/bentoml/BentoML/pull/2572
* fix(store): use >= for time checking by sauyon in https://github.com/bentoml/BentoML/pull/2574
* fix(build): use subprocess to call pip-compile by sauyon in https://github.com/bentoml/BentoML/pull/2573
* docs: fix wrong variable name in comment by kim-sardine in https://github.com/bentoml/BentoML/pull/2575
* feat: improve logging by sauyon in https://github.com/bentoml/BentoML/pull/2568
* fix(service): JsonIO doesn't return a pydantic model by bojiang in https://github.com/bentoml/BentoML/pull/2578
* fix: update conda env yaml file name and default channel by parano in https://github.com/bentoml/BentoML/pull/2580
* chore(runner): add shcedule shortcuts to runners by bojiang in https://github.com/bentoml/BentoML/pull/2576
* fix(cli): cli encoding error on Windows by bojiang in https://github.com/bentoml/BentoML/pull/2579
* fix(bug): Make `model.with_options()` additive by ssheng in https://github.com/bentoml/BentoML/pull/2519
* feat: dockerfile templates advanced guides by aarnphm in https://github.com/bentoml/BentoML/pull/2548
* docs: add setuptools to docs dependencies by parano in https://github.com/bentoml/BentoML/pull/2586
* test(frameworks): minor test improvements by sauyon in https://github.com/bentoml/BentoML/pull/2590
* feat: Bring LightGBM back by mqk in https://github.com/bentoml/BentoML/pull/2589
* fix(runner): pass init params to runnable by sauyon in https://github.com/bentoml/BentoML/pull/2587
* fix: propagate should be false by aarnphm in https://github.com/bentoml/BentoML/pull/2594
* fix: Remove starlette request log by ssheng in https://github.com/bentoml/BentoML/pull/2595
* fix: Bug fix for 2596 by timc in https://github.com/bentoml/BentoML/pull/2597
* chore(frameworks): update framework template with new checks and remove old framework code by sauyon in https://github.com/bentoml/BentoML/pull/2592
* docs: Update streaming.rst by ssheng in https://github.com/bentoml/BentoML/pull/2605
* bug: Fix Yatai client push bentos with model options by ssheng in https://github.com/bentoml/BentoML/pull/2604
* docs: allow running tutorial from docker by parano in https://github.com/bentoml/BentoML/pull/2611
* fix(model): lock attrs to >=21.1.0 by bojiang in https://github.com/bentoml/BentoML/pull/2610
* docs: Fix documentation links and formats by ssheng in https://github.com/bentoml/BentoML/pull/2612
* fix(model): load ModelOptions lazily by sauyon in https://github.com/bentoml/BentoML/pull/2608
* feat: install.sh for python packages by aarnphm in https://github.com/bentoml/BentoML/pull/2555
* fix/routing path by aarnphm in https://github.com/bentoml/BentoML/pull/2606
* qa: build config by aarnphm in https://github.com/bentoml/BentoML/pull/2581
* fix: invalid build option python_version="None" when base_image is used by parano in https://github.com/bentoml/BentoML/pull/2623

New Contributors
* kim-sardine made their first contribution in https://github.com/bentoml/BentoML/pull/2575
* timc made their first contribution in https://github.com/bentoml/BentoML/pull/2597

**Full Changelog**: https://github.com/bentoml/BentoML/compare/v1.0.0-rc1...v1.0.0rc2

1.0.0rc1

We are very excited to share that BentoML 1.0.0rc1 has just been released with a number of dev experience improvements and bug fixes.

- Enabled users to run just `bentoml serve` from a project directory containing a `bentofile.yaml` build file.
- Added request contexts, opening access to request and response headers.
- Introduced new runner design to [simplify creation of custom runners](https://docs.bentoml.org/en/latest/concepts/runner.html#custom-runner) and framework `to_runner` API to [simplify runner creation from model](https://docs.bentoml.org/en/latest/concepts/model.html#using-model-runner).

```python
import numpy as np

import bentoml
from bentoml.io import NumpyNdarray

iris_clf_runner = bentoml.sklearn.get("iris_clf:latest").to_runner()

svc = bentoml.Service("iris_classifier", runners=[iris_clf_runner])

@svc.api(input=NumpyNdarray(), output=NumpyNdarray())
def classify(input_series: np.ndarray) -> np.ndarray:
    result = iris_clf_runner.predict.run(input_series)
    return result
```


- Introduced framework `save_model`, `load_model`, and `to_runnable` APIs to complement the new `to_runner` API in the following frameworks. Other ML frameworks are still being migrated to the new Runner API. Coming in the next release are ONNX, fast.ai, MLflow, and CatBoost.
- PyTorch (TorchScript, PyTorch Lightning)
- Tensorflow
- Keras
- Scikit Learn
- XGBoost
- Huggingface Transformers
- Introduced a refreshed documentation website with more content, see [https://docs.bentoml.org/](https://docs.bentoml.org/en/latest/index.html).
- Enhanced `bentoml containerize` command to include the following capabilities.
- Support multi-platform docker image build with [Docker Buildx](https://docs.docker.com/buildx/working-with-buildx/).
- Support for defining Environment Variables in generated docker images.
- Support for installing system packages via `bentofile.yaml`.
- Support for customizing the generated Dockerfile via user-provided templates.
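Put together, a `bentofile.yaml` docker section exercising these capabilities might look like the following sketch (option names follow the BentoML build docs of this era; treat the exact keys and values as an assumption):

```yaml
# bentofile.yaml (excerpt) -- values are illustrative
docker:
  env:
    MODEL_THREADS: "4"            # environment variable baked into the image
  system_packages:
    - libgomp1                    # installed via the distro package manager
  dockerfile_template: ./Dockerfile.template   # user-provided customization
```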

A big shout out to all the contributors for getting us a step closer to the BentoML 1.0 release. 🎉

What's Changed
* docs: update readme installation --pre flag by parano in https://github.com/bentoml/BentoML/pull/2515
* chore(ci): quit immediately for errors e2e tests by bojiang in https://github.com/bentoml/BentoML/pull/2517
* fix(ci): cover sync endpoints; cover cors by bojiang in https://github.com/bentoml/BentoML/pull/2520
* docs: fix cuda_version string value by rapidrabbit76 in https://github.com/bentoml/BentoML/pull/2523
* fix(framework): fix tf2 and keras class variable names by larme in https://github.com/bentoml/BentoML/pull/2525
* chore(ci): add more edge cases; boost e2e tests by bojiang in https://github.com/bentoml/BentoML/pull/2521
* fix(docker): remove backslash in comments by aarnphm in https://github.com/bentoml/BentoML/pull/2527
* fix(runner): sync remote runner uri schema with runner_app by larme in https://github.com/bentoml/BentoML/pull/2531
* fix: major bugs fixes about serving and GPU placement by bojiang in https://github.com/bentoml/BentoML/pull/2535
* chore(sdk): allowed single int value as the batch_dim by bojiang in https://github.com/bentoml/BentoML/pull/2536
* chore(ci): cover add_asgi_middleware in e2e tests by bojiang in https://github.com/bentoml/BentoML/pull/2537
* chore(framework): Add api_version for current implemented frameworks by larme in https://github.com/bentoml/BentoML/pull/2522
* doc(server): remove unnecessary `svc.asgi` lines by bojiang in https://github.com/bentoml/BentoML/pull/2543
* chore(server): lazy load meters; cover asgi app mounting in e2e test by bojiang in https://github.com/bentoml/BentoML/pull/2542
* feat: push runner to yatai by yetone in https://github.com/bentoml/BentoML/pull/2528
* style(runner): revert b14919db(factor out batching) by bojiang in https://github.com/bentoml/BentoML/pull/2549
* chore(ci): skip unsupported frameworks for now by bojiang in https://github.com/bentoml/BentoML/pull/2550
* doc: fix github action CI badge link by parano in https://github.com/bentoml/BentoML/pull/2554
* doc(server): fix header div by bojiang in https://github.com/bentoml/BentoML/pull/2557
* fix(metrics): filter out non-API endpoints in metrics by parano in https://github.com/bentoml/BentoML/pull/2559
* fix: Update SwaggerUI config by parano in https://github.com/bentoml/BentoML/pull/2560
* fix(server): wrong status code format in metrics by bojiang in https://github.com/bentoml/BentoML/pull/2561
* fix(server): metrics name issue under specify service names by bojiang in https://github.com/bentoml/BentoML/pull/2556
* fix: path for custom dockerfile templates by aarnphm in https://github.com/bentoml/BentoML/pull/2547
* feat: include env build options in bento.yaml by parano in https://github.com/bentoml/BentoML/pull/2562
* chore: minor fixes and docs change from QA by parano in https://github.com/bentoml/BentoML/pull/2564
* fix(qa): allow cuda_version when distro is None with default by aarnphm in https://github.com/bentoml/BentoML/pull/2565
* fix(qa): bento runner resource should limit to user provided configs by parano in https://github.com/bentoml/BentoML/pull/2566

New Contributors
* rapidrabbit76 made their first contribution in https://github.com/bentoml/BentoML/pull/2523

**Full Changelog**: https://github.com/bentoml/BentoML/compare/v1.0.0-rc0...v1.0.0-rc1

