BentoML

Latest version: v1.4.7


1.0.10

🍱 BentoML `v1.0.10` is released to address a recurring `broken pipe` error reported by the community. Also included in this release is a list of improvements we’d like to share with the community.

- Fixed an `aiohttp.client_exceptions.ClientOSError` caused by asymmetric keep-alive timeout settings between the API Server and Runner.

```bash
aiohttp.client_exceptions.ClientOSError: [Errno 32] Broken pipe
```


- Added multi-output support for the ONNX and TensorFlow frameworks (see the sketch after the examples below).
- Added `from_sample` support to all [IO Descriptors](https://docs.bentoml.org/en/latest/reference/api_io_descriptors.html), in addition to just `bentoml.io.NumpyNdarray`; the provided sample is reflected in the Swagger UI.

```python
# Pandas Example
svc.api(
    input=PandasDataFrame.from_sample(
        pd.DataFrame([1, 2, 3, 4])
    ),
    output=PandasDataFrame(),
)

# JSON Example
svc.api(
    input=JSON.from_sample(
        {"foo": 1, "bar": 2}
    ),
    output=JSON(),
)
```

![image](https://user-images.githubusercontent.com/861225/200722853-e46cbe94-88f5-47e4-82eb-c56f620082e6.png)
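
To illustrate the multi-output support mentioned above, here is a hedged sketch of saving and running a TensorFlow model with two outputs; the toy model and the `two_headed_demo` tag are illustrative, not from the release notes:

```python
import bentoml
import tensorflow as tf

# A toy Keras model with two output heads (illustrative).
inp = tf.keras.Input(shape=(4,))
out_a = tf.keras.layers.Dense(1, name="a")(inp)
out_b = tf.keras.layers.Dense(1, name="b")(inp)
model = tf.keras.Model(inputs=inp, outputs=[out_a, out_b])

bentoml.tensorflow.save_model("two_headed_demo", model)

runner = bentoml.tensorflow.get("two_headed_demo:latest").to_runner()
runner.init_local()  # local debugging mode, outside a Service
# With multi-output support, run() yields one value per model output.
a, b = runner.run(tf.constant([[1.0, 2.0, 3.0, 4.0]]))
```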


💡 We continue to update the documentation and examples on every release to help the community unlock the full power of BentoML.

- Check out the updated [multi-model inference graph guide](https://docs.bentoml.org/en/latest/guides/graph.html) and [example](https://github.com/bentoml/BentoML/tree/main/examples/inference_graph) to learn how to compose multiple models in the same Bento service.
- Did you know BentoML supports [OpenTelemetry tracing](https://opentelemetry.io/docs/concepts/signals/traces/) out of the box? Check out the [Tracing guide](https://docs.bentoml.org/en/latest/guides/tracing.html) for OTLP, Jaeger, and Zipkin support.

What's Changed
* feat(cli): log conditional environment variables by aarnphm in https://github.com/bentoml/BentoML/pull/3156
* fix: ensure conda not use pipefail and unset variables by aarnphm in https://github.com/bentoml/BentoML/pull/3171
* fix(templates): ensure to use python3 and pip3 by aarnphm in https://github.com/bentoml/BentoML/pull/3170
* fix(sdk): montioring log output by bojiang in https://github.com/bentoml/BentoML/pull/3175
* feat: make quickstart batchable by sauyon in https://github.com/bentoml/BentoML/pull/3172
* fix: lazy check for stubs via path when install local wheels by aarnphm in https://github.com/bentoml/BentoML/pull/3180
* fix(openapi): remove summary field under Info by aarnphm in https://github.com/bentoml/BentoML/pull/3178
* docs: Inference graph example by ssheng in https://github.com/bentoml/BentoML/pull/3183
* docs: remove whitespaces in migration guides by wellshs in https://github.com/bentoml/BentoML/pull/3185
* fix(build_config): validation when NoneType by aarnphm in https://github.com/bentoml/BentoML/pull/3187
* fix(docs): indentation in migration.rst by aarnphm in https://github.com/bentoml/BentoML/pull/3186
* doc(example): monitoring example for classification tasks by bojiang in https://github.com/bentoml/BentoML/pull/3176
* refactor(sdk): separate default monitoring impl by bojiang in https://github.com/bentoml/BentoML/pull/3189
* fix(ssl): provide default values in configuration by aarnphm in https://github.com/bentoml/BentoML/pull/3191
* fix: don't ignore logging conf by sauyon in https://github.com/bentoml/BentoML/pull/3192
* feat: tensorflow multi outputs support by larme in https://github.com/bentoml/BentoML/pull/3115
* docs: cleanup whitespace and typo by aarnphm in https://github.com/bentoml/BentoML/pull/3195
* chore: cleanup deadcode by aarnphm in https://github.com/bentoml/BentoML/pull/3196
* fix(runner): set uvicorn keep-alive by sauyon in https://github.com/bentoml/BentoML/pull/3198
* perf: refine onnx implementation by larme in https://github.com/bentoml/BentoML/pull/3166
* feat: `from_sample` for IO descriptor by aarnphm in https://github.com/bentoml/BentoML/pull/3143

New Contributors
* wellshs made their first contribution in https://github.com/bentoml/BentoML/pull/3185

**Full Changelog**: https://github.com/bentoml/BentoML/compare/v1.0.8...v1.0.9

What's Changed
* fix: from_sample override logic by aarnphm in https://github.com/bentoml/BentoML/pull/3202


**Full Changelog**: https://github.com/bentoml/BentoML/compare/v1.0.9...v1.0.10

1.0.8

🍱 BentoML `v1.0.8` is released with a list of improvements we hope you’ll find useful.

- Introduced the Bento Client for easy access to the BentoML service over HTTP. Both sync and async calls are supported. See the [Bento Client Guide](https://bentoml--3154.org.readthedocs.build/en/3154/guides/client.html) for more details.

```python
from bentoml.client import Client

client = Client.from_url("http://localhost:3000")

# Sync call
response = client.classify(np.array([[4.9, 3.0, 1.4, 0.2]]))

# Async call
response = await client.async_classify(np.array([[4.9, 3.0, 1.4, 0.2]]))
```
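
As the example shows, `classify` mirrors the name of the API endpoint defined in the service; the client exposes one method per endpoint, plus an `async_`-prefixed variant for asynchronous calls.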


- Introduced custom metrics support for easy instrumentation over Prometheus. See the [Metrics Guide](https://bentoml--3154.org.readthedocs.build/en/3154/guides/metrics.html) for more details.

```python
# Histogram metric
inference_duration = bentoml.metrics.Histogram(
    name="inference_duration",
    documentation="Duration of inference",
    labelnames=["nltk_version", "sentiment_cls"],
)

# Counter metric
polarity_counter = bentoml.metrics.Counter(
    name="polarity_total",
    documentation="Count total number of analysis by polarity scores",
    labelnames=["polarity"],
)
```


The full Prometheus-style syntax is supported for instrumenting custom metrics inside API and Runner definitions.

```python
# Histogram
inference_duration.labels(
    nltk_version=nltk.__version__, sentiment_cls=self.sia.__class__.__name__
).observe(time.perf_counter() - start)

# Counter
polarity_counter.labels(polarity=is_positive).inc()
```

- Improved health checking to also cover the status of runners to avoid returning a healthy status before runners are ready.
- Added SSL/TLS support to gRPC serving.

```bash
bentoml serve-grpc --ssl-certfile=credentials/cert.pem --ssl-keyfile=credentials/key.pem --production --enable-reflection
```
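
On the client side, a standard `grpcio` channel can connect to the TLS-enabled server. A minimal sketch, assuming the self-signed certificate used above (this is the stock `grpcio` API, not BentoML-specific):

```python
import grpc

# Trust the server's certificate (self-signed in this example).
with open("credentials/cert.pem", "rb") as f:
    creds = grpc.ssl_channel_credentials(root_certificates=f.read())

# 3000 is the default serving port.
channel = grpc.secure_channel("localhost:3000", creds)
```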


- Added channelz support for easier debugging of gRPC serving.
- Allowed nested requirements with the `-r` syntax.

```bash
# requirements.txt
-r nested/requirements.txt

pydantic
Pillow
fastapi
```


- Improved the [adaptive batching](https://docs.bentoml.org/en/latest/guides/batching.html) dispatcher's auto-tuning to avoid sporadic request failures due to batching at the beginning of the runner lifecycle.
- Fixed a bug where runners raised a `TypeError` when overloaded. An `HTTP 503 Service Unavailable` response is now returned when a runner is overloaded.

```bash
File "python3.9/site-packages/bentoml/_internal/runner/runner_handle/remote.py", line 188, in async_run_method
    return tuple(AutoContainer.from_payload(payload) for payload in payloads)
TypeError: 'Response' object is not iterable
```
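
Because overload now surfaces as a standard `503` instead of an internal `TypeError`, HTTP clients can handle it generically. A minimal sketch using `requests`; the endpoint name and payload are illustrative:

```python
import time

import requests

def classify_with_retry(url, payload, retries=3, backoff=0.5):
    """POST to a BentoML endpoint, backing off while runners are overloaded."""
    for attempt in range(retries):
        resp = requests.post(url, json=payload)
        if resp.status_code != 503:  # anything but "overloaded": return or raise
            resp.raise_for_status()
            return resp.json()
        time.sleep(backoff * 2**attempt)  # exponential backoff before retrying
    raise RuntimeError("service still overloaded after retries")

# classify_with_retry("http://localhost:3000/classify", [[4.9, 3.0, 1.4, 0.2]])
```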



💡 We continue to update the documentation and examples on every release to help the community unlock the full power of BentoML.

- Check out the updated [PyTorch Framework Guide](https://docs.bentoml.org/en/latest/frameworks/pytorch.html#saving-a-trained-model) on how to use `external_modules` to save classes or utility functions required by the model.
- See the [Metrics Guide](https://docs.bentoml.org/en/latest/guides/metrics.html) on how to add custom metrics to your API and custom Runners.
- Learn more about how to use the [Bento Client](https://docs.bentoml.org/en/latest/guides/client.html) to call your BentoML service with Python easily.
- Check out the latest blog post on [why model serving over gRPC matters to data scientists](https://modelserving.com/blog/3-reasons-for-grpc).

🥂 We’d like to thank the community for your continued support and engagement.

- Shout out to judahrand for multiple contributions to BentoML and bentoctl.
- Shout out to phildamore-phdata, quandollar, 2JooYeon, and fortunto2 for their first contribution to BentoML.

1.0.7

🍱 BentoML released `v1.0.7` as a patch to quickly fix a critical module import issue introduced in `v1.0.6`. The error manifests when importing any modules under `io.*` or `models.*`. The following is a typical error message and traceback. Please upgrade to `v1.0.7` to address this import issue.

```bash
packages/anyio/_backends/_asyncio.py", line 21, in <module>
    from io import IOBase
ImportError: cannot import name 'IOBase' from 'bentoml.io'
```


What's Changed
* test(grpc): e2e + unit tests by aarnphm in https://github.com/bentoml/BentoML/pull/2984
* feat: support multipart upload for large bento and model by yetone in https://github.com/bentoml/BentoML/pull/3044
* fix(config): respect `api_server.workers` by judahrand in https://github.com/bentoml/BentoML/pull/3049
* chore(lint): remove unused import by aarnphm in https://github.com/bentoml/BentoML/pull/3051
* fix(import): namespace collision by aarnphm in https://github.com/bentoml/BentoML/pull/3058

New Contributors
* judahrand made their first contribution in https://github.com/bentoml/BentoML/pull/3049

**Full Changelog**: https://github.com/bentoml/BentoML/compare/v1.0.6...v1.0.7

1.0.6

🍱 BentoML has just released `v1.0.6` featuring the [gRPC](https://grpc.io/) preview! Without changing a line of code, you can now serve your Bentos as a gRPC service. Similar to serving over HTTP, BentoML gRPC supports all the [ML frameworks](https://docs.bentoml.org/en/latest/frameworks/index.html), [observability features](https://docs.bentoml.org/en/latest/guides/tracing.html), [adaptive batching](https://docs.bentoml.org/en/latest/guides/batching.html), and more out-of-the-box, simply by calling the `serve-grpc` CLI command.

```bash
> pip install "bentoml[grpc]"
> bentoml serve-grpc iris_classifier:latest --production
```


- Check out our updated [tutorial](https://docs.bentoml.org/en/latest/tutorial.html) for a quick 10-minute crash course on BentoML gRPC.
- Review the standardized [Protobuf definitions of service APIs and IO types](https://docs.bentoml.org/en/latest/guides/grpc.html#protobuf-definition) (NDArray, DataFrame, File/Image, JSON, etc.).
- Learn more about [multi-language client support (Python, Go, Java, Node.js, etc)](https://docs.bentoml.org/en/latest/guides/grpc.html#client-implementation) with working examples.
- Customize gRPC service by [mounting new servicers](https://docs.bentoml.org/en/latest/guides/grpc.html#mounting-servicer) and [interceptors](https://docs.bentoml.org/en/latest/guides/grpc.html#mounting-grpc-interceptors).

⚠️ gRPC support is currently in preview. The public APIs may undergo incompatible changes in future patch releases until the official `v1.1.0` minor version release.

- Enhanced [access logging format](https://docs.bentoml.org/en/latest/guides/logging.html#logging-formatting) to output Trace and Span IDs in the more standard hex encoding by default.
- Added request total, duration, and in-progress metrics to Runners, in addition to API Servers.
- Added support for XGBoost SKLearn models (see the sketch after this list).
- Added support for restricting image mime types in the [Image IO descriptor](https://docs.bentoml.org/en/latest/reference/api_io_descriptors.html#images).
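
For the XGBoost SKLearn support, a minimal sketch; the dataset, model, and `iris_xgb_sklearn` tag are illustrative:

```python
import bentoml
import xgboost as xgb
from sklearn.datasets import load_iris

# Train an XGBoost estimator that follows the scikit-learn API...
X, y = load_iris(return_X_y=True)
clf = xgb.XGBClassifier(n_estimators=10)
clf.fit(X, y)

# ...and save it with the xgboost framework module as usual.
saved = bentoml.xgboost.save_model("iris_xgb_sklearn", clf)
print(saved.tag)
```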

🥂 We’d like to thank our community for their contribution and support.

- Shout out to benjamintanweihao for fixing a BentoML CLI bug.
- Shout out to lsh918 for fixing a PyTorch framework issue.
- Shout out to jeffthebear for enhancing the Pandas DataFrame OpenAPI schema.
- Shout out to jiewpeng for adding the support for customizing access logs with Trace and Span ID formats.

What's Changed
* fix: log runner errors explicitly by ssheng in https://github.com/bentoml/BentoML/pull/2952
* ci: temp fix for models test by sauyon in https://github.com/bentoml/BentoML/pull/2949
* fix: fix context parameter for multi-input IO descriptors by sauyon in https://github.com/bentoml/BentoML/pull/2948
* fix: use `torch.from_numpy()` instead of `torch.Tensor()` to keep data type by lsh918 in https://github.com/bentoml/BentoML/pull/2951
* docs: fix wrong name for example neural net by ssun-g in https://github.com/bentoml/BentoML/pull/2959
* docs: fix bentoml containerize command help message by aarnphm in https://github.com/bentoml/BentoML/pull/2957
* chore(cli): remove unused `--no-trunc` by benjamintanweihao in https://github.com/bentoml/BentoML/pull/2965
* fix: relax regex for setting environment variables by benjamintanweihao in https://github.com/bentoml/BentoML/pull/2964
* docs: update wrong paths for disabling logs by creativedutchmen in https://github.com/bentoml/BentoML/pull/2974
* feat: track serve update for start subcommands by ssheng in https://github.com/bentoml/BentoML/pull/2976
* feat: logging customization by jiewpeng in https://github.com/bentoml/BentoML/pull/2961
* chore(cli): using quotes instead of backslash by sauyon in https://github.com/bentoml/BentoML/pull/2981
* feat(cli): show full tracebacks in debug mode by sauyon in https://github.com/bentoml/BentoML/pull/2982
* feature(runner): add multiple output support by larme in https://github.com/bentoml/BentoML/pull/2912
* docs: add airflow integration page by parano in https://github.com/bentoml/BentoML/pull/2990
* chore(ci): fix the unit test of transformers by bojiang in https://github.com/bentoml/BentoML/pull/3003
* chore(ci): fix the issue caused by the change of check_task by bojiang in https://github.com/bentoml/BentoML/pull/3004
* fix(multipart): support multipart file inputs to non-file descriptors by sauyon in https://github.com/bentoml/BentoML/pull/3005
* feat(server): add runner metrics; refactoring batch size metrics by bojiang in https://github.com/bentoml/BentoML/pull/2977
* EXPERIMENTAL: gRPC support by aarnphm in https://github.com/bentoml/BentoML/pull/2808
* fix(runner): receive requests before cork by bojiang in https://github.com/bentoml/BentoML/pull/2996
* fix(server): service_name label of runner metrics by bojiang in https://github.com/bentoml/BentoML/pull/3008
* chore(misc): remove mentioned for team member from PR request by aarnphm in https://github.com/bentoml/BentoML/pull/3009
* feat(xgboost): support xgboost sklearn models by sauyon in https://github.com/bentoml/BentoML/pull/2997
* feat(io/image): allow restricting mime types by sauyon in https://github.com/bentoml/BentoML/pull/2999
* fix(grpc): docker message by aarnphm in https://github.com/bentoml/BentoML/pull/3012
* fix: broken legacy metrics by aarnphm in https://github.com/bentoml/BentoML/pull/3019
* fix(e2e): exception test for image IO by aarnphm in https://github.com/bentoml/BentoML/pull/3017
* revert(3017): filter write-only mime type for Image IO by bojiang in https://github.com/bentoml/BentoML/pull/3020
* chore: cleanup containerize utils by aarnphm in https://github.com/bentoml/BentoML/pull/3014
* feat(proto): add `serialized_bytes` to `pb.Part` by aarnphm in https://github.com/bentoml/BentoML/pull/3022
* docs: Update README.md by parano in https://github.com/bentoml/BentoML/pull/3023
* chore(grpc): vcs generated stubs by aarnphm in https://github.com/bentoml/BentoML/pull/3016
* feat(io/image): allow writeable mimes as output by sauyon in https://github.com/bentoml/BentoML/pull/3024
* docs: fix descriptor typo by darioarias in https://github.com/bentoml/BentoML/pull/3027
* fix(server): log localhost instead of 0.0.0.0 by sauyon in https://github.com/bentoml/BentoML/pull/3033
* fix(io): Pandas OpenAPI schema by jeffthebear in https://github.com/bentoml/BentoML/pull/3032
* chore(docker): support more cuda versions by larme in https://github.com/bentoml/BentoML/pull/3035
* docs: updates on blocks that failed to render by aarnphm in https://github.com/bentoml/BentoML/pull/3031
* chore: migrate to pyproject.toml by aarnphm in https://github.com/bentoml/BentoML/pull/3025
* docs: gRPC tutorial by aarnphm in https://github.com/bentoml/BentoML/pull/3013
* docs: gRPC advanced guides by aarnphm in https://github.com/bentoml/BentoML/pull/3034
* feat(configuration): override options with envvar by bojiang in https://github.com/bentoml/BentoML/pull/3018
* chore: update links by aarnphm in https://github.com/bentoml/BentoML/pull/3040
* fix(configuration): should validate config early by aarnphm in https://github.com/bentoml/BentoML/pull/3041
* qa(bentos): update latest options by aarnphm in https://github.com/bentoml/BentoML/pull/3042
* qa: ignore tools from distribution by aarnphm in https://github.com/bentoml/BentoML/pull/3045
* dependencies: ignore broken pypi combination by aarnphm in https://github.com/bentoml/BentoML/pull/3043
* feat: gRPC tracking by aarnphm in https://github.com/bentoml/BentoML/pull/3015
* configuration: migrate schema to `api_server` by ssheng in https://github.com/bentoml/BentoML/pull/3046
* qa: cleanup MLflow by aarnphm in https://github.com/bentoml/BentoML/pull/2945

New Contributors
* lsh918 made their first contribution in https://github.com/bentoml/BentoML/pull/2951
* ssun-g made their first contribution in https://github.com/bentoml/BentoML/pull/2959
* benjamintanweihao made their first contribution in https://github.com/bentoml/BentoML/pull/2965
* creativedutchmen made their first contribution in https://github.com/bentoml/BentoML/pull/2974
* darioarias made their first contribution in https://github.com/bentoml/BentoML/pull/3027
* jeffthebear made their first contribution in https://github.com/bentoml/BentoML/pull/3032

**Full Changelog**: https://github.com/bentoml/BentoML/compare/v1.0.5...v1.0.6

1.0.5

🍱 BentoML `v1.0.5` is released as a quick fix to a Yatai incompatibility introduced in `v1.0.4`.
- The incompatibility manifests in the following error message when deploying a bento on Yatai. Upgrading BentoML to `v1.0.5` will resolve the issue.
```bash
Error while finding module specification for 'bentoml._internal.server.cli.api_server' (ModuleNotFoundError: No module named 'bentoml._internal.server.cli')
```

- The incompatibility resides in all Yatai versions prior to `v1.0.0-alpha.*`. Alternatively, upgrading Yatai to `v1.0.0-alpha.*` will also restore compatibility with bentos built in `v1.0.4`.

1.0.4

- Added support for explicit GPU mapping for runners. In addition to specifying the number of GPU devices allocated to a runner, we can map a list of device IDs directly to a runner through [configuration](https://docs.bentoml.org/en/latest/concepts/runner.html#runner-configuration).

```yaml
runners:
  iris_clf_1:
    resources:
      nvidia.com/gpu: [2, 4]  # Map devices 2 and 4 to the iris_clf_1 runner
  iris_clf_2:
    resources:
      nvidia.com/gpu: [1, 3]  # Map devices 1 and 3 to the iris_clf_2 runner
```


- Added SSL support for the API server through both the CLI and configuration.

```bash
--ssl-certfile TEXT          SSL certificate file
--ssl-keyfile TEXT           SSL key file
--ssl-keyfile-password TEXT  SSL keyfile password
--ssl-version INTEGER        SSL version to use (see stdlib 'ssl' module)
--ssl-cert-reqs INTEGER      Whether client certificate is required (see stdlib 'ssl' module)
--ssl-ca-certs TEXT          CA certificates file
--ssl-ciphers TEXT           Ciphers to use (see stdlib 'ssl' module)
```


- Added adaptive batching size histogram metrics, `BENTOML_{runner}_{method}_adaptive_batch_size_bucket`, for observability of batching mechanism details.

![image](https://user-images.githubusercontent.com/861225/186965455-8c89a713-fdc1-4f46-a633-82dc0d2586d7.png)

- Added support for the OpenTelemetry [OTLP exporter for tracing](https://docs.bentoml.org/en/latest/guides/tracing.html), and BentoML now configures the OpenTelemetry resource automatically if the user has not explicitly configured it through environment variables. Upgraded OpenTelemetry Python packages to version `0.33b0`.

![image](https://user-images.githubusercontent.com/861225/186965627-56565bb0-a5ba-4599-b464-9e6b0caa6d8d.png)

- Added support for saving `external_modules` alongside models in the `save_model` API (see the sketch below). Saving external Python modules is useful for models with external dependencies, such as tokenizers, preprocessors, and configurations.
- Enhanced Swagger UI to include additional documentation and helper links.

![image](https://user-images.githubusercontent.com/861225/186965724-073f5efb-4507-4221-854c-63437ac851c7.png)
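
A hedged sketch of the `external_modules` option; the `my_preprocessing` module and the `model` object here are hypothetical stand-ins for your own code:

```python
import bentoml
import my_preprocessing  # hypothetical local module the model imports at load time

saved = bentoml.picklable_model.save_model(
    "sentiment_model",
    model,  # a trained model object that depends on my_preprocessing
    external_modules=[my_preprocessing],  # bundle the module alongside the model
)
```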

💡 We continue to update the documentation on every release to help our users unlock the full power of BentoML.

- Check out the [adaptive batching](https://docs.bentoml.org/en/latest/guides/batching.html) documentation on how to leverage batching to improve inference latency and efficiency.
- Check out the [runner configuration](https://docs.bentoml.org/en/latest/concepts/runner.html#runner-configuration) documentation on how to customize resource allocation for runners at run time.

🙌 We continue to receive great engagement and support from the BentoML community.

- Shout out to sptowey for their contribution on adding SSL support.
- Shout out to dbuades for their contribution on adding the OTLP exporter.
- Shout out to tweeklab for their contribution on fixing a bug on `import_model` in the MLflow framework.

What's Changed
* refactor: cli to `bentoml_cli` by sauyon in https://github.com/bentoml/BentoML/pull/2880
* chore: remove typing-extensions dependency by sauyon in https://github.com/bentoml/BentoML/pull/2879
* fix: remove chmod install scripts by aarnphm in https://github.com/bentoml/BentoML/pull/2830
* fix: relative imports to lazy by aarnphm in https://github.com/bentoml/BentoML/pull/2882
* fix(cli): click utilities imports by aarnphm in https://github.com/bentoml/BentoML/pull/2883
* docs: add custom model runner example by parano in https://github.com/bentoml/BentoML/pull/2885
* qa: analytics unit tests by aarnphm in https://github.com/bentoml/BentoML/pull/2878
* chore: script for releasing quickstart bento by parano in https://github.com/bentoml/BentoML/pull/2892
* fix: pushing models from Bento instead of local modelstore by parano in https://github.com/bentoml/BentoML/pull/2887
* fix(containerize): supports passing multiple tags by aarnphm in https://github.com/bentoml/BentoML/pull/2872
* feat: explicit GPU runner mappings by jjmachan in https://github.com/bentoml/BentoML/pull/2862
* fix: setuptools doesn't include `bentoml_cli` by bojiang in https://github.com/bentoml/BentoML/pull/2898
* feat: Add SSL support for http api servers via bentoml serve by sptowey in https://github.com/bentoml/BentoML/pull/2886
* patch: ssl styling and default value check by aarnphm in https://github.com/bentoml/BentoML/pull/2899
* fix(scheduling): raise an error for invalid resources by bojiang in https://github.com/bentoml/BentoML/pull/2894
* chore(templates): cleanup debian dependency logic by aarnphm in https://github.com/bentoml/BentoML/pull/2904
* fix(ci): unittest failed by bojiang in https://github.com/bentoml/BentoML/pull/2908
* chore(cli): add figlet for CLI by aarnphm in https://github.com/bentoml/BentoML/pull/2909
* feat: codespace by aarnphm in https://github.com/bentoml/BentoML/pull/2907
* feat: use yatai proxy to upload/download bentos/models by yetone in https://github.com/bentoml/BentoML/pull/2832
* fix(scheduling): numpy worker environs are not taking effect by bojiang in https://github.com/bentoml/BentoML/pull/2893
* feat: Adaptive batching size histogram metrics by ssheng in https://github.com/bentoml/BentoML/pull/2902
* chore(swagger): include help links by parano in https://github.com/bentoml/BentoML/pull/2927
* feat(tracing): add support for otlp exporter by dbuades in https://github.com/bentoml/BentoML/pull/2918
* chore: Lock OpenTelemetry versions and add tracing metadata by ssheng in https://github.com/bentoml/BentoML/pull/2928
* revert: unminify CSS by aarnphm in https://github.com/bentoml/BentoML/pull/2931
* fix: importing mlflow:/ urls with no extra path info by tweeklab in https://github.com/bentoml/BentoML/pull/2930
* fix(yatai): make presigned_urls_deprecated optional by bojiang in https://github.com/bentoml/BentoML/pull/2933
* feat: add timeout option for bentoml runner config by jjmachan in https://github.com/bentoml/BentoML/pull/2890
* perf(cli): speed up by aarnphm in https://github.com/bentoml/BentoML/pull/2934
* chore: remove multipart IO descriptor warning by ssheng in https://github.com/bentoml/BentoML/pull/2936
* fix(json): revert eager check by aarnphm in https://github.com/bentoml/BentoML/pull/2926
* chore: remove `--config` flag to load the bentoml runtime config by jjmachan in https://github.com/bentoml/BentoML/pull/2939
* chore: update README messaging by ssheng in https://github.com/bentoml/BentoML/pull/2937
* fix: use a temporary file for file uploads by sauyon in https://github.com/bentoml/BentoML/pull/2929
* feat(cli): add CLI command to serve a runner by bojiang in https://github.com/bentoml/BentoML/pull/2920
* docs: Runner configuration for batching and resource allocation by ssheng in https://github.com/bentoml/BentoML/pull/2941
* bug: handle bad image file by parano in https://github.com/bentoml/BentoML/pull/2942
* chore(docs): earlier check for buildx by aarnphm in https://github.com/bentoml/BentoML/pull/2940
* fix(cli): helper message default values by ssheng in https://github.com/bentoml/BentoML/pull/2943
* feat(sdk): add external_modules option to save_model by bojiang in https://github.com/bentoml/BentoML/pull/2895
* fix(cli): component name regression by ssheng in https://github.com/bentoml/BentoML/pull/2944

New Contributors
* sptowey made their first contribution in https://github.com/bentoml/BentoML/pull/2886
* dbuades made their first contribution in https://github.com/bentoml/BentoML/pull/2918
* tweeklab made their first contribution in https://github.com/bentoml/BentoML/pull/2930

**Full Changelog**: https://github.com/bentoml/BentoML/compare/v1.0.3...v1.0.4

