BentoML



1.0.19

🍱 BentoML `v1.0.19` is released with enhanced GPU utilization and expanded ML framework support.

- **Optimized GPU resource utilization**: Enabled scheduling of multiple instances of the same runner using the `workers_per_resource` [scheduling strategy configuration](https://docs.bentoml.org/en/latest/guides/scheduling.html). The following configuration schedules 2 instances of the “iris” runner per GPU. `workers_per_resource` defaults to 1.

```yaml
runners:
  iris:
    resources:
      nvidia.com/gpu: 1
    workers_per_resource: 2
```


- **New ML framework support**: We've added support for [EasyOCR](https://docs.bentoml.org/en/latest/frameworks/easyocr.html) and [Detectron2](https://docs.bentoml.org/en/latest/frameworks/detectron.html) to our growing list of supported ML frameworks.
- **Enhanced runner communication**: Implemented [PEP 574](https://peps.python.org/pep-0574/) out-of-band pickling to improve runner communication by eliminating memory copying, resulting in better performance and efficiency; see the sketch after this list.
- **Backward compatibility for Hugging Face Transformers**: Resolved compatibility issues with Hugging Face Transformers versions prior to `v4.18`, ensuring a seamless experience for users with older versions.
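
For illustration, here is a minimal sketch of what PEP 574 out-of-band pickling looks like in plain Python (the array and buffer handling here are illustrative, not BentoML's internal code):

```python
import pickle

import numpy as np

# Pickle protocol 5 (PEP 574) lets large buffers bypass the pickle stream:
# instead of being copied into the serialized bytes, each buffer is handed
# to a callback and can be transferred zero-copy (e.g., via shared memory).
arr = np.ones((1000, 1000), dtype="float32")

buffers = []
payload = pickle.dumps(arr, protocol=5, buffer_callback=buffers.append)

# The receiver reconstructs the object from the small pickle payload plus
# the out-of-band buffers, avoiding an extra copy of the array data.
restored = pickle.loads(payload, buffers=buffers)
assert (restored == arr).all()
```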

⚙️ With the release of Kubeflow 1.7, [BentoML now has native integration with Kubeflow](https://modelserving.com/blog/streamline-production-ml-with-bentoml-and-kubeflow), allowing developers to leverage BentoML's cloud-native components. Previously, developers were limited to exporting and deploying a [Bento](https://docs.bentoml.org/en/latest/concepts/bento.html) as a single container. With this integration, models trained in Kubeflow can easily be packaged, containerized, and deployed to a Kubernetes cluster as microservices. This architecture enables each model to run in its own pod, utilizing the hardware best suited to its task and scaling independently.

💡 With each release, we consistently update our blog, documentation and examples to empower the community in harnessing the full potential of BentoML.

- Learn more about the [scheduling strategy](https://docs.bentoml.org/en/latest/guides/scheduling.html) to get better resource utilization.
- Learn more about [model monitoring and drift detection in BentoML](https://modelserving.com/blog/a-guide-to-ml-monitoring-and-drift-detection) and its integration with various monitoring frameworks.
- Learn more about using [Nvidia Triton Inference Server as a runner](https://modelserving.com/blog/bentoml-or-triton-inference-server-choose-both) to improve your application’s performance and throughput.

What's Changed
* fix(env): using `python -m` to run pip commands by frostming in https://github.com/bentoml/BentoML/pull/3762
* chore(deps): bump pytest from 7.3.0 to 7.3.1 by dependabot in https://github.com/bentoml/BentoML/pull/3766
* feat: lazy load `bentoml.server` by aarnphm in https://github.com/bentoml/BentoML/pull/3763
* fix(client): service route prefix by aarnphm in https://github.com/bentoml/BentoML/pull/3765
* chore: add test with many requests by sauyon in https://github.com/bentoml/BentoML/pull/3768
* fix: using http config for grpc server by aarnphm in https://github.com/bentoml/BentoML/pull/3771
* feat: apply pep574 out-of-band pickling to DefaultContainer by larme in https://github.com/bentoml/BentoML/pull/3736
* fix: passing serve_cmd and passthrough kwargs by aarnphm in https://github.com/bentoml/BentoML/pull/3764
* feat: Detectron by aarnphm in https://github.com/bentoml/BentoML/pull/3711
* chore(dispatcher): (re-)factor out training code by sauyon in https://github.com/bentoml/BentoML/pull/3767
* feat: EasyOCR by aarnphm in https://github.com/bentoml/BentoML/pull/3712
* feat(build): support 3.11 by aarnphm in https://github.com/bentoml/BentoML/pull/3774
* patch: backports module availability for transformers<4.18 by aarnphm in https://github.com/bentoml/BentoML/pull/3775
* fix(dispatcher): set wait to 0 while training by sauyon in https://github.com/bentoml/BentoML/pull/3664
* chore(deps): bump ruff from 0.0.261 to 0.0.262 by dependabot in https://github.com/bentoml/BentoML/pull/3778
* feat: add `model.load_model` method by parano in https://github.com/bentoml/BentoML/pull/3780
* feat: Allow spawning more than 1 worker on each resource by frostming in https://github.com/bentoml/BentoML/pull/3776
* docs: Fix TensorFlow `save_model` parameter order by ssheng in https://github.com/bentoml/BentoML/pull/3781
* chore(deps): bump yamllint from 1.30.0 to 1.31.0 by dependabot in https://github.com/bentoml/BentoML/pull/3782
* chore(deps): bump imageio from 2.27.0 to 2.28.0 by dependabot in https://github.com/bentoml/BentoML/pull/3783
* chore(deps): bump ruff from 0.0.262 to 0.0.263 by dependabot in https://github.com/bentoml/BentoML/pull/3790
* fix: allow import service defined under a Python package by parano in https://github.com/bentoml/BentoML/pull/3794

New Contributors
* frostming made their first contribution in https://github.com/bentoml/BentoML/pull/3762

**Full Changelog**: https://github.com/bentoml/BentoML/compare/v1.0.18...v1.0.19

1.0.18

🍱 BentoML `v1.0.18` brings a new way of creating the server and client natively from Python.

- Start an HTTP or gRPC server and client asynchronously with a context manager.

```python
import numpy as np

import bentoml

server = bentoml.HTTPServer("iris_classifier:latest", production=True, port=3000)

# Start the server in a separate process and connect to it using a client
with server.start() as client:
    res = client.classify(np.array([[4.9, 3.0, 1.4, 0.2]]))
```


- Start an HTTP or gRPC server synchronously.

```python
import bentoml

server = bentoml.HTTPServer("iris_classifier:latest", production=True, port=3000)
server.start(blocking=True)
```

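A gRPC server can be started the same way. A minimal sketch, assuming the analogous `GrpcServer` class introduced alongside `HTTPServer`:

```python
import bentoml

# Same blocking pattern as HTTPServer, but serving BentoML's gRPC protocol.
server = bentoml.GrpcServer("iris_classifier:latest", production=True, port=3001)
server.start(blocking=True)
```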

- As always, a client can be created and connected to a running server.

```python
import numpy as np

from bentoml.client import Client

client = Client.from_url("http://localhost:3000")
res = client.classify(np.array([[4.9, 3.0, 1.4, 0.2]]))
```

What's Changed
* chore(deps): bump coverage[toml] from 7.2.2 to 7.2.3 by dependabot in https://github.com/bentoml/BentoML/pull/3746
* bugs: Fix an f-string bug in Tranformers framework. by ssheng in https://github.com/bentoml/BentoML/pull/3753
* chore(deps): bump pytest from 7.2.2 to 7.3.0 by dependabot in https://github.com/bentoml/BentoML/pull/3751
* chore(deps): bump bufbuild/buf-setup-action from 1.16.0 to 1.17.0 by dependabot in https://github.com/bentoml/BentoML/pull/3750
* fix: BufferError when pushing model to BentoCloud by aarnphm in https://github.com/bentoml/BentoML/pull/3737
* chore: remove codecov dependencies by aarnphm in https://github.com/bentoml/BentoML/pull/3754
* feat: implement new serve API by sauyon in https://github.com/bentoml/BentoML/pull/3696
* examples: Add a client example to quickstart by ssheng in https://github.com/bentoml/BentoML/pull/3752


**Full Changelog**: https://github.com/bentoml/BentoML/compare/v1.0.17...v1.0.18

1.0.17

🍱 We are excited to announce the release of BentoML v1.0.17, which includes support for 🤗 Hugging Face Transformers pre-trained instances. Prior to this release, only pipelines could be saved and loaded using the `bentoml.transformers` APIs. However, based on the community's demand to work with pre-trained models, tokenizers, preprocessors, etc., without pipelines, we have expanded our capabilities in `bentoml.transformers` APIs. With this release, all pre-trained instances can be saved and loaded into either built-in Transformers framework runners or custom runners. This update opens up new possibilities for users to work with pre-trained models, and we are thrilled to see what the community will create using this feature. To learn more, visit [BentoML Transformers framework documentation](https://docs.bentoml.org/en/latest/frameworks/transformers.html).

- Pre-trained models and instances, such as tokenizers, preprocessors, and feature extractors, can also be saved as standalone models using the `bentoml.transformers.save_model` API.

```python
import bentoml

from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

bentoml.transformers.save_model("speecht5_tts_processor", processor)
bentoml.transformers.save_model("speecht5_tts_model", model, signatures={"generate_speech": {"batchable": False}})
bentoml.transformers.save_model("speecht5_tts_vocoder", vocoder)
```


- Pre-trained models and instances can be run either independently as Transformers framework runners or jointly in a custom runner. To use pre-trained models and instances as individual framework runners, simply get the model references and convert them to runners using the `to_runner` method.

```python
import bentoml
import torch

from bentoml.io import Text, NumpyNdarray
from datasets import load_dataset

processor_runner = bentoml.transformers.get("speecht5_tts_processor").to_runner()
model_runner = bentoml.transformers.get("speecht5_tts_model").to_runner()
vocoder_runner = bentoml.transformers.get("speecht5_tts_vocoder").to_runner()
embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings_dataset[7306]["xvector"]).unsqueeze(0)

svc = bentoml.Service("text2speech", runners=[processor_runner, model_runner, vocoder_runner])

@svc.api(input=Text(), output=NumpyNdarray())
def generate_speech(inp: str):
    inputs = processor_runner.run(text=inp, return_tensors="pt")
    speech = model_runner.generate_speech.run(input_ids=inputs["input_ids"], speaker_embeddings=speaker_embeddings, vocoder=vocoder_runner.run)
    return speech.numpy()
```


- To use the pre-trained models and instances together in a custom runner, use the `bentoml.transformers.get` API to get the model references and load them in a custom runner. The pre-trained instances can then be used for inference in the custom runner.

```python
import bentoml
import torch

from datasets import load_dataset

processor_ref = bentoml.models.get("speecht5_tts_processor:latest")
model_ref = bentoml.models.get("speecht5_tts_model:latest")
vocoder_ref = bentoml.models.get("speecht5_tts_vocoder:latest")

class SpeechT5Runnable(bentoml.Runnable):
    def __init__(self):
        self.processor = bentoml.transformers.load_model(processor_ref)
        self.model = bentoml.transformers.load_model(model_ref)
        self.vocoder = bentoml.transformers.load_model(vocoder_ref)
        self.embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
        self.speaker_embeddings = torch.tensor(self.embeddings_dataset[7306]["xvector"]).unsqueeze(0)

    @bentoml.Runnable.method(batchable=False)
    def generate_speech(self, inp: str):
        inputs = self.processor(text=inp, return_tensors="pt")
        speech = self.model.generate_speech(inputs["input_ids"], self.speaker_embeddings, vocoder=self.vocoder)
        return speech.numpy()

text2speech_runner = bentoml.Runner(SpeechT5Runnable, name="speecht5_runner", models=[processor_ref, model_ref, vocoder_ref])
svc = bentoml.Service("talk_gpt", runners=[text2speech_runner])

@svc.api(input=bentoml.io.Text(), output=bentoml.io.NumpyNdarray())
async def generate_speech(inp: str):
    return await text2speech_runner.generate_speech.async_run(inp)
```


What's Changed
* feat(containerize): caching pip/conda installation layers by smidm in https://github.com/bentoml/BentoML/pull/3673
* docs(batching): update docs to 503 by sauyon in https://github.com/bentoml/BentoML/pull/3677
* chore(deps): bump ruff from 0.0.255 to 0.0.256 by dependabot in https://github.com/bentoml/BentoML/pull/3676
* fix(type): annotate PdSeries with pandas-stubs by aarnphm in https://github.com/bentoml/BentoML/pull/3466
* chore(dispatcher): refactor out training code by sauyon in https://github.com/bentoml/BentoML/pull/3663
* fix: makes containerize for triton examples to all amd64 by aarnphm in https://github.com/bentoml/BentoML/pull/3678
* chore(deps): bump coverage[toml] from 7.2.1 to 7.2.2 by dependabot in https://github.com/bentoml/BentoML/pull/3679
* revert: "chore(dispatcher): refactor out training code (3663)" by sauyon in https://github.com/bentoml/BentoML/pull/3680
* doc: add more links to Bentoml/examples by larme in https://github.com/bentoml/BentoML/pull/3631
* perf: serialization optimization by larme in https://github.com/bentoml/BentoML/pull/3606
* examples: Kubeflow by ssheng in https://github.com/bentoml/BentoML/pull/3656
* chore(deps): bump pytest-asyncio from 0.20.3 to 0.21.0 by dependabot in https://github.com/bentoml/BentoML/pull/3688
* chore(deps): bump ruff from 0.0.256 to 0.0.257 by dependabot in https://github.com/bentoml/BentoML/pull/3689
* chore(deps): bump imageio from 2.26.0 to 2.26.1 by dependabot in https://github.com/bentoml/BentoML/pull/3690
* chore(deps): bump yamllint from 1.29.0 to 1.30.0 by dependabot in https://github.com/bentoml/BentoML/pull/3694
* fix: remove duplicate dependabot check for pip by aarnphm in https://github.com/bentoml/BentoML/pull/3691
* chore(deps): bump ruff from 0.0.257 to 0.0.258 by dependabot in https://github.com/bentoml/BentoML/pull/3699
* docs: Update the Kubeflow example by ssheng in https://github.com/bentoml/BentoML/pull/3703
* chore(deps): bump ruff from 0.0.258 to 0.0.259 by dependabot in https://github.com/bentoml/BentoML/pull/3709
* docs: add link to pyfilesystem plugins by sauyon in https://github.com/bentoml/BentoML/pull/3716
* docs: Kubeflow integration documentation by ssheng in https://github.com/bentoml/BentoML/pull/3704
* docs: replace load_runner() to get().to_runner() by KimSoungRyoul in https://github.com/bentoml/BentoML/pull/3715
* chore(deps): bump imageio from 2.26.1 to 2.27.0 by dependabot in https://github.com/bentoml/BentoML/pull/3720
* fix(readme): format markdown table by aarnphm in https://github.com/bentoml/BentoML/pull/3722
* fix: copy files before running `setup_script` by aarnphm in https://github.com/bentoml/BentoML/pull/3713
* chore: remove experimental warning for `bentoml.metrics` by aarnphm in https://github.com/bentoml/BentoML/pull/3725
* ci: temporary disable coverage by aarnphm in https://github.com/bentoml/BentoML/pull/3726
* chore(deps): bump ruff from 0.0.259 to 0.0.260 by dependabot in https://github.com/bentoml/BentoML/pull/3734
* chore(deps): bump tritonclient[all] from 2.31.0 to 2.32.0 by dependabot in https://github.com/bentoml/BentoML/pull/3730
* fix(type): `bentoml.container.build` should accept multiple `image_tag` by pmayd in https://github.com/bentoml/BentoML/pull/3719
* chore(deps): bump bufbuild/buf-setup-action from 1.15.1 to 1.16.0 by dependabot in https://github.com/bentoml/BentoML/pull/3738
* feat: add query params to request context by sauyon in https://github.com/bentoml/BentoML/pull/3717
* chore(dispatcher): use attr class instead of a tuple by sauyon in https://github.com/bentoml/BentoML/pull/3731
* fix: Make it so the configured max_batch_size is respected when batching inference requests together by RShang97 in https://github.com/bentoml/BentoML/pull/3741
* feat(transformers): pretrained protocol support by aarnphm in https://github.com/bentoml/BentoML/pull/3684
* fix(tests): broken CI by aarnphm in https://github.com/bentoml/BentoML/pull/3742
* chore(deps): bump ruff from 0.0.260 to 0.0.261 by dependabot in https://github.com/bentoml/BentoML/pull/3744
* docs: Transformers documentation on pre-trained instances support by ssheng in https://github.com/bentoml/BentoML/pull/3745

New Contributors
* smidm made their first contribution in https://github.com/bentoml/BentoML/pull/3673
* pmayd made their first contribution in https://github.com/bentoml/BentoML/pull/3719
* RShang97 made their first contribution in https://github.com/bentoml/BentoML/pull/3741

**Full Changelog**: https://github.com/bentoml/BentoML/compare/v1.0.16...v1.0.17

1.0.16

🍱 BentoML `v1.0.16` features an integration with [Triton Inference Server](https://github.com/triton-inference-server/server).

- Triton Inference Server can be configured as a Runner in BentoML, with its model repository and CLI arguments specified as parameters.

```python
import bentoml

triton_runner = bentoml.triton.Runner(
    "triton_runner",
    model_repository="s3://bucket/path/to/model_repository",
    cli_args=["--load-model=torchscript_yolov5s", "--model-control-mode=explicit"],
)
```


- Models served by the Triton Inference Server runner can be called as methods on the runner handle, both synchronously and asynchronously.

```python
import typing as t

import numpy as np
from numpy.typing import NDArray
from PIL.Image import Image

# `svc` is the bentoml.Service created with the Triton runner defined above.
@svc.api(
    input=bentoml.io.Image.from_sample("./data/0.png"), output=bentoml.io.NumpyNdarray()
)
async def bentoml_torchscript_mnist_infer(im: Image) -> NDArray[t.Any]:
    arr = np.array(im) / 255.0
    arr = np.expand_dims(arr, (0, 1)).astype("float32")
    InferResult = await triton_runner.torchscript_mnist.async_run(arr)
    return InferResult.as_numpy("OUTPUT__0")
```


- Build bentos and containerize images with Triton runners by specifying the `nvcr.io/nvidia/tritonserver` base image in `bentofile.yaml`; a Python build sketch follows the configuration.

```yaml
service: service:svc
include:
  - /model_repository
  - /data/*.png
  - /*.py
exclude:
  - /__pycache__
  - /venv
  - /train.py
  - /build_bento.py
  - /containerize_bento.py
python:
  packages:
    - bentoml[triton]
docker:
  base_image: nvcr.io/nvidia/tritonserver:22.12-py3
```

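The same build-and-containerize flow can also be scripted from Python (presumably what the excluded `build_bento.py` and `containerize_bento.py` scripts do). A minimal sketch using the `bentoml.bentos.build_bentofile` and `bentoml.container.build` APIs:

```python
import bentoml

# Build the Bento described by bentofile.yaml in the current directory.
bento = bentoml.bentos.build_bentofile(bentofile="bentofile.yaml", build_ctx=".")

# Containerize it; the image is based on the Triton base image declared
# under `docker.base_image` in bentofile.yaml.
bentoml.container.build(bento.tag)
```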

💡 If you are an existing Triton user, the integration provides simpler ways to add custom logic in Python, deploy a distributed multi-model inference graph, unify model management across different ML frameworks and workflows, and standardize the model packaging format with versioning and collaboration features. If you are an existing BentoML user, the integration improves runner efficiency and throughput under high load, thanks to Triton’s efficient C++ runtime.

What's Changed
* fix(container): podman virtual machine healthcheck (3575) by timc in https://github.com/bentoml/BentoML/pull/3576
* chore(aiohttp): remove deprecated verify_ssl to ssl by aarnphm in https://github.com/bentoml/BentoML/pull/3574
* feat(triton): support HTTP client by aarnphm in https://github.com/bentoml/BentoML/pull/3502
* fix(grpc): handle backward protocol version by aarnphm in https://github.com/bentoml/BentoML/pull/3332
* chore(deps): bump ruff from 0.0.246 to 0.0.247 by dependabot in https://github.com/bentoml/BentoML/pull/3579
* chore(test): using container API for testing by aarnphm in https://github.com/bentoml/BentoML/pull/3582
* fix(serve-cli): Make sure to use BENTOML_CONFIG value by aarnphm in https://github.com/bentoml/BentoML/pull/3597
* docs: Update documentation with an examples link by ssheng in https://github.com/bentoml/BentoML/pull/3599
* chore: lock starlette version by sauyon in https://github.com/bentoml/BentoML/pull/3600
* feature(diffusers): support `enable_attention_slicing` by larme in https://github.com/bentoml/BentoML/pull/3598
* chore(cli): figlet to show on CLI only by aarnphm in https://github.com/bentoml/BentoML/pull/3603
* chore(cli): using default background as color by aarnphm in https://github.com/bentoml/BentoML/pull/3608
* feat: Flax by aarnphm in https://github.com/bentoml/BentoML/pull/3123
* feat(gRPC): client implementation by aarnphm in https://github.com/bentoml/BentoML/pull/3280
* fix: invalid option dtype=True for pd.read_csv by parano in https://github.com/bentoml/BentoML/pull/3601
* chore(deps): bump coverage[toml] from 7.1.0 to 7.2.0 by dependabot in https://github.com/bentoml/BentoML/pull/3616
* chore(deps): bump ruff from 0.0.247 to 0.0.252 by dependabot in https://github.com/bentoml/BentoML/pull/3617
* docs: containerisation API by aarnphm in https://github.com/bentoml/BentoML/pull/3518
* chore(deps): bump coverage[toml] from 7.2.0 to 7.2.1 by dependabot in https://github.com/bentoml/BentoML/pull/3621
* chore(deps): bump imageio from 2.25.1 to 2.26.0 by dependabot in https://github.com/bentoml/BentoML/pull/3620
* fix(docs): missing space bug causes table not to render by aarnphm in https://github.com/bentoml/BentoML/pull/3622
* chore(deps): bump ruff from 0.0.252 to 0.0.253 by dependabot in https://github.com/bentoml/BentoML/pull/3624
* feat: enable cork for non-batched workloads by sauyon in https://github.com/bentoml/BentoML/pull/3602
* docs: Fix typo in concepts/service by FelixSchuSi in https://github.com/bentoml/BentoML/pull/3627
* chore(deps): bump tritonclient[all] from 2.30.0 to 2.31.0 by dependabot in https://github.com/bentoml/BentoML/pull/3628
* fix(docs): broken inline docstring by aarnphm in https://github.com/bentoml/BentoML/pull/3538
* fix: use a semaphore to limit runner connections by sauyon in https://github.com/bentoml/BentoML/pull/3607
* fix: make inference_api handle None type by aarnphm in https://github.com/bentoml/BentoML/pull/3611
* fix: make sure not to override user set values for from_sample by aarnphm in https://github.com/bentoml/BentoML/pull/3610
* docs: add exceptions API section by aarnphm in https://github.com/bentoml/BentoML/pull/3609
* revert(pyproject): add back pytest plugins by aarnphm in https://github.com/bentoml/BentoML/pull/3633
* fix(configuration): CORS docs, `allow_origins` and `allow_headers` by larme in https://github.com/bentoml/BentoML/pull/3643
* chore(deps): bump ruff from 0.0.253 to 0.0.254 by dependabot in https://github.com/bentoml/BentoML/pull/3641
* chore(deps): bump pytest from 7.2.1 to 7.2.2 by dependabot in https://github.com/bentoml/BentoML/pull/3642
* chore: http client healthcheck by denyszhak in https://github.com/bentoml/BentoML/pull/3636
* docs: typo in configuration.rst by davkime in https://github.com/bentoml/BentoML/pull/3644
* docs: correct links to configuration source code by davkime in https://github.com/bentoml/BentoML/pull/3645
* example: add fraud detection and benchmark examples by parano in https://github.com/bentoml/BentoML/pull/3647
* fix(containerize): remove autoconfig for buildctl by aarnphm in https://github.com/bentoml/BentoML/pull/3484
* feat: name in bentofile.yaml by aarnphm in https://github.com/bentoml/BentoML/pull/3604
* chore: ensure all labels are dict[str,str] by aarnphm in https://github.com/bentoml/BentoML/pull/3605
* fix(triton): enable runtime options by aarnphm in https://github.com/bentoml/BentoML/pull/3649
* docs: Triton Inference Server by aarnphm in https://github.com/bentoml/BentoML/pull/3519
* example: Triton Inference Server by aarnphm in https://github.com/bentoml/BentoML/pull/3471
* chore(deps): bump pytest from 7.2.1 to 7.2.2 in /requirements by dependabot in https://github.com/bentoml/BentoML/pull/3639
* chore(deps): bump bufbuild/buf-setup-action from 1.14.0 to 1.15.0 by dependabot in https://github.com/bentoml/BentoML/pull/3638
* fix: some missing logics for triton examples by aarnphm in https://github.com/bentoml/BentoML/pull/3650
* fix: use async implementation by characat0 in https://github.com/bentoml/BentoML/pull/3654
* feat: add ray deploy support by parano in https://github.com/bentoml/BentoML/pull/3632
* chore(deps): bump pytest-xdist[psutil] from 3.2.0 to 3.2.1 by dependabot in https://github.com/bentoml/BentoML/pull/3659
* chore(deps): bump bufbuild/buf-setup-action from 1.15.0 to 1.15.1 by dependabot in https://github.com/bentoml/BentoML/pull/3655
* fix: update scheme logic using ssl.enabled by aarnphm in https://github.com/bentoml/BentoML/pull/3660
* feat: `from_sample` docstring by aarnphm in https://github.com/bentoml/BentoML/pull/3318
* fix(ci): locking starlette for container tests by aarnphm in https://github.com/bentoml/BentoML/pull/3666
* chore: better exception for numpy by sauyon in https://github.com/bentoml/BentoML/pull/3665
* feat: make file io descriptor allow any mime type by default by sauyon in https://github.com/bentoml/BentoML/pull/3626
* fix(docs): broken link by aarnphm in https://github.com/bentoml/BentoML/pull/3537
* chore(stubs): remove unused by aarnphm in https://github.com/bentoml/BentoML/pull/3612
* docs: Update Triton documentation and examples by ssheng in https://github.com/bentoml/BentoML/pull/3668
* chore(deps): bump ruff from 0.0.254 to 0.0.255 by dependabot in https://github.com/bentoml/BentoML/pull/3671
* docs: Update integration docs by ssheng in https://github.com/bentoml/BentoML/pull/3672

New Contributors
* FelixSchuSi made their first contribution in https://github.com/bentoml/BentoML/pull/3627
* denyszhak made their first contribution in https://github.com/bentoml/BentoML/pull/3636
* davkime made their first contribution in https://github.com/bentoml/BentoML/pull/3644

**Full Changelog**: https://github.com/bentoml/BentoML/compare/v1.0.15...v1.0.16

1.0.15

🍱 BentoML `v1.0.15` is here, featuring the introduction of the `bentoml.diffusers` framework.

- Learn more about the capabilities of the `bentoml.diffusers` framework in the [Creating Stable Diffusion 2.0 Service With BentoML And Diffusers](https://modelserving.com/blog/creating-stable-diffusion-20-service-with-bentoml-and-diffusers) blog and [BentoML Diffusers](https://github.com/bentoml/diffusers-examples) example project.
- Import a diffusion model with the `bentoml.diffusers.import_model` API.

```python
import bentoml

bentoml.diffusers.import_model(
    "sd2",
    "stabilityai/stable-diffusion-2",
)
```


- Create a `text2img` service using a Stable Diffusion 2.0 model runner with the familiar `to_runner` API from the `bentoml.diffusers` framework.

```python
import bentoml
from bentoml.io import Image, JSON

bento_model = bentoml.diffusers.get("sd2:latest")
stable_diffusion_runner = bento_model.to_runner()

svc = bentoml.Service("stable_diffusion_v2", runners=[stable_diffusion_runner])

@svc.api(input=JSON(), output=Image())
def txt2img(input_data):
    images, _ = stable_diffusion_runner.run(**input_data)
    return images[0]
```

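Once the service is running (for example, via `bentoml serve`), it can be queried with the BentoML client. A minimal sketch, assuming the service listens on port 3000 and the pipeline accepts a `prompt` field:

```python
from bentoml.client import Client

# Connect to the locally served stable_diffusion_v2 service.
client = Client.from_url("http://localhost:3000")

# The JSON payload is forwarded to the diffusers pipeline via run(**input_data);
# the endpoint returns the first generated image.
img = client.txt2img({"prompt": "a bento box, studio lighting"})
img.save("output.png")
```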

🍱 Fixed an incompatibility introduced in `starlette==0.25.0` that resulted in the type `MultiPartMessage` not being found in `starlette.formparsers`.

```
ImportError: cannot import name 'MultiPartMessage' from 'starlette.formparsers' (/opt/miniconda3/envs/bentoml/lib/python3.10/site-packages/starlette/formparsers.py)
```


What's Changed
* chore(deps): bump pytest-xdist[psutil] from 3.1.0 to 3.2.0 by dependabot in https://github.com/bentoml/BentoML/pull/3536
* fix: include dockerfile_template to Bento for containerize by aarnphm in https://github.com/bentoml/BentoML/pull/3501
* chore: add missing logger and fix types by aarnphm in https://github.com/bentoml/BentoML/pull/3453
* chore(rtd): disable epub and pdf as format by aarnphm in https://github.com/bentoml/BentoML/pull/3544
* feat(torchscript): support `_extra_files` by aarnphm in https://github.com/bentoml/BentoML/pull/3480
* refactor(ci): make sure to run types on py,pyi files by aarnphm in https://github.com/bentoml/BentoML/pull/3545
* fix(server): deprecate client and cache get_client by aarnphm in https://github.com/bentoml/BentoML/pull/3547
* chore(serve): update options for triton_options by aarnphm in https://github.com/bentoml/BentoML/pull/3503
* tools(linter): Ruff by aarnphm in https://github.com/bentoml/BentoML/pull/3539
* chore(deps): bump ruff from 0.0.243 to 0.0.244 by dependabot in https://github.com/bentoml/BentoML/pull/3548
* chore(type): remove cattr type ignore by aarnphm in https://github.com/bentoml/BentoML/pull/3550
* chore: bumping otlp deps to 1.15 by aarnphm in https://github.com/bentoml/BentoML/pull/3351
* docs: Add an example index by ssheng in https://github.com/bentoml/BentoML/pull/3551
* revert: "chore: bumping otlp deps to 1.15" by bojiang in https://github.com/bentoml/BentoML/pull/3553
* chore(deps): bump bufbuild/buf-setup-action from 1.13.1 to 1.14.0 by dependabot in https://github.com/bentoml/BentoML/pull/3554
* chore(deps): bump ruff from 0.0.244 to 0.0.246 by dependabot in https://github.com/bentoml/BentoML/pull/3559
* chore(deps): bump imageio from 2.25.0 to 2.25.1 by dependabot in https://github.com/bentoml/BentoML/pull/3557
* chore: update README.md by timliubentoml in https://github.com/bentoml/BentoML/pull/3565
* feat(containerization): support 11.7 by aarnphm in https://github.com/bentoml/BentoML/pull/3567
* chore: remove deprecation warning when building bentos by CheeksTheGeek in https://github.com/bentoml/BentoML/pull/3566
* feature(framework): diffusers by larme in https://github.com/bentoml/BentoML/pull/3534
* fix: update formparser for new starlette by sauyon in https://github.com/bentoml/BentoML/pull/3569

New Contributors
* CheeksTheGeek made their first contribution in https://github.com/bentoml/BentoML/pull/3566

**Full Changelog**: https://github.com/bentoml/BentoML/compare/v1.0.14...v1.0.15

1.0.14

🍱 Fixed the backward incompatibility introduced in `starlette` version `0.24.0`. Upgrade BentoML to `v1.0.14` if you encounter an error related to `content_type` like the one below.


```
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/bentoml/_internal/server/service_app.py", line 305, in api_func
    input_data = await api.input.from_http_request(request)
  File "/usr/local/lib/python3.8/dist-packages/bentoml/_internal/io_descriptors/multipart.py", line 208, in from_http_request
    reqs = await populate_multipart_requests(request)
  File "/usr/local/lib/python3.8/dist-packages/bentoml/_internal/utils/formparser.py", line 188, in populate_multipart_requests
    form = await multipart_parser.parse()
  File "/usr/local/lib/python3.8/dist-packages/bentoml/_internal/utils/formparser.py", line 158, in parse
    multipart_file = UploadFile(
TypeError: __init__() got an unexpected keyword argument 'content_type'
```
