KFServing

Latest version: v0.6.1


0.6.1

- Migrate images from gcr.io to dockerhub 1782 yuzisun
- Update Alibi Explainers to 0.6.0 1720 cliveseldon
- Enable tag based routing 1752 yuzisun
- Prevent nested directories in S3 from being created as files inside the model directory 1770 animesh-agarwal
- Upgrade ray[serve] to version 1.5.0, allowing kfserving to be installed on python3.9 1772 Lewiky

0.6.0

🌈 **What's New?**
- Web-app for managing InferenceServices 1328 kimwnasptd
- Web-app: Add manifests for launching and exposing the app 1512 kimwnasptd
- Web-app: Implement a GitHub action for building the web app 1504 kimwnasptd
- [storage-initializer] add support for aws sts, switch to use boto 1472 matty-rose
- [storage-initializer] Supports anonymous S3 connect 1640 mkbhanda
- MMS: Add healthcheck endpoint for InferenceService agent 1509 yuzisun
- MMS: Trained Model Validation Webhook + Memory in trained model immutable 1394 abchoo
- MMS: Add Multi Model Server support for custom spec 1470 shydefoo
- MMS: Added annotation to use anonymous credentials for s3 1538 abchoo
- MMS: Adds condition for Trained Model to check if isvc predictor supports MMS 1522 abchoo
- MMS: Introducing HTTP protocol for MMS downloader 1510 abchoo
- PMML: Improve PMMLServer predict performance 1405 AnyISalIn
- TorchServe: Upgrade torchserve version to 0.4.0 1649 jagadeeshi2i
- Paddle: Add Paddle predictor 1615 Ruminateer
- KFServer: Parallel inference support 1637 yuzisun
- Logger: Add the component label in payload logging 1636 theofpa
- Logger: Add logger in explainer and transformer 1597 theofpa

:bug: **What's Fixed?**
- The ingress virtual service is not reconciled when updating annotations/labels of inference service 1525 wengyao04
- Resolve knative service diff to prevent dup revision 1484 yuzisun
- Make v1beta1 custom predictors have configurable protocol 1493 cliveseldon
- Ingress reconciler compatibility with istio 1.10 1643 theofpa
- MMS: Service gets 404 during autoscaling 1429 mszacillo
- MMS: Added mutex for downloader providers. 1531 abchoo
- MMS: Prevents /mnt/models/<model name> from being converted into a file 1549 abchoo
- MMS: Watcher should not be started until models downloaded in MMS 1429 abchoo
- Storage initializer: download tar.gz or zip from uri with query params fails 1463 metaphor
- Storage initializer: Extend zip content type check when download zip use uri 1673 haijohn
- Logger: Fix logger for error response case 1533 yuzisun
- [xgboostserver] Convert list input to numpy array before creating DMatrix 1513 pradithya
- KFServer: Limit number of asyncio workers in custom transformers 1687 sukumargaonkar

**What's Changed?**
- Support knative 0.19+, defaults to use knative-local-gateway 1334 theofpa

**Development experience and docs**
- Speed-up alibi-explainer image build 1395 theofpa
- Improvements to self-signed-ca.sh 1661 elukey
- Fixes storage initializer image patch script 1650 Ruminateer
- Update all e2e tests to v1beta1 1622 yuzisun
- Update kubeflow overlay 1424 pvaneck
- Add github action for python lint 1485 yuzisun
- Update logger samples for newer eventing versions 1526 pvaneck
- Update pipelines documentation 1498 pvaneck
- Add Spark model inference example with export pmml file 1434 yuzisun
- Reorg multi-model serving doc 1412 yuzisun
- Feast transformer example doc 1647 chinhuang007
- Added benchmark for multi-model serving 1554 Aaronchoo

0.6.0rc0

🌈 **What's New?**
- Web app for managing InferenceServices 1328
- web-app: Add manifests for launching and exposing the app 1505
- web-app: Implement a GitHub action for building the web app 1504
- [storage-initializer] add support for aws sts 1451
- MMS: Add healthcheck endpoint for InferenceService agent 1041
- MMS: Trained Model Validation Webhook + Memory in trained model immutable 1394
- MMS: multi-model-serving support for custom container in predictorSpec 1427
- MMS: Added annotation to use anonymous credentials for s3 1538
- MMS: Adds condition for Trained Model to check if isvc predictor supports MMS 1522
- MMS: Introducing HTTP protocol for MMS downloader
- Improve PMMLServer predict performance 1405

:bug: **What's Fixed?**
- Fix duplicated revision when creating the service initially 1467
- The ingress virtual service is not reconciled when updating annotations/labels of inference service 1524
- Model server response status code not propagated when using logger 1530
- MMS service gets 404 during autoscaling 1338
- MMS: Added mutex for downloader providers. Fixes 1531
- MMS: Prevents /mnt/models/<model name> from being converted into a file 1549
- MMS: Watcher should not be started until models downloaded in MMS 1429
- Resolve knative service diff to prevent dup revision 1484
- Storage initializer download tar.gz or zip from uri with query params fails 1462
- Make v1beta1 custom predictors have configurable protocol 1483
- Fix logger for error response case 1533
- [xgboostserver] Convert list input to numpy array before creating DMatrix 1513

**What's Changed?**
- support knative 0.19+, defaults to knative-local-gateway 1334

**Development experience and docs**
- speed-up alibi-explainer image build 1395
- Update logger samples for newer eventing versions 1526
- Update pipelines documentation 1498
- Add github action for python lint 1485
- Add Spark model inference example with export pmml file 1434
- Update kubeflow overlay 1424
- reorg multi-model serving doc 1412

0.5.1

**Features**
- Support credentials for HTTP storage URIs (1372)
- Trained Model Validation Webhook + Memory in trained model immutable (1394)
- Validate the parent inference service is ready in trained model controller (1402)
- Validation for storage URI in Trained Model webhook (1407)

**Bug Fixes**
- Use custom local gateway for isvc external service (1382)
- Avoid overwriting arguments specified on container fields (1400)
- Bug Fix for CloudEvent data access (1396)
- Propagate Inferenceservice annotations to top level virtualservice (1403)
- Remove unnecessary "latest" routing tag (1378)

0.5.0

InferenceService V1Beta1
:ship: KFServing 0.5 promotes the core InferenceService from v1alpha2 to v1beta1!

The minimum required versions are Kubernetes 1.16, Istio 1.3.1, and Knative 0.14.3. A conversion webhook is installed to automatically convert v1alpha2 InferenceServices to v1beta1.

:new: What's new?
- You can now specify container fields on the ML framework spec, such as env variables and liveness/readiness probes
- You can now specify pod template fields on the component spec, such as NodeAffinity
- Allow specifying timeouts on component spec
- Tensorflow Serving [gRPC support](https://github.com/kubeflow/kfserving/tree/master/docs/samples/v1beta1/tensorflow#create-the-inferenceservice-with-grpc).
- Triton Inference server V2 inference REST/gRPC protocol support, see [examples](https://github.com/kubeflow/kfserving/tree/master/docs/samples/v1beta1/triton)
- TorchServe [predict integration](https://pytorch.org/serve/inference_api.html#kfserving-inference-api), see [examples](https://github.com/kubeflow/kfserving/tree/master/docs/samples/v1beta1/torchserve)
- SKLearn/XGBoost V2 inference REST/gRPC protocol support with MLServer, see [SKLearn](https://github.com/kubeflow/kfserving/tree/master/docs/samples/v1beta1/sklearn) and [XGBoost](https://github.com/kubeflow/kfserving/tree/master/docs/samples/v1beta1/xgboost) examples
- PMMLServer support, see [examples](https://github.com/kubeflow/kfserving/tree/master/docs/samples/v1beta1/pmml)
- LightGBM support, see [examples](https://github.com/kubeflow/kfserving/tree/master/docs/samples/v1beta1/lightgbm)
- Simplified canary rollout: traffic is split at the Knative revision level instead of the service level, see [examples](https://github.com/kubeflow/kfserving/tree/master/docs/samples/v1beta1/rollout)
- Transformer-to-predictor calls now use AsyncIO by default
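
The new container and component fields compose in a single spec. A minimal sketch of a v1beta1 InferenceService that exercises a few of them (the service name, env variable, and traffic percentage are illustrative; the storageUri points at the public KFServing sample bucket):

```yaml
apiVersion: serving.kubeflow.org/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris
spec:
  predictor:
    # component-level fields introduced in v1beta1
    canaryTrafficPercent: 10   # traffic sent to the latest revision; the rest stays on the previous one
    timeout: 60                # request timeout in seconds
    sklearn:
      storageUri: gs://kfserving-samples/models/sklearn/iris
      # container fields can now be set directly on the framework spec
      env:
        - name: LOG_LEVEL
          value: DEBUG
```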

:warning: What's gone?
- The default/canary levels are removed; `canaryTrafficPercent` moves to the component level
- The `rollout_canary` and `promote_canary` APIs are deprecated in the KFServing SDK
- The `parallelism` field is renamed to `containerConcurrency`
- The `Custom` keyword is removed and the `container` field is changed to an array

:arrow_up: What actions are needed to upgrade?
- Make sure canary traffic is fully rolled out before upgrading, as the v1alpha2 canary spec is deprecated; use the v1beta1 spec for canary rollouts.
- Although KFServing automatically converts InferenceServices to v1beta1, we recommend rewriting all your specs with the v1beta1 API, as we plan to drop support for v1alpha2 in a later version.
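
For the `Custom` removal in particular, the migration looks roughly like this (the image and service names are hypothetical; the v1alpha2 form is shown only for comparison):

```yaml
# v1alpha2 (deprecated): custom container nested under spec.default
# spec:
#   default:
#     predictor:
#       custom:
#         container:
#           image: example/my-model:latest

# v1beta1: a plain containers array on the predictor
apiVersion: serving.kubeflow.org/v1beta1
kind: InferenceService
metadata:
  name: my-custom-model
spec:
  predictor:
    containers:
      - name: kfserving-container
        image: example/my-model:latest
```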

Contribution list
* Make KFServer HTTP requests asynchronous 983 by salanki
* Add support for generic HTTP/HTTPS URI for Storage Initializer 979 by tduffy000
* InferenceService v1beta1 API 991 by yuzisun
* Validation check for InferenceService Name 1079 by jazzsir
* Set KFServing default worker to 1 1106 by yuzliu
* Add support for MLServer in the SKLearn predictor 1155 by adriangonz
* Add V2 support to XGBoost predictor 1196 by adriangonz
* Support PMML server 1141 by AnyISalIn
* Generate SDK for KFServing v1beta1 1150 by jinchihe
* Support Kubernetes 1.18 1128 by pugangxa
* Integrate TorchServe to v1beta1 spec 1161 by jagadeeshi2i
* Merge batcher to model agent 1287 by yuzisun
* Fix torchserve protocol version and update doc 1271 1277
* Support CloudEvent (Avro/Protobuf) for KFServer 1343 by mtickoobb

Multi Model Serving V1Alpha1
:rainbow: KFServing 0.5 introduces Multi Model Serving with the v1alpha1 TrainedModel CR. This is currently experimental only, and we are looking for your feedback!

Check out the [sklearn](https://github.com/kubeflow/kfserving/tree/master/docs/samples/v1beta1/sklearn/multimodel) and [triton](https://github.com/kubeflow/kfserving/tree/master/docs/samples/v1beta1/triton/multimodel) MMS examples.
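
A TrainedModel sketch, assuming a parent InferenceService named `sklearn-mms` with multi-model serving enabled (the names and bucket path are hypothetical; `memory` must fit within the parent predictor's memory, which the validation webhook enforces):

```yaml
apiVersion: serving.kubeflow.org/v1alpha1
kind: TrainedModel
metadata:
  name: model1
spec:
  inferenceService: sklearn-mms    # parent InferenceService hosting the shared model server
  model:
    framework: sklearn             # must be a framework the parent predictor supports
    storageUri: gs://my-bucket/models/model1
    memory: 256Mi                  # counted against the parent's memory budget
```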


* Multi-Model Puller 989 by ifilonenko
* Add multi model configmap 992 by wengyao04
* TrainedModel v1alpha1 API 1009 by yuzliu
* TrainedModel controller 1013 by yuzliu
* Harden model puller logic and add tests 1055 by yuzisun
* Puller streamlining/simplification 1057 by njhill
* Integrate MMS inferenceservice controller, configmap controller, model agent 1132 by yuzliu
* Add load/unload endpoint for SKLearn/XGBoost KFServer 1082 by wengyao04
* Sync from model config on agent startup 1204 by yuzisun
* Fix model puller flag for MMS 1281 by yuzisun
* TrainedModel status url 1319 by abchoo
* Add MMS support for SKLearn/XGBoost MLServer 1290 by adriangonz
* Support GCS for model agent 1105 by mszacillo

Explanation

* Add support for AIX360 explanations 1094 by drewbutlerbb4
* Alibi 0.5.5 1168 by cliveseldon
* Adversarial robustness explainer (ART) 1244 by drewbutlerbb4
* PyTorch Captum [explain integration](https://pytorch.org/serve/inference_api.html#kfserving-explanations-api), see [example](https://github.com/kubeflow/kfserving/tree/master/docs/samples/v1beta1/torchserve/bert#captum-explanations)


Documentation

* Docs/custom domain 1036 by adamkgray
* Update ingress gateway access instruction 1008 by yuzisun
* Document working k8s version 1062 by riklopfer
* Add triton torchscript example with prediction v2 protocol 1131 by yuzisun
* Add torchserve custom server with pv storage example 1182 by jagadeeshi2i
* Add torchserve custom server example 1156 by jagadeeshi2i
* Add torchserve custom server bert sample 1185 by jagadeeshi2i
* Bump up minimal Kube and Istio requirements 1166 by animeshsingh
* V1beta1 canary rollout examples 1267 by yuzisun
* Prometheus-based metrics and monitoring docs 1276 by sriumcp

Developer Experience

* Migrate controller tests to use BDD testing style 936 by yuzisun
* Genericized component logic 1018 by ellistarn
* Use github action for kfserving controller tests 1056 by yuzisun
* Make standalone installation kustomizable 1103 by jazzsir
* Move KFServing CI to AWS 1170 by yuzisun
* Upgrade k8s and kn go library versions 1144 by ryandawsonuk
* Add e2e test for torchserve 1265 by jagadeeshi2i
* Add e2e test for SKLearn/XGBoost MMS 1306 by abchoo
* Upgrade k8s client library to 1.19 1305 by ivan-valkov
* Upgrade controller-runtime to 0.7.0 1341 by pugangxa

0.5.0rc2

Final RC release for InferenceService v1beta1; merges the logger/batcher into the model agent.

- Merge batcher to model agent 1287
- Fix model puller flag for MMS 1281
- Fix torchserve protocol version and update doc 1271 1277
- Add e2e test for torchserve 1265
- V1beta1 canary rollout examples 1267
- Prometheus-based metrics and monitoring docs 1276
