Cog

Latest version: v0.9.9

0.10.0alpha9

This release fixes pushing concurrent models with Cog.

0.10.0alpha8

What's Changed
* predict_time_share needs to be set before sending the completed webhook by technillogue in https://github.com/replicate/cog/pull/1683
* drop default_target by technillogue in https://github.com/replicate/cog/pull/1685
* `COG_DISABLE_TIME_SHARE_METRIC` environment variable to disable the `predict_time_share` metric


**Full Changelog**: https://github.com/replicate/cog/compare/v0.10.0-alpha7...v0.10.0-alpha8

0.10.0alpha7

This release adds `cog.emit_metric`, a `predict_time_share` metric, and provisional support for setting target concurrency. It also properly records changes that were already running in production.
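Based on the release notes, `cog.emit_metric` presumably lets a predictor report a named metric value alongside its output. A minimal sketch, assuming a `emit_metric(name, value)` signature; a stand-in function is used here so the sketch runs without cog installed:

```python
metrics = []

# Stand-in for cog's metric hook so this sketch is self-contained; in a real
# predictor you would import it from cog. The (name, value) signature is an
# assumption inferred from the release notes, not a documented API.
def emit_metric(name: str, value: float) -> None:
    metrics.append((name, value))

def predict(prompt: str) -> str:
    # a predictor could report a custom metric alongside its output
    emit_metric("tokens_generated", float(len(prompt.split())))
    return prompt.upper()

print(predict("hello world"))  # HELLO WORLD
print(metrics)                 # [('tokens_generated', 2.0)]
```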

What's Changed
* replace requests with httpx and factor out clients by technillogue in https://github.com/replicate/cog/pull/1574
* implement mp.Connection with async streams by technillogue in https://github.com/replicate/cog/pull/1640
* omnibus actual concurrency and major refactor by technillogue in https://github.com/replicate/cog/pull/1530
* fix flaky runner test by technillogue in https://github.com/replicate/cog/pull/1669
* predict_time_share metric by technillogue in https://github.com/replicate/cog/pull/1643
* function to emit metrics by technillogue in https://github.com/replicate/cog/pull/1649
* allow setting both max and target concurrency in cog.yaml by technillogue in https://github.com/replicate/cog/pull/1672
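Per the last bullet, concurrency settings live in `cog.yaml`. A sketch of what that block might look like; the key names are inferred from the PR title and should be treated as assumptions:

```yaml
# hypothetical cog.yaml fragment; "max" and "target" are inferred from the
# PR "allow setting both max and target concurrency in cog.yaml"
concurrency:
  max: 8     # cap on in-flight predictions
  target: 4  # provisional target concurrency
```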


**Full Changelog**: https://github.com/replicate/cog/compare/v0.10.0-alpha6...v0.10.0-alpha7

0.10.0alpha5

**Full Changelog**: https://github.com/replicate/cog/compare/v0.10.0-alpha4...v0.10.0-alpha5


0.10.0alpha4

**Full Changelog**: https://github.com/replicate/cog/compare/v0.10.0-alpha3...v0.10.0-alpha4

Scary temporary commit for a hemorrhaging-edge release. This adds concurrency to the config and significantly changes the behavior of cog.Path, does something unsavory to upload very large files, and actually enables concurrency.
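"Actually enables concurrency" here means multiple predictions can be in flight at once, bounded by the configured maximum. As an illustration only (not Cog's implementation), a semaphore-bounded batch of async predictions:

```python
import asyncio

async def predict(x: int, sem: asyncio.Semaphore) -> int:
    # the semaphore caps in-flight predictions, loosely analogous to a
    # configured max concurrency (illustration only, not cog's code)
    async with sem:
        await asyncio.sleep(0.01)
        return x * 2

async def serve(inputs, max_concurrency: int):
    sem = asyncio.Semaphore(max_concurrency)
    # gather preserves input order even though predictions overlap
    return await asyncio.gather(*(predict(x, sem) for x in inputs))

results = asyncio.run(serve(range(8), max_concurrency=4))
print(results)  # [0, 2, 4, 6, 8, 10, 12, 14]
```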

* add concurrency to config
* this basically works!
* more descriptive names for predict functions
* maybe pass through prediction id and try to make cancellation do both?
* don't cancel from signal handler if a loop is running. expose worker busy state to runner
* move handle_event_stream to PredictionEventHandler
* make setup and canceling work
* drop some checks around cancellation
* try out eager_predict_state_change
* keep track of multiple runner prediction tasks to make idempotent endpoint return the same result and fix tests somewhat
* fix idempotent tests
* fix remaining errors?
* worker predict_generator shouldn't be eager
* wip: make the stuff that handles events and sends webhooks etc async
* drop Runner._result
* drop comments
* inline client code
* get started
* inline webhooks
* move clients into runner, switch to httpx, move create_event_handler into runner
* add some comments
* more notes
* rip out webhooks and most of files and put them in a new ClientManager that handles most of everything. inline upload_files for that
* move create_event_handler into PredictionEventHandler.__init__
* fix one test
* break out Path.validate into value_to_path and inline get_filename and File.validate
* split out URLPath into BackwardsCompatibleDataURLTempFilePath and URLThatCanBeConvertedToPath with the download part of URLFile inlined
* let's make DataURLTempFilePath also use convert and move value_to_path back to Path.validate
* use httpx for downloading input urls and follow redirects
* take get_filename back out for tests
* don't upload in http and delete cog/files.py
* drop should_cancel
* prediction->request
* split up predict/inner/prediction_ctx into enter_predict/exit_predict/prediction_ctx/inner_async_predict/predict/good_predict as one way to do it. however, exposing all of those for runner predict enter/coro exit still sucks, but this is still an improvement
* biggish change: inline predict_and_handle_errors
* inline make_error_handler into setup
* move runner.setup into runner.Runner.setup
* add concurrency to config in go
* try explicitly using prediction_ctx __enter__ and __exit__
* make runner setup more correct and marginally better
* fix a few tests
* notes
* wip ClientManager.convert
* relax setup argument requirement to str
* glom worker into runner
* add logging message
* fix prediction retry and improve logging
* split out handle_event
* use CURL_CA_BUNDLE for file upload
* clean up comments
* dubious upload fix
* small fixes
* attempt to add context logging?
* tweak names
* fix error for predictionOutputType(multi=False)
* improve comments
* fix lints
* add a note about this release

0.10.0alpha3

Changelog
* 513e837 Revert "Revert PR "async runner" (1352)"
* db88489 Revert "Revert PR "create event loop before predictor setup" (1366)"
* 3444169 lints
* 73a6de9 minimal async worker (1410)
* 0df9b82 run CI for this branch the same way as for main
