**Full Changelog**: https://github.com/replicate/cog/compare/v0.10.0-alpha3...v0.10.0-alpha4
Scary temporary commit for a hemorrhaging-edge release. This adds a concurrency setting to the config, significantly changes the behavior of cog.Path, does something unsavory to upload very large files, and actually enables concurrent predictions.
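For reference, the new concurrency setting might look something like this in `cog.yaml` — a sketch only, since the schema is still in flux in this alpha and the exact field names here are assumptions:

```yaml
# cog.yaml -- hypothetical sketch; concurrency field names are assumptions
build:
  python_version: "3.11"
concurrency:
  # maximum number of predictions processed at once
  max: 4
predict: "predict.py:Predictor"
```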
* add concurrency to config
* this basically works!
* more descriptive names for predict functions
* maybe pass through prediction id and try to make cancellation do both?
* don't cancel from signal handler if a loop is running. expose worker busy state to runner
* move handle_event_stream to PredictionEventHandler
* make setup and canceling work
* drop some checks around cancellation
* try out eager_predict_state_change
* keep track of multiple runner prediction tasks to make idempotent endpoint return the same result and fix tests somewhat
* fix idempotent tests
* fix remaining errors?
* worker predict_generator shouldn't be eager
* wip: make the stuff that handles events and sends webhooks etc async
* drop Runner._result
* drop comments
* inline client code
* get started
* inline webhooks
* move clients into runner, switch to httpx, move create_event_handler into runner
* add some comments
* more notes
* rip out webhooks and most of files and put them in a new ClientManager that handles most of everything. inline upload_files for that
* move create_event_handler into PredictionEventHandler.__init__
* fix one test
* break out Path.validate into value_to_path and inline get_filename and File.validate
* split out URLPath into BackwardsCompatibleDataURLTempFilePath and URLThatCanBeConvertedToPath with the download part of URLFile inlined
* let's make DataURLTempFilePath also use convert and move value_to_path back to Path.validate
* use httpx for downloading input urls and follow redirects
* take get_filename back out for tests
* don't upload in http and delete cog/files.py
* drop should_cancel
* prediction->request
* split up predict/inner/prediction_ctx into enter_predict/exit_predict/prediction_ctx/inner_async_predict/predict/good_predict as one way to do it. exposing all of those for runner predict enter/coro exit still sucks, but it's an improvement
* biggish change: inline predict_and_handle_errors
* inline make_error_handler into setup
* move runner.setup into runner.Runner.setup
* add concurrency to config in go
* try explicitly using prediction_ctx __enter__ and __exit__
* make runner setup more correct and marginally better
* fix a few tests
* notes
* wip ClientManager.convert
* relax setup argument requirement to str
* glom worker into runner
* add logging message
* fix prediction retry and improve logging
* split out handle_event
* use CURL_CA_BUNDLE for file upload
* clean up comments
* dubious upload fix
* small fixes
* attempt to add context logging?
* tweak names
* fix error for predictionOutputType(multi=False)
* improve comments
* fix lints
* add a note about this release