Ray


0.7.7
=====

Highlights
----------

- Remote functions and actors now support both positional and keyword arguments (5606).
- `ray.get` now supports a `timeout` argument (6107). If the object isn't available before the timeout passes, a `RayTimeoutError` is raised.
- Ray now supports [detached actors](https://ray.readthedocs.io/en/latest/advanced.html#detached-actors) (6036), which persist beyond the lifetime of the script that creates them and can be referred to by a user-defined name.
- Added [documentation](https://ray.readthedocs.io/en/latest/deploy-on-yarn.html) for how to deploy Ray on YARN clusters using [Skein](https://jcrist.github.io/skein/) (6119, 6173).
- The Ray scheduler now attempts to schedule tasks fairly to avoid starvation (5851).
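
The `timeout` semantics mirror those of `concurrent.futures`; as a rough, Ray-free sketch of the pattern (the `slow_task` helper is illustrative, not part of the Ray API):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeoutError
import time

def slow_task():
    # Stand-in for a remote task that takes too long.
    time.sleep(2)
    return 42

with ThreadPoolExecutor() as pool:
    future = pool.submit(slow_task)
    try:
        # Analogous to ray.get(object_id, timeout=0.1): give up after 0.1 s.
        result = future.result(timeout=0.1)
    except FutureTimeoutError:
        # With Ray, a RayTimeoutError would be raised here instead.
        result = None

print(result)  # None: the task did not finish within the timeout
```

With Ray itself, the equivalent call is `ray.get(object_id, timeout=...)`, and the caller catches `RayTimeoutError` rather than the futures exception.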

Core
----

- Progress towards a new backend architecture where tasks and actor tasks are submitted directly between workers. 5783, 5991, 6040, 6054, 6075, 6088, 6122, 6147, 6171, 6177, 6118, 6188, 6259, 6277
- Progress towards Windows compatibility. 6071, 6204, 6205, 6282
- Now using cloudpickle_fast for serialization by default, which supports more types of Python objects without sacrificing performance. 5658, 5805, 5960, 5978
- Various bugfixes. 5946, 6175, 6176, 6231, 6253, 6257, 6276

RLlib
-----

- Now using PyTorch's built-in function to check whether a GPU is available. 5890
- Fixed APEX priorities returning zero all the time. 5980
- Fixed leak of TensorFlow assign operations in DQN/DDPG. 5979
- Fixed choosing the wrong neural network model for Atari in 0.7.5. 6087
- Added large scale regression test for RLlib. 6093
- Fixed and added test for LR annealing config. 6101
- Reduced log verbosity. 6154
- Added a microbatch optimizer with an A2C example. 6161

Tune
-----

- Search algorithms now use early-stopped trials for optimization. 5651
- Metrics are now output in a tabular format, and errors are reported in a separate table. 5822
- In the distributed setting, checkpoints are now deleted automatically post-sync using an rsync flag. Checkpoints on the driver are garbage collected according to the policy defined by the user. 5877
- A much faster `ExperimentAnalysis` tool. 5962
- Trial executor callbacks now take in a `Runner` parameter. 5868
- Fixed `queue_trials` to enable cluster autoscaling with a CPU-only head node. 5900
- Added a TensorBoardX logger. 6133

Other Libraries
---------------

- Serving: Progress towards a new Ray serving library. 5854, 5886, 5894, 5929, 5937, 5961, 6051

Thanks
-------

We thank the following contributors for their amazing contributions:

zhuohan123, jovany-wang, micafan, richardliaw, waldroje, mitchellstern, visatish, mehrdadn, istoica, ericl, adizim, simon-mo, lsklyut, zhu-eric, pcmoritz, hhbyyh, suquark, sotte, hershg, pschafhalter, stackedsax, edoakes, mawright, stephanie-wang, ujvl, ashione, couturierc, AdamGleave, robertnishihara, DaveyBiggers, daiyaanarfeen, danyangz, AmeerHajAli, mimoralea

0.7.6
=====

Highlights
----------

- The Ray autoscaler now supports Kubernetes as a backend (5492). This makes it possible to start a Ray cluster on top of your existing Kubernetes cluster with a simple shell command.
  + Please see the Kubernetes section of the [autoscaler documentation](https://ray.readthedocs.io/en/latest/autoscaling.html) to get started.
  + This is a new feature and may be rough around the edges. If you run into problems or have suggestions for how to improve Ray on Kubernetes, please file an issue.

- The Ray cluster dashboard has been revamped (5730, 5857) to improve the UI and include logs and error messages. More improvements will be coming in the near future.
  + You can try out the dashboard by starting Ray with `ray.init(include_webui=True)` or `ray start --include-webui`.
  + Please let us know if you have suggestions for what would be most useful to you in the new dashboard.

Core
----

- Progress towards refactoring the Python worker on top of the core worker. 5750, 5771, 5752
- Fix an issue in local mode where multiple actors didn't work properly. 5863
- Fix class attributes and methods for actor classes. 5802
- Improvements in error messages and handling. 5782, 5746, 5799
- Serialization improvements. 5841, 5725
- Various documentation improvements. 5801, 5792, 5414, 5747, 5780, 5582

RLlib
-----

- Added a link to BAIR blog posts in the documentation. 5762
- Tracing for eager TensorFlow policies with `tf.function`. 5705

Tune
-----

- Improved MedianStoppingRule. 5402
- Added a PBT + MemNN example. 5723
- Added support for function-based stopping conditions. 5754
- Save/restore for suggestion algorithms. 5719
- TensorBoard HParams for TF 2.0. 5678

Other Libraries
---------------

- Serving: Progress towards a new Ray serving library. 5849, 5850, 5852

Thanks
-------

We thank the following contributors for their amazing contributions:

hershg, JasonWayne, kfstorm, richardliaw, batzner, vakker, robertnishihara, stephanie-wang, gehring, edoakes, zhijunfu, pcmoritz, mitchellstern, ujvl, simon-mo, ecederstrand, mawright, ericl, anthonyhsyu, suquark, waldroje

0.7.5
=====

Ray API
-------
- Objects created with `ray.put()` are now reference counted. 5590
- Add internal `pin_object_data()` API. 5637
- Initial support for pickle5. 5611
- Warm up Ray on `ray.init()`. 5685
- `redis_address` passed to `ray.init` is now just `address`. 5602

Core
----
- Progress towards a common C++ core worker. 5516, 5272, 5566, 5664
- Fix log monitor stall with many log files. 5569
- Print warnings when tasks are unschedulable. 5555
- Take into account resource queue lengths when autoscaling. 5702, 5684

Tune
----
- TF2.0 TensorBoard support. 5547, 5631
- `tune.function()` is now deprecated. 5601

RLlib
-----
- Enhancements for TF eager support. 5625, 5683, 5705
- Fix DDPG regression. 5626

Other Libraries
---------------
- Complete rewrite of experimental serving library. 5562
- Progress toward Ray projects APIs. 5525, 5632, 5706
- Add TF SGD implementation for training. 5440
- Many documentation improvements and bugfixes.

0.7.4
=====

Highlights
----------

- There were many **documentation improvements** (5391, 5389, 5175). As we continue to improve the documentation, we value your feedback through the “Doc suggestion?” link at the top of the [documentation](https://ray.readthedocs.io/en/latest/). Notable improvements:
  + We’ve added guides for best practices using TensorFlow and PyTorch.
  + We’ve revamped the Walkthrough page for Ray users, providing a better experience for beginners.
  + We’ve revamped guides for using Actors and inspecting internal state.

- Ray now supports **memory limits** to ensure memory-intensive applications run predictably and reliably. You can activate them through the `ray.remote` decorator:

  ```python
  @ray.remote(
      memory=2000 * 1024 * 1024,
      object_store_memory=200 * 1024 * 1024)
  class SomeActor(object):
      def __init__(self, a, b):
          pass
  ```

  You can set limits for the heap and the object store; see the [documentation](https://ray.readthedocs.io/en/latest/memory-management.html).

- There is now preliminary support for **projects**; see the [project documentation](https://ray.readthedocs.io/en/latest/projects.html). Projects allow you to package your code and easily share it with others, ensuring a reproducible cluster setup. To get started, you can run:

  ```shell
  # Create a new project.
  ray project create <project-name>
  # Launch a session for the project in the current directory.
  ray session start
  # Open a console for the given session.
  ray session attach
  # Stop the given session and all of its worker nodes.
  ray session stop
  ```

Check out the [examples](https://github.com/ray-project/ray/tree/f1dcce5a472fba1c77c4aa023589689efbfeb4f6/python/ray/projects/examples). This is an actively developed new feature so we appreciate your feedback!

**Breaking change:** The `redis_address` parameter was renamed to `address` (5412, 5602) and the former will be removed in the future.

Core
-----

- Move Java bindings on top of the core worker. 5370
- Improve log file discoverability. 5580
- Clean up and improve error messages. 5368, 5351

RLlib
-----

- Support custom action space distributions. 5164
- Add TensorFlow eager support. 5436
- Add autoregressive KL. 5469
- Autoregressive action distributions. 5304
- Implement MADDPG agent. 5348
- Port Soft Actor-Critic to the Model v2 API. 5328
- More examples: add a CARLA community example 5333 and a rock-paper-scissors multi-agent example 5336.
- Moved RLlib to the top-level directory. 5324

Tune
-----

- Experimental implementation of the BOHB algorithm. 5382
- Breaking change: Nested dictionary results are now flattened for CSV writing: `{"a": {"b": 1}} => {"a/b": 1}`. 5346
- Add a logger for MLflow. 5438
- TensorBoard support for TensorFlow 2.0. 5547
- Added examples for XGBoost and LightGBM. 5500
- HyperOptSearch now supports warm starting. 5372
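
The flattening behavior above can be sketched in plain Python (the `flatten` helper and the `/` separator are illustrative; Tune's internal implementation may differ):

```python
def flatten(d, prefix=""):
    """Recursively flatten a nested dict, joining keys with '/'."""
    out = {}
    for key, value in d.items():
        full_key = prefix + "/" + key if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, full_key))
        else:
            out[full_key] = value
    return out

print(flatten({"a": {"b": 1}}))  # {'a/b': 1}
```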

Other Libraries
---------------

- SGD: Tune interface for PyTorch multi-node SGD. 5350
- Serving: The old version of `ray.serve` was deprecated. 5541
- Autoscaler: Fix the SSH control path limit. 5476
- Dev experience: The Ray CI tracker is online at https://ray-travis-tracker.herokuapp.com/.

Various fixes: log monitor issues were fixed 4382, 5221, 5569, and the top-level ray directory was cleaned up 5404.

Thanks
-------

We thank the following contributors for their amazing contributions:

jon-chuang, lufol, adamochayon, idthanm, RehanSD, ericl, michaelzhiluo, nflu, pengzhenghao, hartikainen, wsjeon, raulchen, TomVeniat, layssi, jovany-wang, llan-ml, ConeyLiu, mitchellstern, gregSchwartz18, jiangzihao2009, jichan3751, mhgump, zhijunfu, micafan, simon-mo, richardliaw, stephanie-wang, edoakes, akharitonov, mawright, robertnishihara, lisadunlap, flying-mojo, pcmoritz, jredondopizarro, gehring, holli, kfstorm

0.7.3
=====

Highlights
----------
- The RLlib [ModelV2 API](https://ray.readthedocs.io/en/latest/rllib-models.html) is ready to use. It improves support for Keras and RNN models and allows object-oriented reuse of variables. The ModelV1 API is deprecated; no migration is needed.
- `ray.experimental.sgd.pytorch.PyTorchTrainer` is ready for early adopters. Check out the documentation [here](https://ray.readthedocs.io/en/latest/distributed_training.html). We welcome your feedback!

  ```python
  model_creator = lambda config: YourPyTorchModel()
  data_creator = lambda config: (YourTrainingSet(), YourValidationSet())

  trainer = PyTorchTrainer(
      model_creator,
      data_creator,
      optimizer_creator=utils.sgd_mse_optimizer,
      config={"lr": 1e-4},
      num_replicas=2,
      resources_per_replica=Resources(num_gpus=1),
      batch_size=16,
      backend="auto")

  for i in range(NUM_EPOCHS):
      trainer.train()
  ```

- You can query all the clients that have performed `ray.init` to connect to the current cluster with `ray.jobs()`. 5076

  ```python
  >>> ray.jobs()
  [{'JobID': '02000000',
    'NodeManagerAddress': '10.99.88.77',
    'DriverPid': 74949,
    'StartTime': 1564168784,
    'StopTime': 1564168798},
   {'JobID': '01000000',
    'NodeManagerAddress': '10.99.88.77',
    'DriverPid': 74871,
    'StartTime': 1564168742}]
  ```


Core
----
- Improvements to memory storage handling. 5143, 5216, 4893
- Improved workflow:
  - The debugging tool `local_mode` now behaves more consistently. 5060
  - Improved KeyboardInterrupt exception handling; the stack trace was reduced from 115 lines to 22 lines. 5237
- Ray core:
  - Experimental direct actor calls. 5140, 5184
  - Improvements to the core worker, the shared module between Python and Java. 5079, 5034, 5062
  - The GCS (global control store) was refactored. 5058, 5050

RLlib
-----
- Finished porting all major RLlib algorithms to the builder pattern. 5277, 5258, 5249
- `learner_queue_timeout` can be configured for the async sample optimizer. 5270
- `reproducible_seed` can be used for reproducible experiments. 5197
- Added entropy coefficient decay to IMPALA, APPO, and PPO. 5043

Tune
-----
- **Breaking:** `ExperimentAnalysis` is now returned by default from `tune.run`. To obtain a list of trials, use `analysis.trials`. 5115
- **Breaking:** Syncing behavior between head and workers can now be customized (`sync_to_driver`). Syncing behavior (`upload_dir`) between cluster and cloud is now separately customizable (`sync_to_cloud`). This changes the structure of the uploaded directory - now `local_dir` is synced with `upload_dir`. 4450
- Introduce `Analysis` and `ExperimentAnalysis` objects. `Analysis` object will now return all trials in a folder; `ExperimentAnalysis` is a subclass that returns all trials of an experiment. 5115
- Add missing argument `tune.run(keep_checkpoints_num=...)`. Enables only keeping the last N checkpoints. 5117
- Trials on failed nodes will be prioritized in processing. 5053
- Trial Checkpointing is now more flexible. 4728
- Add system performance tracking for GPU, RAM, VRAM, and CPU usage statistics; toggle with `tune.run(log_sys_usage=True)`. 4924
- Experiment checkpointing frequency is now less frequent and can be controlled with `tune.run(global_checkpoint_period=...)`. 4859
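
The `keep_checkpoints_num` behavior amounts to retaining only the most recent N checkpoints. A minimal Ray-free sketch of that bookkeeping (the `CheckpointKeeper` class is illustrative, not Tune's actual implementation):

```python
from collections import deque

class CheckpointKeeper:
    """Retain only the last N checkpoint paths, discarding the oldest."""

    def __init__(self, keep_checkpoints_num):
        self.checkpoints = deque(maxlen=keep_checkpoints_num)

    def add(self, path):
        if len(self.checkpoints) == self.checkpoints.maxlen:
            oldest = self.checkpoints[0]
            # In Tune, the oldest checkpoint's files would be deleted here.
            print("deleting", oldest)
        self.checkpoints.append(path)

keeper = CheckpointKeeper(keep_checkpoints_num=2)
for step in range(4):
    keeper.add(f"checkpoint_{step}")
print(list(keeper.checkpoints))  # ['checkpoint_2', 'checkpoint_3']
```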

Autoscaler
----------
- Add a `request_cores` function for manual autoscaling; you can now manually request resources for the autoscaler. 4754
- Local cluster:
  - More readable example YAML with comments. 5290
  - Multiple cluster names are supported. 4864
- Improved logging with the AWS NodeProvider; `create_instance` calls are now logged. 4998

Other Libraries
---------------
- SGD:
  - Example for training. 5292
  - Deprecate the old distributed SGD implementation. 5160
- Kubernetes: A Ray namespace was added for k8s. 4111
- Dev experience: Added a linting pre-push hook. 5154

Thanks
------

We thank the following contributors for their amazing contributions:

joneswong, 1beb, richardliaw, pcmoritz, raulchen, stephanie-wang, jiangzihao2009, LorenzoCevolani, kfstorm, pschafhalter, micafan, simon-mo, vipulharsh, haje01, ls-daniel, hartikainen, stefanpantic, edoakes, llan-ml, alex-petrenko, ztangent, gravitywp, MQQ, dulex123, morgangiraud, antoine-galataud, robertnishihara, qxcv, vakker, jovany-wang, zhijunfu, ericl

0.7.2
=====
Core
----
- Improvements
  - Continue moving the worker code to C++. 5031, 4966, 4922, 4899, 5032, 4996, 4875
  - Add a hash table data structure to the Redis modules. 4911
  - Use gRPC for communication between node managers. 4968, 5023, 5024
- Python
  - `ray.remote` now inherits the function docstring. 4985
  - Remove the `typing` module from setup.py `install_requirements`. 4971
- Java
  - Allow users to set JVM options at actor creation time. 4970
- Internal
  - Refactor IDs: `DriverID` -> `JobID`; change all ID functions to camel case. 4964, 4896
  - Improve organization of the directory structure. 4898
- Performance
  - Get task object dependencies in parallel from the object store. 4775
  - Flush the lineage cache on task submission instead of execution. 4942
  - Remove a debug check for uncommitted lineage. 5038
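
The docstring-inheritance change mirrors what `functools.wraps` does for ordinary Python decorators; as a Ray-free sketch of the behavior (the `my_decorator` wrapper below is illustrative, not Ray's implementation):

```python
import functools

def my_decorator(func):
    @functools.wraps(func)  # copies __doc__, __name__, etc. onto the wrapper
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@my_decorator
def add(a, b):
    """Add two numbers."""
    return a + b

print(add.__doc__)  # Add two numbers.
```

With this release, a function decorated with `@ray.remote` likewise keeps its original docstring.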

Tune
----
- Add directional metrics for components. 4120, 4915
- Disallow setting `resources_per_trial` when it is already configured. 4880
- Make PBT Quantile fraction configurable. 4912

RLlib
-----
- Add QMIX mixer parameters to optimizer param list. 5014
- Allow Torch policies access to full action input dict in `extra_action_out_fn`. 4894
- Allow access to batches prior to postprocessing. 4871
- Throw an error if `sample_async` is used with PyTorch for A3C. 5000
- Patterns & User Experience
  - Rename `PolicyEvaluator` => `RolloutWorker`. 4820
  - Port the remainder of the algorithms to the `build_trainer()` pattern. 4920
  - Port DQN to the `build_tf_policy()` pattern. 4823
- Documentation
  - Add docs on how to use TF eager execution. 4927
  - Add a preprocessing example to the offline documentation. 4950

Other Libraries
---------------
- Add support for distributed training with PyTorch. 4797, 4933
- Autoscaler will kill workers on exception. 4997
- Fix handling of non-integral timeout values in `signal.receive`. 5002

Thanks
------

We thank the following contributors for their amazing contributions:

jiangzihao2009, raulchen, ericl, hershg, kfstorm, kiddyboots216, jovany-wang, pschafhalter, richardliaw, robertnishihara, stephanie-wang, simon-mo, zhijunfu, ls-daniel, ajgokhale, rueberger, suquark, guoyuhong, pcmoritz, hartikainen, timonbimon, TianhongDai
