CleanRL

Latest version: v1.2.0

Page 4 of 5

0.4.3

0.4.2

0.4.1

0.4.0

What's new in the 0.4.0 release
* Added a contribution guide: https://github.com/vwxyzjn/cleanrl/blob/master/CONTRIBUTING.md. We welcome contributions of new algorithms and new games to the Open RL Benchmark (http://benchmark.cleanrl.dev/).
* Added tables of benchmark results with standard deviations, generated by https://github.com/vwxyzjn/cleanrl/blob/master/benchmark/plots.py.

Atari Results


| gym_id | apex_dqn_atari_visual | c51_atari_visual | dqn_atari_visual | ppo_atari_visual |
|:----------------------------|:------------------------|:-------------------|:-------------------|:-------------------|
| BeamRiderNoFrameskip-v4 | 2936.93 ± 362.18 | 13380.67 ± 0.00 | 7139.11 ± 479.11 | 2053.08 ± 83.37 |
| QbertNoFrameskip-v4 | 3565.00 ± 690.00 | 16286.11 ± 0.00 | 11586.11 ± 0.00 | 17919.44 ± 383.33 |
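
The entries above report episodic returns as "mean ± standard deviation". A hypothetical helper for producing such table cells (a sketch, not the actual `plots.py` code) might look like:

```python
import numpy as np

def format_result(returns):
    """Format a list of episodic returns as 'mean ± std',
    matching the style of the benchmark tables above."""
    r = np.asarray(returns, dtype=float)
    return f"{r.mean():.2f} \u00b1 {r.std():.2f}"

# Example: format_result([1.0, 2.0, 3.0]) -> "2.00 ± 0.82"
```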

0.3.0

See https://streamable.com/cq8e62 for a demo

A significant amount of effort went into the making of Open RL Benchmark (http://benchmark.cleanrl.dev/). It provides benchmarks of popular Deep Reinforcement Learning algorithms in 34+ games with an unprecedented level of transparency, openness, and reproducibility.

In addition, the legacy `common.py` is deprecated in favor of single-file implementations.

0.2.1

We've made the SAC algorithm work for both continuous and discrete action spaces, with primary references from the following papers:
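
The key change for discrete action spaces is that the policy outputs a full categorical distribution, so the expectation in the actor objective can be computed in closed form rather than sampled as in continuous SAC. A minimal sketch of such an actor loss (hypothetical illustration, not CleanRL's actual implementation) is:

```python
import numpy as np

def discrete_sac_actor_loss(logits, q1, q2, alpha):
    """Closed-form discrete-SAC actor loss sketch.

    logits, q1, q2: arrays of shape (batch, n_actions)
    alpha: entropy temperature coefficient
    """
    # Softmax policy, computed in a numerically stable way
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    log_probs = np.log(probs)
    # Clipped double-Q trick: take the elementwise minimum
    min_q = np.minimum(q1, q2)
    # E_{a~pi}[alpha * log pi(a|s) - Q(s, a)], summed exactly
    # over the discrete actions instead of sampling
    return (probs * (alpha * log_probs - min_q)).sum(axis=-1).mean()
```

With a uniform policy and zero Q-values, the loss reduces to minus the policy entropy scaled by `alpha`, which is a quick sanity check on the sign conventions.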

