DeepCoMP

Latest version: v1.4.2


1.0

Major release of DeepCoMP, DD-CoMP, and D3-CoMP

0.10

* New observation space with better normalization, improving performance of both central and multi-agent PPO
* Extra observations and a new reward function for multi-agent PPO to learn non-greedy, cooperative, and fair behavior that takes other UEs into account
* Support for continuous instead of episodic training
* Refactoring, fixes, improvements

Details: [v0.10 details](https://github.com/CN-UPB/deep-rl-mobility-management/blob/master/docs/mdp.md#v010-fair-cooperative-multi-agent)
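
Observation normalization like the one mentioned above is usually done by clipping each component and scaling it into a common range such as [0, 1], so no single feature dominates the PPO network's inputs. The sketch below is a minimal, hypothetical illustration of that idea (the function name, `max_rate` cutoff, and observation layout are assumptions, not DeepCoMP's actual code):

```python
import numpy as np

def build_obs(connected, rates, max_rate=10.0):
    """Concatenate binary connection flags with per-BS data rates,
    clipped and scaled into [0, 1] so all observation components
    share the same range (helps PPO training stability)."""
    flags = np.asarray(connected, dtype=np.float32)
    rates_norm = np.clip(np.asarray(rates, dtype=np.float32) / max_rate, 0.0, 1.0)
    return np.concatenate([flags, rates_norm])
```

For example, `build_obs([1, 0], [5.0, 20.0])` yields `[1.0, 0.0, 0.5, 1.0]`: the second rate exceeds the cutoff and is clipped rather than stretching the observation scale.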

0.9

* New variants for observation (components, normalization, ...) and reward (utility function and penalties)
* New larger scenario and adjusted rendering
* New utility scripts for evaluation: running experiments and visualizing results
* Bug fixes and refactoring
* Default radio model is resource-fair again (more stable than proportional-fair)

Details: [v0.9 details](https://github.com/CN-UPB/deep-rl-mobility-management/blob/master/docs/mdp.md#v09-preparation-for-evaluation)
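
Resource-fair sharing, the default radio model mentioned above, gives every connected UE an equal fraction of a BS's resources, independent of channel quality; this avoids the feedback between allocation and achieved rates that makes proportional-fair sharing harder to train against. A minimal sketch, with an assumed interface (each UE's full-resource achievable rate is given as input):

```python
def resource_fair_rates(achievable):
    """Resource-fair sharing: each of the N connected UEs gets an
    equal 1/N share of the BS's resources, so its data rate is its
    individually achievable (full-resource) rate divided by N."""
    n = len(achievable)
    return [rate / n for rate in achievable]
```

With two UEs whose full-resource rates are 4.0 and 2.0, each gets half the resources: rates 2.0 and 1.0.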

0.8

* Support for proportional-fair sharing (new default)
* 2 new greedy heuristic algorithms as baselines
* New default UE movement: Random waypoint
* New default UE utility: Log function with increasing data rate
* Improved and refactored environment and model

Details: [v0.8 details](https://github.com/CN-UPB/deep-rl-mobility-management/blob/master/docs/mdp.md#v08-environment--model-improvements-new-heuristic-algorithms-week-29)
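
The new default UE utility above is a log function that keeps increasing with data rate but with diminishing returns, so higher rates always help without one UE's gain dominating the objective. A minimal sketch; the concrete form log2(1 + rate) is an assumed illustrative choice (it avoids negative infinity at rate 0), not necessarily DeepCoMP's exact formula:

```python
import math

def log_utility(rate):
    """Log utility of a UE's data rate: monotonically increasing,
    but with diminishing returns as the rate grows."""
    return math.log2(1.0 + rate)
```

Doubling a low rate gains far more utility than doubling an already high one, which is what pushes the agent toward serving poorly-served UEs first.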

0.7

* Larger environment with 3 BS and 4 moving UEs
* Optional extra observation showing the number of connected UEs per BS, intended to help the agent learn to balance connections; in practice, it did not seem very useful
* Improved visualization
* Improved installation; added CLI support

Details: [v0.7 details](https://github.com/CN-UPB/deep-rl-mobility-management/blob/master/docs/mdp.md#v07-larger-environment-week-27)

0.6

* Support for multi-agent RL: Each UE is trained by its own RL agent
* Currently, all agents share the same RL algorithm and NN
* Even with just 2 UEs, multi-agent training reaches better results more quickly than a central agent

Details: [v0.6 details](https://github.com/CN-UPB/deep-rl-mobility-management/blob/master/docs/mdp.md#v06-multi-agent-rl-week-27)
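
The setup described above, where each UE has its own agent but all agents share one algorithm and neural network, is commonly called parameter sharing: every agent acts on its own local observation, yet experience from all UEs trains the same weights. The sketch below illustrates the idea with a toy linear policy standing in for the shared PPO network (all names and dimensions are hypothetical):

```python
import numpy as np

class SharedPolicy:
    """Toy linear policy shared by all per-UE agents (parameter
    sharing): one weight matrix, applied to each UE's own local
    observation. Stand-in for the shared PPO neural network."""

    def __init__(self, obs_dim, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(obs_dim, n_actions))

    def act(self, obs):
        # Greedy action for this UE's observation.
        return int(np.argmax(np.asarray(obs) @ self.w))

# One agent per UE, but every entry references the SAME policy
# object, so all UEs' experience would update the same parameters.
policy = SharedPolicy(obs_dim=4, n_actions=3)
agents = {ue_id: policy for ue_id in ("ue1", "ue2")}
```

Because the agents are one shared object, adding a third UE adds no new parameters, and both UEs pick identical actions when given identical observations.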
