MO-Gymnasium

Latest version: v1.3.1


1.3.1

Doc fixes

**Full Changelog**: https://github.com/Farama-Foundation/MO-Gymnasium/compare/v1.3.0...v1.3.1

1.3.0

This release adds the new MuJoCo v5 environments:
- mo-ant-v5
- mo-ant-2obj-v5
- mo-hopper-v5
- mo-hopper-2obj-v5
- mo-walker2d-v5
- mo-halfcheetah-v5
- mo-humanoid-v5
- mo-swimmer-v5
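
A minimal usage sketch for the new environments (assuming the standard `mo_gym.make` API shown in the 1.0.0 notes below, and the MuJoCo dependencies installed):

```python
import mo_gymnasium as mo_gym

# Any of the new v5 environments can be created by its registered ID.
env = mo_gym.make("mo-hopper-v5")
obs, info = env.reset(seed=0)
# step() returns a numpy vector reward, one entry per objective.
obs, vector_reward, terminated, truncated, info = env.step(env.action_space.sample())
env.close()
```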

What's Changed
* Add Mujoco v5 environments by LucasAlegre in https://github.com/Farama-Foundation/MO-Gymnasium/pull/85
* Add Python 3.12 support by ffelten in https://github.com/Farama-Foundation/MO-Gymnasium/pull/108
* Remove pymoo dep by ffelten in https://github.com/Farama-Foundation/MO-Gymnasium/pull/109

**Full Changelog**: https://github.com/Farama-Foundation/MO-Gymnasium/compare/v1.2.0...v1.3.0

1.2.0

Breaking Changes
* Similar to Gymnasium v1.0, `VecEnv`s now differ from normal `Env`s. The associated wrappers also differ. See [Gymnasium 1.0.0 release notes](https://gymnasium.farama.org/gymnasium_release_notes/#release-v1-0-0).
* Wrappers have been moved to their `wrappers` subpackage, e.g., `from mo_gymnasium import MORecordEpisodeStatistics` -> `from mo_gymnasium.wrappers import MORecordEpisodeStatistics`. Vector wrappers can be found under `mo_gymnasium.wrappers.vector`. See the [`tests/`](https://github.com/Farama-Foundation/MO-Gymnasium/tree/main/tests) folder or our [documentation](https://mo-gymnasium.farama.org/main/wrappers/vector_wrappers/) for example usage.
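
A quick before/after sketch of the import move (the vector wrapper shown, `MOSyncVectorEnv`, is one example; see the documentation for the full list):

```python
# Before v1.2.0:
# from mo_gymnasium import MORecordEpisodeStatistics

# From v1.2.0 onward, wrappers live in a dedicated subpackage:
from mo_gymnasium.wrappers import MORecordEpisodeStatistics

# Vector wrappers have their own subpackage:
from mo_gymnasium.wrappers.vector import MOSyncVectorEnv
```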

Environments
* Update Gymnasium to v1.0.0 by LucasAlegre & ffelten in https://github.com/Farama-Foundation/MO-Gymnasium/pull/95
* Add Gymnasium performance improvement to Lunar Lander by pseudo-rnd-thoughts in https://github.com/Farama-Foundation/MO-Gymnasium/pull/89
* Update Lunar lander step to match performance with Gymnasium by pseudo-rnd-thoughts in https://github.com/Farama-Foundation/MO-Gymnasium/pull/91
* Adding three different mo-mountain-car environments by pranavg23 in https://github.com/Farama-Foundation/MO-Gymnasium/pull/97
* `Lunar-Lander` is now v3 in https://github.com/Farama-Foundation/MO-Gymnasium/pull/95

Documentation and Tests
* Add pydoc on how to map the multi-objective reward to the original gymnasium reward by LucasAlegre in https://github.com/Farama-Foundation/MO-Gymnasium/pull/92 (see the sketch after this list)
* Documentation of new Mountain Car environments by pranavg23 in https://github.com/Farama-Foundation/MO-Gymnasium/pull/101
* Add test that Gymnasium and MO-Gymnasium envs match by pseudo-rnd-thoughts in https://github.com/Farama-Foundation/MO-Gymnasium/pull/90
* Add forgotten envs to doc by ffelten in https://github.com/Farama-Foundation/MO-Gymnasium/pull/94
* Add py.typed to allow mypy type checking by sebimarkgraf in https://github.com/Farama-Foundation/MO-Gymnasium/pull/107
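
On mapping the multi-objective reward back to the original scalar reward: a hedged sketch using the `LinearReward` wrapper. The weight vector below is illustrative only; the combination that reproduces the original Gymnasium reward is documented per environment.

```python
import numpy as np
import mo_gymnasium as mo_gym
from mo_gymnasium.wrappers import LinearReward

env = mo_gym.make("mo-halfcheetah-v4")
# Illustrative weights: a linear scalarization collapses the reward vector
# back to a single scalar, as in the original Gymnasium environment.
env = LinearReward(env, weight=np.array([1.0, 1.0]))
```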

Bug Fixes
* breakable-bottles observation space correction by scott-j-johnson in https://github.com/Farama-Foundation/MO-Gymnasium/pull/93
* Fix fishwood's inconsistent observation dimension by timondesch in https://github.com/Farama-Foundation/MO-Gymnasium/pull/103
* Fix Docs Generation by LucasAlegre in https://github.com/Farama-Foundation/MO-Gymnasium/pull/106
* (Issue 99) Fix disc_episode_returns off-by-one error by Katze2664 in https://github.com/Farama-Foundation/MO-Gymnasium/pull/100
* Bump deprecated action by ffelten in https://github.com/Farama-Foundation/MO-Gymnasium/pull/105

New Contributors
* scott-j-johnson made their first contribution in https://github.com/Farama-Foundation/MO-Gymnasium/pull/93
* pranavg23 made their first contribution in https://github.com/Farama-Foundation/MO-Gymnasium/pull/97
* Katze2664 made their first contribution in https://github.com/Farama-Foundation/MO-Gymnasium/pull/100
* timondesch made their first contribution in https://github.com/Farama-Foundation/MO-Gymnasium/pull/103
* sebimarkgraf made their first contribution in https://github.com/Farama-Foundation/MO-Gymnasium/pull/107

**Full Changelog**: https://github.com/Farama-Foundation/MO-Gymnasium/compare/v1.1.0...v1.2.0

1.1.0

Environments
* Add new MuJoCo environments by LucasAlegre in https://github.com/Farama-Foundation/MO-Gymnasium/pull/87
* Add mirror DST env by ffelten in https://github.com/Farama-Foundation/MO-Gymnasium/pull/79

Other improvements and utils
* Use .unwrapped to access reward_space by LucasAlegre in https://github.com/Farama-Foundation/MO-Gymnasium/pull/77 (see the sketch after this list)
* Add rendering for fruit_tree env by tomekster in https://github.com/Farama-Foundation/MO-Gymnasium/pull/81
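
A small sketch of the `.unwrapped` access pattern (the environment ID is just an example):

```python
import mo_gymnasium as mo_gym

env = mo_gym.make("deep-sea-treasure-v0")
# Go through .unwrapped so intermediate wrappers don't shadow reward_space.
reward_dim = env.unwrapped.reward_space.shape[0]
```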

Documentation
* Group environments by type in docs by LucasAlegre in https://github.com/Farama-Foundation/MO-Gymnasium/pull/83
* Add mirrored DST to docs by ffelten in https://github.com/Farama-Foundation/MO-Gymnasium/pull/80
* Update citations by LucasAlegre in https://github.com/Farama-Foundation/MO-Gymnasium/pull/86

Bug fixes
* unpin mujoco by Kallinteris-Andreas in https://github.com/Farama-Foundation/MO-Gymnasium/pull/84

**Full Changelog**: https://github.com/Farama-Foundation/MO-Gymnasium/compare/v1.0.1...v1.1.0

1.0.1

Environments
* Add pygame render to breakable-bottles by LucasAlegre in https://github.com/Farama-Foundation/MO-Gymnasium/pull/75

Wrapper
* Add MOMaxAndSkipObservation Wrapper by LucasAlegre in https://github.com/Farama-Foundation/MO-Gymnasium/pull/76
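
A hedged usage sketch; the environment ID and the `skip` parameter (assumed to mirror Gymnasium's analogous `MaxAndSkipObservation` wrapper) are illustrative:

```python
import mo_gymnasium as mo_gym

env = mo_gym.make("mo-supermario-v0")
# Repeat each action for `skip` frames, returning the elementwise max over
# the last observations (skip=4 is an assumed default, as in Gymnasium).
env = mo_gym.MOMaxAndSkipObservation(env, skip=4)
```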

Other improvements and utils
* Modify LinearReward to return reward weights as part of info_dict by ianleongudri in https://github.com/Farama-Foundation/MO-Gymnasium/pull/69 (see the sketch after this list)
* Add warning for order of wrapping in the MORecordEpisodeStatistics Wrapper by ffelten in https://github.com/Farama-Foundation/MO-Gymnasium/pull/70
* Support Gymnasium 0.29 by LucasAlegre in https://github.com/Farama-Foundation/MO-Gymnasium/pull/73
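
Regarding the `LinearReward` change: a hedged sketch of reading the weights back from the info dict. The key name used below is an assumption for illustration, not a confirmed API detail.

```python
import numpy as np
import mo_gymnasium as mo_gym

env = mo_gym.LinearReward(mo_gym.make("minecart-v0"), weight=np.array([0.7, 0.2, 0.1]))
obs, info = env.reset()
obs, scalar_reward, terminated, truncated, info = env.step(env.action_space.sample())
# Hypothetical key name; check the wrapper's documentation for the actual one.
print(info.get("reward_weights"))
```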


Documentation
* Add tuto for custom env creation by ffelten in https://github.com/Farama-Foundation/MO-Gymnasium/pull/72

Bug fixes
* Fix test worker by ffelten in https://github.com/Farama-Foundation/MO-Gymnasium/pull/67
* Fix PF and CCS computation of minecart-deterministic-v0 by LucasAlegre in https://github.com/Farama-Foundation/MO-Gymnasium/pull/74

**Full Changelog**: https://github.com/Farama-Foundation/MO-Gymnasium/compare/v1.0.0...v1.0.1

1.0.0

We are thrilled to introduce the mature release of [MO-Gymnasium](https://mo-gymnasium.farama.org/), a standardized API and collection of environments designed for Multi-Objective Reinforcement Learning (MORL).

MORL expands the capabilities of RL to scenarios where agents need to optimize multiple, possibly conflicting objectives. Each objective is represented by a distinct reward function, and the agent learns to make trade-offs between these objectives based on a reward vector received after each step. For instance, in the well-known MuJoCo halfcheetah environment, the reward components are combined linearly using predefined weights, as shown in the following code snippet from [Gymnasium](https://github.com/Farama-Foundation/Gymnasium/blob/main/gymnasium/envs/mujoco/half_cheetah_v4.py#LL201C9-L206C44):

```python
ctrl_cost = self.control_cost(action)
forward_reward = self._forward_reward_weight * x_velocity
reward = forward_reward - ctrl_cost
```
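
In the multi-objective counterpart, the components are kept separate instead of being summed. A sketch of the idea; the ordering and sign conventions are assumptions, not the exact mo-halfcheetah definition:

```python
import numpy as np

# Rather than collapsing the terms into one scalar (reward = forward_reward - ctrl_cost),
# a multi-objective environment returns one entry per objective.
# The values below are placeholders for the quantities computed in the snippet above.
forward_reward, ctrl_cost = 1.2, 0.3
vector_reward = np.array([forward_reward, -ctrl_cost])
```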


With MORL, users have the flexibility to determine the compromises they desire based on their preferences for each objective. Consequently, the environments in MO-Gymnasium do not have predefined weights. Thus, MO-Gymnasium extends the capabilities of [Gymnasium](https://gymnasium.farama.org/) to the multi-objective setting, where the agent receives a vectorial reward.

For example, here is an illustration of the multiple policies learned by an MORL agent for the `mo-halfcheetah` domain, balancing between saving battery and speed:

<img src="https://github.com/Farama-Foundation/MO-Gymnasium/assets/11799929/10796cae-6f84-4690-8e17-d23f792c32c2" width=400 />

This release marks the first mature version of MO-Gymnasium within Farama, indicating that the API is stable, and we have achieved a high level of quality in this library.

API
```python
import gymnasium as gym
import mo_gymnasium as mo_gym
import numpy as np

# It follows the original Gymnasium API ...
env = mo_gym.make('minecart-v0')

obs, info = env.reset()
# but vector_reward is a numpy array!
next_obs, vector_reward, terminated, truncated, info = env.step(your_agent.act(obs))

# Optionally, you can scalarize the reward function with the LinearReward wrapper.
# This allows falling back to single-objective RL:
env = mo_gym.LinearReward(env, weight=np.array([0.8, 0.2, 0.2]))
```
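
For completeness, a minimal end-to-end loop with a random policy standing in for the agent (`your_agent` above is a placeholder for any policy):

```python
env = mo_gym.make('minecart-v0')
obs, info = env.reset(seed=42)
terminated = truncated = False
while not (terminated or truncated):
    action = env.action_space.sample()  # random policy as a stand-in
    obs, vector_reward, terminated, truncated, info = env.step(action)
env.close()
```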
