Gymnasium

Latest version: v1.0.0


1.0.0

Over the last few years, the volunteer team behind Gym and Gymnasium has worked to fix bugs, improve the documentation, add new features, and change the API where the benefits outweigh the costs. This is the complete release of `v1.0.0`, which marks the end of that road of changes to the project's central API (`Env`, `Space`, `VectorEnv`). The release also includes over 200 PRs since `0.29.1`, with many bug fixes, new features, and documentation improvements; thank you to all the volunteers whose hard work made this possible. The rest of these release notes cover the core API changes, followed by the additional new features, bug fixes, deprecations, and documentation changes.

Finally, we have published a paper on Gymnasium, discussing its overall design decisions and more at https://arxiv.org/abs/2407.17032, which can be cited using the following:

```
@misc{towers2024gymnasium,
  title={Gymnasium: A Standard Interface for Reinforcement Learning Environments},
  author={Mark Towers and Ariel Kwiatkowski and Jordan Terry and John U. Balis and Gianluca De Cola and Tristan Deleu and Manuel Goulão and Andreas Kallinteris and Markus Krimmel and Arjun KG and Rodrigo Perez-Vicente and Andrea Pierré and Sander Schulhoff and Jun Jet Tai and Hannah Tan and Omar G. Younis},
  year={2024},
  eprint={2407.17032},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2407.17032},
}
```


Removing The Plugin System
Within Gym v0.23+ and Gymnasium v0.26 to v0.29, an undocumented feature registered external environments behind the scenes; this has been removed in v1.0.0. Users of [Atari (ALE)](https://github.com/Farama-Foundation/Arcade-Learning-Environment), [Minigrid](https://github.com/farama-Foundation/minigrid) or [HighwayEnv](https://github.com/Farama-Foundation/HighwayEnv) could previously use the following code:
```python
import gymnasium as gym

env = gym.make("ALE/Pong-v5")
```

Despite Atari never being imported (i.e., `import ale_py`), users could still create an Atari environment. This feature has been removed in `v1.0.0`, requiring users to update to
```python
import gymnasium as gym
import ale_py

gym.register_envs(ale_py)  # optional, helpful for IDEs or pre-commit

env = gym.make("ALE/Pong-v5")
```

Alternatively, users can use the structure `module_name:env_id` so that the module is imported before the environment is created, e.g., `ale_py:ALE/Pong-v5`.
```python
import gymnasium as gym

env = gym.make("ale_py:ALE/Pong-v5")
```


To help users with IDEs (e.g., VSCode, PyCharm), importing a module solely to register environments (e.g., `import ale_py`) can cause the IDE (and pre-commit tools such as isort / black / flake8) to believe that the import is pointless and should be removed. Therefore, we have introduced `gymnasium.register_envs` as a no-op function (the function literally does nothing) so that the IDE sees the import being used and keeps the import statement.
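Since such a no-op is trivial, here is a rough sketch of the idea (an illustration, not Gymnasium's actual implementation):

```python
from types import ModuleType


def register_envs(env_module: ModuleType) -> None:
    """A no-op whose only job is to make `import ale_py`-style imports look used,
    so IDEs and linters do not flag or auto-remove them."""


# Calling it has no effect; importing the module already triggered registration.
register_envs(ModuleType("dummy_module"))
```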

Vector Environments
To increase the sample speed of an environment, vectorization is one of the easiest ways to sample multiple instances of the same environment simultaneously. Gym and Gymnasium provide the `VectorEnv` as a base class for this, but one of its issues has been that it inherited from `Env`. This caused particular problems with type checking (the return type of `step` differs for `Env` and `VectorEnv`), testing the environment type (`isinstance(env, Env)` can be true for vector environments despite the two behaving differently), and wrappers (some Gym and Gymnasium wrappers supported vector environments, but there was no clear or consistent API for determining which did or didn't). Therefore, we have separated `Env` and `VectorEnv` so that they no longer inherit from each other.

In implementing the new separate `VectorEnv` class, we have tried to minimize the difference between code using `Env` and `VectorEnv` along with making it more generic in places. The class contains the same attributes and methods as `Env` in addition to the attributes `num_envs: int`, `single_action_space: gymnasium.Space` and `single_observation_space: gymnasium.Space`. Further, we have removed several functions from `VectorEnv` that are not needed for all vector implementations: `step_async`, `step_wait`, `reset_async`, `reset_wait`, `call_async` and `call_wait`. This change now allows users to write their own custom vector environments; v1.0.0 includes an example vector CartPole environment, written solely with NumPy, that runs thousands of times faster than Gymnasium's Sync vector environment.

To allow users to create vectorized environments easily, we provide `gymnasium.make_vec` as a vectorized equivalent of `gymnasium.make`. As there are multiple different vectorization options ("sync", "async", and a custom class referred to as "vector_entry_point"), the argument `vectorization_mode` selects how the environment is vectorized. This defaults to `None`, so that if the environment has a vector entry point for a custom vector environment implementation, it is used first (currently, CartPole is the only environment with a vector entry point built into Gymnasium); otherwise, the synchronous vectorizer is used (previously, the Gym and Gymnasium `vector.make` used the asynchronous vectorizer by default). For more information, see the function [docstring](https://gymnasium.farama.org/main/api/registry/#gymnasium.make_vec). We are excited to see other projects utilize this option to make creating their environments easier.

```python
env = gym.make("CartPole-v1")
env = gym.wrappers.ClipReward(env, min_reward=-1, max_reward=3)

envs = gym.make_vec("CartPole-v1", num_envs=3)
envs = gym.wrappers.vector.ClipReward(envs, min_reward=-1, max_reward=3)
```


Due to this split of `Env` and `VectorEnv`, there are now `Env` only wrappers and `VectorEnv` only wrappers in `gymnasium.wrappers` and `gymnasium.wrappers.vector` respectively. Furthermore, we updated the names of the base vector wrappers from `VectorEnvWrapper` to `VectorWrapper` and added `VectorObservationWrapper`, `VectorRewardWrapper` and `VectorActionWrapper` classes. See the [vector wrapper](https://gymnasium.farama.org/main/api/vector/wrappers/) page for new information.

To increase the efficiency of vector environments, autoreset is a common feature that allows sub-environments to reset without requiring all sub-environments to finish before resetting them all. Previously in Gym and Gymnasium, auto-resetting was done on the same step as the environment episode ends, such that the final observation and info would be stored in the step's info, i.e., `info["final_observation"]` and `info["final_info"]`, while the standard obs and info contained the sub-environment's reset observation and info. Thus, accurately sampling observations from a vector environment required the following code (note the need to extract `infos["final_observation"][j]` if the sub-environment terminated or truncated). Additionally, on-policy algorithms that use rollouts would require an additional forward pass to compute the correct next observation (this is often skipped as an optimization, assuming that environments only terminate, never truncate).

```python
replay_buffer = []
obs, _ = envs.reset()
for _ in range(total_timesteps):
    next_obs, rewards, terminations, truncations, infos = envs.step(envs.action_space.sample())

    for j in range(envs.num_envs):
        if not (terminations[j] or truncations[j]):
            replay_buffer.append((
                obs[j], rewards[j], terminations[j], truncations[j], next_obs[j]
            ))
        else:
            replay_buffer.append((
                obs[j], rewards[j], terminations[j], truncations[j], infos["final_observation"][j]
            ))

    obs = next_obs
```


However, over time, the development team has recognized the inefficiency of this approach (primarily due to the extensive use of a Python dictionary) and the annoyance of having to extract the final observation to train agents correctly, for [example](https://github.com/vwxyzjn/cleanrl/blob/c37a3ec4ef8d33ab7c8a69d4d2714e3817739365/cleanrl/dqn.py#L174). Therefore, in v1.0.0, we are modifying autoreset to align with specialized vector-only projects like [EnvPool](https://github.com/sail-sg/envpool) and [SampleFactory](https://github.com/alex-petrenko/sample-factory), where the sub-environment doesn't reset until the next step. As a result, the following changes are required when sampling:

```python
replay_buffer = []
obs, _ = envs.reset()
autoreset = np.zeros(envs.num_envs)
for _ in range(total_timesteps):
    next_obs, rewards, terminations, truncations, _ = envs.step(envs.action_space.sample())

    for j in range(envs.num_envs):
        if not autoreset[j]:
            replay_buffer.append((
                obs[j], rewards[j], terminations[j], truncations[j], next_obs[j]
            ))

    obs = next_obs
    autoreset = np.logical_or(terminations, truncations)
```


For on-policy rollouts, accounting for autoreset requires masking out the error for the first observation of a new episode (`done[t+1]`) to prevent computing the error between the last observation of one episode and the first observation of the next.
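As a rough NumPy sketch of this masking (an illustration with hypothetical per-step data, not code from the release), the TD error of any step whose next observation starts a new episode is zeroed out:

```python
import numpy as np

gamma = 0.99
# Illustrative rollout for one sub-environment (all names and values hypothetical).
rewards = np.array([0.5, 1.0, 1.0, 0.2])
values = np.array([1.0, 0.9, 0.8, 0.7, 0.6])  # V(obs[0..4]), one extra for bootstrapping
# autoreset[t] is True when obs[t] is the first observation of a new episode
autoreset = np.array([False, False, True, False, False])

td_errors = rewards + gamma * values[1:] - values[:-1]
# Mask any step whose *next* observation starts a new episode: the transition
# obs[1] -> obs[2] crosses an episode boundary, so its TD error is invalid.
valid = ~autoreset[1:]
masked_td = np.where(valid, td_errors, 0.0)
```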

Finally, we have improved the `AsyncVectorEnv.set_attr` and `SyncVectorEnv.set_attr` functions to use `Wrapper.set_wrapper_attr`, allowing users to set a variable anywhere in the environment stack where it already exists. Previously, this was not possible, and users could only modify the variable in the "top" wrapper of the environment stack, importantly not the actual environment itself.

Wrappers
Previously, some wrappers could support both environments and vector environments; however, this was not standardized, and it was unclear which wrappers did and didn't support vector environments. For v1.0.0, with `Env` and `VectorEnv` separated so they no longer inherit from each other (read more in the vector section), the wrappers in `gymnasium.wrappers` only support standard environments, while `gymnasium.wrappers.vector` contains the provided specialized vector wrappers (most, but not all, wrappers are supported; please raise a feature request if you require one).

In v0.29, we deprecated `Wrapper.__getattr__`, replacing it with `Wrapper.get_wrapper_attr`, which provides access to variables anywhere in the environment stack. In v1.0.0, we have added `Wrapper.set_wrapper_attr` as an equivalent function for setting a variable anywhere in the environment stack if it already exists; otherwise, the variable is assigned to the top wrapper.
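As a rough sketch of those semantics (illustrative, not Gymnasium's actual source): setting an attribute walks down the wrapper stack and assigns it wherever it already exists, falling back to the top wrapper.

```python
class Layer:
    """A minimal stand-in for a wrapper holding an inner `env` (hypothetical)."""
    def __init__(self, env=None):
        self.env = env


def set_wrapper_attr(top, name, value):
    layer = top
    while layer is not None:          # walk down the wrapper stack
        if name in vars(layer):       # assign where the attribute already exists
            setattr(layer, name, value)
            return
        layer = getattr(layer, "env", None)
    setattr(top, name, value)         # not found anywhere: assign to the top wrapper


base = Layer()
base.speed = 1                        # attribute lives on the innermost "environment"
stack = Layer(Layer(base))            # two wrappers around it

set_wrapper_attr(stack, "speed", 5)
assert base.speed == 5                # modified in place, not shadowed on the top wrapper
```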

Most significantly, we have removed, renamed, and added several wrappers listed below.
* Removed wrappers
- `monitoring.VideoRecorder` - The replacement wrapper is `RecordVideo`
- `StepAPICompatibility` - We expect all Gymnasium environments to use the terminated / truncated step API, therefore, users shouldn't need the `StepAPICompatibility` wrapper. [Shimmy](https://shimmy.farama.org/) includes a compatibility environment to convert gym-api environments for gymnasium.
* Renamed wrappers (We wished to make wrappers consistent in naming. Therefore, we have removed "Wrapper" from all wrappers and included "Observation", "Action" and "Reward" within wrapper names where appropriate)
- `AutoResetWrapper` -> `Autoreset`
- `FrameStack` -> `FrameStackObservation`
- `PixelObservationWrapper` -> `AddRenderObservation`
* Moved wrappers (All vector wrappers are in `gymnasium.wrappers.vector`)
- `VectorListInfo` -> `vector.DictInfoToList`
* Added wrappers
- `DelayObservation` - Adds a delay to the next observation and reward
- `DtypeObservation` - Modifies the dtype of an environment's observation space
- `MaxAndSkipObservation` - Will skip `n` observations and will max over the last 2 observations, inspired by the Atari environment heuristic for other environments
- `StickyAction` - Randomly repeats actions with a probability for a step, returning the final observation and the sum of rewards over the steps. Inspired by Atari environment heuristics
- `JaxToNumpy` - Converts a Jax-based environment to use Numpy-based input and output data for `reset`, `step`, etc
- `JaxToTorch` - Converts a Jax-based environment to use PyTorch-based input and output data for `reset`, `step`, etc
- `NumpyToTorch` - Converts a Numpy-based environment to use PyTorch-based input and output data for `reset`, `step`, etc

For all wrappers, we have added example code documentation and a changelog to help future researchers understand any changes made. See the following [page](https://gymnasium.farama.org/main/api/wrappers/misc_wrappers/#gymnasium.wrappers.TimeLimit) for an example.

Functional Environments
One of the substantial advantages of Gymnasium's `Env` is that it generally requires minimal information about the underlying environment specifications; however, this can make applying such environments to planning, search algorithms, and theoretical investigations more difficult. We are proposing `FuncEnv` as an alternative definition to `Env` that is closer to a Markov decision process definition, exposing more functions to the user, including the observation, reward, and termination functions, along with the environment's raw state as a single object.

```python
from typing import Any

import gymnasium as gym
from gymnasium.functional import StateType, ObsType, ActType, RewardType, TerminalType, Params


class ExampleFuncEnv(gym.functional.FuncEnv):
    def initial(self, rng: Any, params: Params | None = None) -> StateType:
        ...

    def transition(self, state: StateType, action: ActType, rng: Any, params: Params | None = None) -> StateType:
        ...

    def observation(self, state: StateType, rng: Any, params: Params | None = None) -> ObsType:
        ...

    def reward(
        self, state: StateType, action: ActType, next_state: StateType, rng: Any, params: Params | None = None
    ) -> RewardType:
        ...

    def terminal(self, state: StateType, rng: Any, params: Params | None = None) -> TerminalType:
        ...
```


`FuncEnv` requires the `initial` and `transition` functions to return a new state given their inputs, as partial implementations of `Env.reset` and `Env.step`. As a result, users can sample (and save) the next state for a range of inputs to use with planning, searching, etc. Given a state, `observation`, `reward`, and `terminal` provide users explicit definitions to understand how each affects the environment's output.
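To make the shape of this functional style concrete, here is a self-contained sketch in the same spirit (a hypothetical 1-D random-walk MDP written with plain Python/NumPy, not using `gymnasium.functional`): because every function is pure, planning can branch repeatedly from any saved state.

```python
import numpy as np


# A tiny random-walk MDP written functionally: the state is just a position.
def initial(rng: np.random.Generator) -> int:
    return 0


def transition(state: int, action: int, rng: np.random.Generator) -> int:
    return state + (1 if action == 1 else -1)


def observation(state: int) -> np.ndarray:
    return np.array([state], dtype=np.float32)


def reward(state: int, action: int, next_state: int) -> float:
    return float(next_state > state)  # reward for moving right


def terminal(state: int) -> bool:
    return abs(state) >= 3  # episode ends at the boundary


# Pure functions let a planner try both actions from the *same* state.
rng = np.random.default_rng(0)
s = initial(rng)
branch_a = transition(s, 1, rng)
branch_b = transition(s, 0, rng)
```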

Collecting Seeding Values
It was possible to seed both environments and spaces with `None` to use a random initial seed value; however, it wasn't possible to know what these initial seed values were. We have addressed this for `Space.seed` and `reset.seed` in https://github.com/Farama-Foundation/Gymnasium/pull/1033 and https://github.com/Farama-Foundation/Gymnasium/pull/889. Additionally, for `Space.seed`, we have changed the return type to be specialized for each space such that the following code will work for all spaces.
```python
seeded_values = space.seed(None)
initial_samples = [space.sample() for _ in range(10)]

reseed_values = space.seed(seeded_values)
reseed_samples = [space.sample() for _ in range(10)]

assert seeded_values == reseed_values
assert initial_samples == reseed_samples
```

Additionally, for environments, we have added a new `np_random_seed` attribute that will store the most recent `np_random` seed value from `reset(seed=seed)`.
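The idea behind recording the seed can be sketched with plain NumPy (an illustration of the mechanism, not Gymnasium's actual seeding helper):

```python
import numpy as np


def np_random(seed=None):
    # When no seed is supplied, draw one from OS entropy so it can still be reported.
    if seed is None:
        seed = np.random.SeedSequence().entropy
    return np.random.default_rng(seed), seed


rng, used_seed = np_random(None)   # random seed, but now recorded
rng2, _ = np_random(used_seed)     # replaying the stored seed reproduces the stream
assert rng.integers(0, 100, size=5).tolist() == rng2.integers(0, 100, size=5).tolist()
```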

Environment Version Changes
* It was discovered recently that the MuJoCo-based Pusher was not compatible with `mujoco>=3`, as the model's density for the block that the agent had to push was lighter than air. This obviously began to cause issues for users with `mujoco>=3` and Pusher. Therefore, we have disabled the `v4` environment for `mujoco>=3` and updated the model in MuJoCo `v5` so that it produces the behavior expected of `v4` with `mujoco<3` (https://github.com/Farama-Foundation/Gymnasium/pull/1019).
* New v5 MuJoCo environments as a follow-up to the v4 environments added two years ago, fixing inconsistencies, adding new features and updating the documentation (https://github.com/Farama-Foundation/Gymnasium/pull/572). Additionally, we have decided to mark the mujoco-py-based (v2 and v3) environments as deprecated and plan to remove them from Gymnasium in the future (https://github.com/Farama-Foundation/Gymnasium/pull/926).

* Lunar Lander version increased from v2 to v3 due to two bug fixes. The first fixes the determinism of the environment: the world object was not completely destroyed on reset, causing non-determinism in particular cases (https://github.com/Farama-Foundation/Gymnasium/pull/979). Second, the wind generation (turned off by default) was not randomly generated on each reset; we have updated this to gain statistical independence between episodes (https://github.com/Farama-Foundation/Gymnasium/pull/959).
* CarRacing version increased from v2 to v3 to change how the environment ends such that, when the agent completes the track, the environment terminates rather than truncates.
* We have removed `pip install "gymnasium[accept-rom-license]"`, as `ale-py>=0.9` now comes packaged with the ROMs, meaning that users no longer need to install the Atari ROMs separately with AutoROM.

Additional Bug Fixes
* `spaces.Box` would allow low and high values outside the dtype's range, which could result in some very strange edge cases that were very difficult to detect by pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/774)
* Limit the cython version for `gymnasium[mujoco-py]` due to `cython==3` issues by pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/616)
* Fix mujoco rendering with custom width values by logan-dunbar (https://github.com/Farama-Foundation/Gymnasium/pull/634)
* Fix environment checker to correctly report infinite bounds by chrisyeh96 (https://github.com/Farama-Foundation/Gymnasium/pull/708)
* Fix type hint for `register(kwargs)` from `**kwargs` to `kwargs: dict | None = None` by younik (https://github.com/Farama-Foundation/Gymnasium/pull/788)
* Fix registration in `AsyncVectorEnv` for custom environments by RedTachyon (https://github.com/Farama-Foundation/Gymnasium/pull/810)
* Remove `mujoco-py` import error for v4+ MuJoCo environments by MischaPanch
(https://github.com/Farama-Foundation/Gymnasium/pull/934)
* Fix reading shared memory for `Tuple` and `Dict` spaces (https://github.com/Farama-Foundation/Gymnasium/pull/941)
* Fix `MultiDiscrete.from_jsonable` on Windows (https://github.com/Farama-Foundation/Gymnasium/pull/932)
* Remove `play` rendering normalization (https://github.com/Farama-Foundation/Gymnasium/pull/956)
* Fix non-used device argument in `to_torch` conversion by mantasu (https://github.com/Farama-Foundation/Gymnasium/pull/1107)
* Fix torch to numpy conversion when on GPU by mantasu (https://github.com/Farama-Foundation/Gymnasium/pull/1109)

Additional new features
* Added Python 3.12 and NumPy 2.0 support by RedTachyon in https://github.com/Farama-Foundation/Gymnasium/pull/1094
* Add support in MuJoCo human rendering to change the size of the viewing window by logan-dunbar (https://github.com/Farama-Foundation/Gymnasium/pull/635)
* Add more control in MuJoCo rendering over offscreen dimensions and scene geometries by guyazran (https://github.com/Farama-Foundation/Gymnasium/pull/731)
* Add stack trace reporting to `AsyncVectorEnv` by pseudo-rnd-thoughts in https://github.com/Farama-Foundation/Gymnasium/pull/1119
* Add support to handle `NamedTuples` in `JaxToNumpy`, `JaxToTorch` and `NumpyToTorch` by RogerJL (https://github.com/Farama-Foundation/Gymnasium/pull/789) and pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/811)
* Add `padding_type` parameter to `FrameStackObservation` to select the padding observation by jamartinh (https://github.com/Farama-Foundation/Gymnasium/pull/830)
* Add render check to `check_environments_match` by Kallinteris-Andreas (https://github.com/Farama-Foundation/Gymnasium/pull/748)
* Add a new `OneOf` space that provides exclusive unions of spaces by RedTachyon and pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/812)
* Update `Dict.sample` to use standard Python dicts rather than `OrderedDict` due to dropping Python 3.7 support by pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/977)
* Jax environments return Jax data rather than NumPy data by RedTachyon and pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/817)
* Add `wrappers.vector.HumanRendering` and remove human rendering from `CartPoleVectorEnv` by pseudo-rnd-thoughts and TimSchneider42 (https://github.com/Farama-Foundation/Gymnasium/pull/1013)
* Add more helpful error messages if users use a mixture of Gym and Gymnasium by pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/957)
* Add `sutton_barto_reward` argument for `CartPole` that changes the reward function to not return 1 on terminating states by Kallinteris-Andreas (https://github.com/Farama-Foundation/Gymnasium/pull/958)
* Add `visual_options` rendering argument for MuJoCo environments by Kallinteris-Andreas (https://github.com/Farama-Foundation/Gymnasium/pull/965)
* Add `exact` argument to `utils.env_checker.data_equivalence` by Kallinteris-Andreas (https://github.com/Farama-Foundation/Gymnasium/pull/924)
* Update `wrapper.NormalizeObservation` observation space and change observation to `float32` by pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/978)
* Catch exception during `env.spec` if kwarg is unpickleable by pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/982)
* Improving ImportError for Box2D by turbotimon (https://github.com/Farama-Foundation/Gymnasium/pull/1009)
* Add an option for a tuple of (int, int) screen-size in AtariPreprocessing wrapper by pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/1105)
* Add `is_slippery` option for cliffwalking environment by CloseChoice (https://github.com/Farama-Foundation/Gymnasium/pull/1087)
* Update `RescaleAction` and `RescaleObservation` to support `np.inf` bounds by TimSchneider42 (https://github.com/Farama-Foundation/Gymnasium/pull/1095)
* Update determinism check for `env.reset(seed=42); env.reset()` by qgallouedec (https://github.com/Farama-Foundation/Gymnasium/pull/1086)
* Refactor mujoco to remove `BaseMujocoEnv` class by Kallinteris-Andreas (https://github.com/Farama-Foundation/Gymnasium/pull/1075)

Deprecation
* Remove unnecessary error classes in error.py by pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/801)
* Stop exporting MuJoCo v2 environment classes from `gymnasium.envs.mujoco` by Kallinteris-Andreas (https://github.com/Farama-Foundation/Gymnasium/pull/827)
* Remove deprecation warning from PlayPlot by pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/800)

Documentation changes
* Updated the custom environment tutorial for v1.0.0 by kir0ul (https://github.com/Farama-Foundation/Gymnasium/pull/709)
* Add swig to installation instructions for Box2D by btjanaka (https://github.com/Farama-Foundation/Gymnasium/pull/683)
* Add tutorial Load custom quadruped robot environments using `Gymnasium/MuJoCo/Ant-v5` framework by Kallinteris-Andreas (https://github.com/Farama-Foundation/Gymnasium/pull/838)
* Add a third-party tutorial page to list tutorials written and hosted on other websites by pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/867)
* Add more introductory pages by pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/791)
* Add figures for each MuJoCo environment representing their action space by Kallinteris-Andreas (https://github.com/Farama-Foundation/Gymnasium/pull/762)
* Fix the documentation on blackjack's starting state by pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/893)
* Update Taxi environment documentation to clarify starting state definition by britojr in https://github.com/Farama-Foundation/Gymnasium/pull/1120
* Fix the documentation on Frozenlake and Cliffwalking's position by PierreCounathe (https://github.com/Farama-Foundation/Gymnasium/pull/695)
* Update the classic control environment's `__init__` and `reset` arguments by pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/898)

**Full Changelog**: https://github.com/Farama-Foundation/Gymnasium/compare/v0.29.1...v1.0.0

1.0.0a2

This is our second alpha version which we hope to be the last before the full Gymnasium v1.0.0 release. We summarise the key changes, bug fixes and new features added in this alpha version.

Key Changes

Atari environments
[ale-py](https://github.com/Farama-Foundation/Arcade-Learning-Environment), which provides the Atari environments, has been updated in v0.9.0 to use Gymnasium as the API backend. Furthermore, the pip install contains the ROMs, so all that should be necessary for installing Atari will be `pip install "gymnasium[atari]"` (as a result, `gymnasium[accept-rom-license]` has been removed). A reminder that for Gymnasium v1.0, to register external environments (e.g., `ale-py`), you will be required to `import ale_py` before creating any of the Atari environments.

Collecting seeding values
It was possible to seed both environments and spaces with `None` to use a random initial seed value; however, it wasn't possible to know what these initial seed values were. We have addressed this for `Space.seed` and `reset.seed` in https://github.com/Farama-Foundation/Gymnasium/pull/1033 and https://github.com/Farama-Foundation/Gymnasium/pull/889. For `Space.seed`, we have changed the return type to be specialised for each space such that the following code will work for all spaces.
```python
seeded_values = space.seed(None)
initial_samples = [space.sample() for _ in range(10)]

reseed_values = space.seed(seeded_values)
reseed_samples = [space.sample() for _ in range(10)]

assert seeded_values == reseed_values
assert initial_samples == reseed_samples
```

Additionally, for environments, we have added a new `np_random_seed` attribute that will store the most recent `np_random` seed value from `reset(seed=seed)`.

Environment Version changes
* It was discovered recently that the MuJoCo-based Pusher was not compatible with MuJoCo `>= 3` due to bug fixes which revealed that the model density of the block that the agent had to push was the density of air. This obviously began to cause issues for users with MuJoCo v3+ and Pusher. Therefore, we have disabled the `v4` environment for MuJoCo `>= 3` and updated the model in MuJoCo `v5` so that it produces the behaviour expected of `v4` with MuJoCo `< 3` (https://github.com/Farama-Foundation/Gymnasium/pull/1019).
* Alpha 2 includes new v5 MuJoCo environments as a follow-up to the v4 environments added two years ago, fixing inconsistencies, adding new features and updating the documentation. We have decided to mark the mujoco-py-based (v2 and v3) environments as deprecated and plan to remove them from Gymnasium in the future (https://github.com/Farama-Foundation/Gymnasium/pull/926).
* Lunar Lander version increased from v2 to v3 due to two bug fixes. The first fixes the determinism of the environment: the world object was not completely destroyed on reset, causing non-determinism in particular cases (https://github.com/Farama-Foundation/Gymnasium/pull/979). Second, the wind generation (turned off by default) was not randomly generated on each reset; we have updated this to gain statistical independence between episodes (https://github.com/Farama-Foundation/Gymnasium/pull/959).

Box Samples
It was discovered that `spaces.Box` would allow low and high values outside the dtype's range (https://github.com/Farama-Foundation/Gymnasium/pull/774), which could result in some very strange edge cases that were difficult to detect. We hope that these changes improve debugging and detecting invalid inputs to the space; however, let us know if your environment raises issues related to this.

Bug Fixes
* Updates `CartPoleVectorEnv` for the new autoreset API (https://github.com/Farama-Foundation/Gymnasium/pull/915)
* Fixed `wrappers.vector.RecordEpisodeStatistics` episode length computation from new autoreset api (https://github.com/Farama-Foundation/Gymnasium/pull/1018)
* Remove `mujoco-py` import error for v4+ MuJoCo environments (https://github.com/Farama-Foundation/Gymnasium/pull/934)
* Fix `make_vec(**kwargs)` not being passed to vector entry point envs (https://github.com/Farama-Foundation/Gymnasium/pull/952)
* Fix reading shared memory for `Tuple` and `Dict` spaces (https://github.com/Farama-Foundation/Gymnasium/pull/941)
* Fix `MultiDiscrete.from_jsonable` on Windows (https://github.com/Farama-Foundation/Gymnasium/pull/932)
* Remove `play` rendering normalisation (https://github.com/Farama-Foundation/Gymnasium/pull/956)

New Features
* Added Python 3.12 support
* Add a new `OneOf` space that provides exclusive unions of spaces (https://github.com/Farama-Foundation/Gymnasium/pull/812)
* Update `Dict.sample` to use standard Python dicts rather than `OrderedDict` due to dropping Python 3.7 support (https://github.com/Farama-Foundation/Gymnasium/pull/977)
* Jax environments return Jax data rather than NumPy data (https://github.com/Farama-Foundation/Gymnasium/pull/817)
* Add `wrappers.vector.HumanRendering` and remove human rendering from `CartPoleVectorEnv` (https://github.com/Farama-Foundation/Gymnasium/pull/1013)
* Add more helpful error messages if users use a mixture of Gym and Gymnasium (https://github.com/Farama-Foundation/Gymnasium/pull/957)
* Add `sutton_barto_reward` argument for `CartPole` that changes the reward function to not return 1 on terminating states (https://github.com/Farama-Foundation/Gymnasium/pull/958)
* Add `visual_options` rendering argument for MuJoCo environments (https://github.com/Farama-Foundation/Gymnasium/pull/965)
* Add `exact` argument to `utils.env_checker.data_equivalence` (https://github.com/Farama-Foundation/Gymnasium/pull/924)
* Update `wrapper.NormalizeObservation` observation space and change observation to `float32` (https://github.com/Farama-Foundation/Gymnasium/pull/978)
* Catch exception during `env.spec` if kwarg is unpickleable (https://github.com/Farama-Foundation/Gymnasium/pull/982)
* Improving ImportError for Box2D (https://github.com/Farama-Foundation/Gymnasium/pull/1009)
* Added metadata field to VectorEnv and VectorWrapper (https://github.com/Farama-Foundation/Gymnasium/pull/1006)
* Fix `make_vec` for sync or async when modifying make arguments (https://github.com/Farama-Foundation/Gymnasium/pull/1027)

**Full Changelog**: https://github.com/Farama-Foundation/Gymnasium/compare/v1.0.0a1...v1.0.0a2 https://github.com/Farama-Foundation/Gymnasium/compare/v0.29.1...v1.0.0a2

1.0.0a1

Over the last few years, the volunteer team behind Gym and Gymnasium has worked to fix bugs, improve the documentation, add new features, and change the API where appropriate such that the benefits outweigh the costs. This is the first alpha release of `v1.0.0`, which aims to be the end of this road of changing the project's API along with containing many new features and improved documentation.

To install v1.0.0a1, you must use `pip install gymnasium==1.0.0a1` or `pip install --pre gymnasium`; otherwise, `v0.29.1` will be installed. Similarly, the website will default to v0.29.1's documentation, which can be changed with the pop-up in the bottom right.

We are really interested in projects testing with these v1.0.0 alphas to find any bugs, missing documentation, or issues with the API changes before we release v1.0 in full.

Removing the plugin system
Within Gym v0.23+ and Gymnasium v0.26 to v0.29, an undocumented feature that registered external environments behind the scenes has been removed. Users of [Atari (ALE)](https://github.com/Farama-Foundation/Arcade-Learning-Environment), [Minigrid](https://github.com/farama-Foundation/minigrid) or [HighwayEnv](https://github.com/Farama-Foundation/HighwayEnv) could use the following code:
```python
import gymnasium as gym

env = gym.make("ALE/Pong-v5")
```

such that despite Atari never being imported (i.e., `import ale_py`), users can still load an Atari environment. This feature has been removed in v1.0.0, which will require users to update to
```python
import gymnasium as gym
import ale_py

gym.register_envs(ale_py)  # optional

env = gym.make("ALE/Pong-v5")
```

Alternatively, users can do the following, where the `ale_py` prefix within the environment id will import the module:
```python
import gymnasium as gym

env = gym.make("ale_py:ALE/Pong-v5")  # `module_name:env_id`
```


For users with IDEs (e.g., VSCode, PyCharm), `import ale_py` can cause the IDE (and pre-commit tools such as isort / black / flake8) to believe that the import statement does nothing. Therefore, we have introduced `gymnasium.register_envs` as a no-op function (the function literally does nothing) to make the IDE believe that something is happening and that the import statement is required.
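Conceptually, such a no-op is trivial to write; the sketch below illustrates the idea (Gymnasium's actual implementation may differ in details):

```python
def register_envs(env_module):
    """No-op: referencing the imported module here stops IDEs and linters
    from flagging `import ale_py` (or similar) as an unused import."""
    return None
```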

Note: ALE-py, Minigrid, and HighwayEnv must be updated to work with Gymnasium v1.0.0, which we hope to complete for all affected projects by alpha 2.

Vector environments
To increase the sampling speed of an environment, vectorizing is one of the easiest ways to sample multiple instances of the same environment simultaneously. Gym and Gymnasium provide the `VectorEnv` as a base class for this, but one of its issues has been that it inherited `Env`. This can cause particular issues with type checking (the return type of `step` is different for `Env` and `VectorEnv`), testing the environment type (`isinstance(env, Env)` can be true for vector environments despite the two acting differently), and finally wrappers (some Gym and Gymnasium wrappers supported vector environments, but there was no clear or consistent API for determining which did or didn't). Therefore, we have separated out `Env` and `VectorEnv` to not inherit from each other.

In implementing the new separate `VectorEnv` class, we have tried to minimize the difference between code using `Env` and `VectorEnv` along with making it more generic in places. The class contains the same attributes and methods as `Env`, along with `num_envs: int`, `single_action_space: gymnasium.Space` and `single_observation_space: gymnasium.Space`. Additionally, we have removed several functions from `VectorEnv` that are not needed for all vector implementations: `step_async`, `step_wait`, `reset_async`, `reset_wait`, `call_async` and `call_wait`. This change now allows users to write their own custom vector environments; v1.0.0a1 includes an example vector CartPole environment that runs thousands of times faster than using Gymnasium's synchronous vector environment.
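To illustrate why a natively-batched implementation can be so much faster, here is a toy sketch (not Gymnasium's CartPole vector environment) where all sub-environment states live in one NumPy array, so `step` is a handful of array operations rather than a Python loop over sub-environments:

```python
import numpy as np


class ToyVectorEnv:
    """Hypothetical batched environment: one array holds every sub-environment's state."""

    def __init__(self, num_envs):
        self.num_envs = num_envs
        self.state = np.zeros(num_envs)

    def step(self, actions):
        self.state += actions              # batched transition for all sub-envs at once
        terminations = self.state >= 10    # batched termination check
        self.state[terminations] = 0.0     # reset only the finished sub-envs
        rewards = np.ones(self.num_envs)
        return self.state.copy(), rewards, terminations
```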

To allow users to create vectorized environments easily, we provide `gymnasium.make_vec` as a vectorized equivalent of `gymnasium.make`. As there are multiple different vectorization options ("sync", "async", and a custom class referred to as "vector_entry_point"), the argument `vectorization_mode` selects how the environment is vectorized. This defaults to `None`, such that if the environment has a vector entry point for a custom vector environment implementation, this is used first (currently, CartPole is the only environment with a vector entry point built into Gymnasium). Otherwise, the synchronous vectorizer is used (previously, the Gym and Gymnasium `vector.make` used the asynchronous vectorizer by default). For more information, see the function [docstring](https://gymnasium.farama.org/main/api/registry/#gymnasium.make_vec).

```python
env = gym.make("CartPole-v1")
env = gym.wrappers.ClipReward(env, min_reward=-1, max_reward=3)

envs = gym.make_vec("CartPole-v1", num_envs=3)
envs = gym.wrappers.vector.ClipReward(envs, min_reward=-1, max_reward=3)
```

Due to this split of `Env` and `VectorEnv`, there are now `Env`-only wrappers and `VectorEnv`-only wrappers in `gymnasium.wrappers` and `gymnasium.wrappers.vector` respectively. Furthermore, we updated the names of the base vector wrappers from `VectorEnvWrapper` to `VectorWrapper` and added `VectorObservationWrapper`, `VectorRewardWrapper` and `VectorActionWrapper` classes. See the [vector wrapper](https://gymnasium.farama.org/main/api/vector/wrappers/) page for more information.

To increase the efficiency of vector environments, autoreset is a common feature that allows sub-environments to reset without requiring all sub-environments to finish before resetting them all. Previously in Gym and Gymnasium, auto-resetting was done on the same step as the environment episode ended, such that the final observation and info were stored in the step's info, i.e., `info["final_observation"]` and `info["final_info"]`, while the standard obs and info contained the sub-environment's reset observation and info. This resulted in the following general sampling pattern for vectorized environments:

```python
replay_buffer = []
obs, _ = envs.reset()
for _ in range(total_timesteps):
    next_obs, rewards, terminations, truncations, infos = envs.step(envs.action_space.sample())

    for j in range(envs.num_envs):
        if not (terminations[j] or truncations[j]):
            replay_buffer.append((
                obs[j], rewards[j], terminations[j], truncations[j], next_obs[j]
            ))
        else:
            replay_buffer.append((
                obs[j], rewards[j], terminations[j], truncations[j], infos["final_observation"][j]
            ))

    obs = next_obs
```

However, over time, the development team has recognized the inefficiency of this approach (primarily due to the extensive use of a Python dictionary) and the annoyance of having to extract the final observation to train agents correctly, for [example](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/dqn.py#L200). Therefore, in v1.0.0, we are modifying autoreset to align with specialized vector-only projects like [EnvPool](https://github.com/sail-sg/envpool) and [SampleFactory](https://github.com/alex-petrenko/sample-factory) such that a sub-environment doesn't reset until the next step. This requires the following changes when sampling, particularly for environments with more complex observation (and action) spaces:

```python
import numpy as np

replay_buffer = []
obs, _ = envs.reset()
autoreset = np.zeros(envs.num_envs, dtype=bool)
for _ in range(total_timesteps):
    next_obs, rewards, terminations, truncations, _ = envs.step(envs.action_space.sample())

    for j in range(envs.num_envs):
        if not autoreset[j]:
            replay_buffer.append((
                obs[j], rewards[j], terminations[j], truncations[j], next_obs[j]
            ))

    obs = next_obs
    autoreset = np.logical_or(terminations, truncations)
```

Finally, we have improved the `AsyncVectorEnv.set_attr` and `SyncVectorEnv.set_attr` functions to use `Wrapper.set_wrapper_attr`, allowing users to set variables anywhere in the environment stack if they already exist. Previously, this was not possible, and users could only modify the variable in the "top" wrapper on the environment stack, importantly not the actual environment itself.

Wrappers
Previously, some wrappers could support both environments and vector environments; however, this was not standardized, and it was unclear which wrappers did and didn't support vector environments. For v1.0.0, with `Env` and `VectorEnv` separated to no longer inherit from each other (read more in the vector section), the wrappers in `gymnasium.wrappers` only support standard environments, while `gymnasium.wrappers.vector` contains the provided specialized vector wrappers (most, but not all, wrappers are supported; please raise a feature request if you require one).

In v0.29, we deprecated the `Wrapper.__getattr__` function, to be replaced by `Wrapper.get_wrapper_attr`, providing access to variables anywhere in the environment stack. In v1.0.0, we have added `Wrapper.set_wrapper_attr` as an equivalent function for setting a variable anywhere in the environment stack if it already exists; otherwise, the variable is set in the top wrapper (or environment).
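The stack traversal behind such a setter can be sketched as follows; this is an illustration of the semantics just described, not Gymnasium's actual `Wrapper` code:

```python
class SketchWrapper:
    """Hypothetical minimal wrapper holding an inner `env` attribute."""

    def __init__(self, env):
        self.env = env

    def set_wrapper_attr(self, name, value):
        # Walk from this (outer) wrapper towards the base environment, setting
        # the attribute on the first object that already defines it...
        obj = self
        while isinstance(obj, SketchWrapper):
            if name in vars(obj):
                setattr(obj, name, value)
                return
            obj = obj.env
        if hasattr(obj, name):
            setattr(obj, name, value)  # found on the base environment
        else:
            # ...otherwise fall back to setting it on the outermost wrapper.
            setattr(self, name, value)
```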

Most significantly, we have removed, renamed, and added several wrappers listed below.
* Removed wrappers
- `monitoring.VideoRecorder` - The replacement wrapper is `RecordVideo`
- `StepAPICompatibility` - We expect all Gymnasium environments to use the terminated / truncated step API; therefore, users shouldn't need the `StepAPICompatibility` wrapper. [Shimmy](https://shimmy.farama.org/) includes compatibility environments to convert Gym-API environments for Gymnasium.
* Renamed wrappers (We wished to make wrapper naming consistent; therefore, we have removed "Wrapper" from all wrapper names and included "Observation", "Action" and "Reward" within wrapper names where appropriate)
- `AutoResetWrapper` -> `Autoreset`
- `FrameStack` -> `FrameStackObservation`
- `PixelObservationWrapper` -> `AddRenderObservation`
* Moved wrappers (All vector wrappers are in `gymnasium.wrappers.vector`)
- `VectorListInfo` -> `vector.DictInfoToList`
* Added wrappers
- `DelayObservation` - Adds a delay to the returned observation
- `DtypeObservation` - Modifies the dtype of an environment’s observation space
- `MaxAndSkipObservation` - Will skip `n` observations and will max over the last 2 observations, inspired by the Atari environment heuristic for other environments
- `StickyAction` - Randomly repeats the previous action with a given probability for a step. Inspired by Atari environment heuristics
- `JaxToNumpy` - Converts a Jax-based environment to use Numpy-based input and output data for `reset`, `step`, etc
- `JaxToTorch` - Converts a Jax-based environment to use PyTorch-based input and output data for `reset`, `step`, etc
- `NumpyToTorch` - Converts a Numpy-based environment to use PyTorch-based input and output data for `reset`, `step`, etc
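As an illustration of one of these mechanics, the delayed-observation idea behind a wrapper like `DelayObservation` can be sketched with a simple queue (this is an illustration, not the wrapper's actual implementation):

```python
from collections import deque


class DelayBuffer:
    """Return items `delay` pushes late, yielding `fill` until enough items arrive."""

    def __init__(self, delay, fill):
        self.buffer = deque([fill] * delay)

    def push(self, obs):
        self.buffer.append(obs)       # newest observation enters the queue
        return self.buffer.popleft()  # oldest (delayed) observation leaves
```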

For all wrappers, we have added example code documentation and a changelog to help future researchers understand any changes made. See the following [page](https://gymnasium.farama.org/main/api/wrappers/misc_wrappers/#gymnasium.wrappers.TimeLimit) for an example.

Functional environments
One of the substantial advantages of Gymnasium's `Env` is that it generally requires minimal information about the underlying environment specification; however, this can make applying such environments to planning, search algorithms, and theoretical investigations more difficult. We are proposing `FuncEnv` as an alternative to `Env` that is closer to a Markov decision process definition, exposing more functions to the user, including the observation, reward, and termination functions, along with the environment's raw state as a single object.

```python
from typing import Any

import gymnasium as gym
from gymnasium.functional import StateType, ObsType, ActType, RewardType, TerminalType, Params


class ExampleFuncEnv(gym.functional.FuncEnv):
    def initial(self, rng: Any, params: Params | None = None) -> StateType:
        ...

    def transition(self, state: StateType, action: ActType, rng: Any, params: Params | None = None) -> StateType:
        ...

    def observation(self, state: StateType, params: Params | None = None) -> ObsType:
        ...

    def reward(
        self, state: StateType, action: ActType, next_state: StateType, params: Params | None = None
    ) -> RewardType:
        ...

    def terminal(self, state: StateType, params: Params | None = None) -> TerminalType:
        ...
```


`FuncEnv` requires the `initial` and `transition` functions to return a new state given their inputs, as a partial implementation of `Env.reset` and `Env.step`. As a result, users can sample (and save) the next state for a range of inputs to use with planning, searching, etc. Given a state, `observation`, `reward`, and `terminal` provide users with explicit definitions to understand how each can affect the environment's output.
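As a sketch of why explicit transition and reward functions help with planning, consider a toy MDP written in this functional style (hypothetical functions, not part of Gymnasium):

```python
def transition(state, action):
    """Toy deterministic dynamics: move right (+1) or left (-1) on a number line."""
    return state + (1 if action == 1 else -1)


def reward(state, action, next_state):
    """Reward 1 only when the next state is the goal state 3."""
    return 1.0 if next_state == 3 else 0.0


def best_action(state, actions=(0, 1)):
    # One-step lookahead: with explicit transition/reward functions we can
    # evaluate candidate actions without stepping a stateful Env object.
    return max(actions, key=lambda a: reward(state, a, transition(state, a)))
```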

Additional bug fixes
* Limit the cython version for `gymnasium[mujoco-py]` due to cython==3 issues by pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/616)
* Fix `MuJoCo` environment type issues by Kallinteris-Andreas (https://github.com/Farama-Foundation/Gymnasium/pull/612)
* Fix mujoco rendering with custom width values by logan-dunbar (https://github.com/Farama-Foundation/Gymnasium/pull/634)
* Fix environment checker to correctly report infinite bounds by chrisyeh96 (https://github.com/Farama-Foundation/Gymnasium/pull/708)
* Fix type hint for `register(kwargs)` from `**kwargs` to `kwargs: dict | None = None` by younik (https://github.com/Farama-Foundation/Gymnasium/pull/788)
* Fix `CartPoleVectorEnv` step counter to be set back to zero on `reset` by TimSchneider42 (https://github.com/Farama-Foundation/Gymnasium/pull/886)
* Fix registration for async vector environment for custom environments by RedTachyon (https://github.com/Farama-Foundation/Gymnasium/pull/810)

Additional new features
* New MuJoCo v5 environments (the changes and performance graphs will be included in a separate blog post) by Kallinteris-Andreas (https://github.com/Farama-Foundation/Gymnasium/pull/572)
* Add support in MuJoCo human rendering to changing the size of the viewing window by logan-dunbar (https://github.com/Farama-Foundation/Gymnasium/pull/635)
* Add more control in MuJoCo rendering over offscreen dimensions and scene geometries by guyazran (https://github.com/Farama-Foundation/Gymnasium/pull/731)
* Add support to handle `NamedTuples` in `JaxToNumpy`, `JaxToTorch` and `NumpyToTorch` by RogerJL (https://github.com/Farama-Foundation/Gymnasium/pull/789) and pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/811)
* Add `padding_type` parameter to `FrameStackObservation` to select the padding observation by jamartinh (https://github.com/Farama-Foundation/Gymnasium/pull/830)
* Add render check to `check_environments_match` by Kallinteris-Andreas (https://github.com/Farama-Foundation/Gymnasium/pull/748)

Deprecation
* Remove unnecessary error classes in error.py by pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/801)
* Stop exporting MuJoCo v2 environment classes from `gymnasium.envs.mujoco` by Kallinteris-Andreas (https://github.com/Farama-Foundation/Gymnasium/pull/827)
* Remove deprecation warning from PlayPlot by pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/800)

Documentation changes
* Updated the custom environment tutorial for v1.0.0 by kir0ul (https://github.com/Farama-Foundation/Gymnasium/pull/709)
* Add swig to installation instructions for Box2D by btjanaka (https://github.com/Farama-Foundation/Gymnasium/pull/683)
* Add tutorial Load custom quadruped robot environments using `Gymnasium/MuJoCo/Ant-v5` framework by Kallinteris-Andreas (https://github.com/Farama-Foundation/Gymnasium/pull/838)
* Add third-party tutorial page to list tutorials written and hosted on other websites by pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/867)
* Add more introductory pages by pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/791)
* Add figures for each MuJoCo environment representing its action space by Kallinteris-Andreas (https://github.com/Farama-Foundation/Gymnasium/pull/762)
* Fix the documentation on blackjack's starting state by pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/893)
* Fix the documentation on FrozenLake and CliffWalking's positions by PierreCounathe (https://github.com/Farama-Foundation/Gymnasium/pull/695)
* Update the classic control environment's `__init__` and `reset` arguments by pseudo-rnd-thoughts (https://github.com/Farama-Foundation/Gymnasium/pull/898)

**Full Changelog**: https://github.com/Farama-Foundation/Gymnasium/compare/v0.29.0...v1.0.0a1

0.29.1

A minimal release that fixes a warning produced by `Wrapper.__getattr__`.
In particular, this function will be removed in v1.0.0; however, the previously reported solution was incorrect, and the updated solution still caused the warning to show (due to technical Python reasons).

Changes
* The `Wrapper.__getattr__` warning reported the incorrect new function, `get_attr`, rather than `get_wrapper_attr`
* When using `get_wrapper_attr`, the `__getattr__` warning was still raised because `get_wrapper_attr` uses `hasattr`, which under the hood uses `__getattr__`. This has been updated to remove the unintended warning.
* Add warning to `VectorEnvWrapper.__getattr__` to specify that it is also deprecated in v1.0.0
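The `hasattr` / `__getattr__` interaction behind the unintended warning can be demonstrated in plain Python:

```python
class Demo:
    """Any failed attribute lookup routes through __getattr__, so even a
    probe like hasattr() triggers it (and any warning it would emit)."""

    calls = 0

    def __getattr__(self, name):
        Demo.calls += 1
        raise AttributeError(name)


hasattr(Demo(), "missing")  # probes the attribute via __getattr__
```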

**Full Changelog**: https://github.com/Farama-Foundation/Gymnasium/compare/v0.29.0...v0.29.1

0.29.0

We finally have a software citation for Gymnasium, with the plan to release an associated paper after v1.0. Thank you to all the contributors over the last 3 years who have helped Gym and Gymnasium (https://github.com/Farama-Foundation/Gymnasium/pull/590).

@misc{towers_gymnasium_2023,
title = {Gymnasium},
url = {https://zenodo.org/record/8127025},
abstract = {An API standard for single-agent reinforcement learning environments, with popular reference environments and related utilities (formerly Gym)},
urldate = {2023-07-08},
publisher = {Zenodo},
author = {Towers, Mark and Terry, Jordan K. and Kwiatkowski, Ariel and Balis, John U. and Cola, Gianluca de and Deleu, Tristan and Goulão, Manuel and Kallinteris, Andreas and KG, Arjun and Krimmel, Markus and Perez-Vicente, Rodrigo and Pierré, Andrea and Schulhoff, Sander and Tai, Jun Jet and Shen, Andrew Tan Jin and Younis, Omar G.},
month = mar,
year = {2023},
doi = {10.5281/zenodo.8127026},
}


Gymnasium now has a [conda package](https://github.com/conda-forge/gymnasium-feedstock), installable with `conda install gymnasium`. Thanks to ChristofKaufmann for completing this.

Breaking Changes
* Drop support for Python 3.7 which has reached its end of life support by Kallinteris-Andreas in https://github.com/Farama-Foundation/Gymnasium/pull/573
* Update MuJoCo Hopper & Walker2D models to work with MuJoCo >= 2.3.3 by Kallinteris-Andreas in https://github.com/Farama-Foundation/Gymnasium/pull/589
* Add deprecation warnings to several features which will be removed in v1.0: `Wrapper.__get_attr__`, `gymnasium.make(..., autoreset=True)`, `gymnasium.make(..., apply_api_compatibility=True)`, `Env.reward_range` and `gymnasium.vector.make`. For their proposed replacement, see https://github.com/Farama-Foundation/Gymnasium/pull/535
* Raise error for `Box` bounds of `low > high`, `low == inf` and `high == -inf` by jjshoots in https://github.com/Farama-Foundation/Gymnasium/pull/495
* Add dtype testing for NumPy Arrays in `data_equivalence()` by pseudo-rnd-thoughts in https://github.com/Farama-Foundation/Gymnasium/pull/515
* Remove [Jumpy](https://github.com/farama-Foundation/jumpy) from gymnasium wrappers as it was partially implemented with limited testing and usage by pseudo-rnd-thoughts in https://github.com/Farama-Foundation/Gymnasium/pull/548
* Update project require for `jax>=0.4` by charraut in https://github.com/Farama-Foundation/Gymnasium/pull/373

New Features
* Remove the restrictions on pygame version, `pygame>=2.1.3` by pseudo-rnd-thoughts in https://github.com/Farama-Foundation/Gymnasium/pull/558
* Adding `start` parameter to `MultiDiscrete` space, similar to the `Discrete(..., start)` parameter by Rayerdyne in https://github.com/Farama-Foundation/Gymnasium/pull/557
* Adds testing to `check_env` that closing a closed environment doesn't raise an error by pseudo-rnd-thoughts in https://github.com/Farama-Foundation/Gymnasium/pull/564
* On initialisation `wrapper.RecordVideo` throws an error if the environment has an invalid render mode `(None, "human", "ansi")` by robertoschiavone in https://github.com/Farama-Foundation/Gymnasium/pull/580
* Add `MaxAndSkipObservation` wrapper by LucasAlegre in https://github.com/Farama-Foundation/Gymnasium/pull/561
* Add `check_environments_match` function for checking if two environments are identical by Kallinteris-Andreas in https://github.com/Farama-Foundation/Gymnasium/pull/576
* Add performance debugging utilities, `utils/performance.py` by Kallinteris-Andreas in https://github.com/Farama-Foundation/Gymnasium/pull/583
* Added Jax based cliff walking environment by balisujohn in https://github.com/Farama-Foundation/Gymnasium/pull/407
* MuJoCo
* Add support for relative paths with `xml_file` arguments by Kallinteris-Andreas in https://github.com/Farama-Foundation/Gymnasium/pull/536
* Add support for environments to specify `info` in `reset` by Kallinteris-Andreas in https://github.com/Farama-Foundation/Gymnasium/pull/540
* Remove requirement of environments defining `metadata["render_fps"]`, the value is determined on `__init__` using `dt` by Kallinteris-Andreas in https://github.com/Farama-Foundation/Gymnasium/pull/525
* Experimental
* Add deprecated wrapper error in `gymnasium.experimental.wrappers` by charraut in https://github.com/Farama-Foundation/Gymnasium/pull/341
* Add `fps` argument to `RecordVideoV0` for custom fps value that overrides an environment's internal `render_fps` value by younik in https://github.com/Farama-Foundation/Gymnasium/pull/503
* Add experimental vector wrappers for lambda observation, action and reward wrappers by pseudo-rnd-thoughts in https://github.com/Farama-Foundation/Gymnasium/pull/444

Bug Fixes
* Fix `spaces.Dict.keys()` as `key in keys` was False by pseudo-rnd-thoughts in https://github.com/Farama-Foundation/Gymnasium/pull/608
* Updates the action space of `wrappers.RescaleAction` based on the bounds by mmcaulif in https://github.com/Farama-Foundation/Gymnasium/pull/569
* Remove warnings in the passive environment checker for infinite Box bounds by pseudo-rnd-thoughts in https://github.com/Farama-Foundation/Gymnasium/pull/435
* Revert Lunar Lander Observation space change by alexdlukens in https://github.com/Farama-Foundation/Gymnasium/pull/512
* Fix URL links in `check_env` by robertoschiavone in https://github.com/Farama-Foundation/Gymnasium/pull/554
* Update `shimmy[gym]` to `shimmy[gym-v21]` or `shimmy[gym-v26]` by elliottower in https://github.com/Farama-Foundation/Gymnasium/pull/433
* Fix several issues within the experimental vector environment and wrappers by pseudo-rnd-thoughts in https://github.com/Farama-Foundation/Gymnasium/pull/516
* Video recorder wrapper
* Fix `VideoRecorder` on `reset` to empty `recorded_frames` rather than `frames` by voidflight in https://github.com/Farama-Foundation/Gymnasium/pull/518
* Remove `Env.close` in `VideoRecorder.close` by qgallouedec in https://github.com/Farama-Foundation/Gymnasium/pull/533
* Fix `VideoRecorder` and `RecordVideoV0` to move `import moviepy` such that `__del__` doesn't raise `AttributeErrors` by pseudo-rnd-thoughts in https://github.com/Farama-Foundation/Gymnasium/pull/553
* Mujoco
* Remove Hopper-v4's old render API func by Kallinteris-Andreas in https://github.com/Farama-Foundation/Gymnasium/pull/588
* Fix TypeError when closing rendering by sonelu in (https://github.com/Farama-Foundation/Gymnasium/pull/440)
* Fix the wrong `nstep` in `_step_mujoco_simulation` function of `MujocoEnv` by xuanhien070594 in https://github.com/Farama-Foundation/Gymnasium/pull/424
* Allow a different number of actuator control from the action space by reginald-mclean in https://github.com/Farama-Foundation/Gymnasium/pull/604

Documentation Updates
* Allow users to view source code of referenced objects on the website by pseudo-rnd-thoughts in https://github.com/Farama-Foundation/Gymnasium/pull/497
* Update website homepage by elliottower in https://github.com/Farama-Foundation/Gymnasium/pull/482
* Make atari documentation consistent by pseudo-rnd-thoughts in https://github.com/Farama-Foundation/Gymnasium/pull/418 and add missing descriptions by dylwil3 in https://github.com/Farama-Foundation/Gymnasium/pull/510
* Add third party envs: safety gymnasium, pyflyt, Gym-Trading-Env, stable-retro, DACBench, gym-cellular-automata by elliottower, stefanbschneider, ClementPerroud, jjshoots, MatPoliquin, and robertoschiavone in 450, 451, 474, 487, 529, 538, 581
* Update MuJoCo documentation for all environments and base mujoco environment by Kallinteris-Andreas in 524, 522
* Update CartPole reward documentation to clarify different maximum rewards for v0 and v1 by robertoschiavone in https://github.com/Farama-Foundation/Gymnasium/pull/429
* Clarify Frozen lake time limit for `FrozenLake4x4` and `FrozenLake8x8` environments by yaniv-peretz in https://github.com/Farama-Foundation/Gymnasium/pull/459
* Typo in the documentation for single_observation_space by kvrban in https://github.com/Farama-Foundation/Gymnasium/pull/491
* Fix the rendering of warnings on the website by helpingstar in https://github.com/Farama-Foundation/Gymnasium/pull/520

**Full Changelog**: https://github.com/Farama-Foundation/Gymnasium/compare/v0.28.1...v0.29.0

0.28.1

Small emergency release to fix several issues

* Fixed `gymnasium.vector` not being exported in `gymnasium/__init__.py` as it wasn't imported https://github.com/Farama-Foundation/Gymnasium/pull/403
* Update third party envs to separate environments that support gymnasium and gym and have a consistent style https://github.com/Farama-Foundation/Gymnasium/pull/404
* Update the documentation for v0.28, as the frontpage gif had the wrong link and the experimental documentation was missing, and add Gym release notes https://github.com/Farama-Foundation/Gymnasium/pull/405

**Full Changelog**: https://github.com/Farama-Foundation/Gymnasium/compare/v0.28.0...v0.28.1
