- New vectorized environments are supported, powered by the RPC-based render system (`sapien.RenderServer` and `sapien.RenderClient`).
- The renderer is significantly improved. `sapien.VulkanRenderer` and `sapien.KuafuRenderer` are merged into a unified renderer `sapien.SapienRenderer`.
- Hands-on tutorials are provided for new users. Most of them can run on Google Colab.
- `mani_skill2` is a pip-installable package now!
- Documentation is improved: environment descriptions are expanded and thumbnails are added.
- We experimentally support adding visual backgrounds and enabling realistic stereo depth cameras.
- Customizing environments (e.g., configuring cameras) is easier now!
> Given the many new features, we have refactored ManiSkill2, which leads to many changes between v0.3.0 and v0.4.0. Migration instructions are presented below.
## New Features

### Installation
Installation is now as simple as `pip install mani-skill2`.
> Note that to fully uninstall `mani_skill2`, you might need to manually remove the generated cache files.
We include some examples in the package:

```bash
# Run an example with random actions; can be used to test the installation
python -m mani_skill2.examples.demo_random_action

# Interactive play
python -m mani_skill2.examples.demo_manual_control -e PickCube-v0
```
https://user-images.githubusercontent.com/17827258/218015297-b85d0b49-a5b4-4496-bc4e-bb03162080f3.mp4
### Vectorized Environments
We provide an implementation of vectorized environments (for rigid-body environments) powered by the SAPIEN RPC-based render server-client system.
```python
from mani_skill2.vector import VecEnv, make

env: VecEnv = make("PickCube-v0", num_envs=4)
```
Please see `mani_skill2.examples.demo_vec_env` for an example: `python -m mani_skill2.examples.demo_vec_env -e PickCube-v0 -n 4`.
We provide examples of using our `VecEnv` with [Stable-Baselines3](https://stable-baselines3.readthedocs.io/en/master/) at https://github.com/haosulab/ManiSkill2/blob/main/examples/tutorials/2_reinforcement_learning.ipynb and https://github.com/haosulab/ManiSkill2/tree/main/examples/tutorials/reinforcement-learning
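A minimal rollout sketch, assuming a Stable-Baselines3-style vectorized interface (batched actions, a `num_envs` attribute, and auto-reset on episode end):

```python
import numpy as np

from mani_skill2.vector import make

env = make("PickCube-v0", num_envs=4)
obs = env.reset()
for _ in range(100):
    # Sample one action per sub-environment and batch them together
    actions = np.stack([env.action_space.sample() for _ in range(env.num_envs)])
    obs, rewards, dones, infos = env.step(actions)
env.close()
```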
![FPS](https://user-images.githubusercontent.com/17827258/218010717-3ceb0529-9648-4f90-a4e3-dd4f0ce6cc32.png)
### Improved Renderer
It is easier to enable ray tracing:
```python
# Enable ray tracing by changing shaders
env = gym.make("PickCube-v0", shader_dir="rt")
```
> v0.3.0 experimentally supports ray tracing via `KuafuRenderer`. v0.4.0 uses `SapienRenderer` instead to provide a more seamless experience. Ray tracing is still not supported for soft-body environments.
### Colab Tutorials
[![Quickstart](https://img.shields.io/badge/quickstart-blue.svg?logo=googlecolab)](https://colab.research.google.com/github/haosulab/ManiSkill2/blob/main/examples/tutorials/1_quickstart.ipynb) [![Reinforcement learning](https://img.shields.io/badge/reinforcement%20learning-blue.svg?logo=googlecolab)](https://colab.research.google.com/github/haosulab/ManiSkill2/blob/main/examples/tutorials/2_reinforcement_learning.ipynb) [![Imitation learning](https://img.shields.io/badge/imitation%20learning-blue.svg?logo=googlecolab)](https://colab.research.google.com/github/haosulab/ManiSkill2/blob/main/examples/tutorials/3_imitation_learning.ipynb)
![colab](https://user-images.githubusercontent.com/17827258/218015320-2e4b5162-310f-42b8-8272-4ffe9b04cd01.png)
### Camera Configurations
It is easier to change camera configurations in v0.4.0:
```python
# Change camera resolutions
env = gym.make(
    "PickCube-v0",
    # only change "base_camera"; other cameras used for observations are unchanged
    camera_cfgs=dict(base_camera=dict(width=320, height=240)),
    # change the resolution of all cameras used for visualization
    render_camera_cfgs=dict(width=640, height=480),
)
```
To include ground-truth (GT) segmentation masks for all cameras in observations, set `add_segmentation=True` in `camera_cfgs` when initializing an environment.
```python
# Add segmentation masks to observations
# (equivalent to adding a Segmentation texture for each camera)
env = gym.make("PickCube-v0", camera_cfgs=dict(add_segmentation=True))
```
> v0.3.0 uses `gym.make(..., enable_gt_seg=True)` to enable GT segmentation masks (`visual_seg` and `actor_seg`). v0.4.0 uses `env = gym.make(..., camera_cfgs=dict(add_segmentation=True))`. In addition, observations now contain a `Segmentation` image instead, where `Segmentation[..., 0:1] == visual_seg` and `Segmentation[..., 1:2] == actor_seg`.
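For reference, a sketch of recovering the v0.3.0 masks from the combined `Segmentation` image; the per-camera key layout (e.g. `"base_camera"`) is an assumption here:

```python
import gym
import mani_skill2.envs  # registers ManiSkill2 environments

env = gym.make("PickCube-v0", obs_mode="rgbd", camera_cfgs=dict(add_segmentation=True))
obs = env.reset()
seg = obs["image"]["base_camera"]["Segmentation"]  # assumed per-camera layout
visual_seg = seg[..., 0:1]  # mesh-level IDs (v0.3.0 `visual_seg`)
actor_seg = seg[..., 1:2]   # actor-level IDs (v0.3.0 `actor_seg`)
```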
More examples can be found at https://github.com/haosulab/ManiSkill2/blob/main/examples/tutorials/customize_environments.ipynb
### Visual Background
We experimentally support adding visual backgrounds.
```python
# Download the background asset first:
#   python -m mani_skill2.utils.download_asset minimal_bedroom
env = gym.make("PickCube-v0", bg_name="minimal_bedroom")
```
### Stereo Depth Camera
We experimentally support realistic stereo depth cameras.
```python
env = gym.make(
    "PickCube-v0",
    obs_mode="rgbd",
    shader_dir="rt",
    camera_cfgs={"use_stereo_depth": True, "height": 512, "width": 512},
)
```
## Breaking Changes

### Assets
`mani_skill2` is pip-installable. The basic assets (the robot description of the Panda arm, PartNet-Mobility metadata, and essential assets for soft-body environments) are located at `mani_skill2/assets` and packed into the pip wheel. Task-specific assets need to be downloaded separately; these extra assets are downloaded to `./data` by default.
- The asset download script is improved: `python -m mani_skill2.utils.download_asset ${ASSET_UID/ENV_ID}`. The positional argument can be an asset UID, an environment ID, or "all".
> `mani_skill2.utils.download` (v0.3.0) is renamed to `mani_skill2.utils.download_asset` (v0.4.0).
```bash
# Download YCB object models
python -m mani_skill2.utils.download_asset ycb

# Download the assets required for PickSingleYCB-v0, which are just the YCB object models
python -m mani_skill2.utils.download_asset PickSingleYCB-v0
```
- When `mani_skill2` is imported, it uses the environment variable `MS2_ASSET_DIR` to decide where assets are stored; it defaults to `./data` if not specified. The same variable also controls where assets are downloaded.
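For example (the directory below is a placeholder):

```bash
# Store (and look up) extra assets under a custom directory
export MS2_ASSET_DIR=/path/to/assets
python -m mani_skill2.utils.download_asset ycb
```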
### Demonstrations
We add a script to download demonstrations: `python -m mani_skill2.utils.download_demo ${ENV_ID} -o ${DEMO_DIR}`.
> There are some minor changes to the file structure, but no updates to the data itself.
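For example, to download the demonstrations for `PickCube-v0` into a local `demos` directory (the output directory is arbitrary):

```bash
python -m mani_skill2.utils.download_demo PickCube-v0 -o demos
```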
### Observations
The observation modes that include robot segmentation masks are renamed from `pointcloud_robot_seg` and `rgbd_robot_seg` to `pointcloud+robot_seg` and `rgbd+robot_seg`.
> v0.3.0 uses `xxx_robot_seg` while v0.4.0 uses `xxx+robot_seg`. However, the implementation only checks for the keyword `robot_seg`, so existing code will not be broken by this change.
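For example, under the new naming:

```python
import gym
import mani_skill2.envs  # registers ManiSkill2 environments

env = gym.make("PickCube-v0", obs_mode="rgbd+robot_seg")
```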
For RGB-D observations, we move all camera parameters from the key `image` to a new key `camera_param`. Please see https://haosulab.github.io/ManiSkill2/concepts/observation.html#image for more details.
> In v0.3.0, camera parameters are within `obs["image"]`. In v0.4.0, there is a separate key `obs["camera_param"]` for camera parameters. This makes it easier for users to discard camera parameters if they do not need them.
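A sketch of the new access pattern (the exact sub-keys under each entry are described in the documentation linked above):

```python
import gym
import mani_skill2.envs  # registers ManiSkill2 environments

env = gym.make("PickCube-v0", obs_mode="rgbd")
obs = env.reset()
images = obs["image"]                # per-camera RGB-D images
camera_params = obs["camera_param"]  # camera parameters, now under their own key
obs.pop("camera_param")              # e.g., drop them if you do not need them
```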
## Fixes
- Fix undefined behavior due to `solver_velocity_iterations=0`
- Fix paths to download assets of "PickClutterYCB-v0", "OpenCabinetDrawer-v1", "OpenCabinetDoor-v1"
## Pull Requests
* track order in h5py files to make stored 'obs' key data be consistent with order in env observations by StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/48
* Add python api to download demonstrations and fix gdown bug for large file downloads by StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/45
* README download path "rigid/soft_body_envs" -> "rigid/soft_body" by xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/55
* fix PickClutter bug where obj_start_pos is not an np array by xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/58
* v0.4.0: SapienRenderer, vectorized environments, pip wheel and other new features by Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/57
* gpu runtime specification. by StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/60
* 0.4.0 patch by Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/59
**Full Changelog**: https://github.com/haosulab/ManiSkill2/compare/v0.3.0...v0.4.0