Mani-skill

Latest version: v3.0.0b15


0.4.1

Highlights
- Improve documentation (docker, challenge submission)
- Update tutorials (add missing dependencies and fix links)
- Fix a missing file for `Hang-v0` in the wheel

What's Changed
* fix link to point to main branch by StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/61
* Update docs by Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/63
* Update 2_reinforcement_learning.ipynb by StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/64
* Fix missing asset in setup and remove unused pkl by Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/66
* fix bugs with submission docker by StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/65


**Full Changelog**: https://github.com/haosulab/ManiSkill2/compare/v0.4.0...v0.4.1

0.4.0

ManiSkill2 v0.4.0 Release Notes

ManiSkill2 v0.4.0 introduces many new features and makes it easier to get started with robot learning. Here are the highlights:
- New vectorized environments supported by the RPC-based render system (`sapien.RenderServer` and `sapien.RenderClient`).
- The renderer is significantly improved. `sapien.VulkanRenderer` and `sapien.KuafuRenderer` are merged into a unified renderer `sapien.SapienRenderer`.
- Hands-on tutorials are provided for new users. Most of them can run on Google Colab.
- `mani_skill2` is a pip-installable package now!
- Documentation is improved. The descriptions of environments are improved and their thumbnails are added.
- We experimentally support adding visual backgrounds and enabling realistic stereo depth cameras.
- Customizing environments (e.g., configuring cameras) is now easier!

> Given the many new features, ManiSkill2 has been substantially refactored, which leads to many breaking changes between v0.3.0 and v0.4.0. Migration instructions are presented below.

New Features

Installation

Installation is now as simple as `pip install mani-skill2`.

> Note that to fully uninstall `mani_skill2`, you might need to manually remove the generated cache files.

We include some examples in the package.

```bash
# Example with random actions. Can be used to test the installation.
python -m mani_skill2.examples.demo_random_action
# Interactive play
python -m mani_skill2.examples.demo_manual_control -e PickCube-v0
```


https://user-images.githubusercontent.com/17827258/218015297-b85d0b49-a5b4-4496-bc4e-bb03162080f3.mp4

Vectorized Environments

We provide an implementation of vectorized environments (for rigid-body environments) powered by the SAPIEN RPC-based render server-client system.

```python
from mani_skill2.vector import VecEnv, make

env: VecEnv = make("PickCube-v0", num_envs=4)
```


Please see `mani_skill2.examples.demo_vec_env` for an example: `python -m mani_skill2.examples.demo_vec_env -e PickCube-v0 -n 4`.
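A minimal interaction sketch (assuming `VecEnv` follows the usual vectorized Gym pattern of batched observations and batched step results; the per-environment `action_space` sampling below is an assumption, so treat `demo_vec_env` as the authoritative reference):

```python
import numpy as np

from mani_skill2.vector import VecEnv, make

env: VecEnv = make("PickCube-v0", num_envs=4)
obs = env.reset()  # batched observations, one entry per sub-environment
for _ in range(100):
    # sample one action per sub-environment (assumes a per-env action space)
    actions = np.stack([env.action_space.sample() for _ in range(4)])
    obs, rewards, dones, infos = env.step(actions)
env.close()
```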

We provide examples of using our `VecEnv` with [Stable-Baselines3](https://stable-baselines3.readthedocs.io/en/master/) at https://github.com/haosulab/ManiSkill2/blob/main/examples/tutorials/2_reinforcement_learning.ipynb and https://github.com/haosulab/ManiSkill2/tree/main/examples/tutorials/reinforcement-learning

![FPS](https://user-images.githubusercontent.com/17827258/218010717-3ceb0529-9648-4f90-a4e3-dd4f0ce6cc32.png)

Improved Renderer

It is easier to enable ray tracing:

```python
import gym
import mani_skill2.envs  # registers ManiSkill2 environments in gym

# Enable ray tracing by changing shaders
env = gym.make("PickCube-v0", shader_dir="rt")
```


> v0.3.0 experimentally supported ray tracing via `KuafuRenderer`. v0.4.0 uses `SapienRenderer` instead to provide a more seamless experience. Ray tracing is not yet supported for soft-body environments.
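A quick way to eyeball the ray-traced output is to grab a single rendered frame. This is a minimal sketch assuming the Gym-style `render` interface with an `"rgb_array"` mode:

```python
import gym
import mani_skill2.envs  # registers ManiSkill2 environments in gym

env = gym.make("PickCube-v0", shader_dir="rt")
env.reset()
frame = env.render(mode="rgb_array")  # ray-traced RGB frame as a NumPy array
print(frame.shape)
env.close()
```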

Colab Tutorials

[![Quickstart](https://img.shields.io/badge/quickstart-blue.svg?logo=googlecolab)](https://colab.research.google.com/github/haosulab/ManiSkill2/blob/main/examples/tutorials/1_quickstart.ipynb) [![Reinforcement learning](https://img.shields.io/badge/reinforcement%20learning-blue.svg?logo=googlecolab)](https://colab.research.google.com/github/haosulab/ManiSkill2/blob/main/examples/tutorials/2_reinforcement_learning.ipynb) [![Imitation learning](https://img.shields.io/badge/imitation%20learning-blue.svg?logo=googlecolab)](https://colab.research.google.com/github/haosulab/ManiSkill2/blob/main/examples/tutorials/3_imitation_learning.ipynb)

![colab](https://user-images.githubusercontent.com/17827258/218015320-2e4b5162-310f-42b8-8272-4ffe9b04cd01.png)

Camera Configurations

It is easier to change camera configurations in v0.4.0:

```python
import gym
import mani_skill2.envs  # registers ManiSkill2 environments in gym

# Change camera resolutions
env = gym.make(
    "PickCube-v0",
    # only change "base_camera" and keep other cameras for observations unchanged
    camera_cfgs=dict(base_camera=dict(width=320, height=240)),
    # change the resolution of all cameras used for visualization
    render_camera_cfgs=dict(width=640, height=480),
)
```


To include ground-truth (GT) segmentation masks for all cameras in observations, set `add_segmentation=True` in `camera_cfgs` when initializing an environment.

```python
# Add segmentation masks to observations (equivalent to adding a Segmentation texture for each camera)
env = gym.make("PickCube-v0", camera_cfgs=dict(add_segmentation=True))
```


> v0.3.0 uses `gym.make(..., enable_gt_seg=True)` to enable GT segmentation masks (`visual_seg` and `actor_seg`). v0.4.0 uses `env = gym.make(..., camera_cfgs=dict(add_segmentation=True))` instead. In addition, observations now contain a single `Segmentation` texture, where `Segmentation[..., 0:1] == visual_seg` and `Segmentation[..., 1:2] == actor_seg`.
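If your code still expects the two old masks, you can slice the new texture apart. This is a sketch assuming the `obs["image"][camera_name]` layout of the image observation modes:

```python
import gym
import mani_skill2.envs

env = gym.make("PickCube-v0", obs_mode="rgbd", camera_cfgs=dict(add_segmentation=True))
obs = env.reset()
seg = obs["image"]["base_camera"]["Segmentation"]
visual_seg = seg[..., 0:1]  # the old `visual_seg` (mesh-level IDs)
actor_seg = seg[..., 1:2]   # the old `actor_seg` (actor-level IDs)
```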

More examples can be found at https://github.com/haosulab/ManiSkill2/blob/main/examples/tutorials/customize_environments.ipynb

Visual Background

We experimentally support adding visual backgrounds.

```python
# Download the background asset first: python -m mani_skill2.utils.download_asset minimal_bedroom
env = gym.make("PickCube-v0", bg_name="minimal_bedroom")
```


Stereo Depth Camera

We experimentally support realistic stereo depth cameras.

```python
env = gym.make(
    "PickCube-v0",
    obs_mode="rgbd",
    shader_dir="rt",
    camera_cfgs={"use_stereo_depth": True, "height": 512, "width": 512},
)
```
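Continuing from the environment created above, the simulated stereo depth then shows up in observations like an ordinary depth map (a sketch; the camera name and key layout are assumptions based on the standard RGB-D observation structure):

```python
obs = env.reset()
depth = obs["image"]["base_camera"]["depth"]  # stereo depth map for the base camera
print(depth.shape, depth.dtype)
```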


Breaking Changes

Assets

`mani_skill2` is pip-installable. The basic assets (the robot description of the Panda arm, PartNet-Mobility metadata, and essential assets for soft-body environments) are located in `mani_skill2/assets` and packed into the pip wheel. Task-specific assets need to be downloaded separately; these extra assets are downloaded to `./data` by default.

- Improve the script to download assets: `python -m mani_skill2.utils.download_asset ${ASSET_UID/ENV_ID}`. The positional argument can be a UID of the asset, an environment ID, or "all".
> `mani_skill2.utils.download` (v0.3.0) is renamed to `mani_skill2.utils.download_asset` (v0.4.0).
```bash
# Download YCB object models
python -m mani_skill2.utils.download_asset ycb
# Download the required assets for PickSingleYCB-v0, which are just the YCB object models
python -m mani_skill2.utils.download_asset PickSingleYCB-v0
```

- When `mani_skill2` is imported, it uses the environment variable `MS2_ASSET_DIR` to decide where assets are stored, defaulting to `./data` if unset. The same variable also controls where assets are downloaded; see the sketch below.
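For example, to keep assets in a shared location, the variable can be set before the import (a sketch; the path is illustrative):

```python
import os

# Must be set before importing mani_skill2, since the variable is read at import time
os.environ["MS2_ASSET_DIR"] = "/shared/ms2_assets"  # illustrative path

import mani_skill2  # picks up MS2_ASSET_DIR for asset storage and downloads
```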

Demonstrations

We add a script to download demonstrations: `python -m mani_skill2.utils.download_demo ${ENV_ID} -o ${DEMO_DIR}`.

> There are some minor changes to the file structure, but no updates to the data itself.

Observations

The observation modes that include robot segmentation masks are renamed to `pointcloud+robot_seg` and `rgbd+robot_seg` from `pointcloud_robot_seg` and `rgbd_robot_seg`.

> v0.3.0 uses `xxx_robot_seg` while v0.4.0 uses `xxx+robot_seg`. However, the implementation only checks for the keyword `robot_seg`, so existing code will not be broken by this change.

For RGB-D observations, we move all camera parameters from the key `image` to a new key `camera_param`. Please see https://haosulab.github.io/ManiSkill2/concepts/observation.html#image for more details.

> In v0.3.0, camera parameters are stored within `obs["image"]`. In v0.4.0, there is a separate key, `obs["camera_param"]`, for camera parameters. This makes it easier for users to discard camera parameters if they do not need them.
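Concretely, reading camera parameters now looks like this (a sketch; `intrinsic_cv` is one of the per-camera entries described in the linked docs, but verify the exact key names there):

```python
obs = env.reset()  # an environment created with obs_mode="rgbd"
images = obs["image"]             # per-camera image data only
cam_params = obs["camera_param"]  # camera parameters, now separated out
K = cam_params["base_camera"]["intrinsic_cv"]  # e.g., the 3x3 intrinsic matrix
```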

Fixes

- Fix undefined behavior due to `solver_velocity_iterations=0`
- Fix paths to download assets of "PickClutterYCB-v0", "OpenCabinetDrawer-v1", "OpenCabinetDoor-v1"

Pull Requests
* track order in h5py files to make stored 'obs' key data be consistent with order in env observations by StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/48
* Add python api to download demonstrations and fix gdown bug for large file downloads by StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/45
* README download path "rigid/soft_body_envs" -> "rigid/soft_body" by xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/55
* fix PickClutter bug where obj_start_pos is not an np array by xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/58
* v0.4.0: SapienRenderer, vectorized environments, pip wheel and other new features by Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/57
* gpu runtime specification. by StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/60
* 0.4.0 patch by Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/59


**Full Changelog**: https://github.com/haosulab/ManiSkill2/compare/v0.3.0...v0.4.0

0.3.0

Added
* Add soft-body envs: `Pinch-v0` and `Write-v0`
* Add `PickClutterYCB-v0`
* Migrate all ManiSkill1 environments

Breaking Changes
* `download` and `replay_trajectory` are moved from `tools` to `mani_skill2.utils` and `mani_skill2.trajectory`, respectively, so that these utilities can be called from other directories.
* Change the pose of the base camera for pick-and-place environments, so that RGB-D-based approaches can observe goal positions more easily.

Other Changes
* We call `self.seed(2022)` in `sapien_env::BaseEnv.__init__` to improve reproducibility.
* Refactor evaluation
* Improve the error message when assets are missing

What's Changed
* Fix saving state in RecordEpisode wrapper & Update README by tongzhoumu in https://github.com/haosulab/ManiSkill2/pull/29
* Fixed edge case handling in RecordEpisode wrapper by xiqiangliu in https://github.com/haosulab/ManiSkill2/pull/31
* remove pickled trimesh object by fbxiang in https://github.com/haosulab/ManiSkill2/pull/37
* Refactor code structure for better user experience by Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/38
* Modify use-env-states description in README by xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/39

New Contributors
* tongzhoumu made their first contribution in https://github.com/haosulab/ManiSkill2/pull/29
* fabid made their first contribution in https://github.com/haosulab/ManiSkill2/pull/31

**Full Changelog**: https://github.com/haosulab/ManiSkill2/compare/v0.2.1...v0.3.0

0.2.1

What's Changed
* Added the option to download all assets by xiqiangliu in https://github.com/haosulab/ManiSkill2/pull/13
* update readme for downloading assets by xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/17
* Update readme by Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/22
* [Fix] Fix trajectory conversion to ee-based controllers by Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/23
* Experimentally support KuafuRenderer by Jiayuan-Gu in https://github.com/haosulab/ManiSkill2/pull/24
* Simplify pick single reward to be more friendly to RL by xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/27

Other Changes
* Fix `StackCube-v0` success metric
* Refactor `PickSingle` and `AssemblingKits`

New Contributors
* Jiayuan-Gu made their first contribution in https://github.com/haosulab/ManiSkill2/pull/22

**Full Changelog**: https://github.com/haosulab/ManiSkill2/compare/v0.2.0...v0.2.1

0.2.0

Added
- Support new observation modes: `rgbd_robot_seg` and `pointcloud_robot_seg`
- Support the `enable_gt_seg` option for environments
- Add two new rigid-body environments: `AssemblingKits-v0` and `PandaAvoidObstacles-v0`

Breaking Changes
- `TurnFaucet-v0`: Add `target_link_pos` to observations
- `PickSingleEGAD-v0`: Reduce the density of EGAD objects and update EGAD object information
- Remove `tcp_goal_pos` in PickCube, LiftCube, PickSingle
- Update TurnFaucet assets. **Need to re-download assets**
- Change segmentation images from 2-dim to 3-dim
- Replace `xyz` with `xyzw` in `obs["pointcloud"]`. We use the homogeneous representation to handle infinite points (beyond the far plane of the camera); see the sketch after this list.
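A sketch of recovering a finite point set from the homogeneous representation, assuming the convention that w is 1 for valid points and 0 for points beyond the far plane:

```python
# obs from an environment using a point-cloud observation mode
xyzw = obs["pointcloud"]["xyzw"]  # (N, 4) homogeneous points
valid = xyzw[..., 3] > 0.5        # w == 1 marks finite points (assumed convention)
xyz = xyzw[valid][..., :3]        # drop infinite points, back to (M, 3)
```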

Fixed
- `TurnFaucet-v0`: Cache the initial joint positions so that they will not be affected by previous episodes
- `Pour-v0`: Fix agent initialization typo
- `Excavate-v0`: Fix hand camera position and max number of particles

What's Changed
* Update README.md by StoneT2000 in https://github.com/haosulab/ManiSkill2/pull/1
* Update Dockerfile by xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/2
* Improved functionality of a few utility scripts by xiqiangliu in https://github.com/haosulab/ManiSkill2/pull/4
* Update base_env.py by xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/5
* Update README.md by xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/6
* Update README.md by xuanlinli17 in https://github.com/haosulab/ManiSkill2/pull/9
* Soft body patch by fbxiang in https://github.com/haosulab/ManiSkill2/pull/12

New Contributors
* StoneT2000 made their first contribution in https://github.com/haosulab/ManiSkill2/pull/1
* xuanlinli17 made their first contribution in https://github.com/haosulab/ManiSkill2/pull/2
* xiqiangliu made their first contribution in https://github.com/haosulab/ManiSkill2/pull/4
* fbxiang made their first contribution in https://github.com/haosulab/ManiSkill2/pull/12

**Full Changelog**: https://github.com/haosulab/ManiSkill2/commits/v0.2.0
