MMPose

Latest version: v1.3.2

1.3.1

Fix a bug that occurred when downloading configs and checkpoints with `mim` (see [Issue 2918](https://github.com/open-mmlab/mmpose/issues/2918)).
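For reference, a config/checkpoint download via `mim` looks roughly like the sketch below (a minimal sketch assuming `mim`'s Python `download` API; the config name is just an arbitrary model zoo entry):

```python
# A minimal sketch of downloading an MMPose config and checkpoint with mim.
# Assumes mim's Python `download` API; the config name is an arbitrary
# example from the MMPose model zoo.
from mim import download

# Downloads the config file and its matching checkpoint into dest_root.
download('mmpose',
         ['td-hm_hrnet-w32_8xb64-210e_coco-256x192'],
         dest_root='.')
```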

1.3.0

RTMO

We are excited to release [RTMO](https://github.com/open-mmlab/mmpose/tree/main/projects/rtmo):
- RTMO is the first one-stage pose estimation method that achieves both high accuracy and real-time speed.
- It performs especially well in crowded scenes, achieving 83.8% AP on the CrowdPose test set.
- RTMO is easy to run for inference and deployment; it does not require an extra human detector (see the sketch below).
- Try it online with this [demo](https://openxlab.org.cn/apps/detail/mmpose/RTMPose) by choosing `rtmo | body`.
- The paper is available on [arXiv](https://arxiv.org/abs/2312.07526).
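As a minimal sketch of how RTMO inference might look with the unified `MMPoseInferencer` (assuming the `rtmo` model alias is registered in your installed MMPose version):

```python
# A minimal sketch of one-stage RTMO inference via MMPoseInferencer.
# Assumes the 'rtmo' alias is available; weights are fetched automatically.
from mmpose.apis import MMPoseInferencer

inferencer = MMPoseInferencer(pose2d='rtmo')

# The inferencer yields one result dict per input image or video frame.
result_generator = inferencer('tests/data/coco/000000197388.jpg',
                              vis_out_dir='vis_results/rtmo')
result = next(result_generator)  # contains 'predictions' for all instances
```

Because RTMO is one-stage, no separate detector config is passed, unlike the top-down demos elsewhere in these notes.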

![rtmo](https://github.com/open-mmlab/mmpose/assets/26127467/54d5555a-23e5-4308-89d1-f0c82a6734c2)

Improved RTMW

We have released additional [RTMW](https://github.com/open-mmlab/mmpose/tree/main/projects/rtmpose#wholebody-2d-133-keypoints) models in various sizes:

| Config | Input Size | Whole AP | Whole AR | FLOPS (G) |
| :------------------------------ | :--------: | :------: | :------: | :---------------: |

1.2.0

RTMW

We are excited to release the alpha version of [RTMW](https://github.com/open-mmlab/mmpose/blob/main/projects/rtmpose/rtmpose/wholebody_2d_keypoint/rtmw-x_8xb320-270e_cocktail13-384x288.py):
- The first whole-body pose estimation model whose accuracy exceeds 70 AP on the COCO-WholeBody benchmark; RTMW-x achieves 70.2 AP.
- More accurate hand details for pose-guided image generation, gesture recognition, human-computer interaction, and more.
- Compatible with `dw_openpose_full` preprocessor in [sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet)
- Try it online with this [demo](https://openxlab.org.cn/apps/detail/mmpose/RTMPose) by choosing `wholebody(rtmw)`, or load it locally as sketched below.
- The technical report will be released soon.
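As a rough sketch, the alpha RTMW model can be loaded like any top-down model with `init_model`; the checkpoint path below is a placeholder (take the real URL from the project page):

```python
# A minimal sketch of loading the alpha RTMW model for top-down inference.
# The checkpoint path is a placeholder; use the URL from the project page.
from mmpose.apis import init_model, inference_topdown

config = ('projects/rtmpose/rtmpose/wholebody_2d_keypoint/'
          'rtmw-x_8xb320-270e_cocktail13-384x288.py')
model = init_model(config, 'rtmw-x_checkpoint.pth', device='cpu')

# With no bounding boxes given, the whole image is used as one instance.
results = inference_topdown(model, 'tests/data/coco/000000000785.jpg')
print(results[0].pred_instances.keypoints.shape)  # expected (1, 133, 2)
```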

<img src="https://github.com/open-mmlab/mmpose/assets/13503330/635c4618-c459-45e8-84a5-eb68cf338d00" style="height:200px" />

New Algorithms

We are glad to support the following new algorithms:

- (ICCV 2023) [MotionBERT](https://github.com/open-mmlab/mmpose/tree/main/configs/body_3d_keypoint/motionbert)
- (ICCVW 2023) [DWPose](https://github.com/open-mmlab/mmpose/tree/main/configs/wholebody_2d_keypoint/dwpose)
- (ICLR 2023) [EDPose](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo/body_2d_keypoint.html#edpose-edpose-on-coco)
- (ICLR 2022) [Uniformer](https://github.com/open-mmlab/mmpose/tree/main/projects/uniformer)

(ICCVW 2023) DWPose

We are glad to support the two-stage distillation method DWPose, which achieves new SOTA performance on COCO-WholeBody.

- Since DWPose is distilled from RTMPose, you can directly load DWPose weights into [RTMPose](https://github.com/open-mmlab/mmpose/tree/main/projects/rtmpose) (see the sketch below).
- DWPose has been supported in [sd-webui-controlnet](https://github.com/Mikubill/sd-webui-controlnet).
- You can also try DWPose online with this [demo](https://openxlab.org.cn/apps/detail/mmpose/RTMPose) by choosing `wholebody(dwpose)`.
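As a minimal sketch of that direct weight loading (both paths below are placeholders; pick the matching config/checkpoint pair from the project pages):

```python
# A minimal sketch: since DWPose shares the RTMPose architecture, a distilled
# DWPose checkpoint can be loaded with the corresponding RTMPose config.
# Both paths are placeholders.
from mmpose.apis import init_model

model = init_model(
    'projects/rtmpose/rtmpose/wholebody_2d_keypoint/'
    'rtmpose-l_8xb32-270e_coco-wholebody-384x288.py',  # matching RTMPose config
    'dwpose-l_384x288.pth',                            # distilled DWPose weights
    device='cpu')
```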

<img src="https://github.com/IDEA-Research/DWPose/blob/onnx/resources/lalaland.gif" style="height:200px" />

Here is a guide to train DWPose:

1. Train DWPose with the first-stage distillation:

```shell
bash tools/dist_train.sh configs/wholebody_2d_keypoint/dwpose/ubody/s1_dis/rtmpose_x_dis_l_coco-ubody-384x288.py 8
```

2. Transfer the S1 distillation model into a regular model:

```shell
# first-stage distillation
python pth_transfer.py $dis_ckpt $new_pose_ckpt
```

3. Train DWPose with the second-stage distillation:

```shell
bash tools/dist_train.sh configs/wholebody_2d_keypoint/dwpose/ubody/s2_dis/dwpose_l-ll_coco-ubody-384x288.py 8
```

4. Transfer the S2 distillation model into a regular model:

```shell
# second-stage distillation
python pth_transfer.py $dis_ckpt $new_pose_ckpt --two_dis
```

- Thanks yzd-v for helping with the integration of DWPose!

(ICCV 2023) MotionBERT

MotionBERT is the new SOTA method for monocular 3D human pose estimation on Human3.6M.

![motionbert](https://github.com/open-mmlab/mmpose/assets/13503330/2d516a52-5016-4d6e-866f-52895a4c0272)

You can conveniently try MotionBERT via the 3D Human Pose Demo with the Inferencer:

```shell
python demo/inferencer_demo.py tests/data/coco/000000000785.jpg \
    --pose3d human3d --vis-out-dir vis_results/human3d
```


- Supported by LareinaM

(ICLR 2023) EDPose

We support ED-Pose, an end-to-end framework with Explicit box Detection for multi-person Pose estimation. ED-Pose reformulates this task as two explicit box detection processes with a unified representation and regression supervision. ED-Pose is conceptually simple, requiring neither post-processing nor dense heatmap supervision.

<img src="https://github.com/IDEA-Research/ED-Pose/blob/master/figs/crowd%20scene.gif" style="height:200px" />

The checkpoint is converted from the official repo. Training ED-Pose is not supported yet; it will be added in future updates.

You can conveniently try ED-Pose via the 2D Human Pose Demo with the Inferencer:

```shell
python demo/inferencer_demo.py tests/data/coco/000000197388.jpg \
    --pose2d edpose_res50_8xb2-50e_coco-800x1333 --vis-out-dir vis_results
```


- Thanks LiuYi-Up for helping with the integration of EDPose!
- This task comes from our OpenMMLabCamp. If you would also like to contribute code, feel free to refer to [this link](https://github.com/open-mmlab/OpenMMLabCamp/discussions/categories/mmpose) to pick up a task!

(ICLR 2022) Uniformer

In [projects](https://github.com/open-mmlab/mmpose/tree/main/projects/uniformer), we implement a top-down heatmap-based human pose estimator, following the approach outlined in UniFormer: Unifying Convolution and Self-attention for Visual Recognition (TPAMI 2023) and UniFormer: Unified Transformer for Efficient Spatiotemporal Representation Learning (ICLR 2022).

<img src="https://raw.githubusercontent.com/Sense-X/UniFormer/main/figures/framework.png" alt="UniFormer framework">

- Thanks xin-li-67 for helping with the integration of Uniformer!
- This task comes from our OpenMMLabCamp. If you would also like to contribute code, feel free to refer to [this link](https://github.com/open-mmlab/OpenMMLabCamp/discussions/categories/mmpose) to pick up a task!

New Datasets

We have added support for two new datasets:

- (CVPR 2023) [UBody](https://mmpose.readthedocs.io/zh_CN/latest/model_zoo_papers/datasets.html#ubody-cvpr-2023)
- [300W-LP](https://github.com/open-mmlab/mmpose/tree/main/configs/face_2d_keypoint/topdown_heatmap/300wlp)

(CVPR 2023) UBody

UBody can boost 2D whole-body pose estimation and controllable image generation, especially for in-the-wild hand keypoint detection.

![grouned_sam_osx_demo](https://github.com/open-mmlab/mmpose/assets/13503330/8e186c07-45fd-4982-acb9-b6e786df5993)

- Supported by xiexinch

300W-LP

300W-LP contains synthesized large-pose face images generated from 300W.

![300wlp](https://github.com/open-mmlab/mmpose/assets/13503330/d5be922f-004c-4e19-b02f-192445f5657b)

- Thanks Yang-Changhui for helping with the integration of 300W-LP!
- This task comes from our OpenMMLabCamp. If you would also like to contribute code, feel free to refer to [this link](https://github.com/open-mmlab/OpenMMLabCamp/discussions/categories/mmpose) to pick up a task!

Contributors
- Tau-J
- Ben-Louis
- xin-li-67
- Indigo6
- xiexinch
- tpoisonooo
- crazysteeaam
- yzd-v
- chaodyna
- lwttttt
- k-yomo
- LiuYi-Up
- ZhaoQiiii
- Yang-Changhui
- juxuan27

1.1.0

New Datasets
We are glad to support three new datasets:
- (CVPR 2023) [Human-Art](https://github.com/IDEA-Research/HumanArt)
- (CVPR 2022) [Animal Kingdom](https://github.com/sutdcv/Animal-Kingdom)
- (AAAI 2020) [LaPa](https://github.com/JDAI-CV/lapa-dataset/)

(CVPR 2023) Human-Art
**Human-Art** is a large-scale dataset that targets multi-scenario human-centric tasks to bridge the gap between natural and artificial scenes.

![image](https://github.com/open-mmlab/mmpose/assets/13503330/c9171dbb-7e7a-4c39-98e3-c92932182efb)

**Contents of Human-Art:**
- 50,000 images covering human figures in 20 scenarios (5 natural scenarios, 3 3D artificial scenarios, and 12 2D artificial scenarios)
- Human-centric annotations including human bounding boxes, 21 2D human keypoints, human self-contact keypoints, and description text
- A baseline human detector and human pose estimator trained jointly on [MSCOCO](https://cocodataset.org/) and Human-Art

**Models trained on Human-Art:**
- [HRNet](https://github.com/open-mmlab/mmpose/blob/main/configs/body_2d_keypoint/topdown_heatmap/humanart/hrnet_humanart.md)
- [ViTPose](https://github.com/open-mmlab/mmpose/blob/main/configs/body_2d_keypoint/topdown_heatmap/humanart/vitpose_humanart.md)
- [RTMPose](https://github.com/open-mmlab/mmpose/blob/main/configs/body_2d_keypoint/rtmpose/humanart/rtmpose_humanart.md)

Thanks juxuan27 for helping with the integration of **Human-Art**!

(CVPR 2022) Animal Kingdom
**Animal Kingdom** provides multiple annotated tasks to enable a more thorough understanding of natural animal behaviors.

![image](https://github.com/open-mmlab/mmpose/assets/13503330/fb71095f-209f-4cd0-868f-9a033798fa09)

Results comparison:

| Arch | Input Size | PCK(0.05) Ours | Official Repo | Paper |
| :-- | :--: | :--: | :--: | :--: |
| P1_hrnet_w32 | 256x256 | 0.6323 | 0.6342 | 0.6606 |
| P2_hrnet_w32 | 256x256 | 0.3741 | 0.3726 | 0.393 |
| P3_mammals_hrnet_w32 | 256x256 | 0.571 | 0.5719 | 0.6159 |
| P3_amphibians_hrnet_w32 | 256x256 | 0.5358 | 0.5432 | 0.5674 |
| P3_reptiles_hrnet_w32 | 256x256 | 0.51 | 0.5 | 0.5606 |
| P3_birds_hrnet_w32 | 256x256 | 0.7671 | 0.7636 | 0.7735 |
| P3_fishes_hrnet_w32 | 256x256 | 0.6406 | 0.636 | 0.6825 |

For more details, see [this page](https://github.com/open-mmlab/mmpose/blob/main/configs/animal_2d_keypoint/topdown_heatmap/ak/hrnet_animalkingdom.md).

Thanks Dominic23331 for helping with the integration of **Animal Kingdom**!

(AAAI 2020) LaPa
The **La**ndmark guided face **Pa**rsing dataset (LaPa) consists of more than 22,000 facial images with rich variations in expression, pose, and occlusion. Each image is annotated with an 11-category pixel-level label map and 106-point landmarks.

![image](https://github.com/open-mmlab/mmpose/assets/13503330/afef8cfd-c084-4b19-89cd-2c520f490ece)

Supported by Tau-J

New Config Type
MMEngine has introduced the pure-Python-style configuration file:
- Support navigating to base configuration file in IDE
- Support navigating to base variable in IDE
- Support navigating to source code of class in IDE
- Support inheriting two configuration files containing the same field
- Load the configuration file without other third-party requirements

Refer to the [tutorial](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/config.html#a-pure-python-style-configuration-file-beta) for more detailed usage.
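A minimal sketch of what a pure-Python-style config looks like (the base module path is illustrative):

```python
# A minimal sketch of a pure-Python-style config; the base config path
# is illustrative and depends on where the config file lives.
from mmengine.config import read_base

# Base configs are inherited through a regular, IDE-navigable import.
with read_base():
    from .._base_.default_runtime import *  # noqa: F401,F403

# Classes are referenced directly instead of via registry type strings.
from torch.optim import AdamW

optim_wrapper = dict(optimizer=dict(type=AdamW, lr=5e-4))
```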

![image](https://github.com/open-mmlab/mmengine/assets/57566630/7eb41748-9374-488f-901e-fcd7f0d3c8a1)

We provide some examples [here](https://github.com/open-mmlab/mmpose/tree/main/mmpose/configs/body_2d_keypoint). A new-style config for YOLOX-Pose is also available [here](https://github.com/open-mmlab/mmpose/blob/main/projects/yolox_pose/configs/py_yolox_pose_s_8xb32_300e_coco.py).
Feel free to try this new feature and give us your feedback!

Improved RTMPose
We combined public datasets and released more powerful RTMPose models:
- 17-kpt and 26-kpt body models
- 21-kpt hand models
- 106-kpt face models

<div align=left>
<img src="https://user-images.githubusercontent.com/13503330/241645174-38aa345e-4ceb-4e73-bc37-5e082735e336.gif" width=500 height=350/><img src="https://user-images.githubusercontent.com/13503330/243889217-2ecbf9f4-6963-4a14-9801-da10c0a65dac.gif" width=300 height=350/>
</div>


List of [examples](https://github.com/open-mmlab/mmpose/tree/main/projects/rtmpose/examples) to deploy RTMPose:
- RTMPose-Deploy (by HW140701 and Dominic23331): a C++ code example for localized deployment of RTMPose.
- RTMPose inference with ONNXRuntime (Python) (by IRONICBo): shows how to run RTMPose inference with ONNXRuntime in Python; a minimal sketch follows below.
- PoseTracker Android Demo: a prototype Android demo based on mmdeploy.

Check out [this page](https://github.com/open-mmlab/mmpose/tree/main/projects/rtmpose) to learn more.
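To give a flavor of the ONNXRuntime example, here is a minimal sketch (the model file name, input size, and pre/post-processing are simplified assumptions; see the linked example for the full pipeline with affine cropping and SimCC decoding):

```python
# A minimal sketch of RTMPose inference with ONNXRuntime in Python.
# Model file name and input size are assumptions; the real pipeline also
# applies an affine crop and decodes the SimCC outputs into keypoints.
import cv2
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession('rtmpose-m.onnx',
                            providers=['CPUExecutionProvider'])
input_name = sess.get_inputs()[0].name

img = cv2.imread('person_crop.jpg')
img = cv2.resize(img, (192, 256))                       # (w, h) model input
blob = img.transpose(2, 0, 1)[None].astype(np.float32)  # to NCHW

outputs = sess.run(None, {input_name: blob})  # SimCC x/y logits
```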

Supported by Tau-J

3D Pose Lifter Refactoring
We have migrated SimpleBaseline3D and VideoPose3D into MMPose v1.1.0. Users can easily run inference with the [Inferencer](https://mmpose.readthedocs.io/en/latest/user_guides/inference.html#inferencer-a-unified-inference-interface) and the [body3d demo](https://github.com/open-mmlab/mmpose/blob/main/demo/body3d_pose_lifter_demo.py).

Below is an example of using the Inferencer to predict 3D poses:

```shell
python demo/inferencer_demo.py tests/data/coco/000000000785.jpg \
    --pose3d human3d --vis-out-dir vis_results/human3d \
    --rebase-keypoint-height
```


![image](https://github.com/open-mmlab/mmpose/assets/13503330/0b6b8159-a569-4ec5-b99c-2407cdcfdd1b)

Video result:

![img_v2_45ba54f3-adae-49c7-bf45-07e84d49d21g](https://github.com/open-mmlab/mmpose/assets/13503330/88a95bd9-ac1d-42a6-bf4b-a513b8852ce0)

Supported by LareinaM

Inference Speed-up & Webcam Inference
We have made a lot of improvements to our demo scripts:
- Much higher inference speed
- OpenCV-backend visualizer
- All demos support inference with webcam

Taking `topdown_demo_with_mmdet.py` as an example, you can run inference from a webcam by specifying `--input webcam`:

```shell
# inference with webcam
python demo/topdown_demo_with_mmdet.py \
    projects/rtmpose/rtmdet/person/rtmdet_nano_320-8xb32_coco-person.py \
    https://download.openmmlab.com/mmpose/v1/projects/rtmpose/rtmdet_nano_8xb32-100e_coco-obj365-person-05d8511e.pth \
    projects/rtmpose/rtmpose/body_2d_keypoint/rtmpose-m_8xb256-420e_coco-256x192.py \
    https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-aic-coco_pt-aic-coco_420e-256x192-63eb25f7_20230126.pth \
    --input webcam \
    --show
```


Supported by Ben-Louis and LareinaM

New Contributors
* xin-li-67 made their first contribution in https://github.com/open-mmlab/mmpose/pull/2205
* irexyc made their first contribution in https://github.com/open-mmlab/mmpose/pull/2216
* lu-minous made their first contribution in https://github.com/open-mmlab/mmpose/pull/2225
* FishBigOcean made their first contribution in https://github.com/open-mmlab/mmpose/pull/2286
* ATang0729 made their first contribution in https://github.com/open-mmlab/mmpose/pull/2201
* HW140701 made their first contribution in https://github.com/open-mmlab/mmpose/pull/2316
* IRONICBo made their first contribution in https://github.com/open-mmlab/mmpose/pull/2323
* shuheilocale made their first contribution in https://github.com/open-mmlab/mmpose/pull/2340
* Dominic23331 made their first contribution in https://github.com/open-mmlab/mmpose/pull/2139
* notplus made their first contribution in https://github.com/open-mmlab/mmpose/pull/2365
* juxuan27 made their first contribution in https://github.com/open-mmlab/mmpose/pull/2304
* 610265158 made their first contribution in https://github.com/open-mmlab/mmpose/pull/2366
* CescMessi made their first contribution in https://github.com/open-mmlab/mmpose/pull/2385
* huangjiyi made their first contribution in https://github.com/open-mmlab/mmpose/pull/2467
* Billccx made their first contribution in https://github.com/open-mmlab/mmpose/pull/2417
* mareksubocz made their first contribution in https://github.com/open-mmlab/mmpose/pull/2474

**Full Changelog**: https://github.com/open-mmlab/mmpose/compare/v1.0.0...v1.1.0

1.0.0

Training speed and memory comparison (from the PyTorch 2.0 test; see the PR below):

| Model | Training Speed | Memory |
| :-- | :--: | :--: |
| ViTPose-B | 29.6% ↑ (0.931 → 0.655) | 10586 → 10663 |
| ViTPose-S | 33.7% ↑ (0.563 → 0.373) | 6091 → 6170 |
| HRNet-w32 | 12.8% ↑ (0.553 → 0.482) | 9849 → 10145 |
| HRNet-w48 | 37.1% ↑ (0.437 → 0.275) | 7319 → 7394 |
| RTMPose-t | 6.3% ↑ (1.533 → 1.437) | 6292 → 6489 |
| RTMPose-s | 13.1% ↑ (1.645 → 1.430) | 9013 → 9208 |

* Pytorch 2.0 test, add projects doc and refactor by LareinaM in https://github.com/open-mmlab/mmpose/pull/2136

New Design: Codecs

In pose estimation tasks, various algorithms require different target formats, such as normalized coordinates, vectors, and heatmaps. MMPose 1.0.0 introduces a unified Codec module to streamline the encoding and decoding processes:

![image](https://user-images.githubusercontent.com/13503330/231523685-ea056d1e-8eec-4df8-87a6-69215a74adae.png)

- Encoder: transforms input image space coordinates into the required target format.
- Decoder: transforms model outputs back into input image space coordinates, performing the inverse operation of the encoder.

This integration offers a more coherent and user-friendly experience when working with different pose estimation algorithms. For a detailed introduction to codecs, including concrete examples, please refer to our guide: [Learn about Codecs](https://mmpose.readthedocs.io/en/latest/advanced_guides/codecs.html).
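For example, a heatmap codec round trip might look like the sketch below (using the `MSRAHeatmap` codec as an illustration; check the guide for exact signatures and shapes):

```python
# A minimal sketch of the Codec encode/decode round trip, illustrated with
# the MSRAHeatmap codec; see the codecs guide for exact signatures.
import numpy as np
from mmpose.codecs import MSRAHeatmap

codec = MSRAHeatmap(input_size=(192, 256), heatmap_size=(48, 64), sigma=2)

keypoints = np.random.rand(1, 17, 2) * (192, 256)      # (N, K, 2), image space
keypoints_visible = np.ones((1, 17), dtype=np.float32)

encoded = codec.encode(keypoints, keypoints_visible)      # training targets
decoded_kpts, scores = codec.decode(encoded['heatmaps'])  # back to image space
```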

Bug Fixes
* [Fix] fix readthedocs compiling requirements by ly015 in https://github.com/open-mmlab/mmpose/pull/2071
* [Fix] fix online documentation by ly015 in https://github.com/open-mmlab/mmpose/pull/2073
* [Fix] fix online docs by ly015 in https://github.com/open-mmlab/mmpose/pull/2075
* [Fix] fix warnings when falling back to mmengine registry by ly015 in https://github.com/open-mmlab/mmpose/pull/2082
* [Fix] fix CI by ly015 in https://github.com/open-mmlab/mmpose/pull/2088
* [Fix] fix model names in metafiles by Ben-Louis in https://github.com/open-mmlab/mmpose/pull/2093
* [Fix] fix simcc visualization by Tau-J in https://github.com/open-mmlab/mmpose/pull/2130

New Contributors
* ChenZhenGui made their first contribution in https://github.com/open-mmlab/mmpose/pull/1800
* xinxinxinxu made their first contribution in https://github.com/open-mmlab/mmpose/pull/1843
* jack0rich made their first contribution in https://github.com/open-mmlab/mmpose/pull/1912
* zwfcrazy made their first contribution in https://github.com/open-mmlab/mmpose/pull/1944
* LKJacky made their first contribution in https://github.com/open-mmlab/mmpose/pull/2024
* tongda made their first contribution in https://github.com/open-mmlab/mmpose/pull/2028
* LRuid made their first contribution in https://github.com/open-mmlab/mmpose/pull/2055

**Full Changelog**: https://github.com/open-mmlab/mmpose/compare/v0.29.0...v1.0.0

1.0.0rc1

**Highlights**

- Release RTMPose, a high-performance real-time pose estimation algorithm with cross-platform deployment and inference support. See details at the [project page](/projects/rtmpose/)
- Support several new algorithms: ViTPose (arXiv'2022), CID (CVPR'2022), DEKR (CVPR'2021)
- Add Inferencer, a convenient inference interface that performs pose estimation and visualization on images, videos and webcam streams with only one line of code
- Introduce *Project*, a new form for rapid and easy implementation of new algorithms and features in MMPose, which is handier for community contributors

**New Features**

- Support RTMPose ([1971](https://github.com/open-mmlab/mmpose/pull/1971), [#2024](https://github.com/open-mmlab/mmpose/pull/2024), [#2028](https://github.com/open-mmlab/mmpose/pull/2028), [#2030](https://github.com/open-mmlab/mmpose/pull/2030), [#2040](https://github.com/open-mmlab/mmpose/pull/2040), [#2057](https://github.com/open-mmlab/mmpose/pull/2057))
- Support Inferencer ([1969](https://github.com/open-mmlab/mmpose/pull/1969))
- Support ViTPose ([1876](https://github.com/open-mmlab/mmpose/pull/1876), [#2056](https://github.com/open-mmlab/mmpose/pull/2056), [#2058](https://github.com/open-mmlab/mmpose/pull/2058), [#2065](https://github.com/open-mmlab/mmpose/pull/2065))
- Support CID ([1907](https://github.com/open-mmlab/mmpose/pull/1907))
- Support DEKR ([1834](https://github.com/open-mmlab/mmpose/pull/1834), [#1901](https://github.com/open-mmlab/mmpose/pull/1901))
- Support training with multiple datasets ([1767](https://github.com/open-mmlab/mmpose/pull/1767), [#1930](https://github.com/open-mmlab/mmpose/pull/1930), [#1938](https://github.com/open-mmlab/mmpose/pull/1938), [#2025](https://github.com/open-mmlab/mmpose/pull/2025))
- Add *project* to allow rapid and easy implementation of new models and features ([1914](https://github.com/open-mmlab/mmpose/pull/1914))

**Improvements**

- Improve documentation quality ([1846](https://github.com/open-mmlab/mmpose/pull/1846), [#1858](https://github.com/open-mmlab/mmpose/pull/1858), [#1872](https://github.com/open-mmlab/mmpose/pull/1872), [#1899](https://github.com/open-mmlab/mmpose/pull/1899), [#1925](https://github.com/open-mmlab/mmpose/pull/1925), [#1945](https://github.com/open-mmlab/mmpose/pull/1945), [#1952](https://github.com/open-mmlab/mmpose/pull/1952), [#1990](https://github.com/open-mmlab/mmpose/pull/1990), [#2023](https://github.com/open-mmlab/mmpose/pull/2023), [#2042](https://github.com/open-mmlab/mmpose/pull/2042))
- Support visualizing keypoint indices ([2051](https://github.com/open-mmlab/mmpose/pull/2051))
- Support OpenPose style visualization ([2055](https://github.com/open-mmlab/mmpose/pull/2055))
- Accelerate image transpose in data pipelines with tensor operation ([1976](https://github.com/open-mmlab/mmpose/pull/1976))
- Support auto-import modules from registry ([1961](https://github.com/open-mmlab/mmpose/pull/1961))
- Support keypoint partition metric ([1944](https://github.com/open-mmlab/mmpose/pull/1944))
- Support SimCC 1D-heatmap visualization ([1912](https://github.com/open-mmlab/mmpose/pull/1912))
- Support saving predictions and data metainfo in demos ([1814](https://github.com/open-mmlab/mmpose/pull/1814), [#1879](https://github.com/open-mmlab/mmpose/pull/1879))
- Support SimCC with DARK ([1870](https://github.com/open-mmlab/mmpose/pull/1870))
- Remove Gaussian blur for offset maps in UDP-regress ([1815](https://github.com/open-mmlab/mmpose/pull/1815))
- Refactor encoding interface of Codec for better extendibility and easier configuration ([1781](https://github.com/open-mmlab/mmpose/pull/1781))
- Support evaluating CocoMetric without annotation file ([1722](https://github.com/open-mmlab/mmpose/pull/1722))
- Improve unit tests ([1765](https://github.com/open-mmlab/mmpose/pull/1765))

**Bug Fixes**

- Fix repeated warnings from different ranks ([2053](https://github.com/open-mmlab/mmpose/pull/2053))
- Avoid frequent scope switching when using mmdet inference api ([2039](https://github.com/open-mmlab/mmpose/pull/2039))
- Remove EMA parameters and message hub data when publishing model checkpoints ([2036](https://github.com/open-mmlab/mmpose/pull/2036))
- Fix metainfo copying in dataset class ([2017](https://github.com/open-mmlab/mmpose/pull/2017))
- Fix top-down demo bug when there is no object detected ([2007](https://github.com/open-mmlab/mmpose/pull/2007))
- Fix config errors ([1882](https://github.com/open-mmlab/mmpose/pull/1882), [#1906](https://github.com/open-mmlab/mmpose/pull/1906), [#1995](https://github.com/open-mmlab/mmpose/pull/1995))
- Fix image demo failure when GUI is unavailable ([1968](https://github.com/open-mmlab/mmpose/pull/1968))
- Fix bug in AdaptiveWingLoss ([1953](https://github.com/open-mmlab/mmpose/pull/1953))
- Fix incorrect importing of RepeatDataset which is deprecated ([1943](https://github.com/open-mmlab/mmpose/pull/1943))
- Fix bug in bottom-up datasets that ignores images without instances ([1752](https://github.com/open-mmlab/mmpose/pull/1752), [#1936](https://github.com/open-mmlab/mmpose/pull/1936))
- Fix upstream dependency issues ([1867](https://github.com/open-mmlab/mmpose/pull/1867), [#1921](https://github.com/open-mmlab/mmpose/pull/1921))
- Fix evaluation issues and update results ([1763](https://github.com/open-mmlab/mmpose/pull/1763), [#1773](https://github.com/open-mmlab/mmpose/pull/1773), [#1780](https://github.com/open-mmlab/mmpose/pull/1780), [#1850](https://github.com/open-mmlab/mmpose/pull/1850), [#1868](https://github.com/open-mmlab/mmpose/pull/1868))
- Fix local registry missing warnings ([1849](https://github.com/open-mmlab/mmpose/pull/1849))
- Remove deprecated scripts for model deployment ([1845](https://github.com/open-mmlab/mmpose/pull/1845))
- Fix a bug in input transformation in BaseHead ([1843](https://github.com/open-mmlab/mmpose/pull/1843))
- Fix an interface mismatch with MMDetection in webcam demo ([1813](https://github.com/open-mmlab/mmpose/pull/1813))
- Fix a bug in heatmap visualization that causes incorrect scale ([1800](https://github.com/open-mmlab/mmpose/pull/1800))
- Add model metafiles ([1768](https://github.com/open-mmlab/mmpose/pull/1768))

New Contributors
* ChenZhenGui made their first contribution in https://github.com/open-mmlab/mmpose/pull/1800
* LareinaM made their first contribution in https://github.com/open-mmlab/mmpose/pull/1845
* xinxinxinxu made their first contribution in https://github.com/open-mmlab/mmpose/pull/1843
* jack0rich made their first contribution in https://github.com/open-mmlab/mmpose/pull/1912
* zwfcrazy made their first contribution in https://github.com/open-mmlab/mmpose/pull/1944
* LKJacky made their first contribution in https://github.com/open-mmlab/mmpose/pull/2024
* tongda made their first contribution in https://github.com/open-mmlab/mmpose/pull/2028
* LRuid made their first contribution in https://github.com/open-mmlab/mmpose/pull/2055
* Zheng-LinXiao made their first contribution in https://github.com/open-mmlab/mmpose/pull/2057

**Full Changelog**: https://github.com/open-mmlab/mmpose/compare/v1.0.0rc0...v1.0.0rc1
