MMAction2

Latest version: v1.2.0


1.2.0

**Highlights**

- Support training of ActionCLIP
- Support VindLU multi-modality algorithm
- Support MobileOne TSN/TSM

**New Features**

- Support training of ActionCLIP ([2620](https://github.com/open-mmlab/mmaction2/pull/2620))
- Support video retrieval dataset MSVD ([2622](https://github.com/open-mmlab/mmaction2/pull/2622))
- Support VindLU multi-modality algorithm ([2667](https://github.com/open-mmlab/mmaction2/pull/2667))
- Support Dense Regression Network for Video Grounding ([2668](https://github.com/open-mmlab/mmaction2/pull/2668))

**Improvements**

- Support Video Demos ([2602](https://github.com/open-mmlab/mmaction2/pull/2602))
- Support Audio Demos ([2603](https://github.com/open-mmlab/mmaction2/pull/2603))
- Add README_zh-CN.md for Swin and VideoMAE ([2621](https://github.com/open-mmlab/mmaction2/pull/2621))
- Support MobileOne TSN/TSM ([2656](https://github.com/open-mmlab/mmaction2/pull/2656))
- Support SlowOnly K700 feature to train localization models ([2673](https://github.com/open-mmlab/mmaction2/pull/2673))

**Bug Fixes**

- Refine ActionDataSample structure ([2658](https://github.com/open-mmlab/mmaction2/pull/2658))
- Fix MPS device ([2619](https://github.com/open-mmlab/mmaction2/pull/2619))

1.1.0

New Direction: Multi-Modal Video Understanding

We support two novel models for video recognition and retrieval based on open-domain text: [ActionCLIP](https://github.com/open-mmlab/mmaction2/tree/main/projects/actionclip#readme) and [CLIP4Clip](https://github.com/open-mmlab/mmaction2/tree/main/configs/retrieval/clip4clip#readme). These models mark the first step of MMAction2's journey towards multi-modal video understanding. Furthermore, we also introduce a new video retrieval dataset, [MSR-VTT](https://www.microsoft.com/en-us/research/publication/msr-vtt-a-large-video-description-dataset-for-bridging-video-and-language/).

![img_v2_e882ffb4-84c9-4b3a-9ab6-38c251e7d95g](https://github.com/open-mmlab/mmaction2/assets/58767402/58b6fc4b-2680-4bb7-8b2b-8c6df1eeaadc)

For more details, please refer to [ActionCLIP](https://github.com/open-mmlab/mmaction2/tree/main/projects/actionclip#readme), [CLIP4Clip](https://github.com/open-mmlab/mmaction2/tree/main/configs/retrieval/clip4clip#readme) and [MSR-VTT](https://github.com/open-mmlab/mmaction2/tree/main/tools/data/video_retrieval#preparing-msrvtt-dataset).

Supported by Dai-Wenxun in [2470](https://github.com/open-mmlab/mmaction2/pull/2470) and [2489](https://github.com/open-mmlab/mmaction2/pull/2489).
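Both ActionCLIP and CLIP4Clip follow the CLIP recipe of matching video and text embeddings in a shared space. The sketch below is purely illustrative (not MMAction2 code): it mean-pools frame embeddings into one video embedding, as in CLIP4Clip's simplest pooling variant, and ranks candidate texts by cosine similarity.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def retrieve(frame_embs, text_embs):
    """Return the index of the best-matching text for a video.

    frame_embs: per-frame embeddings of one video; text_embs: candidate
    text embeddings. Both are hypothetical outputs of CLIP-style encoders.
    """
    dim = len(frame_embs[0])
    # Mean-pool frame embeddings into a single video embedding.
    video = [sum(f[d] for f in frame_embs) / len(frame_embs) for d in range(dim)]
    scores = [cosine(video, t) for t in text_embs]
    return max(range(len(scores)), key=scores.__getitem__)
```

The same similarity matrix supports both directions: recognition picks the best text for a video, retrieval picks the best video for a text.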

New Config Type

MMEngine introduced the pure-Python-style configuration file, which provides the following benefits:

- Support navigating to base configuration file in IDE
- Support navigating to base variable in IDE
- Support navigating to source code of class in IDE
- Support inheriting two configuration files containing the same field
- Load the configuration file without other third-party requirements

Refer to the [tutorial](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/config.html#a-pure-python-style-configuration-file-beta) for more detailed usages.

![img_v2_e882ffb4-84c9-4b3a-9ab6-38c251e7d95g](https://github.com/open-mmlab/mmengine/assets/57566630/7eb41748-9374-488f-901e-fcd7f0d3c8a1)
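A pure-Python config inherits from its base file with an ordinary import, which is what lets IDEs resolve every inherited symbol. A minimal sketch, assuming hypothetical file and variable names:

```python
# my_config.py -- a hypothetical pure-Python-style config
from mmengine.config import read_base

with read_base():
    # The base config is imported as plain Python, so an IDE can
    # jump to the base file and to each inherited variable.
    from .base_config import *  # hypothetical base file

# Inherited variables are ordinary Python objects and can be
# overridden in place:
model['backbone']['depth'] = 101  # hypothetical inherited field
```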


New Datasets

We are glad to support three new datasets:

- (ICCV2019) [HACS](http://hacs.csail.mit.edu/)
- (ICCV2021) [MultiSports](https://github.com/MCG-NJU/MultiSports)
- (Arxiv2022) [Kinetics-710](https://github.com/OpenGVLab/UniFormerV2)

(ICCV2019) HACS

**HACS** is a new large-scale dataset for recognition and temporal localization of human actions collected from Web videos.

https://github.com/open-mmlab/mmaction2/assets/58767402/7b7407e3-994a-4523-975c-5bdee3b54998

For more details, please refer to [HACS](https://github.com/open-mmlab/mmaction2/tree/main/tools/data/hacs#readme).

Supported by hukkai in [2224](https://github.com/open-mmlab/mmaction2/pull/2224).

(ICCV2021) MultiSports

**MultiSports** is a multi-person video dataset of spatio-temporally localized sports actions.

https://github.com/open-mmlab/mmaction2/assets/58767402/1f94668a-823b-46a0-9ea7-eedf0f29d1d1

For more details, please refer to [MultiSports](https://github.com/open-mmlab/mmaction2/tree/main/tools/data/multisports#readme).

Supported by cir7 in [2280](https://github.com/open-mmlab/mmaction2/pull/2280).

(Arxiv2022) Kinetics-710

For more details, please refer to [Kinetics710](https://github.com/open-mmlab/mmaction2/tree/main/tools/data/kinetics710#readme).

Supported by cir7 in [2534](https://github.com/open-mmlab/mmaction2/pull/2534).

Other New Features

- Support rich projects: [Gesture Recognition](https://github.com/open-mmlab/mmaction2/tree/main/projects/gesture_recognition#readme), [Spatio-Temporal Action Detection Tutorial](https://github.com/open-mmlab/mmaction2/tree/main/projects/stad_tutorial), and [Knowledge Distillation](https://github.com/open-mmlab/mmaction2/tree/main/projects/knowledge_distillation#readme)
- Support [TCANet(CVPR'2021)](https://github.com/open-mmlab/mmaction2/tree/main/configs/localization/tcanet#readme)
- Support [VideoMAE V2(CVPR'2023) and VideoMAE(NeurIPS'2022)](https://github.com/open-mmlab/mmaction2/tree/main/configs/detection/videomae#readme) on action detection

What's Changed
* [Doc] Fix document links in readme by cir7 in https://github.com/open-mmlab/mmaction2/pull/2358
* [doc] fix installation doc by cir7 in https://github.com/open-mmlab/mmaction2/pull/2362
* [Enhance] Support automatically assigning issues by cir7 in https://github.com/open-mmlab/mmaction2/pull/2368
* [Doc] Fix model links in README by cir7 in https://github.com/open-mmlab/mmaction2/pull/2372
* [Fix] Restore the wrongly modified config by cir7 in https://github.com/open-mmlab/mmaction2/pull/2375
* [Doc] Fix readme links by cir7 in https://github.com/open-mmlab/mmaction2/pull/2376
* [Fix] update skeleton demo by WILLOSCAR in https://github.com/open-mmlab/mmaction2/pull/2381
* [Fix] Fix a bug in `demo_skeleton.py` by Dai-Wenxun in https://github.com/open-mmlab/mmaction2/pull/2380
* [Update] Update version requirements by Dai-Wenxun in https://github.com/open-mmlab/mmaction2/pull/2383
* [Doc] update readme by cir7 in https://github.com/open-mmlab/mmaction2/pull/2382
* [Doc] Update Installation Related Doc by Dai-Wenxun in https://github.com/open-mmlab/mmaction2/pull/2379
* [Fix] Fix colab tutorial by cir7 in https://github.com/open-mmlab/mmaction2/pull/2384
* [Fix] update colab link in tutorial by cir7 in https://github.com/open-mmlab/mmaction2/pull/2391
* [Doc] Refine Docs by Dai-Wenxun in https://github.com/open-mmlab/mmaction2/pull/2404
* [CI] fix github ci (main) by cir7 in https://github.com/open-mmlab/mmaction2/pull/2421
* [Fix] fix a bug in multi-label classification by Dai-Wenxun in https://github.com/open-mmlab/mmaction2/pull/2425
* [Fix] Fix issue template by cir7 in https://github.com/open-mmlab/mmaction2/pull/2399
* [Doc] Update repo list by cir7 in https://github.com/open-mmlab/mmaction2/pull/2429
* [Fix] Fix a warning caused by `torch.div` by Dai-Wenxun in https://github.com/open-mmlab/mmaction2/pull/2449
* [Fix] Fix readthedoc error raised by incompatible OpenSSL version by cir7 in https://github.com/open-mmlab/mmaction2/pull/2455
* [Fix] Fix incompatibility of ImgAug and latest Numpy by cir7 in https://github.com/open-mmlab/mmaction2/pull/2451
* [Fix] Update branch in dockerfile by cir7 in https://github.com/open-mmlab/mmaction2/pull/2397
* [Doc] Update outdated config in readme by cir7 in https://github.com/open-mmlab/mmaction2/pull/2419
* [Fix] Fix tutorial by cir7 in https://github.com/open-mmlab/mmaction2/pull/2475
* [fix] Fix batch blending bug when use multi-label classification by cir7 in https://github.com/open-mmlab/mmaction2/pull/2466
* [Fix] Fix UniFormer README and metafile by cir7 in https://github.com/open-mmlab/mmaction2/pull/2450
* [Doc] update faq by cir7 in https://github.com/open-mmlab/mmaction2/pull/2476
* [Fix] Fix a bug of MViT when set with_cls_token to False by KeepLost in https://github.com/open-mmlab/mmaction2/pull/2480
* [Fix] Update outdated dependencies of mmcv for downloading fine-gym dataset by yhZhai in https://github.com/open-mmlab/mmaction2/pull/2495
* [Doc] add finetune doc by cir7 in https://github.com/open-mmlab/mmaction2/pull/2453
* [Doc] Update faq doc by cir7 in https://github.com/open-mmlab/mmaction2/pull/2482
* [Doc] Fix document link by cir7 in https://github.com/open-mmlab/mmaction2/pull/2457
* Merge dev-1.x to main by cir7 in https://github.com/open-mmlab/mmaction2/pull/2551

New Contributors
* WILLOSCAR made their first contribution in https://github.com/open-mmlab/mmaction2/pull/2381
* KeepLost made their first contribution in https://github.com/open-mmlab/mmaction2/pull/2480
* yhZhai made their first contribution in https://github.com/open-mmlab/mmaction2/pull/2495

**Full Changelog**: https://github.com/open-mmlab/mmaction2/compare/v1.0.0...v1.1.0

1.0.0

Highlights

We are excited to announce the release of MMAction2 1.0.0 as a part of the OpenMMLab 2.0 project! MMAction2 1.0.0 introduces an updated framework structure for the core package and a new section called `Projects`. This section showcases various engaging and versatile applications built upon the MMAction2 foundation.

In this latest release, we have significantly refactored the core package's code to make it clearer, more comprehensible, and disentangled. This has resulted in improved performance for several existing algorithms, ensuring that they now outperform their previous versions. Additionally, we have incorporated some cutting-edge algorithms, such as VideoSwin and VideoMAE, to further enhance the capabilities of MMAction2 and provide users with a more comprehensive and powerful toolkit. The new `Projects` section serves as an essential addition to MMAction2, created to foster innovation and collaboration among users. This section offers the following attractive features:

- **`Flexible code contribution`**: Unlike the core package, the `Projects` section allows for a more flexible environment for code contributions, enabling faster integration of state-of-the-art models and features.
- **`Showcase of diverse applications`**: Explore various projects built upon the MMAction2 foundation, such as deployment examples and combinations of video recognition with other tasks.
- **`Fostering creativity and collaboration`**: Encourages users to experiment, build upon the MMAction2 platform, and share their innovative applications and techniques, creating an active community of developers and researchers. Discover the possibilities within the "Projects" section and join the vibrant MMAction2 community in pushing the boundaries of video understanding applications!

Exciting Features

RGBPoseConv3D

RGBPoseConv3D is a framework that jointly uses 2D human skeletons and RGB appearance for human action recognition. It is a 3D CNN with two streams, with the architecture borrowed from SlowFast. In [RGBPoseConv3D](https://github.com/open-mmlab/mmaction2/tree/main/configs/skeleton/posec3d/rgbpose_conv3d#readme):

- The RGB stream corresponds to the `slow` stream in SlowFast, and the skeleton stream corresponds to the `fast` stream.
- The input resolution of RGB frames is `4x` larger than that of the pseudo heatmaps.
- Bilateral connections are used for early feature fusion between the two modalities.


<div align=center>
<img src="https://user-images.githubusercontent.com/34324155/209961351-6def0074-9b05-43fc-8210-a1cdaaed6536.png" width=50%/>
</div>

- Supported by Dai-Wenxun in https://github.com/open-mmlab/mmaction2/pull/2182
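At a glance, the two streams consume inputs of different shapes. The helper below is only a shape-level illustration of that design; the frame counts and the 17 keypoint channels are assumptions for the sketch, not values read from the MMAction2 config:

```python
# A shape-level sketch of RGBPoseConv3D's two-stream inputs
# (illustrative values; the real model is a 3D CNN configured in MMAction2).
def rgbpose_inputs(heatmap_size=56, rgb_ratio=4, rgb_frames=8, pose_frames=32):
    # The RGB ("slow") stream sees fewer frames at higher spatial
    # resolution; the pose ("fast") stream sees more frames of pseudo
    # heatmaps, one channel per keypoint (17 assumed here).
    rgb = (3, rgb_frames, heatmap_size * rgb_ratio, heatmap_size * rgb_ratio)
    pose = (17, pose_frames, heatmap_size, heatmap_size)
    return {'rgb': rgb, 'pose': pose}
```

With the defaults above, the RGB stream input is `(3, 8, 224, 224)` and the pose stream input is `(17, 32, 56, 56)`, matching the `4x` spatial ratio described above.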

Inferencer

In this release, we introduce MMAction2Inferencer, a versatile inference API that supports multiple input types. It lets users easily specify and customize action recognition models, streamlining video prediction with MMAction2.

Usage:

```shell
python demo/demo_inferencer.py ${INPUTS} [OPTIONS]
```


- The `INPUTS` can be a video path or rawframes folder. For more detailed information on `OPTIONS`, please refer to [Inferencer](https://github.com/open-mmlab/mmaction2/tree/main/demo#inferencer).


Example:

```shell
python demo/demo_inferencer.py zelda.mp4 --rec tsn --vid-out-dir zelda_out --label-file tools/data/kinetics/label_map_k400.txt
```


You can find the `zelda.mp4` [here](https://user-images.githubusercontent.com/58767402/232312124-4d237e57-7671-4d86-9e50-f588de007377.mp4). The output video is displayed below:

https://user-images.githubusercontent.com/58767402/232312742-f5eb2e8c-f015-459c-8a4d-99c331a65735.mp4

- Supported by cir7 in https://github.com/open-mmlab/mmaction2/pull/2164

List of Novel Features

MMAction2 V1.0 introduces support for new models and datasets in the field of video understanding, including [MSG3D [Project]](https://github.com/open-mmlab/mmaction2/tree/main/projects/msg3d#readme) (CVPR'2020), [CTRGCN [Project]](https://github.com/open-mmlab/mmaction2/tree/main/projects/ctrgcn#readme) (CVPR'2021), [STGCN++](https://github.com/open-mmlab/mmaction2/tree/main/configs/skeleton/stgcnpp#readme) (Arxiv'2022), [Video Swin Transformer](https://github.com/open-mmlab/mmaction2/tree/main/configs/recognition/swin#readme) (CVPR'2022), [VideoMAE](https://github.com/open-mmlab/mmaction2/tree/main/configs/recognition/videomae#readme) (NeurIPS'2022), [C2D](https://github.com/open-mmlab/mmaction2/tree/main/configs/recognition/c2d#readme) (CVPR'2018), [MViT V2](https://github.com/open-mmlab/mmaction2/tree/main/configs/recognition/mvit#readme) (CVPR'2022), [UniFormer V1](https://github.com/open-mmlab/mmaction2/tree/main/configs/recognition/uniformer#readme) (ICLR'2022), and [UniFormer V2](https://github.com/open-mmlab/mmaction2/tree/main/configs/recognition/uniformerv2#readme) (Arxiv'2022), as well as the spatiotemporal action detection dataset [AVA-Kinetics](https://github.com/open-mmlab/mmaction2/tree/main/tools/data/ava_kinetics#readme) (Arxiv'2022).


![image](https://user-images.githubusercontent.com/58767402/233560708-c12a0b8d-3ab7-43ba-8556-82a5c0107830.png)

- **[Enhanced Omni-Source](https://github.com/open-mmlab/mmaction2/tree/main/configs/recognition/omnisource#readme):** We enhanced the original [omni-source](https://arxiv.org/abs/2003.13042) technique by dynamically adjusting the 3D convolutional network architecture to utilize videos and images simultaneously for training. Taking `SlowOnlyR50 8x8` as an example, the Top-1 accuracy comparison of the three training methods illustrates that our omni-source training effectively employs the additional `ImageNet` dataset, significantly boosting performance on `Kinetics400`.

<div align=center>
<img src="https://user-images.githubusercontent.com/58767402/233539833-98c8d731-3b9f-4a5e-ad4c-dddf6e1afd68.png" width=50%/>
</div>
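One simple way to let a 3D network train on images alongside videos is to replicate each still image along the time axis so it looks like a static clip. The sketch below illustrates only that idea; it is not the actual MMAction2 omni-source implementation, and all names are hypothetical:

```python
def image_to_clip(image, clip_len):
    # Replicate a still image along the time axis so a 3D CNN can
    # treat it as a static clip (one simple mixing strategy; not the
    # exact MMAction2 operation).
    return [image] * clip_len

def make_omni_batch(video_clips, images, clip_len=8):
    """Combine real clips and image-derived static clips into one batch.

    video_clips: list of frame lists; images: list of still frames.
    """
    return list(video_clips) + [image_to_clip(img, clip_len) for img in images]
```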

- **Multi-Stream Skeleton Pipeline:** In light of MMAction2's prior support for only `joint` and `bone` modalities, we have extended support to `joint motion` and `bone motion` modalities in MMAction2 V1.0. Furthermore, we have conducted training and evaluation for these four modalities using **NTU60 2D and 3D** keypoint data on [STGCN](https://github.com/open-mmlab/mmaction2/tree/main/configs/skeleton/stgcn#readme), [2s-AGCN](https://github.com/open-mmlab/mmaction2/tree/main/configs/skeleton/2s-agcn#readme), and [STGCN++](https://github.com/open-mmlab/mmaction2/tree/main/configs/skeleton/stgcnpp#readme).

<div align=center>
<img src="https://user-images.githubusercontent.com/58767402/233548094-3ed8b98f-55a0-477c-801f-0b119648416b.png" width=50%/>
</div>
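The extra modalities are deterministic transforms of the joint coordinates: bones are child-minus-parent differences along skeleton edges, and motion streams are frame-to-frame differences. A minimal stdlib sketch, where the nested-list layout and the zero-padding of the last frame are illustrative choices rather than MMAction2's exact pipeline:

```python
def bone_stream(joints, edges):
    """joints: [T][V][C] coordinates; edges: (child, parent) index pairs.

    Bone features are child-minus-parent coordinate differences.
    """
    return [[[frame[c][k] - frame[p][k] for k in range(len(frame[c]))]
             for c, p in edges]
            for frame in joints]

def motion_stream(stream):
    """Temporal differences between consecutive frames.

    The last frame's motion is zero-padded so T stays constant.
    """
    out = [[[stream[t + 1][v][k] - stream[t][v][k]
             for k in range(len(stream[t][v]))]
            for v in range(len(stream[t]))]
           for t in range(len(stream) - 1)]
    out.append([[0.0] * len(stream[-1][0]) for _ in stream[-1]])
    return out
```

Applying `motion_stream` to the raw joints yields `joint motion`, and applying it to `bone_stream`'s output yields `bone motion`, giving the four modalities.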

- **Repeat Augment** was initially proposed as a data augmentation method for `ImageNet` training and has been employed in recent Video Transformer works. **Whenever a video is read during training, we use multiple (typically 2-4) random samples from the video for training.** This approach not only enhances the model's generalization capability but also reduces the IO pressure of video reading. We support Repeat Augment in MMAction2 V1.0 and utilize this technique in [MViT V2](https://github.com/open-mmlab/mmaction2/tree/main/configs/recognition/mvit#readme) training. The table below compares the Top-1 accuracy on `Kinetics400` before and after employing Repeat Augment:

<div align=center>
<img src="https://user-images.githubusercontent.com/58767402/233551634-4022b21a-3978-48c5-8276-18cdcaf64879.png" width=50%/>
</div>
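The idea can be sketched as sampling several clips per decoded video, so a single disk read yields multiple training samples. The function below is illustrative; the parameter names are assumptions, not MMAction2's API:

```python
import random

def repeat_augment(num_frames, clip_len, num_repeats=2, rng=random):
    """Sample `num_repeats` random clips of `clip_len` frame indices
    from one decoded video of `num_frames` frames.

    The video is decoded once and sampled several times, amortizing
    IO across samples -- the core idea behind Repeat Augment.
    """
    clips = []
    for _ in range(num_repeats):
        start = rng.randrange(num_frames - clip_len + 1)
        clips.append(list(range(start, start + clip_len)))
    return clips
```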

Bug Fixes

- [Fix] Fix flip config of TSM for sth2sth v1/v2 dataset by cir7 in https://github.com/open-mmlab/mmaction2/pull/2247
- [Fix] Fix circle ci by cir7 in https://github.com/open-mmlab/mmaction2/pull/2336 and https://github.com/open-mmlab/mmaction2/pull/2334
- [Fix] Fix accepting an unexpected argument local-rank in PyTorch 2.0 by cir7 in https://github.com/open-mmlab/mmaction2/pull/2320
- [Fix] Fix TSM config link by zyx-cv in https://github.com/open-mmlab/mmaction2/pull/2315
- [Fix] Fix numpy version requirement in CI by hukkai in https://github.com/open-mmlab/mmaction2/pull/2284
- [Fix] Fix NTU pose extraction script by cir7 in https://github.com/open-mmlab/mmaction2/pull/2246
- [Fix] Fix TSM-MobileNet V2 by cir7 in https://github.com/open-mmlab/mmaction2/pull/2332
- [Fix] Fix command bugs in localization tasks' README by hukkai in https://github.com/open-mmlab/mmaction2/pull/2244
- [Fix] Fix duplicate name in DecordInit and SampleAVAFrame by cir7 in https://github.com/open-mmlab/mmaction2/pull/2251
- [Fix] Fix channel order when showing video by cir7 in https://github.com/open-mmlab/mmaction2/pull/2308
- [Fix] Specify map_location to cpu when using _load_checkpoint by Zheng-LinXiao in https://github.com/open-mmlab/mmaction2/pull/2252

New Contributors
* Andy1621 made their first contribution in https://github.com/open-mmlab/mmaction2/pull/2153
* zoe08 made their first contribution in https://github.com/open-mmlab/mmaction2/pull/2188
* vansin made their first contribution in https://github.com/open-mmlab/mmaction2/pull/2228
* Zheng-LinXiao made their first contribution in https://github.com/open-mmlab/mmaction2/pull/2252

**Full Changelog**: https://github.com/open-mmlab/mmaction2/compare/v0.24.0...v1.0.0

1.0.0rc3

**Highlights**

- Support action recognition models UniFormer V1 (ICLR'2022) and UniFormer V2 (Arxiv'2022)
- Support MViT V2 (CVPR'2022) training and MaskFeat (CVPR'2022) fine-tuning

**New Features**

- Support UniFormer V1/V2 ([2153](https://github.com/open-mmlab/mmaction2/pull/2153))
- Support MViT training and MaskFeat fine-tuning ([2186](https://github.com/open-mmlab/mmaction2/pull/2186))
- Support a unified inference interface: Inferencer ([2164](https://github.com/open-mmlab/mmaction2/pull/2164))

**Improvements**

- Support load data list from multi-backends ([2176](https://github.com/open-mmlab/mmaction2/pull/2176))

**Bug Fixes**

- Upgrade isort to fix CI ([2198](https://github.com/open-mmlab/mmaction2/pull/2198))
- Fix bug in skeleton demo ([2214](https://github.com/open-mmlab/mmaction2/pull/2214))

**Documentation**

- Add Chinese documentation for config.md ([2188](https://github.com/open-mmlab/mmaction2/pull/2188))
- Add readme for Omnisource ([2205](https://github.com/open-mmlab/mmaction2/pull/2205))

1.0.0rc2

**Highlights**

- Support action recognition models VideoMAE (NeurIPS'2022), MViT V2 (CVPR'2022), and C2D, as well as the skeleton-based action recognition model STGCN++
- Support Omni-Source training on ImageNet and Kinetics datasets
- Support exporting spatial-temporal detection models to ONNX

**New Features**

- Support VideoMAE ([1942](https://github.com/open-mmlab/mmaction2/pull/1942))
- Support MViT V2 ([2007](https://github.com/open-mmlab/mmaction2/pull/2007))
- Support C2D ([2022](https://github.com/open-mmlab/mmaction2/pull/2022))
- Support AVA-Kinetics dataset ([2080](https://github.com/open-mmlab/mmaction2/pull/2080))
- Support STGCN++ ([2156](https://github.com/open-mmlab/mmaction2/pull/2156))
- Support exporting spatial-temporal detection models to ONNX ([2148](https://github.com/open-mmlab/mmaction2/pull/2148))
- Support Omni-Source training on ImageNet and Kinetics datasets ([2143](https://github.com/open-mmlab/mmaction2/pull/2143))

**Improvements**

- Support repeat batch data augmentation ([2170](https://github.com/open-mmlab/mmaction2/pull/2170))
- Support calculating FLOPs tool powered by fvcore ([1997](https://github.com/open-mmlab/mmaction2/pull/1997))
- Support Spatial-temporal detection demo ([2019](https://github.com/open-mmlab/mmaction2/pull/2019))
- Add SyncBufferHook and add randomness config in train.py ([2044](https://github.com/open-mmlab/mmaction2/pull/2044))
- Refactor gradcam ([2049](https://github.com/open-mmlab/mmaction2/pull/2049))
- Support init_cfg in Swin and ViTMAE ([2055](https://github.com/open-mmlab/mmaction2/pull/2055))
- Refactor STGCN and related pipelines ([2087](https://github.com/open-mmlab/mmaction2/pull/2087))
- Refactor visualization tools ([2092](https://github.com/open-mmlab/mmaction2/pull/2092))
- Update `SampleFrames` transform and improve most models' performance ([1942](https://github.com/open-mmlab/mmaction2/pull/1942))
- Support real-time webcam demo ([2152](https://github.com/open-mmlab/mmaction2/pull/2152))
- Refactor and enhance 2s-AGCN ([2130](https://github.com/open-mmlab/mmaction2/pull/2130))
- Support adjusting fps in `SampleFrame` ([2157](https://github.com/open-mmlab/mmaction2/pull/2157))

**Bug Fixes**

- Fix CI upstream library dependency ([2000](https://github.com/open-mmlab/mmaction2/pull/2000))
- Fix SlowOnly readme typos and results ([2006](https://github.com/open-mmlab/mmaction2/pull/2006))
- Fix VideoSwin readme ([2010](https://github.com/open-mmlab/mmaction2/pull/2010))
- Fix tools and mim error ([2028](https://github.com/open-mmlab/mmaction2/pull/2028))
- Fix Imgaug wrapper ([2024](https://github.com/open-mmlab/mmaction2/pull/2024))
- Remove useless scripts ([2032](https://github.com/open-mmlab/mmaction2/pull/2032))
- Fix multi-view inference ([2045](https://github.com/open-mmlab/mmaction2/pull/2045))
- Update mmcv maximum version to 1.8.0 ([2047](https://github.com/open-mmlab/mmaction2/pull/2047))
- Fix torchserver dependency ([2053](https://github.com/open-mmlab/mmaction2/pull/2053))
- Fix `gen_ntu_rgbd_raw` script ([2076](https://github.com/open-mmlab/mmaction2/pull/2076))
- Update AVA-Kinetics experiment configs and results ([2099](https://github.com/open-mmlab/mmaction2/pull/2099))
- Add `joint.pkl` and `bone.pkl` used in multi-stream fusion tool ([2106](https://github.com/open-mmlab/mmaction2/pull/2106))
- Fix lint CI config ([2110](https://github.com/open-mmlab/mmaction2/pull/2110))
- Update testing accuracy for modified `SampleFrames` ([2117](https://github.com/open-mmlab/mmaction2/pull/2117)), ([#2121](https://github.com/open-mmlab/mmaction2/pull/2121)), ([#2122](https://github.com/open-mmlab/mmaction2/pull/2122)), ([#2124](https://github.com/open-mmlab/mmaction2/pull/2124)), ([#2125](https://github.com/open-mmlab/mmaction2/pull/2125)), ([#2126](https://github.com/open-mmlab/mmaction2/pull/2126)), ([#2129](https://github.com/open-mmlab/mmaction2/pull/2129)), ([#2128](https://github.com/open-mmlab/mmaction2/pull/2128))
- Fix timm related bug ([1976](https://github.com/open-mmlab/mmaction2/pull/1976))
- Fix `check_videos.py` script ([2134](https://github.com/open-mmlab/mmaction2/pull/2134))
- Update CI maximum torch version to 1.13.0 ([2118](https://github.com/open-mmlab/mmaction2/pull/2118))

**Documentation**

- Add MMYOLO description in README ([2011](https://github.com/open-mmlab/mmaction2/pull/2011))
- Add v1.x introduction in README ([2023](https://github.com/open-mmlab/mmaction2/pull/2023))
- Fix link in README ([2035](https://github.com/open-mmlab/mmaction2/pull/2035))
- Refine some docs ([2038](https://github.com/open-mmlab/mmaction2/pull/2038)), ([#2040](https://github.com/open-mmlab/mmaction2/pull/2040)), ([#2058](https://github.com/open-mmlab/mmaction2/pull/2058))
- Update TSN/TSM Readme ([2082](https://github.com/open-mmlab/mmaction2/pull/2082))
- Add chinese document ([2083](https://github.com/open-mmlab/mmaction2/pull/2083))
- Adjust document structure ([2088](https://github.com/open-mmlab/mmaction2/pull/2088))
- Fix Sth-Sth and Jester dataset links ([2103](https://github.com/open-mmlab/mmaction2/pull/2103))
- Fix doc link ([2131](https://github.com/open-mmlab/mmaction2/pull/2131))

1.0.0rc1

**Highlights**

- Support Video Swin Transformer

**New Features**

- Support Video Swin Transformer ([1939](https://github.com/open-mmlab/mmaction2/pull/1939))

**Improvements**

- Add colab tutorial for 1.x ([1956](https://github.com/open-mmlab/mmaction2/pull/1956))
- Support skeleton-based action recognition demo ([1920](https://github.com/open-mmlab/mmaction2/pull/1920))

**Bug Fixes**

- Fix link in doc ([1986](https://github.com/open-mmlab/mmaction2/pull/1986), [#1967](https://github.com/open-mmlab/mmaction2/pull/1967), [#1951](https://github.com/open-mmlab/mmaction2/pull/1951), [#1926](https://github.com/open-mmlab/mmaction2/pull/1926), [#1944](https://github.com/open-mmlab/mmaction2/pull/1944), [#1927](https://github.com/open-mmlab/mmaction2/pull/1927), [#1925](https://github.com/open-mmlab/mmaction2/pull/1925))
- Fix CI ([1987](https://github.com/open-mmlab/mmaction2/pull/1987), [#1930](https://github.com/open-mmlab/mmaction2/pull/1930), [#1923](https://github.com/open-mmlab/mmaction2/pull/1923))
- Fix pre-commit hook config ([1971](https://github.com/open-mmlab/mmaction2/pull/1971))
- Fix TIN config ([1912](https://github.com/open-mmlab/mmaction2/pull/1912))
- Fix UT for BMN and BSN ([1966](https://github.com/open-mmlab/mmaction2/pull/1966))
- Fix UT for Recognizer2D ([1937](https://github.com/open-mmlab/mmaction2/pull/1937))
- Fix BSN and BMN configs for localization ([1913](https://github.com/open-mmlab/mmaction2/pull/1913))
- Modify ST-GCN configs ([1914](https://github.com/open-mmlab/mmaction2/pull/1914))
- Fix typo in migration doc ([1931](https://github.com/open-mmlab/mmaction2/pull/1931))
- Remove ONNX-related tools ([1928](https://github.com/open-mmlab/mmaction2/pull/1928))
- Update TANet readme ([1916](https://github.com/open-mmlab/mmaction2/pull/1916), [#1890](https://github.com/open-mmlab/mmaction2/pull/1890))
- Update 2S-AGCN readme ([1915](https://github.com/open-mmlab/mmaction2/pull/1915))
- Fix TSN configs ([1905](https://github.com/open-mmlab/mmaction2/pull/1905))
- Fix configs for detection ([1903](https://github.com/open-mmlab/mmaction2/pull/1903))
- Fix typo in TIN config ([1904](https://github.com/open-mmlab/mmaction2/pull/1904))
- Fix PoseC3D readme ([1899](https://github.com/open-mmlab/mmaction2/pull/1899))
- Fix ST-GCN configs ([1891](https://github.com/open-mmlab/mmaction2/pull/1891))
- Fix audio recognition readme ([1898](https://github.com/open-mmlab/mmaction2/pull/1898))
- Fix TSM readme ([1887](https://github.com/open-mmlab/mmaction2/pull/1887))
- Fix SlowOnly readme ([1889](https://github.com/open-mmlab/mmaction2/pull/1889))
- Fix TRN readme ([1888](https://github.com/open-mmlab/mmaction2/pull/1888))
- Fix typo in get_started doc ([1895](https://github.com/open-mmlab/mmaction2/pull/1895))
