ModelScope

Latest version: v1.14.0


1.13.2

Highlight features

1. Dataset refactoring for compatibility with the Hugging Face datasets structure. (Breaking change; see the loading sketch below.)
2. Unified dataset storage and management with Git. (Breaking change)
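
As a quick orientation to the refactored interface, the sketch below loads a dataset through `MsDataset`, which now mirrors the Hugging Face `datasets` loading convention; the dataset name, subset, and split are illustrative, and the exact keyword arguments should be treated as assumptions rather than confirmed API.

```python
# Minimal sketch of the refactored dataset loading path (illustrative names).
from modelscope.msdatasets import MsDataset

# 'clue' / 'afqmc' are illustrative dataset and subset names, not tied to this release.
ds = MsDataset.load('clue', subset_name='afqmc', split='train')

# After the refactor the returned object is expected to behave like an HF dataset:
print(next(iter(ds)))  # first record as a dict
```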

What's Changed
* upload marigold monocular depth estimation core files by Ranqing in https://github.com/modelscope/modelscope/pull/703
* chore: Formal LICENSE content by tisonkun in https://github.com/modelscope/modelscope/pull/799
* add ViViT-demo by ccyhxg in https://github.com/modelscope/modelscope/pull/796
* move doc to classroom by tastelikefeet in https://github.com/modelscope/modelscope/pull/802
* Fix error "modelscope attributeerror: 'dict' object has no attribute … by wertycn in https://github.com/modelscope/modelscope/pull/800
* fix downloading files with special names such as 'Image+Title.png' by liuyhwangyh in https://github.com/modelscope/modelscope/pull/805
* Dataset refactor by wangxingjun778 in https://github.com/modelscope/modelscope/pull/807

New Contributors
* Ranqing made their first contribution in https://github.com/modelscope/modelscope/pull/703
* tisonkun made their first contribution in https://github.com/modelscope/modelscope/pull/799
* wertycn made their first contribution in https://github.com/modelscope/modelscope/pull/800

**Full Changelog**: https://github.com/modelscope/modelscope/compare/v1.13.1...v1.13.2

1.13.1

New Models Recommended
| No. | Model Name & Link |
| --- | --- |
| 1 | [GeoMVSNet: geometry-aware multi-view depth estimation](https://www.modelscope.cn/models/Damo_XR_Lab/cv_geomvsnet_multi-view-depth-estimation_general/summary) |
| 2 | [Res2Net speaker verification, Chinese, 3D-Speaker, 16k](https://www.modelscope.cn/models/iic/speech_res2net_sv_zh-cn_3dspeaker_16k/summary) |
| 3 | [ResNet34 speaker verification, Chinese, 3D-Speaker, 16k](https://www.modelscope.cn/models/iic/speech_resnet34_sv_zh-cn_3dspeaker_16k/summary) |
| 4 | [Self-supervised depth completion](https://www.modelscope.cn/models/Damo_XR_Lab/Self_Supervised_Depth_Completion/summary) |


Highlight features

1. Support importing AWQConfig from modelscope
2. Support stream_generate for LLMPipeline (a usage sketch follows below)
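
A hedged sketch of both highlights follows. The top-level `AWQConfig` import is taken directly from the wording above; the `pipeline` task string, the model id, and the `stream_generate` signature are assumptions about how `LLMPipeline` is typically obtained, not a confirmed interface.

```python
# Sketch only: names follow the release notes; call signatures are assumptions.
from modelscope import AWQConfig            # importable per highlight 1
from modelscope.pipelines import pipeline

# LLMPipeline is normally obtained through the generic pipeline() factory;
# the 'chat' task string and the model id are illustrative.
pipe = pipeline(task='chat', model='qwen/Qwen-7B-Chat')

# Highlight 2: incremental generation; the argument name is an assumption.
for chunk in pipe.stream_generate('Briefly introduce ModelScope.'):
    print(chunk, end='', flush=True)
```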


What's Changed
* UViT ImageNet by ccyhxg in https://github.com/modelscope/modelscope/pull/763
* Add DiT_ImageNet_Demo by ccyhxg in https://github.com/modelscope/modelscope/pull/767
* Add a QA example based on llamaindex (with qwen1.5) by yhxx511 in https://github.com/modelscope/modelscope/pull/759
* Fix words by co63oc in https://github.com/modelscope/modelscope/pull/747
* Fix word pubicly by co63oc in https://github.com/modelscope/modelscope/pull/748
* Fix word pipleline by co63oc in https://github.com/modelscope/modelscope/pull/749
* Fix word orignal by co63oc in https://github.com/modelscope/modelscope/pull/750
* add awqconfig by Jintao-Huang in https://github.com/modelscope/modelscope/pull/761
* add SiT_ImageNet_Demo by ccyhxg in https://github.com/modelscope/modelscope/pull/770
* Fix words StableDiffuisonExporter -> StableDiffusionExporter by co63oc in https://github.com/modelscope/modelscope/pull/746
* add self supervised depth completion. by heyyxd in https://github.com/modelscope/modelscope/pull/711
* To solve the "ImportError: always import a name 'LlamaTokenizer' from… by Siu-Ming in https://github.com/modelscope/modelscope/pull/745
* Fix code of conduct by tastelikefeet in https://github.com/modelscope/modelscope/pull/774
* add res2net resnet models by yfchenmodelscope in https://github.com/modelscope/modelscope/pull/772
* upgrade image build by liuyhwangyh in https://github.com/modelscope/modelscope/pull/773
* change output video format from mp4v to mp4 h264 by qslia in https://github.com/modelscope/modelscope/pull/757
* fix text_to_video_synthesis_model device by slin000111 in https://github.com/modelscope/modelscope/pull/751
* add qwen1.5_doc_search_QA_based_on_langchain.ipynb by ccyhxg in https://github.com/modelscope/modelscope/pull/769
* Add files via upload by ccyhxg in https://github.com/modelscope/modelscope/pull/788
* pre-commit passed by tastelikefeet in https://github.com/modelscope/modelscope/pull/789
* fix build issue by liuyhwangyh in https://github.com/modelscope/modelscope/pull/786
* fix SeqGPTPipeline input force cuda issue by RainJayTsai in https://github.com/modelscope/modelscope/pull/738
* fix image_portrait_enhancement_pipeline rgb channel issue by sun11 in https://github.com/modelscope/modelscope/pull/740
* remove download interval check by liuyhwangyh in https://github.com/modelscope/modelscope/pull/771
* Fix initilize initialize, etc by co63oc in https://github.com/modelscope/modelscope/pull/781
* fix pre-commit by tastelikefeet in https://github.com/modelscope/modelscope/pull/794
* Features/cv_geomvsnet_multi_view_depth_estimation_general by shengzhesz in https://github.com/modelscope/modelscope/pull/790
* Support stream_generate for LLMPipeline by Firmament-cyou in https://github.com/modelscope/modelscope/pull/768

New Contributors
* heyyxd made their first contribution in https://github.com/modelscope/modelscope/pull/711
* Siu-Ming made their first contribution in https://github.com/modelscope/modelscope/pull/745
* yfchenmodelscope made their first contribution in https://github.com/modelscope/modelscope/pull/772
* qslia made their first contribution in https://github.com/modelscope/modelscope/pull/757
* RainJayTsai made their first contribution in https://github.com/modelscope/modelscope/pull/738
* sun11 made their first contribution in https://github.com/modelscope/modelscope/pull/740
* shengzhesz made their first contribution in https://github.com/modelscope/modelscope/pull/790

**Full Changelog**: https://github.com/modelscope/modelscope/compare/v1.12.0...v1.13.1

1.12.0

New Models Recommended
| No. | Model Name & Link |
| --- | --- |
| 1 | [Qwen1.5 model series](https://www.modelscope.cn/models/qwen/Qwen1.5-4B-Chat/summary) |
| 2 | [RIFE video frame interpolation](https://www.modelscope.cn/models/Damo_XR_Lab/cv_rife_video-frame-interpolation/summary) |
| 3 | [VFI-RAFT video frame interpolation](https://www.modelscope.cn/models/iic/cv_raft_video-frame-interpolation/summary) |
| 4 | [Lightweight fast image feature matching](https://www.modelscope.cn/models/Damo_XR_Lab/cv_transformer_image-matching_fast/summary) |




Highlight features

- Add rife-video-frame-interpolation pipeline and model (685)
- Image normal estimation (683)
- Add fast image matching model based on LightGlue (694)
- Feature/LoFTR_image_local_feature_matching (687)
- Support Qwen1.5 models (download sketch below)
- Upgrade to FunASR 1.0
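
For the Qwen1.5 support above, a minimal sketch using ModelScope's `snapshot_download` helper; the model id comes from the table above, and anything beyond the positional id is optional.

```python
# Download a Qwen1.5 checkpoint from the ModelScope hub (id from the table above).
from modelscope import snapshot_download

model_dir = snapshot_download('qwen/Qwen1.5-4B-Chat')
print(model_dir)  # local path that downstream loaders (e.g. transformers) can consume
```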

BugFix

- Fix AnyDoor pre-commit flake8 and isort errors (707)
- Fix some CI case issues

1.11.0

New Models Recommended
| No. | Model Name & Link |
| --- | --- |
| 0 | [Emu2-Gen](https://modelscope.cn/models/AI-ModelScope/Emu2-Gen/summary) |
| 1 | [qanything_models](https://modelscope.cn/models/netease-youdao/qanything_models/summary) |
| 2 | [Emu2-Chat](https://modelscope.cn/models/AI-ModelScope/Emu2-Chat/summary) |
| 3 | [Emu2](https://modelscope.cn/models/AI-ModelScope/Emu2/summary) |
| 4 | [TinyLlama-1.1B-Chat-v1.0](https://modelscope.cn/models/AI-ModelScope/TinyLlama-1.1B-Chat-v1.0/summary) |
| 5 | [notux-8x7b-v1](https://modelscope.cn/models/AI-ModelScope/notux-8x7b-v1/summary) |
| 6 | [Machine_Mindset_en_ENFJ](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_en_ENFJ/summary) |
| 7 | [Machine_Mindset_en_ENFP](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_en_ENFP/summary) |
| 8 | [Machine_Mindset_en_ENTJ](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_en_ENTJ/summary) |
| 9 | [Machine_Mindset_en_ENTP](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_en_ENTP/summary) |
| 10 | [Machine_Mindset_en_ESFJ](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_en_ESFJ/summary) |
| 11 | [Machine_Mindset_en_ESFP](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_en_ESFP/summary) |
| 12 | [Machine_Mindset_en_ESTJ](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_en_ESTJ/summary) |
| 13 | [Machine_Mindset_en_ESTP](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_en_ESTP/summary) |
| 14 | [Machine_Mindset_en_INFJ](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_en_INFJ/summary) |
| 15 | [Machine_Mindset_en_INFP](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_en_INFP/summary) |
| 16 | [Machine_Mindset_en_INTJ](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_en_INTJ/summary) |
| 17 | [Machine_Mindset_en_INTP](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_en_INTP/summary) |
| 18 | [Machine_Mindset_en_ISFJ](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_en_ISFJ/summary) |
| 19 | [Machine_Mindset_en_ISFP](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_en_ISFP/summary) |
| 20 | [Machine_Mindset_en_ISTJ](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_en_ISTJ/summary) |
| 21 | [Machine_Mindset_en_ISTP](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_en_ISTP/summary) |
| 22 | [Machine_Mindset_zh_ENFJ](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_zh_ENFJ/summary) |
| 23 | [Machine_Mindset_zh_ENFP](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_zh_ENFP/summary) |
| 24 | [Machine_Mindset_zh_ENTJ](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_zh_ENTJ/summary) |
| 25 | [Machine_Mindset_zh_ENTP](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_zh_ENTP/summary) |
| 26 | [Machine_Mindset_zh_ESFJ](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_zh_ESFJ/summary) |
| 27 | [Machine_Mindset_zh_ESFP](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_zh_ESFP/summary) |
| 28 | [Machine_Mindset_zh_ESTJ](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_zh_ESTJ/summary) |
| 29 | [Machine_Mindset_zh_ESTP](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_zh_ESTP/summary) |
| 30 | [Machine_Mindset_zh_INFJ](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_zh_INFJ/summary) |
| 31 | [Machine_Mindset_zh_INFP](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_zh_INFP/summary) |
| 32 | [Machine_Mindset_zh_INTJ](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_zh_INTJ/summary) |
| 33 | [Machine_Mindset_zh_ISFJ](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_zh_ISFJ/summary) |
| 34 | [Machine_Mindset_zh_ISFP](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_zh_ISFP/summary) |
| 35 | [Machine_Mindset_zh_ISTJ](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_zh_ISTJ/summary) |
| 36 | [Machine_Mindset_zh_ISTP](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_zh_ISTP/summary) |
| 37 | [WavMark](https://modelscope.cn/models/AI-ModelScope/WavMark/summary) |
| 38 | [speech_eres2net_large_200k_sv_zh-cn_16k-common](https://modelscope.cn/models/damo/speech_eres2net_large_200k_sv_zh-cn_16k-common/summary) |
| 39 | [emotion2vec_base](https://modelscope.cn/models/damo/emotion2vec_base/summary) |
| 40 | [QAnything](https://modelscope.cn/models/netease-youdao/QAnything/summary) |
| 41 | [speech_fsmn_vad_zh-cn-8k-common-onnx](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-8k-common-onnx/summary) |
| 42 | [speech_paraformer_asr_nat-zh-cn-8k-common-vocab8358-tensorflow1-onnx](https://modelscope.cn/models/damo/speech_paraformer_asr_nat-zh-cn-8k-common-vocab8358-tensorflow1-onnx/summary) |
| 43 | [dolphin-2.6-mistral-7b](https://modelscope.cn/models/AI-ModelScope/dolphin-2.6-mistral-7b/summary) |
| 44 | [scepter_scedit](https://modelscope.cn/models/damo/scepter_scedit/summary) |
| 45 | [deepseek-moe-16b-base](https://modelscope.cn/models/deepseek-ai/deepseek-moe-16b-base/summary) |
| 46 | [deepseek-moe-16b-chat](https://modelscope.cn/models/deepseek-ai/deepseek-moe-16b-chat/summary) |
| 47 | [phi-2](https://modelscope.cn/models/AI-ModelScope/phi-2/summary) |
| 48 | [llava-internlm-7b](https://modelscope.cn/models/xtuner/llava-internlm-7b/summary) |
| 49 | [llava-v1.5-7b-xtuner](https://modelscope.cn/models/xtuner/llava-v1.5-7b-xtuner/summary) |
| 50 | [llava-v1.5-7b-xtuner-pretrain](https://modelscope.cn/models/xtuner/llava-v1.5-7b-xtuner-pretrain/summary) |
| 51 | [llava-internlm-7b-pretrain](https://modelscope.cn/models/xtuner/llava-internlm-7b-pretrain/summary) |
| 52 | [speech_ngram_lm_zh-cn-ai-wesp-fst-token8358](https://modelscope.cn/models/damo/speech_ngram_lm_zh-cn-ai-wesp-fst-token8358/summary) |
| 53 | [realisticVisionV51_v51VAE](https://modelscope.cn/models/AI-ModelScope/realisticVisionV51_v51VAE/summary) |
| 54 | [THUDM_chatglm-6b](https://modelscope.cn/models/MindNLP/THUDM_chatglm-6b/summary) |
| 55 | [AnyDoor](https://modelscope.cn/models/damo/AnyDoor/summary) |
| 56 | [cv_gaussian-splatting-recon_damo](https://modelscope.cn/models/Damo_XR_Lab/cv_gaussian-splatting-recon_damo/summary) |
| 57 | [AnimateDiff_ms](https://modelscope.cn/models/AI-ModelScope/AnimateDiff_ms/summary) |
| 58 | [cv_omnidata_image-normal-estimation_normal](https://modelscope.cn/models/Damo_XR_Lab/cv_omnidata_image-normal-estimation_normal/summary) |
| 59 | [cv_rife_video-frame-interpolation](https://modelscope.cn/models/Damo_XR_Lab/cv_rife_video-frame-interpolation/summary) |
| 60 | [Qwen-7B-Chat-GGUF](https://modelscope.cn/models/Xorbits/Qwen-7B-Chat-GGUF/summary) |
| 61 | [stable-zero123](https://modelscope.cn/models/AI-ModelScope/stable-zero123/summary) |
| 62 | [cv_adabins_image-depth-prediction_indoor](https://modelscope.cn/models/Damo_XR_Lab/cv_adabins_image-depth-prediction_indoor/summary) |
| 63 | [Qwen-14B-Chat-GGUF](https://modelscope.cn/models/Xorbits/Qwen-14B-Chat-GGUF/summary) |
| 64 | [wav2vec2-large-xlsr-53-english](https://modelscope.cn/models/AI-ModelScope/wav2vec2-large-xlsr-53-english/summary) |
| 65 | [speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch](https://modelscope.cn/models/damo/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary) |
| 66 | [cv_resnet-transformer_local-feature-matching_outdoor-data](https://modelscope.cn/models/Damo_XR_Lab/cv_resnet-transformer_local-feature-matching_outdoor-data/summary) |
| 67 | [naturalspeech2_libritts](https://modelscope.cn/models/AI-ModelScope/naturalspeech2_libritts/summary) |
| 68 | [text_to_audio](https://modelscope.cn/models/AI-ModelScope/text_to_audio/summary) |
| 69 | [valle_libritts](https://modelscope.cn/models/AI-ModelScope/valle_libritts/summary) |
| 70 | [vits_ljspeech](https://modelscope.cn/models/AI-ModelScope/vits_ljspeech/summary) |
| 71 | [hifigan_speech_bigdata](https://modelscope.cn/models/AI-ModelScope/hifigan_speech_bigdata/summary) |
| 72 | [BigVGAN_singing_bigdata](https://modelscope.cn/models/AI-ModelScope/BigVGAN_singing_bigdata/summary) |
| 73 | [singing_voice_conversion](https://modelscope.cn/models/AI-ModelScope/singing_voice_conversion/summary) |
| 74 | [Machine_Mindset_zh_INTP](https://modelscope.cn/models/FarReelAILab/Machine_Mindset_zh_INTP/summary) |
| 75 | [bel_canto](https://modelscope.cn/models/ccmusic/bel_canto/summary) |
| 76 | [music_genre](https://modelscope.cn/models/ccmusic/music_genre/summary) |
| 77 | [chest_falsetto](https://modelscope.cn/models/ccmusic/chest_falsetto/summary) |
| 78 | [llava-v1.5-13b-xtuner](https://modelscope.cn/models/xtuner/llava-v1.5-13b-xtuner/summary) |
| 79 | [llava-v1.5-13b-xtuner-pretrain](https://modelscope.cn/models/xtuner/llava-v1.5-13b-xtuner-pretrain/summary) |
| 80 | [Mistral-7B-Instruct-v0.2-GGUF](https://modelscope.cn/models/Xorbits/Mistral-7B-Instruct-v0.2-GGUF/summary) |
| 81 | [cv_transformer_image-matching_fast](https://modelscope.cn/models/Damo_XR_Lab/cv_transformer_image-matching_fast/summary) |
| 82 | [cv_efficientsam-s_image-instance-segmentation_sa1b](https://modelscope.cn/models/damo/cv_efficientsam-s_image-instance-segmentation_sa1b/summary) |
| 83 | [Ziya-Visual-Lyrics-14B](https://modelscope.cn/models/Fengshenbang/Ziya-Visual-Lyrics-14B/summary) |
| 84 | [dpo-sdxl-text2image-v1](https://modelscope.cn/models/AI-ModelScope/dpo-sdxl-text2image-v1/summary) |
| 85 | [mistral-ft-optimized-1218](https://modelscope.cn/models/AI-ModelScope/mistral-ft-optimized-1218/summary) |
| 86 | [IP-Adapter-FaceID](https://modelscope.cn/models/AI-ModelScope/IP-Adapter-FaceID/summary) |
| 87 | [mask_refine](https://modelscope.cn/models/damo/mask_refine/summary) |
| 88 | [MindChat-Qwen-1_8B](https://modelscope.cn/models/X-D-Lab/MindChat-Qwen-1_8B/summary) |
| 89 | [AnyDoor_models](https://modelscope.cn/models/damo/AnyDoor_models/summary) |
| 90 | [dolphin-2.5-mixtral-8x7b](https://modelscope.cn/models/AI-ModelScope/dolphin-2.5-mixtral-8x7b/summary) |
| 91 | [SVHN-Recognition](https://modelscope.cn/models/MuGeminorum/SVHN-Recognition/summary) |
| 92 | [cv_anytext_text_generation_editing](https://modelscope.cn/models/damo/cv_anytext_text_generation_editing/summary) |
| 93 | [SOLAR-10.7B-Instruct-v1.0](https://modelscope.cn/models/AI-ModelScope/SOLAR-10.7B-Instruct-v1.0/summary) |
| 94 | [insecta](https://modelscope.cn/models/MuGeminorum/insecta/summary) |
| 95 | [pianos](https://modelscope.cn/models/ccmusic/pianos/summary) |
| 96 | [kagentlms_baichuan2_13b_mat](https://modelscope.cn/models/KwaiKEG/kagentlms_baichuan2_13b_mat/summary) |
| 97 | [kagentlms_qwen_7b_mat](https://modelscope.cn/models/KwaiKEG/kagentlms_qwen_7b_mat/summary) |
| 98 | [cv_raft_dense-optical-flow_things](https://modelscope.cn/models/Damo_XR_Lab/cv_raft_dense-optical-flow_things/summary) |
| 99 | [HEp2](https://modelscope.cn/models/MuGeminorum/HEp2/summary) |
| 100 | [cv_marigold_monocular-depth-estimation](https://modelscope.cn/models/Damo_XR_Lab/cv_marigold_monocular-depth-estimation/summary) |
| 101 | [OpenDalleV1.1](https://modelscope.cn/models/AI-ModelScope/OpenDalleV1.1/summary) |
| 102 | [speech_sambert-hifigan_nsf_tts_emily_en-gb_24k](https://modelscope.cn/models/damo/speech_sambert-hifigan_nsf_tts_emily_en-gb_24k/summary) |
| 103 | [speech_sambert-hifigan_nsf_tts_eric_en-gb_24k](https://modelscope.cn/models/damo/speech_sambert-hifigan_nsf_tts_eric_en-gb_24k/summary) |
| 104 | [knowlm-13b-zhixi](https://modelscope.cn/models/ZJUNLP/knowlm-13b-zhixi/summary) |
| 105 | [RankingGPT-qwen-7b](https://modelscope.cn/models/damo/RankingGPT-qwen-7b/summary) |
| 106 | [hoyoGPT](https://modelscope.cn/models/MuGeminorum/hoyoGPT/summary) |
| 107 | [RankingGPT-baichuan2-7b](https://modelscope.cn/models/damo/RankingGPT-baichuan2-7b/summary) |
| 108 | [RankingGPT-llama2-7b](https://modelscope.cn/models/damo/RankingGPT-llama2-7b/summary) |
| 109 | [RankingGPT-bloom-7b](https://modelscope.cn/models/damo/RankingGPT-bloom-7b/summary) |
| 110 | [RankingGPT-bloom-3b](https://modelscope.cn/models/damo/RankingGPT-bloom-3b/summary) |
| 111 | [RankingGPT-bloom-1b1](https://modelscope.cn/models/damo/RankingGPT-bloom-1b1/summary) |
| 112 | [RankingGPT-bloom-560m](https://modelscope.cn/models/damo/RankingGPT-bloom-560m/summary) |
| 113 | [knowlm-13b-base-v1.0](https://modelscope.cn/models/ZJUNLP/knowlm-13b-base-v1.0/summary) |
| 114 | [knowlm-13b-ie](https://modelscope.cn/models/ZJUNLP/knowlm-13b-ie/summary) |
| 115 | [scepter_20240103-212734](https://modelscope.cn/models/tastelikefeet1/scepter_20240103-212734/summary) |


Highlight
- Add AnyDoor support (688)
- Add SyncDreamer as an image-to-3d pipeline (679)
- Add audio codec and codec-based TTS models


Improvements
- Update CUDA to 12.1.0
- Update transformers to 4.36.2 (see the version check below)
- Update ckpt to general_v0.1 (696)
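
A quick way to confirm a local environment matches the versions this release targets; both attributes below are standard in PyTorch and transformers.

```python
# Environment check against the versions named in the Improvements list.
import torch
import transformers

print(torch.version.cuda)        # expect a 12.1-series CUDA build
print(transformers.__version__)  # expect 4.36.2 per the note above
```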


BugFix
- Fix embedding and inference device in the FAQ question answering pipeline
- Remove DOCKER_BUILDKIT=0 for the CPU build issue
- Fix mmcv-full issue

1.10.0

New Models Recommended
| No. | Model Name & Link |
| --- | --- |
| 0 | [Yi-34B-Chat-4bits](https://modelscope.cn/models/01ai/Yi-34B-Chat-4bits) |
| 1 | [Yi-34B-Chat-8bits](https://modelscope.cn/models/01ai/Yi-34B-Chat-8bits) |
| 2 | [Yi-6B-Chat](https://modelscope.cn/models/01ai/Yi-6B-Chat) |
| 3 | [Yi-6B-Chat-4bits](https://modelscope.cn/models/01ai/Yi-6B-Chat-4bits) |
| 4 | [Yi-6B-Chat-8bits](https://modelscope.cn/models/01ai/Yi-6B-Chat-8bits) |
| 5 | [Yi-34B-Chat](https://modelscope.cn/models/01ai/Yi-34B-Chat) |
| 6 | [Video-LLaVA-V1.5](https://modelscope.cn/models/PKU-YuanLab/Video-LLaVA-V1.5) |
| 7 | [Video-LLaVA-7B](https://modelscope.cn/models/PKU-YuanLab/Video-LLaVA-7B) |
| 8 | [LanguageBind_Video](https://modelscope.cn/models/PKU-YuanLab/LanguageBind_Video) |
| 9 | [LanguageBind_Video_FT](https://modelscope.cn/models/PKU-YuanLab/LanguageBind_Video_FT) |
| 10 | [LanguageBind_Image](https://modelscope.cn/models/PKU-YuanLab/LanguageBind_Image) |
| 11 | [LanguageBind_Video_merge](https://modelscope.cn/models/PKU-YuanLab/LanguageBind_Video_merge) |
| 12 | [LanguageBind_Audio](https://modelscope.cn/models/PKU-YuanLab/LanguageBind_Audio) |
| 13 | [LanguageBind_Depth](https://modelscope.cn/models/PKU-YuanLab/LanguageBind_Depth) |
| 14 | [LanguageBind_Thermal](https://modelscope.cn/models/PKU-YuanLab/LanguageBind_Thermal) |
| 15 | [Video-LLaVA-Pretrain-7B](https://modelscope.cn/models/PKU-YuanLab/Video-LLaVA-Pretrain-7B) |
| 16 | [Aquila2-70B-Expr](https://modelscope.cn/models/BAAI/Aquila2-70B-Expr) |
| 17 | [AquilaChat2-70B-Expr](https://modelscope.cn/models/BAAI/AquilaChat2-70B-Expr) |
| 18 | [AquilaChat2-34B-Int4-GPTQ](https://modelscope.cn/models/BAAI/AquilaChat2-34B-Int4-GPTQ) |
| 19 | [AquilaChat2-34B-16K](https://modelscope.cn/models/BAAI/AquilaChat2-34B-16K) |
| 20 | [speech_rwkv_transducer_asr-en-16k-gigaspeech-vocab5001-pytorch-online](https://modelscope.cn/models/damo/speech_rwkv_transducer_asr-en-16k-gigaspeech-vocab5001-pytorch-online) |
| 21 | [funasr-runtime-win-cpu-x64](https://modelscope.cn/models/damo/funasr-runtime-win-cpu-x64) |
| 22 | [ModelScope-Agent-14B](https://modelscope.cn/models/damo/ModelScope-Agent-14B) |
| 23 | [speech_sambert-hifigan_nsf_tts_donna_en-us_24k](https://modelscope.cn/models/damo/speech_sambert-hifigan_nsf_tts_donna_en-us_24k) |
| 24 | [speech_sambert-hifigan_nsf_tts_david_en-us_24k](https://modelscope.cn/models/damo/speech_sambert-hifigan_nsf_tts_david_en-us_24k) |
| 25 | [MiniGPT-v2](https://modelscope.cn/models/damo/MiniGPT-v2) |
| 26 | [speech_sambert-hifigan_tts_waan_Thai_16k](https://modelscope.cn/models/damo/speech_sambert-hifigan_tts_waan_Thai_16k) |
| 27 | [cv_background_generation_sd](https://modelscope.cn/models/damo/cv_background_generation_sd) |
| 28 | [speech_eres2net_base_250k_sv_zh-cn_16k-common](https://modelscope.cn/models/damo/speech_eres2net_base_250k_sv_zh-cn_16k-common) |
| 29 | [Pose-driven-image-generation-HumanSD](https://modelscope.cn/models/damo/Pose-driven-image-generation-HumanSD) |
| 30 | [cv_stable-diffusion-v2_image-feature](https://modelscope.cn/models/damo/cv_stable-diffusion-v2_image-feature) |
| 31 | [nlp_minilm_ibkd_sentence-embedding_english-sts](https://modelscope.cn/models/damo/nlp_minilm_ibkd_sentence-embedding_english-sts) |
| 32 | [nlp_minilm_ibkd_sentence-embedding_english-msmarco](https://modelscope.cn/models/damo/nlp_minilm_ibkd_sentence-embedding_english-msmarco) |
| 33 | [speech_eres2net_large_mej_lre_16k_common](https://modelscope.cn/models/damo/speech_eres2net_large_mej_lre_16k_common) |
| 34 | [speech_eres2net_base_mej_lre_16k_common](https://modelscope.cn/models/damo/speech_eres2net_base_mej_lre_16k_common) |
| 35 | [Ziya2-13B-Base](https://modelscope.cn/models/Fengshenbang/Ziya2-13B-Base) |
| 36 | [SUS-Chat-34B](https://modelscope.cn/models/SUSTC/SUS-Chat-34B) |
| 37 | [animatediff-motion-adapter-v1-5](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-adapter-v1-5) |
| 38 | [animatediff-motion-adapter-v1-4](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-adapter-v1-4) |
| 39 | [animatediff-motion-adapter-v1-5-2](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-adapter-v1-5-2) |
| 40 | [animatediff-motion-lora-zoom-in](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-lora-zoom-in) |
| 41 | [animatediff-motion-lora-pan-left](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-lora-pan-left) |
| 42 | [animatediff-motion-lora-tilt-up](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-lora-tilt-up) |
| 43 | [animatediff-motion-lora-rolling-clockwise](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-lora-rolling-clockwise) |
| 44 | [animatediff-motion-lora-zoom-out](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-lora-zoom-out) |
| 45 | [animatediff-motion-lora-pan-right](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-lora-pan-right) |
| 46 | [animatediff-motion-lora-tilt-down](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-lora-tilt-down) |
| 47 | [animatediff-motion-lora-rolling-anticlockwise](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-lora-rolling-anticlockwise) |
| 48 | [Qwen-1_8B-Chat-Int4](https://modelscope.cn/models/qwen/Qwen-1_8B-Chat-Int4) |
| 49 | [Qwen-1_8B-Chat-Int8](https://modelscope.cn/models/qwen/Qwen-1_8B-Chat-Int8) |
| 50 | [Qwen-72B-Chat-Int4](https://modelscope.cn/models/qwen/Qwen-72B-Chat-Int4) |
| 51 | [Qwen-72B-Chat-Int8](https://modelscope.cn/models/qwen/Qwen-72B-Chat-Int8) |
| 52 | [Qwen-Audio-Chat](https://modelscope.cn/models/qwen/Qwen-Audio-Chat) |
| 53 | [Qwen-72B-Chat](https://modelscope.cn/models/qwen/Qwen-72B-Chat) |
| 54 | [Qwen-72B](https://modelscope.cn/models/qwen/Qwen-72B) |
| 55 | [Qwen-1_8B](https://modelscope.cn/models/qwen/Qwen-1_8B) |
| 56 | [Qwen-1_8B-Chat](https://modelscope.cn/models/qwen/Qwen-1_8B-Chat) |
| 57 | [Qwen-Audio](https://modelscope.cn/models/qwen/Qwen-Audio) |
| 58 | [cogvlm-chat](https://modelscope.cn/models/ZhipuAI/cogvlm-chat) |
| 59 | [cogvlm-base-224](https://modelscope.cn/models/ZhipuAI/cogvlm-base-224) |
| 60 | [cogvlm-base-490](https://modelscope.cn/models/ZhipuAI/cogvlm-base-490) |
| 61 | [cogvlm-grounding-base](https://modelscope.cn/models/ZhipuAI/cogvlm-grounding-base) |
| 62 | [cogvlm-grounding-generalist](https://modelscope.cn/models/ZhipuAI/cogvlm-grounding-generalist) |
| 63 | [deepseek-llm-7b-base](https://modelscope.cn/models/deepseek-ai/deepseek-llm-7b-base) |
| 64 | [deepseek-llm-67b-base](https://modelscope.cn/models/deepseek-ai/deepseek-llm-67b-base) |
| 65 | [deepseek-llm-7b-chat](https://modelscope.cn/models/deepseek-ai/deepseek-llm-7b-chat) |
| 66 | [deepseek-llm-67b-chat](https://modelscope.cn/models/deepseek-ai/deepseek-llm-67b-chat) |
| 67 | [xlm-MLM-en-2048](https://modelscope.cn/models/mindnlp/xlm-MLM-en-2048) |
| 68 | [OrionStar-Yi-34B-Chat](https://modelscope.cn/models/OrionStar-Yi-34B-Chat/OrionStar-Yi-34B-Chat) |
| 69 | [tigerbot-180b-base-v2](https://modelscope.cn/models/TigerResearch/tigerbot-180b-base-v2) |
| 70 | [tigerbot-13b-chat-v5](https://modelscope.cn/models/TigerResearch/tigerbot-13b-chat-v5) |
| 71 | [tigerbot-13b-chat-v5-4bit-exl2](https://modelscope.cn/models/TigerResearch/tigerbot-13b-chat-v5-4bit-exl2) |
| 72 | [tigerbot-70b-chat-v4-4bit-exl2](https://modelscope.cn/models/TigerResearch/tigerbot-70b-chat-v4-4bit-exl2) |
| 73 | [tigerbot-180b-chat-v2](https://modelscope.cn/models/TigerResearch/tigerbot-180b-chat-v2) |
| 74 | [tigerbot-13b-chat-v5-4k](https://modelscope.cn/models/TigerResearch/tigerbot-13b-chat-v5-4k) |
| 75 | [tigerbot-13b-base-v3](https://modelscope.cn/models/TigerResearch/tigerbot-13b-base-v3) |
| 76 | [tigerbot-70b-chat-v4-4k](https://modelscope.cn/models/TigerResearch/tigerbot-70b-chat-v4-4k) |
| 77 | [tigerbot-70b-base-v2](https://modelscope.cn/models/TigerResearch/tigerbot-70b-base-v2) |
| 78 | [tigerbot-70b-chat-v4](https://modelscope.cn/models/TigerResearch/tigerbot-70b-chat-v4) |
| 79 | [BlueLM-7B-Base](https://modelscope.cn/models/vivo-ai/BlueLM-7B-Base) |
| 80 | [BlueLM-7B-Chat-32K](https://modelscope.cn/models/vivo-ai/BlueLM-7B-Chat-32K) |
| 81 | [BlueLM-7B-Chat-4bits](https://modelscope.cn/models/vivo-ai/BlueLM-7B-Chat-4bits) |
| 82 | [Sunsimiao-Qwen-7B](https://modelscope.cn/models/X-D-Lab/Sunsimiao-Qwen-7B) |
| 83 | [MindChat-Qwen-7B-v2-self_lora](https://modelscope.cn/models/X-D-Lab/MindChat-Qwen-7B-v2-self_lora) |
| 84 | [jina-embeddings-v2-base-en](https://modelscope.cn/models/Xorbits/jina-embeddings-v2-base-en) |
| 85 | [jina-embeddings-v2-small-en](https://modelscope.cn/models/Xorbits/jina-embeddings-v2-small-en) |
| 86 | [qwen-chat-7B-ggml](https://modelscope.cn/models/Xorbits/qwen-chat-7B-ggml) |
| 87 | [qwen-chat-14B-ggml](https://modelscope.cn/models/Xorbits/qwen-chat-14B-ggml) |
| 88 | [bge-reranker-large](https://modelscope.cn/models/Xorbits/bge-reranker-large) |
| 89 | [bge-reranker-base](https://modelscope.cn/models/Xorbits/bge-reranker-base) |



Highlight

- Support launching a local test inference server
- Support vLLM inference
- LLMPipeline supports vLLM
- Official images upgraded to Python 3.10, PyTorch 2.1.0, TensorFlow 2.14.0, and Ubuntu 22.04


Feature
- Support vLLM in LLMPipeline (604); a usage sketch follows below
- Add bpemodel path in asr_trainer
- Add LLM Riddles (621)
- Add a deploy checker for swingdeploy
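
A sketch of the vLLM switch named above; the `llm_framework` keyword is an assumption inferred from the note rather than a confirmed parameter name, and the task string and model id are illustrative.

```python
# Sketch: route LLMPipeline generation through vLLM (parameter name is an assumption).
from modelscope.pipelines import pipeline

pipe = pipeline(task='chat',
                model='qwen/Qwen-7B-Chat',
                llm_framework='vllm')   # assumed switch selecting the vLLM backend
print(pipe('What is ModelScope?'))
```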

Improvements

- Python 3.11 support for the wheel package
- LLM pipeline supports chatglm3 (618)
- Support transformers==4.35.0 (633)

BugFix

- Fix _set_gradient_checkpointing bug (660)
- fix test reliability issue (657)
- fix: DocumentGroundedDialogRetrievalModel qry_encoder.encoder.embeddings.position_ids error (647)
- fix asr paraformer finetune bug
- fix uie trainer: eval failed (617)
- Fix vllm: change if condition (607)
- fix shop_segmentation to use old timm lib and bump version to 1.9.4rc2
- fix the numpy bug for card detection correction
- fix issues for 3dhuman models
- fix logger: remove file handler for original user logging (645)

1.9.4


Feature
- Added sentence embedding models, supporting GTE and BLOOM (see the sketch below)
- Added the FreeU method for Stable Diffusion
- LLMPipeline now supports [Swift](https://github.com/modelscope/swift) adapter model inference
- Automatically upgrade funasr/transformers to their latest versions during image builds
- Removed the forced venv dependency to better support Windows (575)
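
A sketch of the new sentence embedding models, following the usage pattern published on the GTE model cards; the input and output keys (`source_sentence`, `sentences_to_compare`, `text_embedding`, `scores`) follow that convention and should be treated as assumptions here.

```python
# Sketch of GTE sentence embedding via the generic pipeline (keys per model-card convention).
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

p = pipeline(Tasks.sentence_embedding,
             model='damo/nlp_gte_sentence-embedding_english-base')

inputs = {
    'source_sentence': ['how do I bake bread'],
    'sentences_to_compare': ['bread baking instructions', 'stock market news'],
}
result = p(input=inputs)
print(result['text_embedding'].shape)  # one embedding row per input sentence
print(result['scores'])                # similarity of each comparison sentence
```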

BugFix

- Fixed shop_segmentation pipeline compatibility with timm 0.5.2
- Resolved compatibility issues with Hugging Face position_ids
- Fixed the missing sp_tokenizer attribute in chatglm
- Addressed compatibility issues of the OFA model with newer transformers versions
- Resolved the issue where the `work_dir` setting in trainer was not taking effect (573)
- Fixed HF-related bugs (569, 567)

New Models Recommended

| No. | Model Name & Link |
| --- | --- |
| 1 | [nlp_gte_sentence-embedding_chinese-large](https://modelscope.cn/models/damo/nlp_gte_sentence-embedding_chinese-large/summary) |
| 2 | [nlp_gte_sentence-embedding_english-large](https://modelscope.cn/models/damo/nlp_gte_sentence-embedding_english-large/summary) |
| 3 | [nlp_gte_sentence-embedding_english-small](https://modelscope.cn/models/damo/nlp_gte_sentence-embedding_english-small/summary) |
| 4 | [nlp_gte_sentence-embedding_english-base](https://modelscope.cn/models/damo/nlp_gte_sentence-embedding_english-base/summary) |
| 5 | [nlp_gte_sentence-embedding_chinese-small](https://modelscope.cn/models/damo/nlp_gte_sentence-embedding_chinese-small/summary) |
| 6 | [speech_xvector_transformer_scl_zh-cn_16k-common](https://modelscope.cn/models/damo/speech_xvector_transformer_scl_zh-cn_16k-common/summary) |
| 7 | [udever-bloom-3b](https://modelscope.cn/models/damo/udever-bloom-3b/summary) |
| 8 | [udever-bloom-1b1](https://modelscope.cn/models/damo/udever-bloom-1b1/summary) |
| 9 | [nlp_gte_sentence-embedding_chinese-base](https://modelscope.cn/models/damo/nlp_gte_sentence-embedding_chinese-base/summary) |
| 10 | [multimodal_multiview_avatar_gen](https://modelscope.cn/models/damo/multimodal_multiview_avatar_gen/summary) |
| 11 | [udever-bloom-560m](https://modelscope.cn/models/damo/udever-bloom-560m/summary) |
| 12 | [udever-bloom-7b1](https://modelscope.cn/models/damo/udever-bloom-7b1/summary) |
| 13 | [Qwen-14B-Chat-Int8](https://modelscope.cn/models/qwen/Qwen-14B-Chat-Int8/summary) |
| 14 | [Qwen-7B-Chat-Int8](https://modelscope.cn/models/qwen/Qwen-7B-Chat-Int8/summary) |
| 15 | [punc_ct-transformer_cn-en-common-vocab471067-large-onnx](https://modelscope.cn/models/damo/punc_ct-transformer_cn-en-common-vocab471067-large-onnx/summary) |
| 16 | [CodeFuse-QWen-14B](https://modelscope.cn/models/codefuse-ai/CodeFuse-QWen-14B/summary) |
| 17 | [speech_ecapa-tdnn_sv_zh-cn_cnceleb_16k](https://modelscope.cn/models/damo/speech_ecapa-tdnn_sv_zh-cn_cnceleb_16k/summary) |
| 18 | [speech_ecapa-tdnn_sv_zh-cn_3dspeaker_16k](https://modelscope.cn/models/damo/speech_ecapa-tdnn_sv_zh-cn_3dspeaker_16k/summary) |
| 19 | [font_style_transfer_model](https://modelscope.cn/models/WordArt/font_style_transfer_model/summary) |
| 20 | [speech_whisper-small_asr_english](https://modelscope.cn/models/damo/speech_whisper-small_asr_english/summary) |
| 21 | [speech_whisper-large_asr_multilingual](https://modelscope.cn/models/damo/speech_whisper-large_asr_multilingual/summary) |
| 22 | [font_generation_base_model](https://modelscope.cn/models/WordArt/font_generation_base_model/summary) |
| 23 | [multi-modal_freeu_stable_diffusion](https://modelscope.cn/models/damo/multi-modal_freeu_stable_diffusion/summary) |
| 24 | [speech_paraformer-large-vad-punc_asr_nat-en-16k-common-vocab10020](https://modelscope.cn/models/damo/speech_paraformer-large-vad-punc_asr_nat-en-16k-common-vocab10020/summary) |
| 25 | [speech_paraformer-large-vad-punc-spk_asr_nat-zh-cn](https://modelscope.cn/models/damo/speech_paraformer-large-vad-punc-spk_asr_nat-zh-cn/summary) |
| 26 | [speech_paraformer-large_asr_nat-en-16k-common-vocab10020-onnx](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-en-16k-common-vocab10020-onnx/summary) |
| 27 | [PASD_v2_image_super_resolutions](https://modelscope.cn/models/damo/PASD_v2_image_super_resolutions/summary) |
| 28 | [speech_conformer_transducer_asr-en-16k-gigaspeech-vocab5001-pytorch-online](https://modelscope.cn/models/damo/speech_conformer_transducer_asr-en-16k-gigaspeech-vocab5001-pytorch-online/summary) |
| 29 | [PMR-base](https://modelscope.cn/models/damo/PMR-base/summary) |
| 30 | [PMR-large](https://modelscope.cn/models/damo/PMR-large/summary) |
| 31 | [EQA-PMR-large](https://modelscope.cn/models/damo/EQA-PMR-large/summary) |
| 32 | [CodeFuse-StarCoder-15B](https://modelscope.cn/models/codefuse-ai/CodeFuse-StarCoder-15B/summary) |
| 33 | [NER-PMR-Large](https://modelscope.cn/models/damo/NER-PMR-Large/summary) |
| 34 | [CodeFuse-CodeLlama-34B-4bits](https://modelscope.cn/models/codefuse-ai/CodeFuse-CodeLlama-34B-4bits/summary) |
| 35 | [zero-shot-classify-SSTuning-XLM-R](https://modelscope.cn/models/damo/zero-shot-classify-SSTuning-XLM-R/summary) |
| 36 | [Qwen-14B-Chat-Int4](https://modelscope.cn/models/qwen/Qwen-14B-Chat-Int4/summary) |
| 37 | [zero-shot-classify-SSTuning-base](https://modelscope.cn/models/damo/zero-shot-classify-SSTuning-base/summary) |
| 38 | [cv_face_detection_landmark](https://modelscope.cn/models/damo/cv_face_detection_landmark/summary) |
| 39 | [speech_conformer_larger_asr_multi_language-16k-common-vocab30392-pytorch](https://modelscope.cn/models/damo/speech_conformer_larger_asr_multi_language-16k-common-vocab30392-pytorch/summary) |
| 40 | [speech_sambert-hifigan_tts_multilingual_multisp_pretrain_16k](https://modelscope.cn/models/damo/speech_sambert-hifigan_tts_multilingual_multisp_pretrain_16k/summary) |
| 41 | [audio_codec-freqcodec_magphase-en-libritts-16k-gr8nq32ds320-pytorch](https://modelscope.cn/models/damo/audio_codec-freqcodec_magphase-en-libritts-16k-gr8nq32ds320-pytorch/summary) |
| 42 | [audio_codec-freqcodec_magphase-en-libritts-16k-gr1nq32ds320-pytorch](https://modelscope.cn/models/damo/audio_codec-freqcodec_magphase-en-libritts-16k-gr1nq32ds320-pytorch/summary) |
| 43 | [audio_codec-encodec-zh_en-general-16k-nq32ds640-pytorch](https://modelscope.cn/models/damo/audio_codec-encodec-zh_en-general-16k-nq32ds640-pytorch/summary) |
| 44 | [audio_codec-encodec-zh_en-general-16k-nq32ds320-pytorch](https://modelscope.cn/models/damo/audio_codec-encodec-zh_en-general-16k-nq32ds320-pytorch/summary) |
| 45 | [audio_codec-encodec-en-libritts-16k-nq32ds320-pytorch](https://modelscope.cn/models/damo/audio_codec-encodec-en-libritts-16k-nq32ds320-pytorch/summary) |
| 46 | [speech_eres2net-large_speaker-diarization_common](https://modelscope.cn/models/damo/speech_eres2net-large_speaker-diarization_common/summary) |
| 47 | [cv_3d-human-animation](https://modelscope.cn/models/damo/cv_3d-human-animation/summary) |
| 48 | [audio_codec-encodec-en-libritts-16k-nq32ds640-pytorch](https://modelscope.cn/models/damo/audio_codec-encodec-en-libritts-16k-nq32ds640-pytorch/summary) |
| 49 | [cv_HRN_text-to-head](https://modelscope.cn/models/damo/cv_HRN_text-to-head/summary) |
| 50 | [cv_diffuser_text-texture-generation](https://modelscope.cn/models/damo/cv_diffuser_text-texture-generation/summary) |
| 51 | [cv_3d-human-synthesis-library](https://modelscope.cn/models/damo/cv_3d-human-synthesis-library/summary) |
