New Model Recommendations
| No. | Model Name & Quick Link |
| --- | --- |
| 0 | [Yi-34B-Chat-4bits](https://modelscope.cn/models/01ai/Yi-34B-Chat-4bits) |
| 1 | [Yi-34B-Chat-8bits](https://modelscope.cn/models/01ai/Yi-34B-Chat-8bits) |
| 2 | [Yi-6B-Chat](https://modelscope.cn/models/01ai/Yi-6B-Chat) |
| 3 | [Yi-6B-Chat-4bits](https://modelscope.cn/models/01ai/Yi-6B-Chat-4bits) |
| 4 | [Yi-6B-Chat-8bits](https://modelscope.cn/models/01ai/Yi-6B-Chat-8bits) |
| 5 | [Yi-34B-Chat](https://modelscope.cn/models/01ai/Yi-34B-Chat) |
| 6 | [Video-LLaVA-V1.5](https://modelscope.cn/models/PKU-YuanLab/Video-LLaVA-V1.5) |
| 7 | [Video-LLaVA-7B](https://modelscope.cn/models/PKU-YuanLab/Video-LLaVA-7B) |
| 8 | [LanguageBind_Video](https://modelscope.cn/models/PKU-YuanLab/LanguageBind_Video) |
| 9 | [LanguageBind_Video_FT](https://modelscope.cn/models/PKU-YuanLab/LanguageBind_Video_FT) |
| 10 | [LanguageBind_Image](https://modelscope.cn/models/PKU-YuanLab/LanguageBind_Image) |
| 11 | [LanguageBind_Video_merge](https://modelscope.cn/models/PKU-YuanLab/LanguageBind_Video_merge) |
| 12 | [LanguageBind_Audio](https://modelscope.cn/models/PKU-YuanLab/LanguageBind_Audio) |
| 13 | [LanguageBind_Depth](https://modelscope.cn/models/PKU-YuanLab/LanguageBind_Depth) |
| 14 | [LanguageBind_Thermal](https://modelscope.cn/models/PKU-YuanLab/LanguageBind_Thermal) |
| 15 | [Video-LLaVA-Pretrain-7B](https://modelscope.cn/models/PKU-YuanLab/Video-LLaVA-Pretrain-7B) |
| 16 | [Aquila2-70B-Expr](https://modelscope.cn/models/BAAI/Aquila2-70B-Expr) |
| 17 | [AquilaChat2-70B-Expr](https://modelscope.cn/models/BAAI/AquilaChat2-70B-Expr) |
| 18 | [AquilaChat2-34B-Int4-GPTQ](https://modelscope.cn/models/BAAI/AquilaChat2-34B-Int4-GPTQ) |
| 19 | [AquilaChat2-34B-16K](https://modelscope.cn/models/BAAI/AquilaChat2-34B-16K) |
| 20 | [speech_rwkv_transducer_asr-en-16k-gigaspeech-vocab5001-pytorch-online](https://modelscope.cn/models/damo/speech_rwkv_transducer_asr-en-16k-gigaspeech-vocab5001-pytorch-online) |
| 21 | [funasr-runtime-win-cpu-x64](https://modelscope.cn/models/damo/funasr-runtime-win-cpu-x64) |
| 22 | [ModelScope-Agent-14B](https://modelscope.cn/models/damo/ModelScope-Agent-14B) |
| 23 | [speech_sambert-hifigan_nsf_tts_donna_en-us_24k](https://modelscope.cn/models/damo/speech_sambert-hifigan_nsf_tts_donna_en-us_24k) |
| 24 | [speech_sambert-hifigan_nsf_tts_david_en-us_24k](https://modelscope.cn/models/damo/speech_sambert-hifigan_nsf_tts_david_en-us_24k) |
| 25 | [MiniGPT-v2](https://modelscope.cn/models/damo/MiniGPT-v2) |
| 26 | [speech_sambert-hifigan_tts_waan_Thai_16k](https://modelscope.cn/models/damo/speech_sambert-hifigan_tts_waan_Thai_16k) |
| 27 | [cv_background_generation_sd](https://modelscope.cn/models/damo/cv_background_generation_sd) |
| 28 | [speech_eres2net_base_250k_sv_zh-cn_16k-common](https://modelscope.cn/models/damo/speech_eres2net_base_250k_sv_zh-cn_16k-common) |
| 29 | [Pose-driven-image-generation-HumanSD](https://modelscope.cn/models/damo/Pose-driven-image-generation-HumanSD) |
| 30 | [cv_stable-diffusion-v2_image-feature](https://modelscope.cn/models/damo/cv_stable-diffusion-v2_image-feature) |
| 31 | [nlp_minilm_ibkd_sentence-embedding_english-sts](https://modelscope.cn/models/damo/nlp_minilm_ibkd_sentence-embedding_english-sts) |
| 32 | [nlp_minilm_ibkd_sentence-embedding_english-msmarco](https://modelscope.cn/models/damo/nlp_minilm_ibkd_sentence-embedding_english-msmarco) |
| 33 | [speech_eres2net_large_mej_lre_16k_common](https://modelscope.cn/models/damo/speech_eres2net_large_mej_lre_16k_common) |
| 34 | [speech_eres2net_base_mej_lre_16k_common](https://modelscope.cn/models/damo/speech_eres2net_base_mej_lre_16k_common) |
| 35 | [Ziya2-13B-Base](https://modelscope.cn/models/Fengshenbang/Ziya2-13B-Base) |
| 36 | [SUS-Chat-34B](https://modelscope.cn/models/SUSTC/SUS-Chat-34B) |
| 37 | [animatediff-motion-adapter-v1-5](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-adapter-v1-5) |
| 38 | [animatediff-motion-adapter-v1-4](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-adapter-v1-4) |
| 39 | [animatediff-motion-adapter-v1-5-2](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-adapter-v1-5-2) |
| 40 | [animatediff-motion-lora-zoom-in](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-lora-zoom-in) |
| 41 | [animatediff-motion-lora-pan-left](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-lora-pan-left) |
| 42 | [animatediff-motion-lora-tilt-up](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-lora-tilt-up) |
| 43 | [animatediff-motion-lora-rolling-clockwise](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-lora-rolling-clockwise) |
| 44 | [animatediff-motion-lora-zoom-out](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-lora-zoom-out) |
| 45 | [animatediff-motion-lora-pan-right](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-lora-pan-right) |
| 46 | [animatediff-motion-lora-tilt-down](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-lora-tilt-down) |
| 47 | [animatediff-motion-lora-rolling-anticlockwise](https://modelscope.cn/models/Shanghai_AI_Laboratory/animatediff-motion-lora-rolling-anticlockwise) |
| 48 | [Qwen-1_8B-Chat-Int4](https://modelscope.cn/models/qwen/Qwen-1_8B-Chat-Int4) |
| 49 | [Qwen-1_8B-Chat-Int8](https://modelscope.cn/models/qwen/Qwen-1_8B-Chat-Int8) |
| 50 | [Qwen-72B-Chat-Int4](https://modelscope.cn/models/qwen/Qwen-72B-Chat-Int4) |
| 51 | [Qwen-72B-Chat-Int8](https://modelscope.cn/models/qwen/Qwen-72B-Chat-Int8) |
| 52 | [Qwen-Audio-Chat](https://modelscope.cn/models/qwen/Qwen-Audio-Chat) |
| 53 | [Qwen-72B-Chat](https://modelscope.cn/models/qwen/Qwen-72B-Chat) |
| 54 | [Qwen-72B](https://modelscope.cn/models/qwen/Qwen-72B) |
| 55 | [Qwen-1_8B](https://modelscope.cn/models/qwen/Qwen-1_8B) |
| 56 | [Qwen-1_8B-Chat](https://modelscope.cn/models/qwen/Qwen-1_8B-Chat) |
| 57 | [Qwen-Audio](https://modelscope.cn/models/qwen/Qwen-Audio) |
| 58 | [cogvlm-chat](https://modelscope.cn/models/ZhipuAI/cogvlm-chat) |
| 59 | [cogvlm-base-224](https://modelscope.cn/models/ZhipuAI/cogvlm-base-224) |
| 60 | [cogvlm-base-490](https://modelscope.cn/models/ZhipuAI/cogvlm-base-490) |
| 61 | [cogvlm-grounding-base](https://modelscope.cn/models/ZhipuAI/cogvlm-grounding-base) |
| 62 | [cogvlm-grounding-generalist](https://modelscope.cn/models/ZhipuAI/cogvlm-grounding-generalist) |
| 63 | [deepseek-llm-7b-base](https://modelscope.cn/models/deepseek-ai/deepseek-llm-7b-base) |
| 64 | [deepseek-llm-67b-base](https://modelscope.cn/models/deepseek-ai/deepseek-llm-67b-base) |
| 65 | [deepseek-llm-7b-chat](https://modelscope.cn/models/deepseek-ai/deepseek-llm-7b-chat) |
| 66 | [deepseek-llm-67b-chat](https://modelscope.cn/models/deepseek-ai/deepseek-llm-67b-chat) |
| 67 | [xlm-MLM-en-2048](https://modelscope.cn/models/mindnlp/xlm-MLM-en-2048) |
| 68 | [OrionStar-Yi-34B-Chat](https://modelscope.cn/models/OrionStar-Yi-34B-Chat/OrionStar-Yi-34B-Chat) |
| 69 | [tigerbot-180b-base-v2](https://modelscope.cn/models/TigerResearch/tigerbot-180b-base-v2) |
| 70 | [tigerbot-13b-chat-v5](https://modelscope.cn/models/TigerResearch/tigerbot-13b-chat-v5) |
| 71 | [tigerbot-13b-chat-v5-4bit-exl2](https://modelscope.cn/models/TigerResearch/tigerbot-13b-chat-v5-4bit-exl2) |
| 72 | [tigerbot-70b-chat-v4-4bit-exl2](https://modelscope.cn/models/TigerResearch/tigerbot-70b-chat-v4-4bit-exl2) |
| 73 | [tigerbot-180b-chat-v2](https://modelscope.cn/models/TigerResearch/tigerbot-180b-chat-v2) |
| 74 | [tigerbot-13b-chat-v5-4k](https://modelscope.cn/models/TigerResearch/tigerbot-13b-chat-v5-4k) |
| 75 | [tigerbot-13b-base-v3](https://modelscope.cn/models/TigerResearch/tigerbot-13b-base-v3) |
| 76 | [tigerbot-70b-chat-v4-4k](https://modelscope.cn/models/TigerResearch/tigerbot-70b-chat-v4-4k) |
| 77 | [tigerbot-70b-base-v2](https://modelscope.cn/models/TigerResearch/tigerbot-70b-base-v2) |
| 78 | [tigerbot-70b-chat-v4](https://modelscope.cn/models/TigerResearch/tigerbot-70b-chat-v4) |
| 79 | [BlueLM-7B-Base](https://modelscope.cn/models/vivo-ai/BlueLM-7B-Base) |
| 80 | [BlueLM-7B-Chat-32K](https://modelscope.cn/models/vivo-ai/BlueLM-7B-Chat-32K) |
| 81 | [BlueLM-7B-Chat-4bits](https://modelscope.cn/models/vivo-ai/BlueLM-7B-Chat-4bits) |
| 82 | [Sunsimiao-Qwen-7B](https://modelscope.cn/models/X-D-Lab/Sunsimiao-Qwen-7B) |
| 83 | [MindChat-Qwen-7B-v2-self_lora](https://modelscope.cn/models/X-D-Lab/MindChat-Qwen-7B-v2-self_lora) |
| 84 | [jina-embeddings-v2-base-en](https://modelscope.cn/models/Xorbits/jina-embeddings-v2-base-en) |
| 85 | [jina-embeddings-v2-small-en](https://modelscope.cn/models/Xorbits/jina-embeddings-v2-small-en) |
| 86 | [qwen-chat-7B-ggml](https://modelscope.cn/models/Xorbits/qwen-chat-7B-ggml) |
| 87 | [qwen-chat-14B-ggml](https://modelscope.cn/models/Xorbits/qwen-chat-14B-ggml) |
| 88 | [bge-reranker-large](https://modelscope.cn/models/Xorbits/bge-reranker-large) |
| 89 | [bge-reranker-base](https://modelscope.cn/models/Xorbits/bge-reranker-base) |
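Any model in the table above can be pulled to the local cache with the ModelScope SDK before loading it. Below is a minimal sketch; the model ID `qwen/Qwen-1_8B-Chat` is just one row from the table, and any other `org/model` path from the links above works the same way.

```python
# Download a model listed above into the local cache and print its path.
# Requires `pip install modelscope`; the ID must match the model page path
# on modelscope.cn (org/model).
from modelscope import snapshot_download

model_dir = snapshot_download('qwen/Qwen-1_8B-Chat')
print(f'Model files downloaded to: {model_dir}')
```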
Highlights
- Support launching an inference service locally for testing
- Support vLLM inference
- LLMPipeline supports vLLM (a usage sketch follows this list)
- Official image upgraded to Python 3.10, PyTorch 2.1.0, TensorFlow 2.14.0, and Ubuntu 22.04
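As a rough illustration of the new vLLM path, the sketch below assumes that LLMPipeline is reached through the `chat` task and that the backend is selected with an `llm_framework` argument; treat both names, and the input schema, as assumptions to verify against your installed version rather than a fixed API.

```python
# Hedged sketch: run a chat model through LLMPipeline on the vLLM backend.
# Assumptions (verify against your modelscope version): LLMPipeline is
# registered under the 'chat' task, and `llm_framework='vllm'` picks the
# vLLM engine added in this release. Requires `pip install modelscope vllm`.
from modelscope.pipelines import pipeline

pipe = pipeline(
    task='chat',
    model='qwen/Qwen-1_8B-Chat',   # any vLLM-compatible chat model from the table
    llm_framework='vllm',          # assumed backend selector; omit to use the default engine
)

# Assumed input format: a dict carrying the user text; adjust to the
# pipeline's documented schema if it differs.
print(pipe({'text': 'Give me a one-sentence summary of vLLM.'}))
```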
Breaking Changes
Features
- Support vLLM in LLMPipeline (604)
- Add bpemodel path in asr_trainer
- Add LLM Riddles (621)
- Add a deploy checker for SwingDeploy
Improvements
- Python 3.11 support for the whl package
- LLMPipeline supports ChatGLM3 (618)
- Support transformers==4.35.0 (633)
BugFix
- Fix _set_gradient_checkpointing bug (660)
- Fix test reliability issue (657)
- Fix DocumentGroundedDialogRetrievalModel qry_encoder.encoder.embeddings.position_ids error (647)
- Fix ASR Paraformer finetune bug
- Fix UIE trainer: eval failed (617)
- Fix vLLM: change if condition (607)
- Fix shop_segmentation to use the old timm lib and bump version to 1.9.4rc2
- Fix the numpy bug in card detection correction
- Fix issues with the 3dhuman models
- Fix logger: remove the file handler from the original user logging (645)