ModelScope

Latest version: v1.24.1


1.22.1

English Version
1. Support uploading files and folders via the hub API 1152 (see the sketch after this list)
2. Update the llama_index documentation 1180
3. Unify the datasets cache directory 1178
4. Fix text generation 1177
5. Add repo_id and repo_type to snapshot_download 1172
6. Support all other ollama models 1174
7. Fix path concatenation to be Windows compatible 1176
8. Support ms-swift 3.0.0 1166
9. Update the download progress display 1167
10. Fix iic/nlp_structbert_address-parsing_chinese_base not supporting capitalize 1170
11. Add logger.warning when using remote code 1171
12. Support the latest datasets 1163
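
Expanding on items 1 and 5 above, the snippet below is a minimal sketch of downloading a repo with the new `repo_id`/`repo_type` arguments and uploading a folder through the hub API. The repo names are placeholders, and the `upload_folder` method name and arguments are assumptions based on this changelog rather than a verified 1.22.1 signature.

```python
# Minimal sketch against modelscope >= 1.22.1; names marked below are assumptions.
from modelscope import snapshot_download
from modelscope.hub.api import HubApi

# Item 5: snapshot_download accepts repo_id and repo_type (per this release note).
local_dir = snapshot_download(
    repo_id='Qwen/Qwen2.5-0.5B-Instruct',  # placeholder repo, not from the notes
    repo_type='model',
)

# Item 1: uploading a folder via the hub API. `upload_folder` and its parameters
# are assumed here; check the HubApi docs for the exact method and signature.
api = HubApi()
api.login('<your-modelscope-token>')
api.upload_folder(
    repo_id='<your-namespace>/<your-repo>',
    folder_path='./my_model_dir',
)
```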


Chinese Version
1. Support uploading files and folders via the CLI and hub API 1152
2. Improve the llama_index documentation 1180
3. Unify the datasets cache directory 1178
4. Fix text generation models 1177
5. Add repo_id and repo_type to snapshot_download 1172
6. Support all other ollama models 1174
7. Fix path concatenation to support Windows 1176
8. Support ms-swift 3.0.0 1166
9. Update the download progress bar display 1167
10. Fix iic/nlp_structbert_address-parsing_chinese_base not supporting capitalize 1170
11. Add logger.warning when using remote code 1171
12. Support the latest datasets 1163


**Full Changelog**: https://github.com/modelscope/modelscope/compare/v1.22.0...v1.22.1

1.21.1

Hotfix

English Version
1. Fix command line for downloading datasets
2. Add thread_utils wrapper


Chinese Version
1. Fix the command line for downloading datasets

1.21.0

English Version
1. Optimize log recording 1080 1081 1093 1089 1099
2. Support launching LLM using the `llamafile` option via the command line 1087
3. Support automatic GPU usage for `llamafile` 1097
4. Better support for importing AutoClass from modelscope: 1098 1106 1107
5. The `llm_first` parameter in `create_pipeline` has been changed to `external_engine_for_llm`; the input for `LLMPipeline` must be in the `messages` format 1094 (see the sketch after this list)
6. `snapshot_download` and `dataset_snapshot_download` support multi-threaded downloads, controlled by the `max_workers` parameter, with a default of 8 1095 1108
7. Support hash verification for downloaded files 1116
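
To make items 5 and 6 concrete, here is a short sketch of the `messages` input for `LLMPipeline` and the `max_workers` download option. The model ID and the `'chat'` task name are placeholders/assumptions, and whether the messages list is wrapped in a dict exactly as shown may differ from the released API.

```python
# Sketch based on the 1.21.0 notes above; model ID and task name are placeholders.
from modelscope import snapshot_download
from modelscope.pipelines import pipeline

# Item 6: multi-threaded download; concurrency is set by max_workers (default 8).
model_dir = snapshot_download('Qwen/Qwen2.5-0.5B-Instruct', max_workers=8)

# Item 5: LLMPipeline input now uses the messages format; the old llm_first flag
# of create_pipeline is renamed to external_engine_for_llm (not shown here).
pipe = pipeline(task='chat', model=model_dir)
result = pipe({
    'messages': [
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'user', 'content': 'Hello!'},
    ]
})
print(result)
```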


Chinese Version
1. Optimize log recording 1080 1081 1093 1089 1099
2. Support launching LLMs using the `llamafile` option via the command line 1087
3. Support automatic GPU usage for `llamafile` 1097
4. More complete support for importing AutoClass from modelscope: 1098 1106 1107
5. The `create_pipeline` parameter `llm_first` has been changed to `external_engine_for_llm`; the input for `LLMPipeline` must be in the `messages` format 1094
6. `snapshot_download` and `dataset_snapshot_download` support multi-threaded downloads; the number of concurrent workers is controlled by the `max_workers` parameter and defaults to 8 1095 1108
7. Support hash verification for downloaded files 1116

What's Changed
- Update docker from release/1.20 by Jintao-Huang in 1077
- Fix facial 68ldk detection by Jintao-Huang in 1078
- Fix log for downloading to local dir by yingdachen in 1080
- Log reduce by yingdachen in 1081
- Fix dependency by yingdachen in 1085
- Add llamafile support to command line by yingdachen in 1087
- Refactor(models/audio/ans): Optimize zipformer and scaling layers by Mashiro009 and ZipEnhancer 1088
- Fix tensorflow warning by Jintao-Huang in 1093
- Add AutoModelForImageSegmentation and T5EncoderModel by yingdachen 1096
- Format llm pipeline by yingdachen in 1094
- Lazy print ast logs by Jintao-Huang in 1089
- Llamafile support gpu flag by yingdachen in 1097
- Change warning to debug by yingdachen in 1099
- More automodel by yingdachen in 1098
- Add multi-thread download by Yunnglin in 1095
- Fix potential double definition for OCR pipeline by yingdachen in 1102
- Add transformer support for Qwen2vl by yingdachen in 1106
- Add transformers compatibility for Vision2seq by yingdachen in 1107
- Merge release 1.20 docker by Jintao-Huang in 1109
- Support tag ci_image by tastelikefeet in 1112
- Release transformers version to 4.33-4.46 by Jintao-Huang in 1111
- Fix tqdm bar by Yunnglin in 1108
- Handle unsupported Transformers class, and add more auto classes by yingdachen in 1113
- Remove unnecessary code by Jintao-Huang in 1115
- Add hash verification into cache file existence check by yingdachen in 1116
- Fix accuracy case sensitiveness by yingdachen in 1118
- Fix windows path by yingdachen in 1119
- Make hash validation optional by yingdachen in 1124
- Fix missing import by yingdachen in 1126
- Skip obsolete sd pipeline by tastelikefeet in 1131

New Contributors
- Yunnglin made their first contribution in 1095

**Full Changelog**: https://github.com/modelscope/modelscope/compare/v1.20.1...v1.21.0

1.20.1

Bug Fix:

1. Fix an import error that may cause `snapshot_download` to fail in a clean Python env
2. Reduce the logging of `snapshot_download`
3. Fix a bug that may cause the `facial_68ldk_detection` pipeline to fail


Bug fixes:
1. Fix a dependency error that could cause `snapshot_download` to fail in a clean Python environment
2. Reduce the logging of `snapshot_download`
3. Fix inference failures of facial_68ldk_detection

**Full Changelog**: https://github.com/modelscope/modelscope/compare/v1.20.0...v1.20.1

1.20.0

English Version
1. New Models
   1. [iic/speech_zipenhancer_ans_multiloss_16k_base](https://modelscope.cn/models/iic/speech_zipenhancer_ans_multiloss_16k_base). https://github.com/modelscope/modelscope/pull/1019
   2. [AIDC-AI/Ovis1.6-Gemma2-9B](https://modelscope.cn/models/AIDC-AI/Ovis1.6-Gemma2-9B). https://github.com/modelscope/modelscope/pull/1057
2. Hub Side:
   1. Created symbolic links in snapshot_download to avoid the issue of models with '.' in their names not being found. https://github.com/modelscope/modelscope/pull/1063
   2. Improved model upload by removing the requirement for configuration.json. https://github.com/modelscope/modelscope/pull/1062
   3. Added a hub API to check whether a repository exists. https://github.com/modelscope/modelscope/pull/1060 (see the sketch after this list)
3. Enhanced the Template.to_ollama function to support more models. https://github.com/modelscope/modelscope/pull/1039, https://github.com/modelscope/modelscope/pull/1070.
4. Docker optimization and upgrades; removed unnecessary dependencies from the LLM image.
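
As a rough illustration of the hub-side items 2.2 and 2.3 above: uploading a model directory without a configuration.json and checking whether a repository exists. `push_model` is the long-standing upload entry point; the `repo_exists` name is an assumption based on the pull request title, so verify both against your installed HubApi.

```python
# Sketch of the 1.20.0 hub-side changes; the repo name is a placeholder and
# the repo_exists method name is an assumption.
from modelscope.hub.api import HubApi

api = HubApi()
api.login('<your-modelscope-token>')

# Item 2.3: repository-existence check added to the hub API (assumed name).
if not api.repo_exists('my-namespace/my-model'):
    # Item 2.2: model upload no longer requires a configuration.json in model_dir.
    api.push_model(
        model_id='my-namespace/my-model',
        model_dir='./my_model_dir',
    )
```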


Chinese Version

1. New models
   1. [iic/speech_zipenhancer_ans_multiloss_16k_base](https://modelscope.cn/models/iic/speech_zipenhancer_ans_multiloss_16k_base). https://github.com/modelscope/modelscope/pull/1019
   2. [AIDC-AI/Ovis1.6-Gemma2-9B](https://modelscope.cn/models/AIDC-AI/Ovis1.6-Gemma2-9B). https://github.com/modelscope/modelscope/pull/1057
2. Hub side:
   1. Create symbolic links in snapshot_download to avoid models whose names contain '.' not being found. https://github.com/modelscope/modelscope/pull/1063
   2. Improve model upload by removing the requirement for configuration.json. https://github.com/modelscope/modelscope/pull/1062
   3. Add a hub API to check whether a repository exists. https://github.com/modelscope/modelscope/pull/1060
3. The Template.to_ollama function supports more models. https://github.com/modelscope/modelscope/pull/1039, https://github.com/modelscope/modelscope/pull/1070
4. Docker optimization and upgrades; remove unnecessary dependencies from the LLM image.


What's Changed
* Fix timestamp in docker build by tastelikefeet in https://github.com/modelscope/modelscope/pull/1049
* feat(audio/ans): Add ZipEnhancer and related layers for acoustic nois… by Mashiro009 in https://github.com/modelscope/modelscope/pull/1019
* Fix the slow downloading by tastelikefeet in https://github.com/modelscope/modelscope/pull/1051
* Fix bash and transformers version by tastelikefeet in https://github.com/modelscope/modelscope/pull/1053
* Fix some bugs by tastelikefeet in https://github.com/modelscope/modelscope/pull/1056
* fix(audio ans pipeline): Restore file reading from string input in ANSZip… by Mashiro009 in https://github.com/modelscope/modelscope/pull/1055
* OCR pipeline shall depend on TF only when necessary by yingdachen in https://github.com/modelscope/modelscope/pull/1059
* fix: text error correction batch run bug by smartmark-pro in https://github.com/modelscope/modelscope/pull/1052
* add log for download location by yingdachen in https://github.com/modelscope/modelscope/pull/1061
* add repo existence check hub-api by yingdachen in https://github.com/modelscope/modelscope/pull/1060
* improve upload model, remove requirment for configuration.json by yingdachen in https://github.com/modelscope/modelscope/pull/1062
* Template.to_ollama: add new argument `split` by suluyana in https://github.com/modelscope/modelscope/pull/1039
* Feat(multimodal model):ovis vl pipeline by suluyana in https://github.com/modelscope/modelscope/pull/1057
* default install tf-keras by tastelikefeet in https://github.com/modelscope/modelscope/pull/1064
* Symbolic link by yingdachen in https://github.com/modelscope/modelscope/pull/1063
* Fix the missing __init__.py file. by Jintao-Huang in https://github.com/modelscope/modelscope/pull/1066
* try to reduce the image size of llm by tastelikefeet in https://github.com/modelscope/modelscope/pull/1067
* fix numpy build error by tastelikefeet in https://github.com/modelscope/modelscope/pull/1068
* fix docker numpy version by Jintao-Huang in https://github.com/modelscope/modelscope/pull/1069
* feat ollama template: llama3.2-vision by suluyana in https://github.com/modelscope/modelscope/pull/1070
* update docker evalscope version by Jintao-Huang in https://github.com/modelscope/modelscope/pull/1071
* update docker by Jintao-Huang in https://github.com/modelscope/modelscope/pull/1073
* update docker by Jintao-Huang in https://github.com/modelscope/modelscope/pull/1075
* Update llm docker by Jintao-Huang in https://github.com/modelscope/modelscope/pull/1076

New Contributors
* Mashiro009 made their first contribution in https://github.com/modelscope/modelscope/pull/1019
* smartmark-pro made their first contribution in https://github.com/modelscope/modelscope/pull/1052

**Full Changelog**: https://github.com/modelscope/modelscope/compare/v1.19.2...v1.20.0

1.19.2

Hotfix: Set datasets<=3.0.1 to fix datasets import error
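
If you hit the import error on 1.19.x, a quick standard-library check confirms whether the installed datasets version satisfies the pin:

```python
# Confirm the installed datasets version satisfies the 1.19.2 pin (datasets<=3.0.1).
from importlib.metadata import version

print(version('datasets'))  # expected: 3.0.1 or lower
```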
