ModelScope

Latest version: v1.20.1


1.20.1

Bug Fixes:

1. Fix an import error that could cause `snapshot_download` to fail in a clean Python environment
2. Reduce the log output of `snapshot_download`
3. Fix a bug that could cause the `facial_68ldk_detection` pipeline to fail



**Full Changelog**: https://github.com/modelscope/modelscope/compare/v1.20.0...v1.20.1

1.20.0

English Version
1. New Models
1. [iic/speech_zipenhancer_ans_multiloss_16k_base](https://modelscope.cn/models/iic/speech_zipenhancer_ans_multiloss_16k_base). https://github.com/modelscope/modelscope/pull/1019
2. [AIDC-AI/Ovis1.6-Gemma2-9B](https://modelscope.cn/models/AIDC-AI/Ovis1.6-Gemma2-9B). https://github.com/modelscope/modelscope/pull/1057
2. Hub Side:
1. Created symbolic links in `snapshot_download` to avoid models with '.' in their names not being found. https://github.com/modelscope/modelscope/pull/1063
2. Improved model upload by removing the requirement for configuration.json. https://github.com/modelscope/modelscope/pull/1062
3. Added a hub API to check whether a repository exists. https://github.com/modelscope/modelscope/pull/1060
3. Enhanced the template `to_ollama` function to support more models. https://github.com/modelscope/modelscope/pull/1039, https://github.com/modelscope/modelscope/pull/1070.
4. Docker optimization and upgrades; removed unnecessary dependencies from the LLM image.
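The symbolic-link fix for model names containing '.' can be sketched in plain stdlib terms. The paths and the sanitized naming scheme below are illustrative, not ModelScope's actual cache layout:

```python
import os
import tempfile

# Sketch of the workaround: some lookup code mishandles '.' in a
# directory name, so the files live under a sanitized name and a
# symlink preserves the original model name.
cache = tempfile.mkdtemp()
real_dir = os.path.join(cache, "Ovis1___6-Gemma2-9B")   # sanitized name
os.makedirs(real_dir)
open(os.path.join(real_dir, "config.json"), "w").close()

link = os.path.join(cache, "Ovis1.6-Gemma2-9B")         # original name
os.symlink(real_dir, link)

# Lookups by the dotted model name now resolve through the symlink.
print(os.path.exists(os.path.join(link, "config.json")))  # True
```

Callers keep using the dotted name; only the on-disk directory name changes.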




What's Changed
* Fix timestamp in docker build by tastelikefeet in https://github.com/modelscope/modelscope/pull/1049
* feat(audio/ans): Add ZipEnhancer and related layers for acoustic nois… by Mashiro009 in https://github.com/modelscope/modelscope/pull/1019
* Fix the slow downloading by tastelikefeet in https://github.com/modelscope/modelscope/pull/1051
* Fix bash and transformers version by tastelikefeet in https://github.com/modelscope/modelscope/pull/1053
* Fix some bugs by tastelikefeet in https://github.com/modelscope/modelscope/pull/1056
* fix(audio ans pipeline): Restore file reading from string input in ANSZip… by Mashiro009 in https://github.com/modelscope/modelscope/pull/1055
* OCR pipeline shall depend on TF only when necessary by yingdachen in https://github.com/modelscope/modelscope/pull/1059
* fix: text error correction batch run bug by smartmark-pro in https://github.com/modelscope/modelscope/pull/1052
* add log for download location by yingdachen in https://github.com/modelscope/modelscope/pull/1061
* add repo existence check hub-api by yingdachen in https://github.com/modelscope/modelscope/pull/1060
* improve upload model, remove requirement for configuration.json by yingdachen in https://github.com/modelscope/modelscope/pull/1062
* Template.to_ollama: add new argument `split` by suluyana in https://github.com/modelscope/modelscope/pull/1039
* Feat(multimodal model):ovis vl pipeline by suluyana in https://github.com/modelscope/modelscope/pull/1057
* default install tf-keras by tastelikefeet in https://github.com/modelscope/modelscope/pull/1064
* Symbolic link by yingdachen in https://github.com/modelscope/modelscope/pull/1063
* Fix the missing __init__.py file. by Jintao-Huang in https://github.com/modelscope/modelscope/pull/1066
* try to reduce the image size of llm by tastelikefeet in https://github.com/modelscope/modelscope/pull/1067
* fix numpy build error by tastelikefeet in https://github.com/modelscope/modelscope/pull/1068
* fix docker numpy version by Jintao-Huang in https://github.com/modelscope/modelscope/pull/1069
* feat ollama template: llama3.2-vision by suluyana in https://github.com/modelscope/modelscope/pull/1070
* update docker evalscope version by Jintao-Huang in https://github.com/modelscope/modelscope/pull/1071
* update docker by Jintao-Huang in https://github.com/modelscope/modelscope/pull/1073
* update docker by Jintao-Huang in https://github.com/modelscope/modelscope/pull/1075
* Update llm docker by Jintao-Huang in https://github.com/modelscope/modelscope/pull/1076

New Contributors
* Mashiro009 made their first contribution in https://github.com/modelscope/modelscope/pull/1019
* smartmark-pro made their first contribution in https://github.com/modelscope/modelscope/pull/1052

**Full Changelog**: https://github.com/modelscope/modelscope/compare/v1.19.2...v1.20.0

1.19.2

Hotfix: Pin `datasets<=3.0.1` to fix a `datasets` import error
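If you hit this import error on an earlier release, the pin can be expressed directly in a requirements file (version numbers per the hotfix note above):

```
# requirements.txt
modelscope==1.19.2
datasets<=3.0.1
```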

1.19.1

English Version
1. Update the versions of outlines and vllm in the Dockerfile: https://github.com/modelscope/modelscope/pull/1034
2. Fix the processor and feature extractor in hf_util: https://github.com/modelscope/modelscope/pull/1031
3. Fix a device_map issue for GPTQ: https://github.com/modelscope/modelscope/pull/1027
4. Fix a lint issue: https://github.com/modelscope/modelscope/pull/1025
5. Fix an if-else condition: https://github.com/modelscope/modelscope/pull/1024




**Full Changelog**: https://github.com/modelscope/modelscope/compare/v1.19.0...release/1.19

1.19.0

English Version
1. Add a clear-cache command to the modelscope command line; refer to: https://github.com/modelscope/modelscope/pull/1009
2. Adapt to datasets>=3.0; refer to: https://github.com/modelscope/modelscope/pull/1002
3. Add a template module: https://github.com/modelscope/modelscope/pull/995
4. Ignore .pt/.pth format files for AutoTokenizer: https://github.com/modelscope/modelscope/pull/1008
5. Unify the log format in MsDataset: https://github.com/modelscope/modelscope/pull/997
6. Improve the coverage of loading and previewing datasets from the ModelScope hub.
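The idea behind item 4, skipping checkpoint files when resolving tokenizer assets, can be sketched as follows. The file list and patterns here are hypothetical; the real filter lives in ModelScope's hf_util:

```python
import fnmatch

# Hypothetical repo file listing; ModelScope applies its real filter
# inside the AutoTokenizer download logic.
repo_files = [
    "tokenizer.json", "tokenizer_config.json",
    "pytorch_model.pt", "model-00001.pth", "vocab.txt",
]

IGNORED = ("*.pt", "*.pth")  # checkpoint formats a tokenizer never needs

def tokenizer_files(files):
    """Drop checkpoint files so only tokenizer assets are fetched."""
    return [f for f in files
            if not any(fnmatch.fnmatch(f, pat) for pat in IGNORED)]

print(tokenizer_files(repo_files))
# ['tokenizer.json', 'tokenizer_config.json', 'vocab.txt']
```

Filtering at resolution time avoids downloading multi-gigabyte weights just to build a tokenizer.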




**Full Changelog**: https://github.com/modelscope/modelscope/compare/v1.18.1...release/1.19

1.18.1

English Version

1. Fix a bug where zero-sized files could not be downloaded
2. Support patching the hub to work with the latest versions of vllm and other frameworks
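Hub patching here means swapping a framework's download entry point at runtime so unmodified code fetches from the ModelScope hub instead. A minimal illustration with stand-in names follows; none of these are ModelScope's actual APIs:

```python
class Hub:
    """Stand-in for a framework's hub client (illustrative only)."""
    @staticmethod
    def download(repo_id):
        return f"hf://{repo_id}"

def ms_download(repo_id):
    """Stand-in for a ModelScope-backed download."""
    return f"modelscope://{repo_id}"

# Patch: framework code that calls Hub.download now resolves
# against the replacement backend, with no changes to its source.
Hub.download = staticmethod(ms_download)

print(Hub.download("qwen/Qwen2-7B"))  # modelscope://qwen/Qwen2-7B
```

Because callers hold a reference to the class rather than the original function, the swap takes effect everywhere at once.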


What's Changed
* use tqdm auto by yingdachen in https://github.com/modelscope/modelscope/pull/982
* Support create file with size 0 by tastelikefeet in https://github.com/modelscope/modelscope/pull/984
* patch hf hub by tastelikefeet in https://github.com/modelscope/modelscope/pull/987
* Refactor zero sized file downloading by tastelikefeet in https://github.com/modelscope/modelscope/pull/991


**Full Changelog**: https://github.com/modelscope/modelscope/compare/v1.18.0...v1.18.1
