Change README and requirements for the new vLLM version
parent 4446f60195
commit 804733811f
@@ -11,8 +11,8 @@ Read this in [English](README_en.md)
 ## Project Updates
 
-- 🔥🔥 **News**: ```2024/11/01```: The dependencies of this repository have been upgraded. Please update the dependencies in `requirements.txt` to keep the models running correctly. [glm-4-9b-chat-hf](https://huggingface.co/THUDM/glm-4-9b-chat-hf) provides model weights compatible with `transformers>=4.46`, implemented with the `GlmModel` class in the `transformers` library.
-  In addition, `tokenizer_chatglm.py` in [glm-4-9b-chat](https://huggingface.co/THUDM/glm-4-9b-chat) and [glm-4v-9b](https://huggingface.co/THUDM/glm-4v-9b) has been updated for the latest version of the `transformers` library. Please fetch the updated files from HuggingFace.
+- 🔥🔥 **News**: ```2024/11/01```: The dependencies of this repository have been upgraded. Please update the dependencies in `requirements.txt` to keep the models running correctly. [glm-4-9b-chat-hf](https://huggingface.co/THUDM/glm-4-9b-chat-hf) provides model weights compatible with `transformers>=4.46.2`, implemented with the `GlmModel` class in the `transformers` library.
+  In addition, `tokenizer_chatglm.py` in [glm-4-9b-chat](https://huggingface.co/THUDM/glm-4-9b-chat) and [glm-4v-9b](https://huggingface.co/THUDM/glm-4v-9b) has been updated for the latest version of the `transformers` library. Please fetch the updated files from HuggingFace.
 - 🔥 **News**: ```2024/10/27```: We open-sourced [LongReward](https://github.com/THUDM/LongReward), which uses AI feedback to improve long-context large language models.
 - 🔥 **News**: ```2024/10/25```: We open-sourced [GLM-4-Voice](https://github.com/THUDM/GLM-4-Voice), an end-to-end Chinese-English speech dialogue model.
 - 🔥 **News**: ```2024/09/05```: We open-sourced [longcite-glm4-9b](https://huggingface.co/THUDM/LongCite-glm4-9b), a model that lets LLMs generate fine-grained citations in long-context question answering, along with the dataset [LongCite-45k](https://huggingface.co/datasets/THUDM/LongCite-45k). Try it online in the [Huggingface Space](https://huggingface.co/spaces/THUDM/LongCite).
@@ -11,8 +11,8 @@
 - 🔥🔥 **News**: ```2024/11/01```: Dependencies have been updated in this repository. Please update the dependencies in
 `requirements.txt` to ensure the model runs correctly. The model weights
-for [glm-4-9b-chat-hf](https://huggingface.co/THUDM/glm-4-9b-chat-hf) are compatible with `transformers>=4.46` and can
-be implemented using the `GlmModel` class in the transformers library. Additionally, `tokenizer_chatglm.py`
+for [glm-4-9b-chat-hf](https://huggingface.co/THUDM/glm-4-9b-chat-hf) are compatible with `transformers>=4.46.2` and can
+be implemented using the `GlmModel` class in the `transformers` library. Additionally, `tokenizer_chatglm.py`
 in [glm-4-9b-chat](https://huggingface.co/THUDM/glm-4-9b-chat) and [glm-4v-9b](https://huggingface.co/THUDM/glm-4v-9b)
 has been updated for the latest version of `transformers`. Please update the files on HuggingFace.
 - 🔥 **News**: ```2024/10/27```: We have open-sourced [LongReward](https://github.com/THUDM/LongReward), a model that
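The README note above can be exercised with a short loading sketch. This is a minimal illustration, not code from this commit: it assumes network access to Hugging Face and enough memory for the 9B weights, and the helper names (`format_messages`, `generate_reply`) are hypothetical; only the model id and the standard `transformers` calls come from the README.

```python
# Minimal sketch: loading the transformers>=4.46.2-native GLM-4 weights.
MODEL_ID = "THUDM/glm-4-9b-chat-hf"  # GlmModel-based weights from the README

def format_messages(user_message: str) -> list:
    # Single-turn conversation in the layout apply_chat_template expects.
    return [{"role": "user", "content": user_message}]

def generate_reply(user_message: str, max_new_tokens: int = 128) -> str:
    # Heavy imports kept local so the pure helper above is importable
    # without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        format_messages(user_message),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][inputs.shape[1]:], skip_special_tokens=True)
```

Note that with the `-hf` weights no `trust_remote_code=True` is needed, since the architecture ships inside `transformers` itself.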
@@ -1,7 +1,6 @@
-torch>=2.5.0
+torch>=2.5.1 # vLLM will use it
 torchvision>=0.20.0
-transformers>=4.46.0
-huggingface-hub>=0.25.1
+transformers>=4.46.2
 sentencepiece>=0.2.0
 jinja2>=3.1.4
 pydantic>=2.9.2
@@ -11,13 +10,12 @@ numpy==1.26.4 # Need less than 2.0.0
 accelerate>=1.0.1
 sentence_transformers>=3.1.1
 gradio==4.44.1 # web demo
-openai>=1.51.0 # openai demo
+openai>=1.54.0 # openai demo
 einops>=0.8.0
 pillow>=10.4.0
 sse-starlette>=2.1.3
-bitsandbytes>=0.43.3 # INT4 Loading
+bitsandbytes>=0.44.1 # INT4 Loading
-
-# vllm>=0.6.3 # using with VLLM Framework
 # flash-attn>=2.6.3 # using with flash-attention 2
+vllm>=0.6.4.post1 # using with VLLM Framework
 # PEFT model, not need if you don't use PEFT finetune model.
-# peft>=0.13.0 # Using with finetune model
+# peft>=0.14.0 # Using with finetune model
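Since several minimums move in this hunk (`torch`, `transformers`, `openai`, `bitsandbytes`, and the now-uncommented `vllm`), a quick sanity check after `pip install -U -r requirements.txt` can catch stale installs. This is an illustrative sketch, not repo tooling: the lenient `parse_version` and `meets_minimum` helpers are assumptions; real code should use `packaging.version` for full PEP 440 handling.

```python
from importlib.metadata import PackageNotFoundError, version

# Updated minimums taken from this commit's requirements.txt.
MINIMUMS = {
    "torch": "2.5.1",
    "transformers": "4.46.2",
    "openai": "1.54.0",
    "bitsandbytes": "0.44.1",
    "vllm": "0.6.4.post1",
}

def parse_version(v: str) -> tuple:
    # Lenient parse: "0.6.4.post1" -> (0, 6, 4, 1). Good enough for the
    # numeric-ish tags above; use packaging.version for full PEP 440.
    parts = []
    for piece in v.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def meets_minimum(installed: str, minimum: str) -> bool:
    # Tuple comparison handles differing lengths: (4, 46, 2) > (4, 46).
    return parse_version(installed) >= parse_version(minimum)

def check_environment(minimums=MINIMUMS) -> dict:
    # Map package -> True/False, or None when the package is not installed.
    report = {}
    for name, minimum in minimums.items():
        try:
            report[name] = meets_minimum(version(name), minimum)
        except PackageNotFoundError:
            report[name] = None
    return report
```

Running `check_environment()` in the target environment flags any pin that still predates this commit.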
@@ -1,7 +1,9 @@
 jieba==0.42.1
 datasets==2.20.0
-peft==0.12.2
+peft==0.14.0
 deepspeed==0.14.4
 nltk==3.8.1
 rouge_chinese==1.0.3
 ruamel.yaml==0.18.6
+typer==0.13.0
+tqdm==4.67.0