fix readme

Commit 6af85cd400 by zR, 2024-08-11 18:00:42 +08:00
2 changed files with 18 additions and 3 deletions


@ -16,6 +16,11 @@ Read this in [English](README_en.md)
- 🔥🔥 **News**: ```2024/07/24```: We released our latest technical deep dive on long context.
  See [here](https://medium.com/@ChatGLM/glm-long-scaling-pre-trained-model-contexts-to-millions-caa3c48dea85)
  for our technical report on the long-context techniques used in training the open-source GLM-4-9B model.
- 🔥 **News**: ``2024/7/16``: The `transformers` version that the GLM-4-9B-Chat model depends on has been upgraded
  to `4.42.4`. Please update the model configuration files and update the dependencies as per `basic_demo/requirements.txt`.
- 🔥 **News**: ``2024/7/9``: The GLM-4-9B-Chat model has been adapted to [Ollama](https://github.com/ollama/ollama)
  and [Llama.cpp](https://github.com/ggerganov/llama.cpp); you can check the specific details in
  this [PR](https://github.com/ggerganov/llama.cpp/pull/8031).
@ -256,7 +261,8 @@ with torch.no_grad():
+ [Xorbits Inference](https://github.com/xorbitsai/inference): A powerful, full-featured distributed inference
  framework; deploy your own models or built-in state-of-the-art open-source models with one click.
+ [LangChain-ChatChat](https://github.com/chatchat-space/Langchain-Chatchat): RAG and Agent applications based on
  Langchain and language models such as ChatGLM
+ [self-llm](https://github.com/datawhalechina/self-llm/tree/master/models/GLM-4): A usage tutorial for the
  GLM-4-9B series models, provided by the Datawhale team.
+ [chatglm.cpp](https://github.com/li-plus/chatglm.cpp): A llama.cpp-style quantized inference solution, enabling
  real-time conversation on a laptop
## License


@ -10,12 +10,20 @@
## Update
- 🔥🔥 **News**: ```2024/08/12```: The `transformers` version required for the GLM-4-9B-Chat model has been upgraded
  to `4.44.0`. Please pull all files again except for the model weights (`*.safetensor` files and `tokenizer.model`),
  and strictly update the dependencies as per `basic_demo/requirements.txt`.
- 🔥 **News**: ```2024/07/24```: We released the latest technical interpretation related to long context. Check
  out [here](https://medium.com/@ChatGLM/glm-long-scaling-pre-trained-model-contexts-to-millions-caa3c48dea85) to view
  our technical report on long-context technology used in training the open-source GLM-4-9B model.
- 🔥 **News**: ``2024/7/16``: The `transformers` version that the GLM-4-9B-Chat model depends on has been upgraded
to `4.42.4`. Please update the model configuration file and refer to `basic_demo/requirements.txt` to update the
dependencies.
- 🔥 **News**: ``2024/7/9``: The GLM-4-9B-Chat model has been adapted to [Ollama](https://github.com/ollama/ollama)
  and [Llama.cpp](https://github.com/ggerganov/llama.cpp); you can check the specific details in
  this [PR](https://github.com/ggerganov/llama.cpp/pull/8031).
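Several of the updates above pin a minimum `transformers` version (`4.42.4`, then `4.44.0`). As a minimal sketch (the helper name and the hand-rolled comparison are illustrative, not part of the project), a dotted version string must be compared component-by-component as integers, since a lexical string comparison gets it wrong:

```python
def meets_requirement(installed: str, required: str = "4.44.0") -> bool:
    """Return True if `installed` satisfies the pinned minimum version.

    Compares each dotted component numerically: lexically, "4.9.0" would
    wrongly sort after "4.44.0" because "9" > "4" as a character.
    Assumes plain numeric versions like "4.44.0" (no rc/dev suffixes).
    """
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(installed) >= parse(required)

print(meets_requirement("4.42.4"))  # older than the 4.44.0 pin -> False
print(meets_requirement("4.44.0"))  # exactly the pin -> True
```

In practice you would simply run `pip install -r basic_demo/requirements.txt` (or use `packaging.version.Version`, which also handles pre-release suffixes); the sketch only shows why the pin is a numeric, not lexical, comparison.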
@ -275,7 +283,8 @@ with basic GLM-4-9B usage and development code through the following content
+ [Xorbits Inference](https://github.com/xorbitsai/inference): A powerful and comprehensive distributed inference
  framework; easily deploy your own models or import cutting-edge open source models with one click.
+ [LangChain-ChatChat](https://github.com/chatchat-space/Langchain-Chatchat): RAG and Agent applications based on
language models such as Langchain and ChatGLM
+ [self-llm](https://github.com/datawhalechina/self-llm/tree/master/models/GLM-4): Datawhale's self-llm project, which
  includes the GLM-4-9B open source model cookbook.
+ [chatglm.cpp](https://github.com/li-plus/chatglm.cpp): Real-time inference on your laptop accelerated by quantization,
similar to llama.cpp.