Update friendly links
Parent: 6c6d4637fb
Commit: 6ae0f088ac

README.md (12 changed lines)
@@ -35,13 +35,6 @@ GLM-4V-9B. **GLM-4V-9B** offers Chinese-English bilingual multi-turn dialogue at 1120 * 1120 high resolution
 | GLM-4-9B-Chat-1M | Chat | 1M | [🤗 Huggingface](https://huggingface.co/THUDM/glm-4-9b-chat-1m) [🤖 ModelScope](https://modelscope.cn/models/ZhipuAI/glm-4-9b-chat-1m) [🟣 WiseModel](https://wisemodel.cn/models/ZhipuAI/GLM-4-9B-Chat-1M) | / |
 | GLM-4V-9B | Chat | 8K | [🤗 Huggingface](https://huggingface.co/THUDM/glm-4v-9b) [🤖 ModelScope](https://modelscope.cn/models/ZhipuAI/glm-4v-9b) [🟣 WiseModel](https://wisemodel.cn/models/ZhipuAI/GLM-4V-9B) | [🤖 ModelScope](https://modelscope.cn/studios/ZhipuAI/glm-4v-9b-Demo/summary) |
-## Friendly Links
-
-The following excellent open-source repositories already provide deep support for the GLM-4-9B models; everyone is welcome to explore and learn from them.
-
-Inference acceleration:
-
-* [chatglm.cpp](https://github.com/li-plus/chatglm.cpp): quantization-accelerated inference in the style of llama.cpp, enabling real-time chat on a laptop
 
 ## Evaluation Results

@@ -247,10 +240,11 @@ with torch.no_grad():
 ## Friendly Links
 
 + [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory): efficient open-source fine-tuning framework; already supports fine-tuning of the GLM-4-9B-Chat language model.
-+ [SWIFT](https://github.com/modelscope/swift): LLM / multimodal-LLM training framework from the ModelScope community; already supports GLM4-9B-Chat/GLM4v-9B-Chat
-model fine-tuning.
++ [SWIFT](https://github.com/modelscope/swift): LLM / multimodal-LLM training framework from the ModelScope community; already supports GLM-4-9B-Chat / GLM-4V-9B model fine-tuning.
 + [Xorbits Inference](https://github.com/xorbitsai/inference): powerful and full-featured distributed inference framework; deploy your own model or a built-in state-of-the-art open-source model with one click.
++ [LangChain-ChatChat](https://github.com/chatchat-space/Langchain-Chatchat): RAG and Agent applications built on Langchain and language models such as ChatGLM
 + [self-llm](https://github.com/datawhalechina/self-llm/tree/master/GLM-4): usage tutorials for the GLM-4-9B series models, provided by the Datawhale team.
++ [chatglm.cpp](https://github.com/li-plus/chatglm.cpp): quantization-accelerated inference in the style of llama.cpp, enabling real-time chat on a laptop
 
 ## License

README_en.md (12 changed lines)
@@ -39,14 +39,6 @@ GPT-4-turbo-2024-04-09, Gemini 1.0 Pro, Qwen-VL-Max, and Claude 3 Opus.
 | GLM-4-9B-Chat-1M | Chat | 1M | [🤗 Huggingface](https://huggingface.co/THUDM/glm-4-9b-chat-1m) [🤖 ModelScope](https://modelscope.cn/models/ZhipuAI/glm-4-9b-chat-1m) [🟣 WiseModel](https://wisemodel.cn/models/ZhipuAI/GLM-4-9B-Chat-1M) | / |
 | GLM-4V-9B | Chat | 8K | [🤗 Huggingface](https://huggingface.co/THUDM/glm-4v-9b) [🤖 ModelScope](https://modelscope.cn/models/ZhipuAI/glm-4v-9b) [🟣 WiseModel](https://wisemodel.cn/models/ZhipuAI/GLM-4V-9B) | [🤖 ModelScope](https://modelscope.cn/studios/ZhipuAI/glm-4v-9b-Demo/summary) |
-
-## Projects
-
-The following excellent open source repositories have in-depth support for the GLM-4-9B model, and everyone is welcome to learn from them.
-
-Inference acceleration:
-
-* [chatglm.cpp](https://github.com/li-plus/chatglm.cpp): Real-time inference on your laptop accelerated by quantization, similar to llama.cpp.
 
 ## Benchmark
 
 ### Typical Tasks

@@ -259,11 +251,13 @@ with basic GLM-4-9B usage and development code through the following content
 + [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory): Efficient open-source fine-tuning framework,
 already supports GLM-4-9B-Chat language model fine-tuning.
 + [SWIFT](https://github.com/modelscope/swift): LLM/VLM training framework from ModelScope, supports
-GLM4-9B-Chat/GLM4v-9b-chat fine-tuning.
+GLM-4-9B-Chat / GLM-4V-9B fine-tuning.
 + [Xorbits Inference](https://github.com/xorbitsai/inference): Performance-enhanced and comprehensive global inference
 framework, easily deploy your own models or import cutting-edge open source models with one click.
++ [LangChain-ChatChat](https://github.com/chatchat-space/Langchain-Chatchat): RAG and Agent applications built on Langchain and language models such as ChatGLM
 + [self-llm](https://github.com/datawhalechina/self-llm/tree/master/GLM-4): Datawhale's self-llm project, which includes
 the GLM-4-9B open source model cookbook.
++ [chatglm.cpp](https://github.com/li-plus/chatglm.cpp): Real-time inference on your laptop accelerated by quantization, similar to llama.cpp.
 
 ## License
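
Both second hunks sit right next to the READMEs' quick-start code (note the `with torch.no_grad():` context in the hunk header above), and the tables list where the model weights live. As a minimal sketch of that kind of usage, assuming the Hugging Face `transformers` AutoModel API and the `THUDM/glm-4-9b-chat` repo from the tables, with placeholder prompt and generation settings (the quick-start snippet in the README itself is the reference):

```python
# Illustrative sketch only: load the GLM-4-9B-Chat weights linked in the tables
# above with Hugging Face transformers and generate a short reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THUDM/glm-4-9b-chat"  # repo id from the 🤗 Huggingface column

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
).eval()

# Build a chat-formatted prompt; the user message is a placeholder.
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Briefly introduce GLM-4-9B."}],
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
    return_dict=True,
).to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```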