Merge pull request #701 from openvino-dev-samples/main
Update the introduction to Intel Demo
commit 64476493cf
@@ -286,6 +286,10 @@ for o in outputs:
+ PEFT (LORA, P-Tuning) fine-tuning code
+ SFT fine-tuning code

+ [intel_device_demo](intel_device_demo/): Contains
+ Model deployment code using OpenVINO
+ Model deployment code using Intel® Extension for Transformers

## Friendly Links

+ [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory): An efficient open-source fine-tuning framework that already supports fine-tuning the GLM-4-9B-Chat language model.
@@ -297,6 +301,8 @@ for o in outputs:
+ [self-llm](https://github.com/datawhalechina/self-llm/tree/master/models/GLM-4): Tutorials from the Datawhale team on using the GLM-4-9B series models.
+ [chatglm.cpp](https://github.com/li-plus/chatglm.cpp): A quantization-accelerated inference solution similar to llama.cpp, enabling real-time conversation on a laptop.
+ [OpenVINO](https://github.com/openvinotoolkit): A high-performance CPU, GPU and NPU accelerated inference solution developed by Intel; follow these [steps](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/llm-chatbot/llm-chatbot-generate-api.ipynb) to deploy the glm-4-9b-chat model.

## License

README_en.md (21 changes)
@@ -304,17 +304,21 @@ If you want to learn more about the GLM-4-9B series open source models, this ope
with basic GLM-4-9B usage and development code through the following content

+ [basic_demo](basic_demo/README.md): Contains
+ Interaction code using transformers and vLLM backend
+ OpenAI API backend interaction code
+ Batch inference code

+ [composite_demo](composite_demo/README.md): Contains
+ Fully functional demonstration code for GLM-4-9B and GLM-4V-9B open source models, including All Tools capabilities,
long document interpretation, and multimodal capabilities.

+ [finetune_demo](finetune_demo/README.md): Contains
+ PEFT (LORA, P-Tuning) fine-tuning code
+ SFT fine-tuning code

+ [intel_device_demo](intel_device_demo/): Contains
+ OpenVINO deployment code
+ Intel® Extension for Transformers deployment code

## Friendly Links

@@ -331,6 +335,9 @@ with basic GLM-4-9B usage and development code through the following content
the GLM-4-9B open source model cookbook.
+ [chatglm.cpp](https://github.com/li-plus/chatglm.cpp): Real-time inference on your laptop accelerated by quantization,
similar to llama.cpp.
+ [OpenVINO](https://github.com/openvinotoolkit): glm-4-9b-chat already supports OpenVINO. The toolkit accelerates inference, with notable speed-ups on Intel's CPU, GPU and NPU devices. For
specific usage, please refer to the [OpenVINO notebooks](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/llm-chatbot/llm-chatbot-generate-api.ipynb)
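The OpenVINO entries in this PR point at a deployment notebook rather than code. As a minimal, illustrative sketch only: this assumes the model has first been exported to OpenVINO IR (e.g. via `optimum-cli export openvino`), and the model directory name is a placeholder, not something from this PR.

```python
# Hedged sketch of OpenVINO GenAI deployment; directory name is illustrative.
# Assumed prior export step (not part of this PR):
#   optimum-cli export openvino --model THUDM/glm-4-9b-chat glm-4-9b-chat-ov
import os

MODEL_DIR = "glm-4-9b-chat-ov"  # placeholder path to the exported IR model

def build_generation_config(max_new_tokens: int = 128) -> dict:
    """Generation options forwarded to the pipeline as keyword arguments."""
    return {"max_new_tokens": max_new_tokens, "do_sample": False}

if os.path.isdir(MODEL_DIR):
    import openvino_genai
    # Device can be "CPU", "GPU", or "NPU", matching the entry above.
    pipe = openvino_genai.LLMPipeline(MODEL_DIR, "CPU")
    print(pipe.generate("Hello", **build_generation_config(64)))
```

The heavy pipeline construction is guarded by a directory check, so the sketch degrades gracefully when no exported model is present.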

## License