Update README_en.md
parent 8d089917df
commit 7a4dfa354b
@@ -316,6 +316,10 @@ with basic GLM-4-9B usage and development code through the following content
+ PEFT (LoRA, P-Tuning) fine-tuning code
+ SFT fine-tuning code
+ [intel_device_demo](intel_device_demo/): Contains
+ OpenVINO deployment code
+ Intel® Extension for Transformers deployment code

## Friendly Links

+ [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory): Efficient open-source fine-tuning framework,
@@ -331,6 +335,9 @@ with basic GLM-4-9B usage and development code through the following content
the GLM-4-9B open source model cookbook.
+ [chatglm.cpp](https://github.com/li-plus/chatglm.cpp): Real-time inference on your laptop accelerated by quantization,
similar to llama.cpp.
+ [OpenVINO](https://github.com/openvinotoolkit): glm-4-9b-chat already supports OpenVINO. The toolkit accelerates inference, with larger speed improvements on Intel's CPU, GPU, and NPU devices. For
specific usage, please refer to [OpenVINO notebooks](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/llm-chatbot/llm-chatbot-generate-api.ipynb)

## License