commit 23773d94e2

python trans_stress_test.py
```

## Use Ascend cards to run the code
Users can run the above code on Ascend hardware by simply replacing transformers with openmind and changing the cuda device in device to npu:
```python
# from transformers import AutoModelForCausalLM, AutoTokenizer
from openmind import AutoModelForCausalLM, AutoTokenizer

# device = 'cuda'
device = 'npu'
```
Users can use this code to test the generation speed of the model on transformers:

```shell
python trans_stress_test.py
```

## Use Ascend cards to run the code
Users can run the above code in an Ascend hardware environment. They only need to replace transformers with openmind and change the cuda device in device to npu:
```python
# from transformers import AutoModelForCausalLM, AutoTokenizer
from openmind import AutoModelForCausalLM, AutoTokenizer

# device = 'cuda'
device = 'npu'
```
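The two-line swap above can also be made conditional, so one script runs on either backend. A minimal sketch, assuming (as this section states) that the openmind package mirrors the transformers Auto* API, and falling back to transformers when openmind is not installed:

```python
import importlib.util


def pick_backend():
    """Pick the model library and device string for this machine.

    Sketch only: assumes openmind mirrors the transformers Auto* API,
    per the substitution shown above.
    """
    if importlib.util.find_spec("openmind") is not None:
        return "openmind", "npu"   # Ascend NPU stack is available
    return "transformers", "cuda"  # default CUDA stack


backend, device = pick_backend()
```

`AutoModelForCausalLM` and `AutoTokenizer` can then be loaded from the chosen library with `importlib.import_module(backend)`, and the model moved to `device` as usual.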