diff --git a/basic_demo/README.md b/basic_demo/README.md
index 8f76576..cd747ea 100644
--- a/basic_demo/README.md
+++ b/basic_demo/README.md
@@ -142,5 +142,15 @@ python openai_api_request.py
 python trans_stress_test.py
 ```
 
+## Running the Code on Ascend NPUs
 
+Users can run the above code on Ascend hardware by simply replacing the `transformers` import with `openmind` and changing the `cuda` device in `device` to `npu`:
+
+```python
+#from transformers import AutoModelForCausalLM, AutoTokenizer
+from openmind import AutoModelForCausalLM, AutoTokenizer
+
+#device = 'cuda'
+device = 'npu'
+```
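+
+For reference, a fuller end-to-end sketch of an adapted script is shown below. This is only an illustration, not a script shipped with this repository: it assumes `openmind` exposes the same `from_pretrained`/`generate` interface as `transformers` (as the note above implies), and the model path is a placeholder to replace with your own checkpoint.
+
+```python
+# Illustrative sketch only; assumes openmind mirrors the transformers API.
+import torch
+import torch_npu  # registers the Ascend 'npu' device; may already be imported by openmind
+from openmind import AutoModelForCausalLM, AutoTokenizer
+
+device = 'npu'
+MODEL_PATH = 'THUDM/glm-4-9b-chat'  # placeholder, replace with your model path
+
+tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
+model = AutoModelForCausalLM.from_pretrained(
+    MODEL_PATH,
+    torch_dtype=torch.bfloat16,
+    trust_remote_code=True,
+).to(device).eval()
+
+inputs = tokenizer('Hello, who are you?', return_tensors='pt').to(device)
+with torch.no_grad():
+    outputs = model.generate(**inputs, max_new_tokens=128)
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+```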
 
diff --git a/basic_demo/README_en.md b/basic_demo/README_en.md
index 9fc53ab..434c08c 100644
--- a/basic_demo/README_en.md
+++ b/basic_demo/README_en.md
@@ -147,3 +147,15 @@ Users can use this code to test the generation speed of the model on the transfo
 ```shell
 python trans_stress_test.py
 ```
+
+## Running the Code on Ascend NPUs
+
+Users can run the above code on Ascend hardware by simply replacing the `transformers` import with `openmind` and changing the `cuda` device in `device` to `npu`:
+
+```python
+#from transformers import AutoModelForCausalLM, AutoTokenizer
+from openmind import AutoModelForCausalLM, AutoTokenizer
+
+#device = 'cuda'
+device = 'npu'
+```
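+
+For reference, a fuller end-to-end sketch of an adapted script is shown below. This is only an illustration, not a script shipped with this repository: it assumes `openmind` exposes the same `from_pretrained`/`generate` interface as `transformers` (as the note above implies), and the model path is a placeholder to replace with your own checkpoint.
+
+```python
+# Illustrative sketch only; assumes openmind mirrors the transformers API.
+import torch
+import torch_npu  # registers the Ascend 'npu' device; may already be imported by openmind
+from openmind import AutoModelForCausalLM, AutoTokenizer
+
+device = 'npu'
+MODEL_PATH = 'THUDM/glm-4-9b-chat'  # placeholder, replace with your model path
+
+tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
+model = AutoModelForCausalLM.from_pretrained(
+    MODEL_PATH,
+    torch_dtype=torch.bfloat16,
+    trust_remote_code=True,
+).to(device).eval()
+
+inputs = tokenizer('Hello, who are you?', return_tensors='pt').to(device)
+with torch.no_grad():
+    outputs = model.generate(**inputs, max_new_tokens=128)
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+```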