diff --git a/README.md b/README.md
index 50ee4c0..c6600af 100644
--- a/README.md
+++ b/README.md
@@ -10,10 +10,15 @@
 Read this in [English](README_en.md)
 
 ## Project Updates
-- 🔥🔥 **News**: ```2024/07/24```: We released our latest technical deep dive on long-text capabilities; see [here](https://medium.com/@ChatGLM/glm-long-scaling-pre-trained-model-contexts-to-millions-caa3c48dea85) for our technical report on the long-context techniques used in training the open-source GLM-4-9B model.
-- 🔥 **News**: ``2024/7/16``: The `transformers` version that GLM-4-9B-Chat depends on has been upgraded to `4.42.4`; please update the model configuration files and update the dependencies per `basic_demo/requirements.txt`.
+
+- 🔥🔥 **News**: ``2024/08/12``: The `transformers` version that GLM-4-9B-Chat depends on has been upgraded to `4.44.0`. Please re-pull all files
+  except the model weights (the `*.safetensors` files and `tokenizer.model`) and strictly update the dependencies per `basic_demo/requirements.txt`.
+- 🔥 **News**: ``2024/07/24``: We released our latest technical deep dive on long-text capabilities.
+  See [here](https://medium.com/@ChatGLM/glm-long-scaling-pre-trained-model-contexts-to-millions-caa3c48dea85)
+  for our technical report on the long-context techniques used in training the open-source GLM-4-9B model.
 - 🔥 **News**: ``2024/7/9``: The GLM-4-9B-Chat
-  model has been adapted to [Ollama](https://github.com/ollama/ollama) and [Llama.cpp](https://github.com/ggerganov/llama.cpp); you can check the specific details in [this PR](https://github.com/ggerganov/llama.cpp/pull/8031).
+  model has been adapted to [Ollama](https://github.com/ollama/ollama) and [Llama.cpp](https://github.com/ggerganov/llama.cpp);
+  you can check the specific details in [this PR](https://github.com/ggerganov/llama.cpp/pull/8031).
 - 🔥 **News**: ``2024/7/1``: We updated fine-tuning for GLM-4V-9B. You need to update the run and configuration files in our model repository
   to support this feature; for more fine-tuning details (e.g. dataset format, GPU memory requirements), please see [finetune_demo](finetune_demo).
 - 🔥 **News**: ``2024/6/28``: We worked with the Intel technical team to improve the ITREX and OpenVINO deployment tutorials for GLM-4-9B-Chat. You can use Intel
diff --git a/README_en.md b/README_en.md
index 6d07987..936e61a 100644
--- a/README_en.md
+++ b/README_en.md
@@ -8,11 +8,13 @@
 </p>
 
 ## Update
-- 🔥🔥 **News**: ```2024/07/24```: We released the latest technical interpretation related to long texts. Check
-out [here](https://medium.com/@ChatGLM/glm-long-scaling-pre-trained-model-contexts-to-millions-caa3c48dea85) to view our
-technical report on long-context technology in the training of the open-source GLM-4-9B model.
-- 🔥 **News**: ``2024/7/16``: The `transformers` version that the GLM-4-9B-Chat model depends on has been upgraded
-to `4.42.4`. Please update the model configuration file and refer to `basic_demo/requirements.txt` to update the dependencies.
+
+- 🔥🔥 **News**: ``2024/08/12``: The `transformers` version required by the GLM-4-9B-Chat model has been upgraded
+  to `4.44.0`. Please re-download all files except the model weights (the `*.safetensors` files and `tokenizer.model`),
+  and strictly update the dependencies as per `basic_demo/requirements.txt`.
+- 🔥 **News**: ``2024/07/24``: We released the latest technical interpretation related to long texts. Check
+  out [here](https://medium.com/@ChatGLM/glm-long-scaling-pre-trained-model-contexts-to-millions-caa3c48dea85) to view
+  our technical report on long-context technology in the training of the open-source GLM-4-9B model.
 - 🔥 **News**: ``2024/7/9``: The GLM-4-9B-Chat model has been adapted to [Ollama](https://github.com/ollama/ollama)
   and [Llama.cpp](https://github.com/ggerganov/llama.cpp); you can check the specific details
   in [this PR](https://github.com/ggerganov/llama.cpp/pull/8031).
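
The 2024/08/12 note above asks users to re-pull every repo file except the model weights. A minimal sketch of that step using `huggingface_hub` (the repo id `THUDM/glm-4-9b-chat` and the local directory are assumptions; adjust them to your own checkout):

```python
# Minimal sketch: refresh code/config files while keeping the existing weights.
# Assumption: the model was downloaded from THUDM/glm-4-9b-chat into ./glm-4-9b-chat.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="THUDM/glm-4-9b-chat",
    local_dir="glm-4-9b-chat",
    # Skip the unchanged weight shards and tokenizer model named in the note.
    ignore_patterns=["*.safetensors", "tokenizer.model"],
)
```
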
diff --git a/basic_demo/requirements.txt b/basic_demo/requirements.txt
index b688d52..932759e 100644
--- a/basic_demo/requirements.txt
+++ b/basic_demo/requirements.txt
@@ -1,22 +1,23 @@
-torch>=2.3.0
-torchvision>=0.18.0
-transformers>=4.42.4
-huggingface-hub>=0.24.0
+torch>=2.4.0
+torchvision>=0.19.0
+transformers==4.44.0
+huggingface-hub>=0.24.5
 sentencepiece>=0.2.0
 jinja2>=3.1.4
 pydantic>=2.8.2
-timm>=1.0.7
+timm>=1.0.8
 tiktoken>=0.7.0
-accelerate>=0.32.1
+numpy==1.26.4 # must stay below 2.0.0
+accelerate>=0.33.0
 sentence_transformers>=3.0.1
-gradio>=4.38.1 # web demo
-openai>=1.35.0 # openai demo
+gradio>=4.41.0 # web demo
+openai>=1.40.3 # openai demo
 einops>=0.8.0
 pillow>=10.4.0
-sse-starlette>=2.1.2
-bitsandbytes>=0.43.1 # INT4 Loading
+sse-starlette>=2.1.3
+bitsandbytes>=0.43.3 # INT4 Loading
 
-# vllm>=0.5.2
-# flash-attn>=2.5.9 # using with flash-attention 2
+# vllm==0.5.4 # use with the vLLM framework
+# flash-attn>=2.6.1 # use with FlashAttention-2
 # PEFT model; not needed if you don't use a PEFT fine-tuned model.
-# peft>=0.11.1
\ No newline at end of file
+# peft>=0.12.2 # use with a PEFT fine-tuned model
\ No newline at end of file
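
Because `transformers` and `numpy` are now exact pins rather than lower bounds, a stale environment can fail in subtle ways. A quick sanity check after installing, using only the standard library:

```python
# Sketch: verify the two exact pins from basic_demo/requirements.txt.
from importlib.metadata import version

assert version("transformers") == "4.44.0", "transformers is pinned to 4.44.0"
assert int(version("numpy").split(".")[0]) < 2, "numpy must stay below 2.0.0"
print("pinned versions match basic_demo/requirements.txt")
```
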
diff --git a/composite_demo/requirements.txt b/composite_demo/requirements.txt
index c0d3133..ecc5919 100644
--- a/composite_demo/requirements.txt
+++ b/composite_demo/requirements.txt
@@ -3,13 +3,13 @@
 ipykernel>=6.26.0
 ipython>=8.18.1
 jupyter_client>=8.6.0
-langchain>=0.2.10
-langchain-community>=0.2.9
-matplotlib>=3.9.0
-pymupdf>=1.24.5
+langchain>=0.2.12
+langchain-community>=0.2.11
+matplotlib>=3.9.1
+pymupdf>=1.24.9
 python-docx>=1.1.2
 python-pptx>=0.6.23
 pyyaml>=6.0.1
 requests>=2.31.0
-streamlit>=1.36.0
-zhipuai>=2.1.3
\ No newline at end of file
+streamlit>=1.37.1
+zhipuai>=2.1.4
\ No newline at end of file
diff --git a/finetune_demo/requirements.txt b/finetune_demo/requirements.txt
index 4485eec..702c04c 100644
--- a/finetune_demo/requirements.txt
+++ b/finetune_demo/requirements.txt
@@ -1,7 +1,7 @@
-jieba>=0.42.1
-datasets>=2.20.0
-peft>=0.11.1
-deepspeed>=0.14.4
+jieba==0.42.1
+datasets==2.20.0
+peft==0.12.2
+deepspeed==0.14.4
 nltk==3.8.1
-rouge_chinese>=1.0.3
-ruamel.yaml>=0.18.6
\ No newline at end of file
+rouge_chinese==1.0.3
+ruamel.yaml==0.18.6
\ No newline at end of file
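
The finetune_demo requirements switch from `>=` floors to `==` pins, which keeps fine-tuning runs reproducible at the cost of rejecting newer releases. A small illustration of the difference, assuming the `packaging` library is available (it is pulled in as a dependency by `transformers` and `datasets`):

```python
# Sketch: what moving peft from ">=0.11.1" to "==0.12.2" changes.
from packaging.specifiers import SpecifierSet

old_spec = SpecifierSet(">=0.11.1")  # floor: any release at or above 0.11.1 passes
new_spec = SpecifierSet("==0.12.2")  # exact pin: only this release passes

for candidate in ("0.11.1", "0.12.2", "0.13.0"):
    print(candidate, candidate in old_spec, candidate in new_spec)
# -> 0.11.1 True False / 0.12.2 True True / 0.13.0 True False
```
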