first commit

xxl 2024-11-22 15:08:15 +08:00
parent bc93f7504f
commit 2772ae92f3
17 changed files with 2372 additions and 2 deletions

BIN
LOGO.jpg Normal file

Binary file not shown.

Size: 121 KiB

48
MODEL_LICENSE.md Normal file

@@ -0,0 +1,48 @@
# CodeFuse COMMUNITY LICENSE AGREEMENT
CodeFuse Release Date: September 8, 2023
By clicking to agree or by using or distributing any portion or element of the Materials, you will be deemed to have recognized and accepted the content of this Agreement, which is effective immediately.
1. Definitions.
a. This CodeFuse COMMUNITY LICENSE AGREEMENT (this "Agreement") shall mean the terms and conditions for use, reproduction, distribution and modification of the Materials as defined by this Agreement.
b. "Ant" or "We" (or "Us") shall mean Ant Group.
c. "CodeFuse" shall mean the large language models (including CodeFuse-13B and CodeFuse-CodeLlaMa-34B), and software and algorithms, consisting of trained model weights, parameters (including optimizer states), machine-learning model code, and other elements of the foregoing distributed by Us.
d. "Documentation" shall mean the specifications, manuals and documentation accompanying CodeFuse distributed by Us.
e. "Materials" shall mean, collectively, Ant's proprietary CodeFuse and Documentation (and any portion thereof) made available under this Agreement.
f. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
g. "Source" form shall mean the preferred form for making modifications, including but not limited to model source code, documentation source, and configuration files.
h. "Third Parties" (or "Third Party") shall mean individuals or legal entities that are not controlling, controlled by Us or You, or under common control with Us or You.
i. "You" (or "Your") shall mean a natural person or legal entity exercising the rights granted by this Agreement and/or using the Materials for any purpose and in any field of use.
2. Grant of Rights.
You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Ant's intellectual property or other rights owned by Ant embodied in the Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Materials.
3. Redistribution.
You may distribute or make the Materials or derivative works thereof available to a Third Party in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
a. You shall provide a copy of this Agreement to such Third Party;
b. if You modify the CodeFuse model, You shall provide a prominent notice, stating how You have modified the CodeFuse model, to such Third Party; and
c. You shall retain in all copies of the Materials that You distribute the following attribution notices within a "Notice" text file distributed as a part of such copies: "CodeFuse is licensed under the CodeFuse COMMUNITY LICENSE AGREEMENT, Copyright (c) Ant Group. All Rights Reserved."
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such derivative works as a whole, provided Your use, reproduction, and distribution of the work otherwise complies with the terms and conditions of this Agreement.
4. Rules of Use.
You shall comply with applicable laws and regulations (including without limitation export controls or restrictions) in Your use of the Materials.
5. Intellectual Property.
a. Ant retains ownership of all intellectual property rights in and to the Materials and derivatives made by or for Ant. Conditioned upon compliance with the terms and conditions of this Agreement, with respect to any derivative works and modifications of the Materials that are made by You, You are and will be the owner of such derivative works and modifications.
b. No trademark license is granted to use the trade names, trademarks, service marks, or product names of Ant, except as required to fulfill notice requirements under this Agreement or as required for reasonable and customary use in describing and redistributing the Materials.
c. If You commence a lawsuit or other proceedings (including a cross-claim or counterclaim in a lawsuit) against Ant or any entity alleging that the Materials or any output therefrom, or any part of the foregoing, infringe any intellectual property or other right owned or licensable by You, then all licenses granted to You under this Agreement shall terminate as of the date such lawsuit or other proceeding is commenced or brought.
6. Disclaimer of Warranty and Limitation of Liability.
a. Ant is not obligated to support, update, provide training for, or develop any further version of the Materials or to grant any license thereto.
b. THE MATERIALS ARE PROVIDED "AS IS" WITHOUT ANY EXPRESS OR IMPLIED WARRANTY OF ANY KIND INCLUDING WARRANTIES OF TITLE, MERCHANTABILITY, NONINFRINGEMENT, OR FITNESS FOR A PARTICULAR PURPOSE. WE MAKE NO WARRANTY AND ASSUME NO RESPONSIBILITY FOR THE SAFETY OR STABILITY OF THE MATERIALS AND ANY OUTPUT THEREFROM.
c. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE MATERIALS AND ANY OUTPUT AND RESULTS. IN NO EVENT SHALL WE BE LIABLE TO YOU FOR ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT OR ARISING FROM YOUR USE OR INABILITY TO USE THE MATERIALS OR ANY OUTPUT OF IT, FOR ANY DIRECT, OR INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, NO MATTER HOW IT'S CAUSED OR EVEN IF ANT OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
d. You will defend, indemnify and hold harmless Ant from and against any claim by any Third Party arising out of or related to Your use or distribution of the Materials.
7. Survival and Termination.
a. The term of this Agreement shall commence upon Your acceptance of this Agreement or access to the Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein.
b. We may terminate this Agreement if You breach any of the terms or conditions of this Agreement. Upon termination of this Agreement, You must delete and cease use of the Materials. Sections 6 and 8 shall survive the termination of this Agreement.
8. Governing Law and Jurisdiction.
a. This Agreement and any dispute arising out of or relating to it, whether in contract, tort, negligence, products liability, or otherwise, will be governed by the laws of China, without regard to conflict of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement.
b. The People's Courts in Hangzhou City shall have exclusive jurisdiction over any dispute arising out of this Agreement.

317
README.md

@@ -1,3 +1,316 @@
# CodeFuse-CodeGeeX2-6B_a13683088223236096543380
---
frameworks:
- Pytorch
license: other
tasks:
- text-generation
---
# Model Card for CodeFuse-CodeGeeX2-6B
<p align="center">
<img src="https://modelscope.cn/api/v1/models/codefuse-ai/CodeFuse-CodeGeeX2-6B/repo?Revision=master&FilePath=LOGO.jpg&View=true" width="800"/>
</p>
CodeFuse-CodeGeeX2-6B [[中文]](#chinese) [[English]](#english)
#### Clone with HTTP
```bash
git clone https://www.modelscope.cn/codefuse-ai/CodeFuse-CodeGeeX2-6B.git
```
<a id="english"></a>
## Model Description
CodeFuse-CodeGeeX2-6B is a 6B-parameter code LLM fine-tuned from the base model CodeGeeX2 with LoRA on multiple code-related tasks.
<br>
## News and Updates
🔥🔥 2023-11-10 CodeFuse-CodeGeeX2-6B has been released, achieving a pass@1 (greedy decoding) score of 45.12% on HumanEval, a 9.22-percentage-point increase over CodeGeeX2's 35.9%.
🔥🔥 2023-10-20 The CodeFuse-QWen-14B technical documentation has been released. For details, see the CodeFuse article on our WeChat official account: https://mp.weixin.qq.com/s/PCQPkvbvfxSPzsqjOILCDw
🔥🔥 2023-10-16 CodeFuse-QWen-14B has been released, achieving a pass@1 (greedy decoding) score of 48.78% on HumanEval, roughly a 16-percentage-point increase over Qwen-14b's 32.3%.
🔥🔥 2023-09-27 CodeFuse-StarCoder-15B has been released, achieving a pass@1 (greedy decoding) score of 54.9% on HumanEval, roughly a 21-percentage-point increase over StarCoder's 33.6%.
🔥🔥🔥 2023-09-26 We are pleased to announce the release of the [4-bit quantized version](https://modelscope.cn/models/codefuse-ai/CodeFuse-CodeLlama-34B-4bits/summary) of [CodeFuse-CodeLlama-34B](https://modelscope.cn/models/codefuse-ai/CodeFuse-CodeLlama-34B/summary). Despite quantization, the model still achieves a remarkable 73.8% pass@1 (greedy decoding) on HumanEval.
🔥🔥🔥 2023-09-11 [CodeFuse-CodeLlama-34B](https://modelscope.cn/models/codefuse-ai/CodeFuse-CodeLlama-34B/summary) achieved 74.4% pass@1 (greedy decoding) on HumanEval, the state-of-the-art result among open-source LLMs at the time.
<br>
## Code Community
**Homepage**: 🏡 https://github.com/codefuse-ai (**Please give us your support with a Star🌟 + Fork🚀 + Watch👀**)
+ If you wish to fine-tune the model yourself, you can visit ✨[MFTCoder](https://github.com/codefuse-ai/MFTCoder)✨
+ If you wish to deploy the model yourself, you can visit ✨[FasterTransformer4CodeFuse](https://github.com/codefuse-ai/FasterTransformer4CodeFuse)✨
+ If you wish to see a demo of the model, you can visit ✨[CodeFuse Demo](https://github.com/codefuse-ai/codefuse)✨
<br>
## Performance
| Model | HumanEval(pass@1) | Date |
|:----------------------------|:-----------------:|:-------:|
| **CodeFuse-CodeLlama-34B** | **74.4%** | 2023.9 |
|**CodeFuse-CodeLlama-34B-4bits** | **73.8%** | 2023.9 |
| WizardCoder-Python-34B-V1.0 | 73.2% | 2023.8 |
| GPT-4(zero-shot) | 67.0% | 2023.3 |
| PanGu-Coder2 15B | 61.6% | 2023.8 |
| CodeLlama-34b-Python | 53.7% | 2023.8 |
| CodeLlama-34b | 48.8% | 2023.8 |
| GPT-3.5(zero-shot) | 48.1% | 2022.11 |
| OctoCoder | 46.2% | 2023.8 |
| StarCoder-15B | 33.6% | 2023.5 |
| Qwen-14b | 32.3% | 2023.10 |
| **CodeFuse-StarCoder-15B** | **54.9%** | 2023.9 |
| **CodeFuse-QWen-14B** | **48.78%** | 2023.10 |
| **CodeFuse-CodeGeeX2-6B** | **45.12%** | 2023.11 |
<br>
## Requirements
* python>=3.8
* pytorch>=2.0.0
* transformers==4.33.2
* sentencepiece
* CUDA 11.4
<br>
## Inference String Format
The inference string is a concatenation of conversation data (system, human, and bot contents) in the training data format, and it serves as the input during inference.
Here is an example of the concatenated string:
```python
"""
<s>system
System instruction
<s>human
Human 1st round input
<s>bot
Bot 1st round output<|endoftext|>
<s>human
Human 2nd round input
<s>bot
Bot 2nd round output<|endoftext|>
...
...
...
<s>human
Human nth round input
<s>bot
{Bot output to be generated}<|endoftext|>
"""
```
When running inference, always make your input string end with "\<s\>bot\n" to prompt the model to generate the answer.
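As a concrete illustration, the string above can be assembled with a small helper. This is a minimal sketch under the format shown; the function name and data layout are ours, not part of this repository.
```python
# Minimal sketch: assemble the inference string from past rounds.
# Helper name and data layout are illustrative, not part of this repo.
def build_inference_prompt(system, history, question):
    """history is a list of (human_input, bot_output) pairs."""
    prompt = f"<s>system\n{system}\n" if system else ""
    for human, bot in history:
        prompt += f"<s>human\n{human}\n<s>bot\n{bot}<|endoftext|>\n"
    # End with "<s>bot\n" so the model generates the next answer.
    prompt += f"<s>human\n{question}\n<s>bot\n"
    return prompt
```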
## Quickstart
```bash
pip install transformers modelscope cpm_kernels -U
pip install -r requirements.txt
```
```python
import torch
from modelscope import (
    AutoTokenizer,
    AutoModelForCausalLM,
    snapshot_download
)

# Download the model snapshot and load the custom tokenizer.
model_dir = snapshot_download('codefuse-ai/CodeFuse-CodeGeeX2-6B', revision='v1.0.0')
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
tokenizer.padding_side = "left"
tokenizer.pad_token_id = tokenizer.convert_tokens_to_ids("<unk>")
tokenizer.eos_token_id = tokenizer.convert_tokens_to_ids("</s>")
tokenizer.pad_token = "<unk>"
tokenizer.eos_token = "</s>"

# Try load_in_4bit=True if CUDA memory is not enough.
model = AutoModelForCausalLM.from_pretrained(model_dir,
                                             trust_remote_code=True,
                                             load_in_4bit=False,
                                             device_map="auto",
                                             torch_dtype=torch.bfloat16)
model.eval()

# Wrap the request in the inference string format; end with "<s>bot\n".
HUMAN_ROLE_START_TAG = "<s>human\n"
BOT_ROLE_START_TAG = "<s>bot\n"
text = f"{HUMAN_ROLE_START_TAG}write a python function of quick sort.\n{BOT_ROLE_START_TAG}"

inputs = tokenizer(text, return_tensors='pt', padding=True, add_special_tokens=False).to("cuda")
outputs = model.generate(
    inputs=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_new_tokens=512,
    top_p=0.95,
    temperature=0.1,
    do_sample=True,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.pad_token_id
)
# Decode only the newly generated tokens.
gen_text = tokenizer.batch_decode(outputs[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(gen_text)
```
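If GPU memory is insufficient, the snippet above hints at 4-bit loading. A hedged variant is shown below; it assumes the pinned bitsandbytes from requirements.txt is installed, and the memory savings come at some accuracy cost.
```python
# Sketch: load the model in 4-bit to reduce CUDA memory usage.
# Assumes bitsandbytes (pinned in requirements.txt) is installed.
model = AutoModelForCausalLM.from_pretrained(model_dir,
                                             trust_remote_code=True,
                                             load_in_4bit=True,
                                             device_map="auto",
                                             torch_dtype=torch.bfloat16)
```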
<a id="chinese"></a>
## Model Description
CodeFuse-CodeGeeX2-6B is a 6B-parameter code LLM fine-tuned from the base model CodeGeeX2 with LoRA on multiple code-related tasks.
<br>
## News and Updates
🔥🔥 2023-11-10 CodeFuse-CodeGeeX2-6B has been open-sourced, reaching a pass@1 (greedy decoding) score of 45.12% on HumanEval, 9.22 percentage points higher than CodeGeeX2.
🔥🔥 2023-10-20 The CodeFuse-QWen-14B technical documentation has been published. For details, see the CodeFuse article on our WeChat official account: https://mp.weixin.qq.com/s/PCQPkvbvfxSPzsqjOILCDw
🔥🔥 2023-10-16 CodeFuse-QWen-14B has been open-sourced, reaching a pass@1 (greedy decoding) score of 48.78% on HumanEval, roughly 16 percentage points higher than Qwen-14b.
🔥🔥 2023-09-27 CodeFuse-StarCoder-15B has been open-sourced, reaching a pass@1 (greedy decoding) score of 54.9% on HumanEval, roughly 21 percentage points higher than StarCoder.
🔥🔥🔥 2023-09-26 The [4-bit quantized version of CodeFuse-CodeLlama-34B](https://modelscope.cn/models/codefuse-ai/CodeFuse-CodeLlama-34B-4bits/summary) has been released; after quantization the model still reaches 73.8% pass@1 (greedy decoding) on HumanEval.
🔥🔥🔥 2023-09-11 [CodeFuse-CodeLlama-34B](https://modelscope.cn/models/codefuse-ai/CodeFuse-CodeLlama-34B/summary) has been released, reaching 74.4% pass@1 (greedy decoding) on HumanEval, the open-source SOTA at the time.
<br>
## Code Community
**Homepage**: 🏡 https://github.com/codefuse-ai (**Please support us with a Star🌟 + Fork🚀 + Watch👀**)
+ If you wish to fine-tune the model yourself, you can visit ✨[MFTCoder](https://github.com/codefuse-ai/MFTCoder)✨
+ If you wish to deploy the model yourself, you can visit ✨[FasterTransformer4CodeFuse](https://github.com/codefuse-ai/FasterTransformer4CodeFuse)✨
+ If you wish to see a demo of the model, you can visit ✨[CodeFuse Demo](https://github.com/codefuse-ai/codefuse)✨
<br>
## Performance
### Code
| Model | HumanEval(pass@1) | Date |
|:----------------------------|:-----------------:|:-------:|
| **CodeFuse-CodeLlama-34B** | **74.4%** | 2023.9 |
|**CodeFuse-CodeLlama-34B-4bits** | **73.8%** | 2023.9 |
| WizardCoder-Python-34B-V1.0 | 73.2% | 2023.8 |
| GPT-4(zero-shot) | 67.0% | 2023.3 |
| PanGu-Coder2 15B | 61.6% | 2023.8 |
| CodeLlama-34b-Python | 53.7% | 2023.8 |
| CodeLlama-34b | 48.8% | 2023.8 |
| GPT-3.5(zero-shot) | 48.1% | 2022.11 |
| OctoCoder | 46.2% | 2023.8 |
| StarCoder-15B | 33.6% | 2023.5 |
| Qwen-14b | 32.3% | 2023.10 |
| **CodeFuse-StarCoder-15B** | **54.9%** | 2023.9 |
| **CodeFuse-QWen-14B** | **48.78%** | 2023.10 |
| **CodeFuse-CodeGeeX2-6B** | **45.12%** | 2023.11 |
## Requirements
* python>=3.8
* pytorch>=2.0.0
* transformers==4.33.2
* sentencepiece
* CUDA 11.4
<br>
## Inference Data Format
The inference string is a concatenation of conversation data in the training data format; the inference prompt is assembled in the same way:
```python
"""
<s>system
This is the system instruction
<s>human
This is the user input of round 1
<s>bot
This is the model output of round 1<|endoftext|>
<s>human
This is the user input of round 2
<s>bot
This is the model output of round 2<|endoftext|>
...
...
...
<s>human
This is the user input of round n
<s>bot
{The content the model should now generate}<|endoftext|>
"""
```
When running inference, make sure the assembled prompt string ends with "\<s\>bot\n" to prompt the model to generate the answer.
## Quickstart
```bash
pip install transformers modelscope cpm_kernels -U
pip install -r requirements.txt
```
```python
import torch
from modelscope import (
    AutoTokenizer,
    AutoModelForCausalLM,
    snapshot_download
)

# Download the model snapshot and load the custom tokenizer.
model_dir = snapshot_download('codefuse-ai/CodeFuse-CodeGeeX2-6B', revision='v1.0.0')
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
tokenizer.padding_side = "left"
tokenizer.pad_token_id = tokenizer.convert_tokens_to_ids("<unk>")
tokenizer.eos_token_id = tokenizer.convert_tokens_to_ids("</s>")
tokenizer.pad_token = "<unk>"
tokenizer.eos_token = "</s>"

# Try load_in_4bit=True if CUDA memory is not enough.
model = AutoModelForCausalLM.from_pretrained(model_dir,
                                             trust_remote_code=True,
                                             load_in_4bit=False,
                                             device_map="auto",
                                             torch_dtype=torch.bfloat16)
model.eval()

# Wrap the request in the inference string format; end with "<s>bot\n".
HUMAN_ROLE_START_TAG = "<s>human\n"
BOT_ROLE_START_TAG = "<s>bot\n"
text = f"{HUMAN_ROLE_START_TAG}write a python function of quick sort.\n{BOT_ROLE_START_TAG}"

inputs = tokenizer(text, return_tensors='pt', padding=True, add_special_tokens=False).to("cuda")
outputs = model.generate(
    inputs=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_new_tokens=512,
    top_p=0.95,
    temperature=0.1,
    do_sample=True,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.pad_token_id
)
# Decode only the newly generated tokens.
gen_text = tokenizer.batch_decode(outputs[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(gen_text)
```
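Building on the quickstart above, a multi-turn conversation can be run by feeding each generated answer back into the prompt. A minimal sketch reusing `model` and `tokenizer` from the snippet above; the helper name and the `history` variable are ours, not part of this repository.
```python
# Multi-turn sketch: accumulate past rounds in the prompt string.
# Reuses `model` and `tokenizer` from the quickstart; names are illustrative.
history = ""

def ask(question, max_new_tokens=512):
    global history
    prompt = f"{history}<s>human\n{question}\n<s>bot\n"
    inputs = tokenizer(prompt, return_tensors='pt', padding=True,
                       add_special_tokens=False).to("cuda")
    outputs = model.generate(
        inputs=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        max_new_tokens=max_new_tokens,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id
    )
    answer = tokenizer.batch_decode(outputs[:, inputs["input_ids"].shape[1]:],
                                    skip_special_tokens=True)[0]
    # Close the round with the end-of-text marker before the next question.
    history += f"<s>human\n{question}\n<s>bot\n{answer}<|endoftext|>\n"
    return answer
```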

49
config.json Normal file

@@ -0,0 +1,49 @@
{
"_name_or_path": "/mnt/user/qumu/download_models/codegeex2-6b",
"add_bias_linear": false,
"add_qkv_bias": true,
"apply_query_key_layer_scaling": true,
"apply_residual_connection_post_layernorm": false,
"architectures": [
"ChatGLMForConditionalGeneration"
],
"attention_dropout": 0.0,
"attention_softmax_in_fp32": true,
"auto_map": {
"AutoConfig": "configuration_chatglm.ChatGLMConfig",
"AutoModel": "modeling_chatglm.ChatGLMForConditionalGeneration",
"AutoModelForCausalLM": "modeling_chatglm.ChatGLMForConditionalGeneration",
"AutoModelForSeq2SeqLM": "modeling_chatglm.ChatGLMForConditionalGeneration"
},
"bias_dropout_fusion": true,
"eos_token": "</s>",
"eos_token_id": 2,
"ffn_hidden_size": 13696,
"fp32_residual_connection": false,
"hidden_dropout": 0.0,
"hidden_size": 4096,
"interleaved_qkv": false,
"kv_channels": 128,
"layernorm_epsilon": 1e-05,
"model_type": "chatglm",
"multi_query_attention": true,
"multi_query_group_num": 2,
"num_attention_heads": 32,
"num_layers": 28,
"original_rope": true,
"pad_token": "<unk>",
"pad_token_id": 0,
"padded_vocab_size": 65024,
"post_layer_norm": true,
"pre_seq_len": null,
"prefix_projection": false,
"quantization_bit": 0,
"rmsnorm": true,
"rotary_percent": 0.5,
"seq_length": 8192,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.33.2",
"use_cache": true,
"vocab_size": 65024
}

1
configuration.json Normal file

@@ -0,0 +1 @@
{"framework":"Pytorch","task":"text-generation"}

59
configuration_chatglm.py Normal file

@@ -0,0 +1,59 @@
from transformers import PretrainedConfig


class ChatGLMConfig(PretrainedConfig):
    model_type = "chatglm"

    def __init__(
        self,
        num_layers=28,
        padded_vocab_size=65024,
        hidden_size=4096,
        ffn_hidden_size=13696,
        kv_channels=128,
        num_attention_heads=32,
        seq_length=2048,
        hidden_dropout=0.0,
        attention_dropout=0.0,
        layernorm_epsilon=1e-5,
        rmsnorm=True,
        apply_residual_connection_post_layernorm=False,
        post_layer_norm=True,
        add_bias_linear=False,
        add_qkv_bias=False,
        bias_dropout_fusion=True,
        multi_query_attention=False,
        multi_query_group_num=1,
        apply_query_key_layer_scaling=True,
        attention_softmax_in_fp32=True,
        fp32_residual_connection=False,
        quantization_bit=0,
        pre_seq_len=None,
        prefix_projection=False,
        **kwargs
    ):
        self.num_layers = num_layers
        self.vocab_size = padded_vocab_size
        self.padded_vocab_size = padded_vocab_size
        self.hidden_size = hidden_size
        self.ffn_hidden_size = ffn_hidden_size
        self.kv_channels = kv_channels
        self.num_attention_heads = num_attention_heads
        self.seq_length = seq_length
        self.hidden_dropout = hidden_dropout
        self.attention_dropout = attention_dropout
        self.layernorm_epsilon = layernorm_epsilon
        self.rmsnorm = rmsnorm
        self.apply_residual_connection_post_layernorm = apply_residual_connection_post_layernorm
        self.post_layer_norm = post_layer_norm
        self.add_bias_linear = add_bias_linear
        self.add_qkv_bias = add_qkv_bias
        self.bias_dropout_fusion = bias_dropout_fusion
        self.multi_query_attention = multi_query_attention
        self.multi_query_group_num = multi_query_group_num
        self.apply_query_key_layer_scaling = apply_query_key_layer_scaling
        self.attention_softmax_in_fp32 = attention_softmax_in_fp32
        self.fp32_residual_connection = fp32_residual_connection
        self.quantization_bit = quantization_bit
        self.pre_seq_len = pre_seq_len
        self.prefix_projection = prefix_projection
        super().__init__(**kwargs)

5
generation_config.json Normal file

@@ -0,0 +1,5 @@
{
"_from_model_config": true,
"eos_token_id": 2,
"transformers_version": "4.33.2"
}

1197
modeling_chatglm.py Normal file

File diff suppressed because it is too large.

BIN
pytorch_model-00001-of-00002.bin (Stored with Git LFS) Normal file

Binary file not shown.

BIN
pytorch_model-00002-of-00002.bin (Stored with Git LFS) Normal file

Binary file not shown.

207
pytorch_model.bin.index.json Normal file

@@ -0,0 +1,207 @@
{
"metadata": {
"total_size": 12487168064
},
"weight_map": {
"transformer.embedding.word_embeddings.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.final_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.0.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.0.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.0.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.0.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.0.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.1.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.1.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.1.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.1.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.1.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.10.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.10.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.10.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.10.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.10.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.10.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.10.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.11.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.11.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.11.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.11.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.11.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.11.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.11.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.12.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.12.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.12.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.12.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.12.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.12.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.12.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.13.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.13.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.13.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.13.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.13.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.13.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.13.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.14.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.14.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.14.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.14.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.14.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.14.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.14.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.15.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.15.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.15.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.15.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.15.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.15.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.15.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.16.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.16.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.16.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.16.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.16.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.16.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.16.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.17.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.17.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.17.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.17.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.17.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.17.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.17.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.18.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.18.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.18.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.18.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.18.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.18.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.18.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.19.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.19.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.19.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.19.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.19.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.19.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.19.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.2.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.2.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.2.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.2.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.2.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.2.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.2.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.20.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.20.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.20.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.20.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.20.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.20.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.20.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.21.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.21.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.21.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.21.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.21.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.21.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.21.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.22.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.22.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.22.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.22.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.22.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.22.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.22.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.23.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.23.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.23.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.23.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.23.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.23.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.23.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.24.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.24.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.24.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.24.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.24.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.24.self_attention.query_key_value.bias": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.24.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.25.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.25.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.25.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.25.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.25.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.25.self_attention.query_key_value.bias": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.25.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.26.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.26.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.26.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.26.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.26.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.26.self_attention.query_key_value.bias": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.26.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.27.input_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.27.mlp.dense_4h_to_h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.27.mlp.dense_h_to_4h.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.27.post_attention_layernorm.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.27.self_attention.dense.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.27.self_attention.query_key_value.bias": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.27.self_attention.query_key_value.weight": "pytorch_model-00002-of-00002.bin",
"transformer.encoder.layers.3.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.3.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.3.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.3.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.3.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.3.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.3.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.4.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.4.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.4.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.4.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.4.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.4.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.4.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.5.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.5.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.5.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.5.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.5.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.5.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.5.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.6.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.6.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.6.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.6.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.6.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.6.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.6.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.7.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.7.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.7.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.7.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.7.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.7.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.7.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.8.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.8.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.8.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.8.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.8.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.8.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.8.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.9.input_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.9.mlp.dense_4h_to_h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.9.mlp.dense_h_to_4h.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.9.post_attention_layernorm.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.9.self_attention.dense.weight": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.9.self_attention.query_key_value.bias": "pytorch_model-00001-of-00002.bin",
"transformer.encoder.layers.9.self_attention.query_key_value.weight": "pytorch_model-00001-of-00002.bin",
"transformer.output_layer.weight": "pytorch_model-00002-of-00002.bin",
"transformer.rotary_pos_emb.inv_freq": "pytorch_model-00001-of-00002.bin"
}
}

188
quantization.py Normal file

File diff suppressed because one or more lines are too long

14
requirements.txt Normal file

@@ -0,0 +1,14 @@
numpy
pandas
einops
sentencepiece
deepspeed==0.9.3
transformers==4.33.2
accelerate==0.21.0
peft==0.4.0
bitsandbytes==0.40.2
xformers==0.0.21
ujson
jsonlines
tiktoken
transformers_stream_generator

1
special_tokens_map.json Normal file

@@ -0,0 +1 @@
{}

264
tokenization_chatglm.py Normal file

@@ -0,0 +1,264 @@
import os
import torch
from typing import List, Optional, Union, Dict
from sentencepiece import SentencePieceProcessor
from transformers import PreTrainedTokenizer
from transformers.utils import logging, PaddingStrategy
from transformers.tokenization_utils_base import EncodedInput, BatchEncoding


class SPTokenizer:
    def __init__(self, model_path: str):
        # reload tokenizer
        assert os.path.isfile(model_path), model_path
        self.sp_model = SentencePieceProcessor(model_file=model_path)

        # BOS / EOS token IDs
        self.n_words: int = self.sp_model.vocab_size()
        self.bos_id: int = self.sp_model.bos_id()
        self.eos_id: int = self.sp_model.eos_id()
        self.pad_id: int = self.sp_model.unk_id()
        assert self.sp_model.vocab_size() == self.sp_model.get_piece_size()

        special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "sop", "eop"]
        self.special_tokens = {}
        self.index_special_tokens = {}
        for token in special_tokens:
            self.special_tokens[token] = self.n_words
            self.index_special_tokens[self.n_words] = token
            self.n_words += 1

    def tokenize(self, s: str):
        return self.sp_model.EncodeAsPieces(s)

    def encode(self, s: str, bos: bool = False, eos: bool = False) -> List[int]:
        assert type(s) is str
        t = self.sp_model.encode(s)
        if bos:
            t = [self.bos_id] + t
        if eos:
            t = t + [self.eos_id]
        return t

    def decode(self, t: List[int]) -> str:
        return self.sp_model.decode(t)

    def decode_tokens(self, tokens: List[str]) -> str:
        text = self.sp_model.DecodePieces(tokens)
        return text

    def convert_token_to_id(self, token):
        """ Converts a token (str) in an id using the vocab. """
        if token in self.special_tokens:
            return self.special_tokens[token]
        return self.sp_model.PieceToId(token)

    def convert_id_to_token(self, index):
        """Converts an index (integer) in a token (str) using the vocab."""
        if index in self.index_special_tokens or index in [self.eos_id, self.bos_id, self.pad_id] or index < 0:
            return ""
        return self.sp_model.IdToPiece(index)


class ChatGLMTokenizer(PreTrainedTokenizer):
    vocab_files_names = {"vocab_file": "tokenizer.model"}

    model_input_names = ["input_ids", "attention_mask", "position_ids"]

    def __init__(self, vocab_file, padding_side="left", clean_up_tokenization_spaces=False, **kwargs):
        super().__init__(padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces, **kwargs)
        self.name = "GLMTokenizer"

        self.vocab_file = vocab_file
        self.tokenizer = SPTokenizer(vocab_file)
        self.special_tokens = {
            "<bos>": self.tokenizer.bos_id,
            "<eos>": self.tokenizer.eos_id,
            "<pad>": self.tokenizer.pad_id
        }

    def get_command(self, token):
        if token in self.special_tokens:
            return self.special_tokens[token]
        assert token in self.tokenizer.special_tokens, f"{token} is not a special token for {self.name}"
        return self.tokenizer.special_tokens[token]

    @property
    def unk_token(self) -> str:
        return "<unk>"

    @property
    def pad_token(self) -> str:
        return "<unk>"

    @property
    def pad_token_id(self):
        return self.get_command("<pad>")

    @property
    def eos_token(self) -> str:
        return "</s>"

    @property
    def eos_token_id(self):
        return self.get_command("<eos>")

    @property
    def vocab_size(self):
        return self.tokenizer.n_words

    def get_vocab(self):
        """ Returns vocab as a dict """
        vocab = {self._convert_id_to_token(i): i for i in range(self.vocab_size)}
        vocab.update(self.added_tokens_encoder)
        return vocab

    def _tokenize(self, text, **kwargs):
        return self.tokenizer.tokenize(text)

    def _convert_token_to_id(self, token):
        """ Converts a token (str) in an id using the vocab. """
        return self.tokenizer.convert_token_to_id(token)

    def _convert_id_to_token(self, index):
        """Converts an index (integer) in a token (str) using the vocab."""
        return self.tokenizer.convert_id_to_token(index)

    def convert_tokens_to_string(self, tokens: List[str]) -> str:
        return self.tokenizer.decode_tokens(tokens)

    def save_vocabulary(self, save_directory, filename_prefix=None):
        """
        Save the vocabulary and special tokens file to a directory.

        Args:
            save_directory (`str`):
                The directory in which to save the vocabulary.
            filename_prefix (`str`, *optional*):
                An optional prefix to add to the names of the saved files.

        Returns:
            `Tuple(str)`: Paths to the files saved.
        """
        if os.path.isdir(save_directory):
            vocab_file = os.path.join(
                save_directory, self.vocab_files_names["vocab_file"]
            )
        else:
            vocab_file = save_directory

        with open(self.vocab_file, 'rb') as fin:
            proto_str = fin.read()

        with open(vocab_file, "wb") as writer:
            writer.write(proto_str)

        return (vocab_file,)

    def get_prefix_tokens(self):
        prefix_tokens = [self.get_command("[gMASK]"), self.get_command("sop")]
        return prefix_tokens

    def build_prompt(self, query, history=None):
        if history is None:
            history = []
        prompt = ""
        for i, (old_query, response) in enumerate(history):
            prompt += "[Round {}]\n\n问:{}\n\n答:{}\n\n".format(i + 1, old_query, response)
        prompt += "[Round {}]\n\n问:{}\n\n答:".format(len(history) + 1, query)
        return prompt

    def build_inputs_with_special_tokens(
            self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
    ) -> List[int]:
        """
        Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating
        and adding special tokens. A BERT sequence has the following format:

        - single sequence: `[CLS] X [SEP]`
        - pair of sequences: `[CLS] A [SEP] B [SEP]`

        Args:
            token_ids_0 (`List[int]`):
                List of IDs to which the special tokens will be added.
            token_ids_1 (`List[int]`, *optional*):
                Optional second list of IDs for sequence pairs.

        Returns:
            `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
        """
        prefix_tokens = self.get_prefix_tokens()
        token_ids_0 = prefix_tokens + token_ids_0
        if token_ids_1 is not None:
            token_ids_0 = token_ids_0 + token_ids_1 + [self.get_command("<eos>")]
        return token_ids_0

    def _pad(
            self,
            encoded_inputs: Union[Dict[str, EncodedInput], BatchEncoding],
            max_length: Optional[int] = None,
            padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD,
            pad_to_multiple_of: Optional[int] = None,
            return_attention_mask: Optional[bool] = None,
    ) -> dict:
        """
        Pad encoded inputs (on left/right and up to predefined length or max length in the batch)

        Args:
            encoded_inputs:
                Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`).
            max_length: maximum length of the returned list and optionally padding length (see below).
                Will truncate by taking into account the special tokens.
            padding_strategy: PaddingStrategy to use for padding.

                - PaddingStrategy.LONGEST Pad to the longest sequence in the batch
                - PaddingStrategy.MAX_LENGTH: Pad to the max length (default)
                - PaddingStrategy.DO_NOT_PAD: Do not pad
                The tokenizer padding sides are defined in self.padding_side:

                    - 'left': pads on the left of the sequences
                    - 'right': pads on the right of the sequences
            pad_to_multiple_of: (optional) Integer if set will pad the sequence to a multiple of the provided value.
                This is especially useful to enable the use of Tensor Core on NVIDIA hardware with compute capability
                `>= 7.5` (Volta).
            return_attention_mask:
                (optional) Set to False to avoid returning attention mask (default: set to model specifics)
        """
        # Load from model defaults
        # assert self.padding_side == "left"

        required_input = encoded_inputs[self.model_input_names[0]]
        seq_length = len(required_input)

        if padding_strategy == PaddingStrategy.LONGEST:
            max_length = len(required_input)

        if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0):
            max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of

        needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) != max_length

        # Initialize attention mask if not present.
        if "attention_mask" not in encoded_inputs:
            encoded_inputs["attention_mask"] = [1] * seq_length

        if "position_ids" not in encoded_inputs:
            encoded_inputs["position_ids"] = list(range(seq_length))

        if needs_to_be_padded:
            difference = max_length - len(required_input)

            if self.padding_side == "left":
                if "attention_mask" in encoded_inputs:
                    encoded_inputs["attention_mask"] = [0] * difference + encoded_inputs["attention_mask"]
                if "position_ids" in encoded_inputs:
                    encoded_inputs["position_ids"] = [0] * difference + encoded_inputs["position_ids"]
                encoded_inputs[self.model_input_names[0]] = [self.pad_token_id] * difference + required_input
            else:
                if "attention_mask" in encoded_inputs:
                    encoded_inputs["attention_mask"] = encoded_inputs["attention_mask"] + [0] * difference
                if "position_ids" in encoded_inputs:
                    encoded_inputs["position_ids"] = encoded_inputs["position_ids"] + [0] * difference
                encoded_inputs[self.model_input_names[0]] = required_input + [self.pad_token_id] * difference

        return encoded_inputs
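For reference, the custom tokenizer above is wired up through the `auto_map` entry in tokenizer_config.json, so it can be loaded with `trust_remote_code`. A minimal sketch, assuming the repository files are available locally or on a model hub:
```python
# Sketch: load and exercise the custom ChatGLMTokenizer (illustrative only).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("codefuse-ai/CodeFuse-CodeGeeX2-6B",
                                          trust_remote_code=True)
ids = tokenizer("def quick_sort(arr):")["input_ids"]
print(ids)                                    # token IDs, with [gMASK]/sop prefix
print(tokenizer.convert_ids_to_tokens(ids))   # the corresponding pieces
```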

BIN
tokenizer.model (Stored with Git LFS) Normal file

Binary file not shown.

15
tokenizer_config.json Normal file

@@ -0,0 +1,15 @@
{
"auto_map": {
"AutoTokenizer": [
"tokenization_chatglm.ChatGLMTokenizer",
null
]
},
"clean_up_tokenization_spaces": false,
"do_lower_case": false,
"legacy": false,
"model_max_length": 1000000000000000019884624838656,
"padding_side": "left",
"remove_space": false,
"tokenizer_class": "ChatGLMTokenizer"
}