first commit

This commit is contained in:
xxl 2024-12-31 10:37:32 +08:00
parent e4cc0cb95c
commit 203f9a2043
12 changed files with 1761 additions and 2 deletions

67
LICENSE Normal file

@@ -0,0 +1,67 @@
Bilibili Index License Agreement
Version 1.0, June 11, 2024
Copyright (c) 2024 Bilibili Index
Part I: Preamble
Large generative models are being widely adopted and used, but there are also concerns about their potential misuse, whether due to their technical limitations or to ethical considerations. This license aims to promote open and responsible downstream use of the accompanying model.
Accordingly, You and Bilibili Index agree as follows:
1. Definitions
"License" means the terms and conditions for use, reproduction, and distribution defined in this document.
"Data" means a collection of information and/or content extracted from the dataset used with the Model, including data used to train, pretrain, or otherwise evaluate the Model. The Data is not licensed under this License.
"Output" means the results of operating the Model, as embodied in the informational content produced thereby.
"Model" means any accompanying machine-learning-based components (including checkpoints), consisting of learned weights and parameters (including optimizer states) corresponding to the model architecture embodied in the Complementary Material, where those weights and parameters have been trained or tuned, in whole or in part, on the Data using the Complementary Material.
"Derivatives of the Model" means all modifications to the Model, works based on the Model, or any other model created or initialized by transferring the patterns of the Model's weights, parameters, activations, or Output to another model so that the other model performs similarly to the Model, including but not limited to distillation methods that use intermediate data representations, or methods based on using the Model to generate synthetic data for training another model.
"Complementary Material" means the accompanying source code and scripts used to define, run, load, benchmark, or evaluate the Model, and, if any, the accompanying documentation, tutorials, examples, and the like used to prepare the Data for training or evaluation.
"Distribution" means transmitting, copying, publishing, or otherwise sharing the Model or Derivatives of the Model with a third party, including providing the Model as a hosted service by electronic or other remote means, for example API- or web-based access.
"Bilibili Index" or "we" means Shanghai Kuanyu Digital Technology Co., Ltd. or any of its affiliates.
"You" (or "Your") means the individual or legal entity exercising the permissions granted by this License and/or using the Model for any purpose and in any field of use, including using the Model in end-use applications such as chatbots and translators.
"Third party" means an individual or legal entity that is not under common control with Bilibili Index or You.
"Commercial Use" means using the Bilibili Index Model, directly or indirectly, to operate, promote, or generate revenue for an entity or individual, or for any other for-profit purpose.
Part II: License and License Restrictions
Subject to the terms and conditions of this License Agreement, the Licensor hereby grants You a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable, royalty-free copyright license. You may use this license for non-commercial purposes only. The Licensor claims no rights in the Output You obtain from the Bilibili Index Model or in Derivatives of the Model obtained from the Bilibili Index Model, provided that You satisfy the following license restrictions:
1. You must not use, copy, modify, merge, publish, distribute, reproduce, or create derivatives of all or part of the Bilibili Index Model for any military or unlawful purpose.
2. If You plan to use the Bilibili Index Model or Derivatives of the Model for Commercial Use, You must register with the Licensor in advance through the contact information provided in the Supplementary Provisions of this Agreement and obtain the Licensor's written authorization.
3. Your use and modification of the Bilibili Index Model (including use of the Output of the Bilibili Index Model or of Derivatives of the Model obtained from it) must not violate the laws and regulations of any country, in particular those of the People's Republic of China, and must not infringe the lawful rights and interests of any third party, including but not limited to personality rights such as portrait rights, reputation rights, and privacy rights; intellectual property rights such as copyrights, patents, and trade secrets; or other property rights.
4. You must provide any third-party user of the Bilibili Index Model or its Derivatives with the source of the Bilibili Index Model and a copy of this Agreement.
5. If You modify the Bilibili Index Model to obtain Derivatives of the Model, You must state the modifications in a conspicuous manner, and such modifications must not violate the license restrictions of this Agreement, nor may they permit, assist, or otherwise enable third parties to violate the license restrictions of this Agreement.
Part III: Intellectual Property
1. Ownership of the Bilibili Index Model and its related intellectual property rights belongs solely to the Licensor.
2. Under no circumstances may You, without the Licensor's prior written consent, use any of the Licensor's trademarks, service marks, trade names, domain names, website names, or other distinctive brand features (collectively, the "Marks"), including but not limited to expressly or implicitly representing Yourself as the "Licensor". Without the Licensor's prior written consent, You must not display, use, or apply to register as trademarks, or register as domain names, the aforementioned Marks, alone or in combination, nor may You expressly or implicitly represent to others that You have the right to display, use, or otherwise handle these Marks. If Your use of the Licensor's Marks in violation of this Agreement causes losses to the Licensor or to others, You bear full legal liability.
3. Within the scope of the license, You may modify the Bilibili Index Model to obtain Derivatives of the Model, and You may claim intellectual property rights in those portions of the Derivatives that embody Your own creative work.
Part IV: Disclaimer and Limitation of Liability
1. Under no circumstances is the Licensor liable for any direct, indirect, or incidental consequences, or for any other losses or damages, arising from or related to Your use of the Bilibili Index Model under this Agreement. If the Licensor suffers losses as a result, You shall fully compensate the Licensor.
2. The model parameters in the Model are merely an example. If You need to satisfy other requirements, You must train a model Yourself and comply with the license agreements of the corresponding datasets. You are responsible for the intellectual-property risks involved in the Output of the Bilibili Index Model and in Derivatives of the Model, as well as for any direct, indirect, or incidental consequences and other losses or damages related thereto.
3. Although the Licensor strives to maintain the compliance and accuracy of the data at all stages of training the Bilibili Index Model, the accuracy of its output cannot be guaranteed, given the scale of the Bilibili Index Model and the randomness inherent in its probabilistic nature, and the model may be misled. The Licensor therefore declares that it assumes no responsibility for data-security issues or reputational risks caused by Your use of the Bilibili Index Model and its source code, nor for any risks or liabilities arising from the Bilibili Index Model being misled, misused, disseminated, or improperly used.
4. Losses or damages under this Agreement include but are not limited to the following (whether such losses or damages are unforeseeable, foreseeable, known, or otherwise): (i) loss of revenue; (ii) loss of actual or anticipated profits; (iii) loss of use of money; (iv) loss of anticipated savings; (v) loss of business; (vi) loss of opportunity; (vii) loss of goodwill or reputation; (viii) loss of use of software; or (ix) any indirect, incidental, special, or consequential losses or damages.
5. Unless required by applicable law or agreed to in writing by the Licensor, the Licensor grants the license to the Bilibili Index Model on an "AS IS" basis. The Licensor provides no warranties of any kind, express or implied, regarding the Bilibili Index Model under this Agreement, including but not limited to: any warranty or condition of title; any warranty or condition of merchantability; any warranty or condition of fitness for a particular purpose; any warranty of any kind, past, present, or future, that the Bilibili Index Model does not infringe; and any warranty arising from any course of dealing or usage of trade (such as proposals, specifications, or samples). You alone bear the risks and consequences of exploiting the Bilibili Index Model through use, copying, redistribution, or otherwise.
6. You fully acknowledge, understand, and agree that the Bilibili Index Model may contain personal information. You undertake to process personal information in compliance with all applicable laws and regulations, in particular the relevant provisions of the Personal Information Protection Law of the People's Republic of China. Please note that the Licensor's authorization for You to use the Bilibili Index Model does not mean that You have obtained a lawful basis for processing the personal information involved. As an independent personal-information processor, You must ensure that Your processing of any personal information the Bilibili Index Model may contain fully complies with the requirements of the relevant laws and regulations, including but not limited to obtaining the consent of the personal-information subjects, and You are willing to bear alone any risks and consequences that may arise therefrom.
7. You fully understand and agree that the Licensor has the right, at its reasonable discretion, to handle conduct that violates relevant laws and regulations or the provisions of this Agreement, to take appropriate legal action against Your illegal or non-compliant conduct, and to preserve relevant information and report it to the competent authorities in accordance with laws and regulations; You alone bear all legal liability arising therefrom.
Part V: Brand Exposure and Prominent Attribution
1. You agree and understand that if You release a Derivative of the Model that You developed on the basis of the Bilibili Index Model under an open-source license in a domestic or international open-source community, You must indicate in that community, in a conspicuous manner, that the Derivative is a secondary development based on the Bilibili Index Model; the attribution must include, but is not limited to, "Bilibili Index" and other elements of the brands associated with the Bilibili Index Model.
2. You agree and understand that if You enter a Derivative of the Model developed from the Bilibili Index Model into any ranking activity held by any domestic or international organization or individual, including but not limited to rankings of model performance, accuracy, algorithms, computing power, or any other dimension, You must indicate in the model description, in a conspicuous manner, that the Derivative is a secondary development based on the Bilibili Index Model; the attribution must include, but is not limited to, "Bilibili Index Inside" and other elements of the brands associated with the Bilibili Index Model.
Part VI: Miscellaneous
1. The Licensor has the final right of interpretation of the terms of this Agreement, to the extent permitted by laws and regulations.
2. The formation, validity, interpretation, performance, modification, and termination of this Agreement, the use of the Bilibili Index Model, and the resolution of disputes are governed by the laws of the mainland of the People's Republic of China (for the purposes of this Agreement only, excluding Hong Kong, Macau, and Taiwan),
excluding its conflict-of-law rules.
3. Any dispute arising from the use of the Bilibili Index Model shall first be resolved through friendly negotiation between the parties. If negotiation fails, the dispute shall be brought before the people's court at the place where the Licensor is located.
Supplementary Provisions
1. If You wish to use the Bilibili Index Model or its Derivatives for Commercial Use under the license conditions and restrictions of this Agreement, please contact the Licensor as follows to register and apply for the Licensor's written authorization. Contact email: opensource@bilibili.com
Attachment A: Use Restrictions
You agree not to use the Model or Derivatives of the Model:
in any way that violates any applicable national or international law or regulation, or infringes the lawful rights and interests of any third party;
for any military purpose;
in any way to exploit, harm, or attempt to exploit or harm minors;
to generate or disseminate verifiably false information and/or content with the intent to harm others;
to generate or disseminate inappropriate content that is restricted by applicable regulatory requirements;
to generate or disseminate personally identifiable information without appropriate authorization or for unreasonable uses;
to defame, disparage, or otherwise harass others;
for fully automated decision-making that adversely affects an individual's legal rights or creates or modifies binding, enforceable obligations;
for any purpose intended to discriminate against or harm individuals or groups on the basis of online or offline social behavior or known or predicted personal or personality characteristics;
to exploit any vulnerabilities of a specific group of people based on their age or their social, physical, or mental characteristics, in order to materially distort the behavior of persons belonging to that group in a manner that causes or is likely to cause them physical or psychological harm;
for any purpose intended to, or that has the effect of, discriminating against individuals or groups on the basis of legally protected characteristics or categories.

README.md

@@ -1,3 +1,55 @@
---
license: other
license_name: license
license_link: LICENSE
---
<div align="center">
<h1>
Index-1.9B-Chat
</h1>
</div>
## Model Introduction
We are excited to release, for the first time, a lightweight member of the Index model family: the Index-1.9B series.
The open-sourced Index-1.9B series includes the following models:
- Index-1.9B base: the base model, with 1.9 billion non-embedding parameters, pretrained on a 2.8T corpus of mostly Chinese and English text; it leads same-size models on several evaluation benchmarks.
- Index-1.9B pure: a control counterpart of the base model, with the same parameters and training strategy, except that all instruction-related data was strictly filtered out of its corpus in order to verify the impact of instructions on benchmarks.
- **Index-1.9B chat (the model in this repository)**: a dialogue model aligned from Index-1.9B base via SFT and DPO. Because a large amount of specially cleaned conversational corpus was introduced during pretraining, we find its chat noticeably more engaging.
- Index-1.9B character: adds RAG on top of SFT and DPO to enable few-shot role-play customization.
For more details, see our [GitHub](https://github.com/bilibili/Index-1.9B) and the [Index-1.9B Technical Report](https://github.com/bilibili/Index-1.9B/blob/main/Index-1.9B%20%E6%8A%80%E6%9C%AF%E6%8A%A5%E5%91%8A.pdf).
### Loading with Transformers
You can load the Index-1.9B-Chat model for dialogue with the following code:
```python
import argparse
from transformers import AutoTokenizer, pipeline
# Note: the directory name must not contain "."; replace it with "_"
parser = argparse.ArgumentParser()
parser.add_argument('--model_path', default="./IndexTeam/Index-1.9B-Chat", type=str, help="")
parser.add_argument('--device', default="cpu", type=str, help="") # also could be "cuda" or "mps" for Apple silicon
args = parser.parse_args()
tokenizer = AutoTokenizer.from_pretrained(args.model_path, trust_remote_code=True)
generator = pipeline("text-generation",
model=args.model_path,
tokenizer=tokenizer, trust_remote_code=True,
device=args.device)
system_message = "你是由哔哩哔哩自主研发的大语言模型名为“Index”。你能够根据用户传入的信息帮助用户完成指定的任务并生成恰当的、符合要求的回复。"
query = "续写 天不生我金坷垃"
model_input = []
model_input.append({"role": "system", "content": system_message})
model_input.append({"role": "user", "content": query})
model_output = generator(model_input, max_new_tokens=300, top_k=5, top_p=0.8, temperature=0.3, repetition_penalty=1.1, do_sample=True)
print('User:', query)
print('Model:', model_output)
```
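
If you prefer to call `generate` directly instead of going through the `pipeline` helper, here is a minimal sketch (assuming the same local checkpoint path as above, and reusing the sampling defaults from `generation_config.json`); prompt formatting is delegated to the `chat_template` shipped in `tokenizer_config.json`:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./IndexTeam/Index-1.9B-Chat"  # same assumed local path as in the snippet above
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, trust_remote_code=True)

messages = [
    {"role": "system", "content": "你是由哔哩哔哩自主研发的大语言模型名为“Index”。你能够根据用户传入的信息帮助用户完成指定的任务并生成恰当的、符合要求的回复。"},
    {"role": "user", "content": "续写 天不生我金坷垃"},
]
# Render the conversation with the repository's chat_template and tokenize it in one step.
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=300, do_sample=True,
                            top_k=5, top_p=0.8, temperature=0.3, repetition_penalty=1.1)
# Strip the prompt tokens and decode only the newly generated continuation.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```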

32
config.json Normal file

@@ -0,0 +1,32 @@
{
"_name_or_path": "Index_1_9B",
"architectures": [
"IndexForCausalLM"
],
"auto_map": {
"AutoConfig": "configuration_index.IndexConfig",
"AutoModelForCausalLM": "modeling_index.IndexForCausalLM",
"AutoModelForSequenceClassification": "modeling_index.IndexForSequenceClassification"
},
"bos_token_id": 1,
"eos_token_id": 2,
"pad_token_id": 0,
"hidden_act": "silu",
"hidden_size": 2048,
"initializer_range": 0.01,
"intermediate_size": 5888,
"max_length": 4096,
"max_position_embeddings": 4096,
"model_type": "index",
"num_attention_heads": 16,
"num_key_value_heads": 16,
"num_hidden_layers": 36,
"rms_norm_eps": 1e-06,
"rope_scaling": null,
"tie_word_embeddings": false,
"norm_head":1,
"torch_dtype": "bfloat16",
"transformers_version": "4.39.2",
"use_cache": true,
"vocab_size": 65029
}
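
As a sanity check on the sizes above, here is a back-of-the-envelope, hedged estimate of the parameter count implied by this config. It assumes a LLaMA-style decoder (gate/up/down MLP, no attention or MLP biases), which the fields suggest but which cannot be fully confirmed from this page because the modeling_index.py diff is suppressed below:
```python
# Rough parameter count from config.json (a sketch under the LLaMA-style assumption above).
hidden, inter, layers, vocab = 2048, 5888, 36, 65029

attn = 4 * hidden * hidden      # q, k, v, o projections (num_key_value_heads == num_attention_heads)
mlp = 3 * hidden * inter        # gate, up and down projections
norms = 2 * hidden              # two RMSNorm weight vectors per layer
per_layer = attn + mlp + norms

non_embedding = layers * per_layer + hidden   # plus the final RMSNorm
embedding = 2 * vocab * hidden                # tie_word_embeddings is false, so input + output matrices

print(f"non-embedding parameters ~ {non_embedding / 1e9:.2f}B")            # ~1.91B, consistent with the 1.9B figure in the model card
print(f"total parameters         ~ {(non_embedding + embedding) / 1e9:.2f}B")  # ~2.17B
```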

1
configuration.json Normal file

@@ -0,0 +1 @@
{"framework":"Pytorch","task":"text-generation"}

183
configuration_index.py Normal file

@@ -0,0 +1,183 @@
# coding=utf-8
# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Index model configuration"""
from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging
logger = logging.get_logger(__name__)
INDEX_PRETRAINED_CONFIG_ARCHIVE_MAP = {}
class IndexConfig(PretrainedConfig):
r"""
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 65029):
Vocabulary size of the Index model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`IndexModel`]
hidden_size (`int`, *optional*, defaults to 4096):
Dimension of the hidden representations.
intermediate_size (`int`, *optional*, defaults to 11008):
Dimension of the MLP representations.
num_hidden_layers (`int`, *optional*, defaults to 32):
Number of hidden layers in the Transformer decoder.
num_attention_heads (`int`, *optional*, defaults to 32):
Number of attention heads for each attention layer in the Transformer decoder.
num_key_value_heads (`int`, *optional*):
This is the number of key_value heads that should be used to implement Grouped Query Attention. If
`num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
            `num_key_value_heads=1` the model will use Multi Query Attention (MQA), otherwise GQA is used. When
converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
            by mean-pooling all the original heads within that group. For more details, check out [this
paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
`num_attention_heads`.
hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the decoder.
max_position_embeddings (`int`, *optional*, defaults to 2048):
The maximum sequence length that this model might ever be used with. Index 1 supports up to 2048 tokens,
Index 2 up to 4096, CodeIndex up to 16384.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
rms_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the rms normalization layers.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the model should return the last key/values attentions (not used by all models). Only
relevant if `config.is_decoder=True`.
pad_token_id (`int`, *optional*):
Padding token id.
bos_token_id (`int`, *optional*, defaults to 1):
Beginning of stream token id.
eos_token_id (`int`, *optional*, defaults to 2):
End of stream token id.
pretraining_tp (`int`, *optional*, defaults to 1):
Experimental feature. Tensor parallelism rank used during pretraining. Please refer to [this
document](https://huggingface.co/docs/transformers/main/perf_train_gpu_many#tensor-parallelism) to understand more about it. This value is
necessary to ensure exact reproducibility of the pretraining results. Please refer to [this
issue](https://github.com/pytorch/pytorch/issues/76232).
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether to tie weight embeddings
rope_theta (`float`, *optional*, defaults to 10000.0):
The base period of the RoPE embeddings.
rope_scaling (`Dict`, *optional*):
Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is
`{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
`max_position_embeddings` to the expected new maximum. See the following thread for more information on how
these scaling strategies behave
        attention_bias (`bool`, *optional*, defaults to `False`):
Whether to use a bias in the query, key, value and output projection layers during self-attention.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
```python
>>> from transformers import IndexModel, IndexConfig
>>> configuration = IndexConfig()
>>> model = IndexModel(configuration)
>>> configuration = model.config
```"""
model_type = "index"
keys_to_ignore_at_inference = ["past_key_values"]
def __init__(
self,
vocab_size=65029,
hidden_size=4096,
intermediate_size=11008,
num_hidden_layers=32,
num_attention_heads=32,
num_key_value_heads=None,
hidden_act="silu",
max_position_embeddings=2048,
initializer_range=0.02,
rms_norm_eps=1e-6,
use_cache=True,
pad_token_id=None,
bos_token_id=1,
eos_token_id=2,
pretraining_tp=1,
tie_word_embeddings=False,
norm_head=False,
rope_theta=10000.0,
rope_scaling=None,
attention_bias=False,
attention_dropout=0.0,
**kwargs,
):
self.vocab_size = vocab_size
self.max_position_embeddings = max_position_embeddings
self.hidden_size = hidden_size
self.intermediate_size = intermediate_size
self.num_hidden_layers = num_hidden_layers
self.num_attention_heads = num_attention_heads
# for backward compatibility
if num_key_value_heads is None:
num_key_value_heads = num_attention_heads
self.num_key_value_heads = num_key_value_heads
self.hidden_act = hidden_act
self.initializer_range = initializer_range
self.rms_norm_eps = rms_norm_eps
self.pretraining_tp = pretraining_tp
self.use_cache = use_cache
self.rope_theta = rope_theta
self.rope_scaling = rope_scaling
self._rope_scaling_validation()
self.attention_bias = attention_bias
self.attention_dropout = attention_dropout
self.norm_head = norm_head
super().__init__(
pad_token_id=pad_token_id,
bos_token_id=bos_token_id,
eos_token_id=eos_token_id,
tie_word_embeddings=tie_word_embeddings,
**kwargs,
)
def _rope_scaling_validation(self):
"""
Validate the `rope_scaling` configuration.
"""
if self.rope_scaling is None:
return
if not isinstance(self.rope_scaling, dict) or len(self.rope_scaling) != 2:
raise ValueError(
"`rope_scaling` must be a dictionary with with two fields, `type` and `factor`, "
f"got {self.rope_scaling}"
)
rope_scaling_type = self.rope_scaling.get("type", None)
rope_scaling_factor = self.rope_scaling.get("factor", None)
if rope_scaling_type is None or rope_scaling_type not in ["linear", "dynamic"]:
raise ValueError(
f"`rope_scaling`'s type field must be one of ['linear', 'dynamic'], got {rope_scaling_type}"
)
if rope_scaling_factor is None or not isinstance(rope_scaling_factor, float) or rope_scaling_factor <= 1.0:
raise ValueError(f"`rope_scaling`'s factor field must be a float > 1, got {rope_scaling_factor}")

12
generation_config.json Normal file

@@ -0,0 +1,12 @@
{
"_from_model_config": true,
"bos_token_id": 1,
"eos_token_id": 2,
"pad_token_id": 0,
"transformers_version": "4.39.2",
"top_k": 5,
"top_p": 0.8,
"temperature": 0.3,
"repetition_penalty":1.1,
"do_sample": true
}
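
These values become the defaults that `generate` falls back to when no sampling arguments are passed. A short sketch of reading and overriding them (assuming the same local checkpoint directory used earlier):
```python
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("./IndexTeam/Index-1.9B-Chat")
print(gen_cfg.top_k, gen_cfg.top_p, gen_cfg.temperature)  # 5 0.8 0.3

# Override per call without editing the file on disk, e.g. for more diverse sampling:
gen_cfg.temperature = 0.7
# output_ids = model.generate(input_ids, generation_config=gen_cfg)
```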

1048
modeling_index.py Normal file

File diff suppressed because it is too large

BIN
pytorch_model.bin (Stored with Git LFS) Normal file

Binary file not shown.

23
special_tokens_map.json Normal file

@@ -0,0 +1,23 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

266
tokenization_index.py Normal file

@@ -0,0 +1,266 @@
# coding=utf-8
# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
#
# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
# and OPT implementations in this library. It has been modified from its
# original forms to accommodate minor architectural differences compared
# to GPT-NeoX and OPT used by the Meta AI team that trained the model.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tokenization classes for Index, Modify from llama tokenzier."""
import os
from shutil import copyfile
from typing import Any, Dict, List, Optional, Tuple
import sentencepiece as spm
from transformers.tokenization_utils import AddedToken, PreTrainedTokenizer
from transformers.utils import logging
logger = logging.get_logger(__name__)
VOCAB_FILES_NAMES = {"vocab_file": "tokenizer.model"}
PRETRAINED_VOCAB_FILES_MAP = {}
SPIECE_UNDERLINE = "▁"
class IndexTokenizer(PreTrainedTokenizer):
"""
Construct a Index tokenizer. Based on Byte-Pair-Encoding.
Args:
vocab_file (`str`):
Path to the vocabulary file.
"""
vocab_files_names = VOCAB_FILES_NAMES
pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
model_input_names = ["input_ids", "attention_mask"]
def __init__(
self,
vocab_file,
unk_token="<unk>",
bos_token="<s>",
eos_token="</s>",
pad_token=None,
sp_model_kwargs: Optional[Dict[str, Any]] = None,
add_bos_token=False,
add_eos_token=False,
decode_with_prefix_space=False,
clean_up_tokenization_spaces=False,
legacy=False,
**kwargs,
):
self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
bos_token = AddedToken(bos_token, lstrip=False, rstrip=False) if isinstance(bos_token, str) else bos_token
eos_token = AddedToken(eos_token, lstrip=False, rstrip=False) if isinstance(eos_token, str) else eos_token
unk_token = AddedToken(unk_token, lstrip=False, rstrip=False) if isinstance(unk_token, str) else unk_token
pad_token = AddedToken(pad_token, lstrip=False, rstrip=False) if isinstance(pad_token, str) else pad_token
self.legacy = legacy
self.vocab_file = vocab_file
self.add_bos_token = add_bos_token
self.add_eos_token = add_eos_token
self.decode_with_prefix_space = decode_with_prefix_space
self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
self.sp_model.Load(vocab_file)
self._no_prefix_space_tokens = None
super().__init__(
bos_token=bos_token,
eos_token=eos_token,
unk_token=unk_token,
pad_token=pad_token,
add_bos_token=add_bos_token,
add_eos_token=add_eos_token,
sp_model_kwargs=self.sp_model_kwargs,
decode_with_prefix_space=decode_with_prefix_space,
clean_up_tokenization_spaces=clean_up_tokenization_spaces,
legacy=legacy,
**kwargs,
)
@property
def no_prefix_space_tokens(self):
if self._no_prefix_space_tokens is None:
vocab = self.convert_ids_to_tokens(list(range(self.vocab_size)))
            self._no_prefix_space_tokens = {i for i, tok in enumerate(vocab) if not tok.startswith("▁")}
return self._no_prefix_space_tokens
@property
def vocab_size(self):
"""Returns vocab size"""
return self.sp_model.get_piece_size()
@property
def bos_token_id(self) -> Optional[int]:
return self.sp_model.bos_id()
@property
def eos_token_id(self) -> Optional[int]:
return self.sp_model.eos_id()
def get_vocab(self):
"""Returns vocab as a dict"""
vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
vocab.update(self.added_tokens_encoder)
return vocab
def _tokenize(self, text):
"""Returns a tokenized string."""
return self.sp_model.encode(text, out_type=str)
def _convert_token_to_id(self, token):
"""Converts a token (str) in an id using the vocab."""
return self.sp_model.piece_to_id(token)
def _convert_id_to_token(self, index):
"""Converts an index (integer) in a token (str) using the vocab."""
token = self.sp_model.IdToPiece(index)
return token
def convert_tokens_to_string(self, tokens):
"""Converts a sequence of tokens (string) in a single string."""
# since we manually add the prefix space, we have to remove it when decoding
if tokens[0].startswith(SPIECE_UNDERLINE):
tokens[0] = tokens[0][1:]
current_sub_tokens = []
out_string = ""
prev_is_special = False
for i, token in enumerate(tokens):
# make sure that special tokens are not decoded using sentencepiece model
if token in self.all_special_tokens:
if not prev_is_special and i != 0 and self.legacy:
out_string += " "
out_string += self.sp_model.decode(current_sub_tokens) + token
prev_is_special = True
current_sub_tokens = []
else:
current_sub_tokens.append(token)
prev_is_special = False
out_string += self.sp_model.decode(current_sub_tokens)
return out_string
def save_vocabulary(self, save_directory, filename_prefix: Optional[str] = None) -> Tuple[str]:
"""
Save the vocabulary and special tokens file to a directory.
Args:
save_directory (`str`):
The directory in which to save the vocabulary.
Returns:
`Tuple(str)`: Paths to the files saved.
"""
if not os.path.isdir(save_directory):
logger.error(f"Vocabulary path ({save_directory}) should be a directory")
return
out_vocab_file = os.path.join(
save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
)
if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file):
copyfile(self.vocab_file, out_vocab_file)
elif not os.path.isfile(self.vocab_file):
with open(out_vocab_file, "wb") as fi:
content_spiece_model = self.sp_model.serialized_model_proto()
fi.write(content_spiece_model)
return (out_vocab_file,)
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
bos_token_id = [self.bos_token_id] if self.add_bos_token else []
eos_token_id = [self.eos_token_id] if self.add_eos_token else []
output = bos_token_id + token_ids_0 + eos_token_id
if token_ids_1 is not None:
output = output + bos_token_id + token_ids_1 + eos_token_id
return output
def get_special_tokens_mask(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
"""
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer `prepare_for_model` method.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optional*):
Optional second list of IDs for sequence pairs.
already_has_special_tokens (`bool`, *optional*, defaults to `False`):
Whether or not the token list is already formatted with special tokens for the model.
Returns:
`List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
"""
if already_has_special_tokens:
return super().get_special_tokens_mask(
token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
)
bos_token_id = [1] if self.add_bos_token else []
eos_token_id = [1] if self.add_eos_token else []
if token_ids_1 is None:
return bos_token_id + ([0] * len(token_ids_0)) + eos_token_id
return (
bos_token_id
+ ([0] * len(token_ids_0))
+ eos_token_id
+ bos_token_id
+ ([0] * len(token_ids_1))
+ eos_token_id
)
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Creates a mask from the two sequences passed to be used in a sequence-pair classification task. An ALBERT
sequence pair mask has the following format:
```
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence | second sequence |
```
if token_ids_1 is None, only returns the first portion of the mask (0s).
Args:
token_ids_0 (`List[int]`):
List of ids.
token_ids_1 (`List[int]`, *optional*):
Optional second list of IDs for sequence pairs.
Returns:
`List[int]`: List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).
"""
bos_token_id = [self.bos_token_id] if self.add_bos_token else []
eos_token_id = [self.eos_token_id] if self.add_eos_token else []
output = [0] * len(bos_token_id + token_ids_0 + eos_token_id)
if token_ids_1 is not None:
output += [1] * len(bos_token_id + token_ids_1 + eos_token_id)
return output
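
A brief usage sketch (assuming the checkpoint directory containing tokenizer.model is available locally): because `add_bos_token` and `add_eos_token` default to `False`, `build_inputs_with_special_tokens` adds nothing unless those flags are switched on.
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("./IndexTeam/Index-1.9B-Chat", trust_remote_code=True)

ids = tok("哔哩哔哩")["input_ids"]
print(tok.convert_ids_to_tokens(ids))   # SentencePiece pieces, no <s>/</s> wrapped around them
print(tok.decode(ids))

# The bos/eos ids are read from the SentencePiece model by the properties defined above.
print(tok.bos_token_id, tok.eos_token_id)
```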

BIN
tokenizer.model (Stored with Git LFS) Normal file

Binary file not shown.

69
tokenizer_config.json Normal file

@@ -0,0 +1,69 @@
{
"add_bos_token": false,
"add_eos_token": false,
"bos_token": {
"__type": "AddedToken",
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"__type": "AddedToken",
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"unk_token": {
"__type": "AddedToken",
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"__type": "AddedToken",
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": true
},
"added_tokens_decoder": {
"3": {
"content": "reserved_0",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"4": {
"content": "reserved_1",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"clean_up_tokenization_spaces": false,
"spaces_between_special_tokens": false,
"add_prefix_space": false,
"legacy": false,
"model_max_length": 1e+30,
"sp_model_kwargs": {},
"tokenizer_class": "IndexTokenizer",
"auto_map": {
"AutoTokenizer": [
"tokenization_index.IndexTokenizer",
null
]
},
"use_default_system_prompt": false,
"chat_template": "{% if messages[0]['role'] == 'system' %}{% set system_message = messages[0]['content'] %}{% endif %}{% if system_message is defined %}{{ unk_token + system_message }}{% endif %}{% for message in messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ 'reserved_0' + content.strip() + 'reserved_1' }}{% elif message['role'] == 'assistant' %}{{ content.strip() }}{% endif %}{% endfor %}"
}
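
For reference, the `chat_template` above renders a conversation as the `unk_token` followed by the system prompt, user turns wrapped in the `reserved_0`/`reserved_1` special tokens, and assistant turns appended verbatim. A minimal sketch of the resulting prompt string (the message contents are placeholders):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("./IndexTeam/Index-1.9B-Chat", trust_remote_code=True)

messages = [
    {"role": "system", "content": "SYSTEM_PROMPT"},
    {"role": "user", "content": "USER_TURN"},
    {"role": "assistant", "content": "ASSISTANT_REPLY"},
]
prompt = tok.apply_chat_template(messages, tokenize=False)
print(prompt)
# Expected, given the template above:
# <unk>SYSTEM_PROMPTreserved_0USER_TURNreserved_1ASSISTANT_REPLY
```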