Compare commits

2 Commits: 3bfc3e974d ... 88c3c44410

Author | SHA1 | Date |
---|---|---|
xxl | 88c3c44410 | |
xxl | 10a1f14b69 | |

README.md (285)

@@ -1,3 +1,284 @@
# Falcon3-3B-Instruct_a13975362023845888371315

---
language:
- en
- fr
- es
- pt
tags:
- falcon3
base_model: tiiuae/Falcon3-3B-Instruct
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
library_name: transformers
---

<div align="center">
<img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/general/falco3-logo.png" alt="drawing" width="500"/>
</div>

# Falcon3-3B-Instruct

The **Falcon3** family of open foundation models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.

**Falcon3-3B-Instruct** achieves strong results on reasoning, language understanding, instruction following, code, and mathematics tasks.
Falcon3-3B-Instruct supports four languages (English, French, Spanish, Portuguese) and a context length of up to 32K tokens.

## Model Details

- Architecture
  - Transformer-based causal decoder-only architecture
  - 22 decoder blocks
  - Grouped Query Attention (GQA) for faster inference: 12 query heads and 4 key-value heads
  - Wider head dimension: 256
  - High RoPE value to support long-context understanding: 1000042
  - Uses SwiGLU and RMSNorm
  - 32K context length
  - 131K vocab size
- Pruned and healed from Falcon3-7B-Base on only 100 gigatokens of web, code, STEM, high-quality, and multilingual data using 1024 H100 GPUs
- Post-trained on 1.2 million samples of STEM, conversational, code, safety, and function-call data
- Supports EN, FR, ES, PT
- Developed by [Technology Innovation Institute](https://www.tii.ae)
- License: TII Falcon-LLM License 2.0
- Model Release Date: December 2024

## Getting started

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "tiiuae/Falcon3-3B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many hours in one day?"
messages = [
    {"role": "system", "content": "You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

</details>
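
For a shorter entry point, the same flow can be expressed with the `transformers` `pipeline` API. This is a minimal sketch, not part of the official quickstart; it assumes a recent `transformers` version in which chat-style message lists are accepted directly:

```python
from transformers import pipeline

# Chat-capable text-generation pipeline; device_map="auto" places the
# weights on available accelerators, falling back to CPU.
generator = pipeline(
    "text-generation",
    model="tiiuae/Falcon3-3B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible."},
    {"role": "user", "content": "How many hours in one day?"},
]

# The pipeline applies the model's chat template internally; the last
# message of the returned conversation is the assistant's reply.
outputs = generator(messages, max_new_tokens=256)
print(outputs[0]["generated_text"][-1]["content"])
```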

<br>

## Benchmarks

We report in the following table our internal pipeline benchmarks.
- We use [lm-evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness) (a minimal invocation is sketched below).
- We report **raw scores** obtained by applying chat template **without fewshot_as_multiturn** (unlike Llama3.1).
- We use the same batch size across all models.
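
The following is a rough sketch of how such a run could be reproduced with the harness's Python API. Task names, argument names, and defaults vary across lm-evaluation-harness versions, so treat this as illustrative rather than the exact internal pipeline:

```python
import lm_eval

# Illustrative single-task run (GSM8K, 5-shot) roughly matching the
# methodology above: chat template applied, fewshot_as_multiturn off.
# Argument names follow lm-evaluation-harness 0.4.x and may differ
# in other versions.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=tiiuae/Falcon3-3B-Instruct,dtype=bfloat16",
    tasks=["gsm8k"],
    num_fewshot=5,
    batch_size=8,
    apply_chat_template=True,
    fewshot_as_multiturn=False,
)
print(results["results"]["gsm8k"])
```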
<table border="1" style="width: 100%; text-align: center; border-collapse: collapse;">
  <colgroup>
    <col style="width: 10%;">
    <col style="width: 10%;">
    <col style="width: 7%;">
    <col style="width: 7%;">
    <col style="width: 7%;">
    <col style="background-color: rgba(80, 15, 213, 0.5); width: 7%;">
  </colgroup>
  <thead>
    <tr>
      <th>Category</th>
      <th>Benchmark</th>
      <th>Llama-3.2-3B-Instruct</th>
      <th>Qwen2.5-3B-Instruct</th>
      <th>Nemotron-Mini-4B-Instruct</th>
      <th>Falcon3-3B-Instruct</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td rowspan="3">General</td>
      <td>MMLU (5-shot)</td>
      <td>29.3</td>
      <td>56.2</td>
      <td><b>56.4</b></td>
      <td>55.7</td>
    </tr>
    <tr>
      <td>MMLU-PRO (5-shot)</td>
      <td>11.9</td>
      <td>17.2</td>
      <td>23.3</td>
      <td><b>29.7</b></td>
    </tr>
    <tr>
      <td>IFEval</td>
      <td><b>73.9</b></td>
      <td>64.2</td>
      <td>66.5</td>
      <td>68.3</td>
    </tr>
    <tr>
      <td rowspan="3">Math</td>
      <td>GSM8K (5-shot)</td>
      <td>68.5</td>
      <td>58.5</td>
      <td>46.9</td>
      <td><b>71.9</b></td>
    </tr>
    <tr>
      <td>GSM8K (8-shot, COT)</td>
      <td><b>74.5</b></td>
      <td>64.0</td>
      <td>46.5</td>
      <td>71.6</td>
    </tr>
    <tr>
      <td>MATH Lvl-5 (4-shot)</td>
      <td>2.4</td>
      <td>0.0</td>
      <td>0.0</td>
      <td><b>19.9</b></td>
    </tr>
    <tr>
      <td rowspan="5">Reasoning</td>
      <td>ARC Challenge (25-shot)</td>
      <td>38.9</td>
      <td>50.0</td>
      <td>51.2</td>
      <td><b>58.5</b></td>
    </tr>
    <tr>
      <td>GPQA (0-shot)</td>
      <td>28.1</td>
      <td>29.2</td>
      <td>27.0</td>
      <td><b>29.6</b></td>
    </tr>
    <tr>
      <td>GPQA (0-shot, COT)</td>
      <td>11.3</td>
      <td>11.0</td>
      <td>12.2</td>
      <td><b>26.5</b></td>
    </tr>
    <tr>
      <td>MUSR (0-shot)</td>
      <td>34.9</td>
      <td><b>40.2</b></td>
      <td>38.9</td>
      <td>39.0</td>
    </tr>
    <tr>
      <td>BBH (3-shot)</td>
      <td>33.1</td>
      <td>44.1</td>
      <td>38.1</td>
      <td><b>45.4</b></td>
    </tr>
    <tr>
      <td rowspan="4">CommonSense Understanding</td>
      <td>PIQA (0-shot)</td>
      <td>74.6</td>
      <td>73.8</td>
      <td>74.6</td>
      <td><b>75.6</b></td>
    </tr>
    <tr>
      <td>SciQ (0-shot)</td>
      <td>77.2</td>
      <td>60.7</td>
      <td>71.0</td>
      <td><b>95.5</b></td>
    </tr>
    <tr>
      <td>Winogrande (0-shot)</td>
      <td>-</td>
      <td>-</td>
      <td>-</td>
      <td><b>65.0</b></td>
    </tr>
    <tr>
      <td>OpenbookQA (0-shot)</td>
      <td>40.8</td>
      <td>41.2</td>
      <td><b>43.2</b></td>
      <td>42.2</td>
    </tr>
    <tr>
      <td rowspan="2">Instruction following</td>
      <td>MT-Bench (avg)</td>
      <td>7.1</td>
      <td><b>8.0</b></td>
      <td>6.7</td>
      <td>7.2</td>
    </tr>
    <tr>
      <td>Alpaca (WC)</td>
      <td><b>19.4</b></td>
      <td>19.4</td>
      <td>9.6</td>
      <td>15.5</td>
    </tr>
    <tr>
      <td>Tool use</td>
      <td>BFCL AST (avg)</td>
      <td><b>85.2</b></td>
      <td>84.8</td>
      <td>59.8</td>
      <td>65.3</td>
    </tr>
    <tr>
      <td rowspan="2">Code</td>
      <td>EvalPlus (0-shot) (avg)</td>
      <td>55.2</td>
      <td><b>69.4</b></td>
      <td>40.0</td>
      <td>52.9</td>
    </tr>
    <tr>
      <td>MultiPL-E (0-shot) (avg)</td>
      <td>31.6</td>
      <td>29.2</td>
      <td>19.6</td>
      <td><b>32.9</b></td>
    </tr>
  </tbody>
</table>

## Useful links

- View our [release blogpost](https://huggingface.co/blog/falcon3).
- Feel free to join [our discord server](https://discord.gg/fwXpMyGc) if you have any questions or want to interact with our researchers and developers.

## Technical Report

Coming soon.

## Citation

If the Falcon3 family of models was helpful to your work, feel free to cite us.

```bibtex
@misc{Falcon3,
    title = {The Falcon 3 Family of Open Models},
    url = {https://huggingface.co/blog/falcon3},
    author = {Falcon-LLM Team},
    month = {December},
    year = {2024}
}
```
@ -0,0 +1,28 @@
|
||||||
|
{
|
||||||
|
"architectures": [
|
||||||
|
"LlamaForCausalLM"
|
||||||
|
],
|
||||||
|
"attention_bias": false,
|
||||||
|
"attention_dropout": 0.0,
|
||||||
|
"eos_token_id": 11,
|
||||||
|
"head_dim": 256,
|
||||||
|
"hidden_act": "silu",
|
||||||
|
"hidden_size": 3072,
|
||||||
|
"initializer_range": 0.02,
|
||||||
|
"intermediate_size": 9216,
|
||||||
|
"max_position_embeddings": 32768,
|
||||||
|
"mlp_bias": false,
|
||||||
|
"model_type": "llama",
|
||||||
|
"num_attention_heads": 12,
|
||||||
|
"num_hidden_layers": 22,
|
||||||
|
"num_key_value_heads": 4,
|
||||||
|
"pretraining_tp": 1,
|
||||||
|
"rms_norm_eps": 1e-06,
|
||||||
|
"rope_scaling": null,
|
||||||
|
"rope_theta": 1000042,
|
||||||
|
"tie_word_embeddings": false,
|
||||||
|
"torch_dtype": "bfloat16",
|
||||||
|
"transformers_version": "4.46.1",
|
||||||
|
"use_cache": true,
|
||||||
|
"vocab_size": 131072
|
||||||
|
}
|
|
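
The architecture details listed in the model card map directly onto this config. A small sketch to read them back with `AutoConfig` (expected values taken from the file above):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("tiiuae/Falcon3-3B-Instruct")

# Values mirror config.json above: 22 decoder blocks, GQA with
# 12 query heads over 4 key-value heads, head dim 256, high
# rope_theta for the 32K context window, and a 131K vocabulary.
print(cfg.num_hidden_layers)        # 22
print(cfg.num_attention_heads)      # 12
print(cfg.num_key_value_heads)      # 4
print(cfg.head_dim)                 # 256
print(cfg.rope_theta)               # 1000042
print(cfg.max_position_embeddings)  # 32768
print(cfg.vocab_size)               # 131072
```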
generation_config.json

@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 11,
  "eos_token_id": 11,
  "transformers_version": "4.46.1"
}
model-00001-of-00002.safetensors: Binary file not shown.
model-00002-of-00002.safetensors: Binary file not shown.
model.safetensors.index.json

@@ -0,0 +1,208 @@
{
  "metadata": {
    "total_size": 6455310336
  },
  "weight_map": {
    "lm_head.weight": "model-00002-of-00002.safetensors",
    "model.embed_tokens.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.10.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.11.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.12.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.13.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.14.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.15.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.16.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.17.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.18.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.19.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.19.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.19.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.19.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.19.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.19.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.19.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.19.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.19.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.2.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.20.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.20.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.20.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.20.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.20.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.20.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.20.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.20.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.20.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.21.input_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.21.mlp.down_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.21.mlp.gate_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.21.mlp.up_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.21.post_attention_layernorm.weight": "model-00002-of-00002.safetensors",
    "model.layers.21.self_attn.k_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.21.self_attn.o_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.21.self_attn.q_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.21.self_attn.v_proj.weight": "model-00002-of-00002.safetensors",
    "model.layers.3.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.7.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.7.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.7.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.8.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.8.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.8.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.8.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.8.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.8.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.9.input_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.9.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.9.mlp.gate_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.9.mlp.up_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.9.post_attention_layernorm.weight": "model-00001-of-00002.safetensors",
    "model.layers.9.self_attn.k_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.9.self_attn.o_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.9.self_attn.q_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.9.self_attn.v_proj.weight": "model-00001-of-00002.safetensors",
    "model.norm.weight": "model-00002-of-00002.safetensors"
  }
}
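
As a consistency check, `total_size` is exactly twice the parameter count, since bfloat16 stores two bytes per weight. A small sketch recomputing the count from the config.json values above, assuming bias-free Llama-style blocks as that file declares:

```python
# Parameter count from config.json values (no biases, untied embeddings).
hidden, layers, vocab = 3072, 22, 131072
inter, q_heads, kv_heads, head_dim = 9216, 12, 4, 256

attn = hidden * q_heads * head_dim          # q_proj
attn += 2 * hidden * kv_heads * head_dim    # k_proj + v_proj (GQA: 4 KV heads)
attn += q_heads * head_dim * hidden         # o_proj
mlp = 3 * hidden * inter                    # gate_proj, up_proj, down_proj
norms = 2 * hidden                          # input + post-attention RMSNorm

total = layers * (attn + mlp + norms)       # 22 decoder blocks
total += 2 * vocab * hidden                 # embed_tokens + lm_head (untied)
total += hidden                             # final model.norm

print(total)            # 3227655168 parameters (~3.23B)
print(6455310336 // 2)  # 3227655168 -- matches total_size above at 2 bytes/param
```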
special_tokens_map.json

@@ -0,0 +1,41 @@
{
  "additional_special_tokens": [
    ">>TITLE<<",
    ">>ABSTRACT<<",
    ">>INTRODUCTION<<",
    ">>SUMMARY<<",
    ">>COMMENT<<",
    ">>ANSWER<<",
    ">>QUESTION<<",
    ">>DOMAIN<<",
    ">>EMAIL_ADDRESS<<",
    ">>IP_ADDRESS<<",
    "<|startoftext|>",
    ">>IP_ADDRESS_0<<",
    ">>IP_ADDRESS_1<<",
    ">>IP_ADDRESS_2<<",
    ">>IP_ADDRESS_3<<",
    ">>IP_ADDRESS_4<<",
    ">>IP_ADDRESS_5<<",
    ">>IP_ADDRESS_6<<",
    ">>IP_ADDRESS_7<<",
    ">>IP_ADDRESS_8<<",
    ">>IP_ADDRESS_9<<",
    ">>PASSWORD<<",
    ">>KEY<<"
  ],
  "eos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|pad|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
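
These special tokens are what the quickstart code relies on implicitly. A quick sketch to inspect them through the tokenizer:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("tiiuae/Falcon3-3B-Instruct")

# <|endoftext|> serves as EOS (token id 11, matching config.json and
# generation_config.json); <|pad|> is the padding token.
print(tok.eos_token, tok.eos_token_id)    # <|endoftext|> 11
print(tok.pad_token)                      # <|pad|>
print(tok.additional_special_tokens[:3])  # ['>>TITLE<<', '>>ABSTRACT<<', '>>INTRODUCTION<<']
```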
File diff suppressed because it is too large
File diff suppressed because it is too large