first commit

xxl 2025-01-09 10:44:30 +08:00
parent 78ba32ac96
commit 177c9b4d96
16 changed files with 2776 additions and 1 deletion

152
README.md

@@ -1,3 +1,153 @@
---
license: apache-2.0
tags:
- DNA
- RNA
- genomic
- metagenomic
language:
- en
---
# METAGENE-1
## **Model Overview**
**METAGENE-1** is a 7B parameter metagenomic foundation model designed for pandemic monitoring, trained on over 1.5T base pairs of DNA and RNA sequenced from wastewater.
![METAGENE-1 Overview](overview.png)
**METAGENE-1** is a 7-billion-parameter autoregressive transformer language model, which we refer to as a *metagenomic foundation model*, trained on a novel corpus of diverse metagenomic DNA and RNA sequences comprising over 1.5 trillion base pairs. The dataset is sourced from a large collection of human wastewater samples, processed and sequenced using deep metagenomic (next-generation) sequencing methods. Unlike genomic models that focus on individual genomes or curated sets of specific species, METAGENE-1 aims to capture the full distribution of genomic information present across the human microbiome. After pretraining, the model is designed to aid tasks in biosurveillance, pandemic monitoring, and pathogen detection.
We carry out byte-pair encoding (BPE) tokenization tailored to metagenomic sequences on our dataset and then pretrain our model. Our technical report details the pretraining data, tokenization strategy, and model architecture, highlighting the considerations and design choices that enable effective modeling of metagenomic data.
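As a quick illustration of what this tokenization looks like in practice, the snippet below loads the published tokenizer and splits a short read into BPE tokens (a minimal sketch; the exact token boundaries depend on the trained merges):
```python
from transformers import AutoTokenizer

# Load the BPE tokenizer trained on metagenomic sequences
tokenizer = AutoTokenizer.from_pretrained("metagene-ai/METAGENE-1")

read = "TCACCGTTCTACAATCCCAAGCTGGAGTCAAGCTCAACAGGGTCTTC"
print(tokenizer.tokenize(read))                          # multi-nucleotide BPE tokens
print(tokenizer.encode(read, add_special_tokens=False))  # IDs from the 1024-entry vocabulary
```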
## **Usage**
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("metagene-ai/METAGENE-1")
model = AutoModelForCausalLM.from_pretrained("metagene-ai/METAGENE-1", torch_dtype=torch.bfloat16)
# Example input sequence
input_sequence = "TCACCGTTCTACAATCCCAAGCTGGAGTCAAGCTCAACAGGGTCTTC"
# Tokenize the input sequence without appending the [EOS] token, so the model continues the read
input_tokens = tokenizer.encode(input_sequence, return_tensors="pt", add_special_tokens=False)
# Generate output from the model
generated_tokens = model.generate(input_tokens, max_length=32)
# Decode the generated output and clean up the result
generated_sequence = tokenizer.decode(generated_tokens[0], skip_special_tokens=True)
generated_sequence = generated_sequence.replace(" ", "").replace("_", "")
# Generated output: A Hexamita inflata 5.8S ribosomal RNA gene sequence
print(f"🔬 Generated Sequence:\n{generated_sequence}")
# TCACCGTTCTACAATCCCAAGCTGGAGTCAAGCTCAACAGGGTCTTCTTGCCCCGCTGAGGGTTACACTCGCCCGTTCCCGAGTCTGTGGTTTCGCGAAGATATGACCAGGGACAGTAAGAACC
```
## **Benchmark Performance**
We evaluate METAGENE-1 across three tasks: pathogen detection, zero-shot embedding benchmarks (**Gene-MTEB**), and genome understanding (**GUE**), achieving state-of-the-art performance on most benchmarks. For more details, check out our [paper](https://arxiv.org/abs/2501.02045).
### **Pathogen Detection**
The pathogen detection benchmark evaluates **METAGENE-1**'s ability to classify sequencing reads as human pathogens or non-pathogens across four distinct datasets, each derived from different sequencing deliveries and designed to mimic real-world conditions with limited training data.
| | **DNABERT-2** | **DNABERT-S** | **NT-2.5b-Multi** | **NT-2.5b-1000g** | **METAGENE-1** |
|-------------------------------|---------------|---------------|-------------------|-------------------|----------------|
| **Pathogen-Detect (avg.)** | 87.92 | 87.02 | 82.43 | 79.02 | **92.96** |
| **Pathogen-Detect-1** | 86.73 | 85.43 | 83.80 | 77.52 | **92.14** |
| **Pathogen-Detect-2** | 86.90 | 85.23 | 83.53 | 80.38 | **90.91** |
| **Pathogen-Detect-3** | 88.30 | 89.01 | 82.48 | 79.83 | **93.70** |
| **Pathogen-Detect-4** | 89.77 | 88.41 | 79.91 | 78.37 | **95.10** |
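These results come from fine-tuning on labeled reads. As a rough sketch of how such a binary classifier could be set up with the standard `transformers` API (this is not the authors' exact training recipe, and the reads below are placeholders), one can attach a sequence-classification head to the pretrained backbone:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("metagene-ai/METAGENE-1")
# The classification head is freshly initialized and must be fine-tuned
model = AutoModelForSequenceClassification.from_pretrained(
    "metagene-ai/METAGENE-1", num_labels=2, torch_dtype=torch.bfloat16
)
model.config.pad_token_id = tokenizer.pad_token_id

# Placeholder reads; real training uses labeled pathogen/non-pathogen data
reads = [
    "TCACCGTTCTACAATCCCAAGCTGGAGTCAAGCTCAACAGGGTCTTC",
    "ATGGCGTAACCTTGAGCCATTGAACGTCAGGCTTAGCCTAGCATTGC",
]
batch = tokenizer(reads, return_tensors="pt", padding=True)
logits = model(**batch).logits  # shape (2, 2): one pathogen/non-pathogen score pair per read
```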
### **Gene-MTEB**
The Gene-MTEB benchmark evaluates **METAGENE-1**'s ability to produce high-quality, zero-shot genomic representations through eight classification and eight clustering tasks.
| | **DNABERT-2** | **DNABERT-S** | **NT-2.5b-Multi** | **NT-2.5b-1000g** | **METAGENE-1** |
|--------------------------------|---------------|---------------|-------------------|-------------------|----------------|
| **Human-Virus (avg.)** | 0.564 | 0.570 | 0.675 | 0.710 | **0.775** |
| Human-Virus-1 | 0.594 | 0.605 | 0.671 | 0.721 | **0.828** |
| Human-Virus-2 | 0.507 | 0.510 | 0.652 | 0.624 | **0.742** |
| Human-Virus-3 | 0.606 | 0.612 | 0.758 | 0.740 | **0.835** |
| Human-Virus-4 | 0.550 | 0.551 | 0.620 | **0.755** | 0.697 |
| **HMPD (avg.)** | 0.397 | 0.403 | 0.449 | 0.451 | **0.465** |
| HMPD-single | 0.292 | 0.293 | 0.285 | 0.292 | **0.297** |
| HMPD-disease | 0.480 | 0.486 | 0.498 | 0.489 | **0.542** |
| HMPD-sex | 0.366 | 0.367 | 0.487 | 0.476 | **0.495** |
| HMPD-source | 0.451 | 0.465 | 0.523 | **0.545** | 0.526 |
| **HVR (avg.)** | 0.479 | 0.479 | 0.546 | 0.524 | **0.550** |
| HVR-p2p | 0.548 | 0.550 | 0.559 | **0.650** | 0.466 |
| HVR-s2s-align | 0.243 | 0.241 | 0.266 | **0.293** | 0.267 |
| HVR-s2s-small | 0.373 | 0.372 | 0.357 | 0.371 | **0.467** |
| HVR-s2s-tiny | 0.753 | 0.753 | **1.000** | 0.782 | **1.000** |
| **HMPR (avg.)** | 0.347 | 0.351 | 0.348 | 0.403 | **0.476** |
| HMPR-p2p | 0.566 | **0.580** | 0.471 | 0.543 | 0.479 |
| HMPR-s2s-align | 0.127 | 0.129 | 0.144 | **0.219** | 0.140 |
| HMPR-s2s-small | 0.419 | 0.421 | 0.443 | **0.459** | 0.432 |
| HMPR-s2s-tiny | 0.274 | 0.274 | 0.332 | 0.391 | **0.855** |
| **Global Average** | 0.475 | 0.479 | 0.525 | 0.545 | **0.590** |
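A common way to obtain zero-shot embeddings from a causal language model like this one is to mean-pool the final hidden states over non-padding tokens. The sketch below shows that approach; the exact pooling protocol used for Gene-MTEB is described in the paper:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("metagene-ai/METAGENE-1")
model = AutoModel.from_pretrained("metagene-ai/METAGENE-1", torch_dtype=torch.bfloat16)
model.eval()

reads = [
    "TCACCGTTCTACAATCCCAAGCTGGAGTCAAGCTCAACAGGGTCTTC",
    "ATGGCGTAACCTTGAGCCATTGAACGTCAGGCTTAGCCTAGCATTGC",
]
batch = tokenizer(reads, return_tensors="pt", padding=True)
with torch.no_grad():
    hidden = model(**batch).last_hidden_state       # (batch, seq_len, 4096)
mask = batch["attention_mask"].unsqueeze(-1)        # zero out padding positions
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (batch, 4096)
```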
### **GUE**
Next, we evaluate **METAGENE-1** on the GUE multi-species classification benchmark proposed in [DNABERT-2](https://arxiv.org/abs/2306.15006). This experiment is designed to assess the viability of **METAGENE-1** as a general-purpose genome foundation model.
| | **CNN** | **HyenaDNA** | **DNABERT** | **NT-2.5B-Multi** | **DNABERT-2** | **METAGENE-1** |
|--------------------------------|---------|--------------|-------------|-------------------|---------------|----------------|
| **TF-Mouse (avg.)** | 45.3 | 51.0 | 57.7 | 67.0 | 68.0 | **71.4** |
| 0 | 31.1 | 35.6 | 42.3 | **63.3** | 56.8 | 61.5 |
| 1 | 59.7 | 80.5 | 79.1 | 83.8 | **84.8** | 83.7 |
| 2 | 63.2 | 65.3 | 69.9 | 71.5 | 79.3 | **83.0** |
| 3 | 45.5 | 54.2 | 55.4 | 69.4 | 66.5 | **82.2** |
| 4 | 27.2 | 19.2 | 42.0 | 47.1 | **52.7** | 46.6 |
| **TF-Human (avg.)** | 50.7 | 56.0 | 64.4 | 62.6 | **70.1** | 68.3 |
| 0 | 54.0 | 62.3 | 68.0 | 66.6 | **72.0** | 68.9 |
| 1 | 63.2 | 67.9 | 70.9 | 66.6 | **76.1** | 70.8 |
| 2 | 45.2 | 46.9 | 60.5 | 58.7 | **66.5** | 65.9 |
| 3 | 29.8 | 41.8 | 53.0 | 51.7 | **58.5** | 58.1 |
| 4 | 61.5 | 61.2 | 69.8 | 69.3 | 77.4 | **77.9** |
| **EMP (avg.)** | 37.6 | 44.9 | 49.5 | 58.1 | 56.0 | **66.0** |
| H3 | 61.5 | 67.2 | 74.2 | 78.8 | 78.3 | **80.2** |
| H3K14ac | 29.7 | 32.0 | 42.1 | 56.2 | 52.6 | **64.9** |
| H3K36me3 | 38.6 | 48.3 | 48.5 | 62.0 | 56.9 | **66.7** |
| H3K4me1 | 26.1 | 35.8 | 43.0 | **55.3** | 50.5 | **55.3** |
| H3K4me2 | 25.8 | 25.8 | 31.3 | 36.5 | 31.1 | **51.2** |
| H3K4me3 | 20.5 | 23.1 | 28.9 | 40.3 | 36.3 | **58.5** |
| H3K79me3 | 46.3 | 54.1 | 60.1 | 64.7 | 67.4 | **73.0** |
| H3K9ac | 40.0 | 50.8 | 50.5 | 56.0 | 55.6 | **65.5** |
| H4 | 62.3 | 73.7 | 78.3 | 81.7 | 80.7 | **82.7** |
| H4ac | 25.5 | 38.4 | 38.6 | 49.1 | 50.4 | **61.7** |
| **PD (avg.)** | 77.1 | 35.0 | 84.6 | **88.1** | 84.2 | 82.3 |
| All | 75.8 | 47.4 | 90.4 | **91.0** | 86.8 | 86.0 |
| No-TATA | 85.1 | 52.2 | 93.6 | 94.0 | **94.3** | 93.7 |
| TATA | 70.3 | 5.3 | 69.8 | **79.4** | 71.6 | 67.4 |
| **CPD (avg.)** | 62.5 | 48.4 | **73.0** | 71.6 | 70.5 | 69.9 |
| All | 58.1 | 37.0 | **70.9** | 70.3 | 69.4 | 66.4 |
| No-TATA | 60.1 | 35.4 | 69.8 | **71.6** | 68.0 | 68.3 |
| TATA | 69.3 | 72.9 | **78.2** | 73.0 | 74.2 | 75.1 |
| **SSD** | 76.8 | 72.7 | 84.1 | **89.3** | 85.0 | 87.8 |
| **COVID** | 22.2 | 23.3 | 62.2 | **73.0** | 71.9 | 72.5 |
| **Global Win %** | 0.0 | 0.0 | 7.1 | 21.4 | 25.0 | **46.4** |
## **Safety Considerations**
**METAGENE-1** provides valuable capabilities for biosurveillance and genomic anomaly detection, achieving state-of-the-art results across a broad range of benchmarks. While its current version poses minimal risk, we carefully weighed its benefits against potential misuse, particularly in synthetic biology, and emphasize the need for stricter safety considerations for future, more capable models.
**Purpose and Capabilities**: **METAGENE-1** is specifically optimized to detect anomalies in short metagenomic reads (100-300 base pairs), making it well-suited for tasks like pathogen detection and biosurveillance. The model's architectural constraints, such as its 512-token context length, limit its applicability to complex sequence design tasks, reducing misuse risks.
**Open Source Impact**: We believe the open release of **METAGENE-1** will foster research in pathogen detection and biosurveillance by providing a valuable tool for scientists; it will also facilitate interpretability and controllability research in scientific foundation models. However, we emphasize the need for more rigorous safety evaluations before open-sourcing larger or more capable genomic models in the future.
We have included more in-depth discussions on safety considerations in our [paper](https://arxiv.org/abs/2501.02045).
## **Model Details**
- **Release Date**: Jan 06 2025
- **Model License**: Apache 2.0
## **BibTeX**
```bibtex
@misc{liu2025metagene1metagenomicfoundationmodel,
title={METAGENE-1: Metagenomic Foundation Model for Pandemic Monitoring},
author={Ollie Liu and Sami Jaghouar and Johannes Hagemann and Shangshang Wang and Jason Wiemels and Jeff Kaufman and Willie Neiswanger},
year={2025},
eprint={2501.02045},
archivePrefix={arXiv},
primaryClass={q-bio.GN},
url={https://arxiv.org/abs/2501.02045},
}
```

35
config.json Normal file

@@ -0,0 +1,35 @@
{
"_name_or_path": "/project/neiswang_1391/MGFM/MGFM-serving/model_ckpts/safetensors/step-00086000",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 3,
"eos_token_id": 4,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 11008,
"mask_token_id": 5,
"max_position_embeddings": 512,
"max_sequence_length": 2048,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 32,
"pad_token_id": 0,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 10000.0,
"sep_token_id": 2,
"tie_word_embeddings": false,
"torch_dtype": "float32",
"transformers_version": "4.46.3",
"unk_token_id": 1,
"use_cache": true,
"vocab_size": 1024
}
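The key hyperparameters above can be inspected without downloading the model weights by loading just this configuration file through the standard `transformers` API (a small sketch):
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("metagene-ai/METAGENE-1")
print(config.model_type)               # llama
print(config.num_hidden_layers)        # 32
print(config.hidden_size)              # 4096
print(config.max_position_embeddings)  # 512, the model's context length
print(config.vocab_size)               # 1024, the BPE vocabulary size
```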

1
configuration.json Normal file

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "others", "allow_remote": true}

7
generation_config.json Normal file

@@ -0,0 +1,7 @@
{
"_from_model_config": true,
"bos_token_id": 3,
"eos_token_id": 4,
"pad_token_id": 0,
"transformers_version": "4.46.3"
}

BIN
model-00001-of-00006.safetensors (Stored with Git LFS) Normal file


BIN
model-00002-of-00006.safetensors (Stored with Git LFS) Normal file


BIN
model-00003-of-00006.safetensors (Stored with Git LFS) Normal file


BIN
model-00004-of-00006.safetensors (Stored with Git LFS) Normal file


BIN
model-00005-of-00006.safetensors (Stored with Git LFS) Normal file


BIN
model-00006-of-00006.safetensors (Stored with Git LFS) Normal file


298
model.safetensors.index.json Normal file

@@ -0,0 +1,298 @@
{
"metadata": {
"total_size": 25938640896
},
"weight_map": {
"lm_head.weight": "model-00006-of-00006.safetensors",
"model.embed_tokens.weight": "model-00001-of-00006.safetensors",
"model.layers.0.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.0.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.0.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.0.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.1.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.1.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.1.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.10.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.10.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.10.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.10.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.11.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.11.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.11.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.12.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.12.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.12.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.12.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.12.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.12.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.12.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.13.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.13.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.13.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.13.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.13.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.13.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.13.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.13.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.13.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.14.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.14.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.14.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.14.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.14.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.14.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.14.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.14.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.14.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.15.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.15.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.15.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.16.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.16.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.16.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.input_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.17.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.mlp.gate_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.mlp.up_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
"model.layers.17.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.17.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.18.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.18.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.18.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.18.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.18.self_attn.k_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.self_attn.q_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.18.self_attn.v_proj.weight": "model-00003-of-00006.safetensors",
"model.layers.19.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.19.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.19.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.19.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.19.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.19.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.19.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.19.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.19.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.2.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.2.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.2.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.2.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.20.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.20.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.20.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.20.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.20.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.20.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.20.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.20.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.20.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.21.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.21.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.21.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.21.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.21.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.21.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.21.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.21.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.21.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.22.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.22.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.22.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.22.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.22.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.22.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.22.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.22.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.22.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.input_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.23.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.mlp.gate_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.mlp.up_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
"model.layers.23.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.23.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.24.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.24.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.24.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.24.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.24.self_attn.k_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.self_attn.q_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.24.self_attn.v_proj.weight": "model-00004-of-00006.safetensors",
"model.layers.25.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.25.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.25.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.25.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.25.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.25.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.25.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.25.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.25.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.26.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.26.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.26.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.26.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.26.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.26.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.26.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.26.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.26.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.27.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.27.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.27.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.27.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.27.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.27.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.27.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.27.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.27.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.28.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.28.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.28.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.28.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.28.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.28.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.28.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.28.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.28.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.29.input_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.29.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.29.mlp.gate_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.29.mlp.up_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.29.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
"model.layers.29.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.29.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.29.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.29.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.3.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.3.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.3.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.3.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.30.input_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.30.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.30.mlp.gate_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.30.mlp.up_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.30.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.30.self_attn.k_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.30.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.30.self_attn.q_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.30.self_attn.v_proj.weight": "model-00005-of-00006.safetensors",
"model.layers.31.input_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.31.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.31.mlp.gate_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.31.mlp.up_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.31.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
"model.layers.31.self_attn.k_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.31.self_attn.o_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.31.self_attn.q_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.31.self_attn.v_proj.weight": "model-00006-of-00006.safetensors",
"model.layers.4.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.4.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.4.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.4.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.input_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.5.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.mlp.gate_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.mlp.up_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
"model.layers.5.self_attn.k_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.5.self_attn.v_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.6.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.6.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.6.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.6.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.6.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.6.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.6.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.6.self_attn.q_proj.weight": "model-00001-of-00006.safetensors",
"model.layers.6.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.7.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.7.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.7.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.7.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.7.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.7.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.7.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.7.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.7.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.8.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.8.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.8.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.input_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.9.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.mlp.gate_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.mlp.up_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
"model.layers.9.self_attn.k_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.self_attn.q_proj.weight": "model-00002-of-00006.safetensors",
"model.layers.9.self_attn.v_proj.weight": "model-00002-of-00006.safetensors",
"model.norm.weight": "model-00006-of-00006.safetensors"
}
}
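Since the weights are stored in float32 (see `"torch_dtype": "float32"` in config.json), the `total_size` recorded in the index metadata implies the parameter count directly; a quick back-of-the-envelope check:
```python
total_size = 25_938_640_896               # bytes, from the "metadata" block above
params = total_size / 4                   # 4 bytes per float32 value
print(f"{params / 1e9:.2f}B parameters")  # ~6.48B, i.e. the 7B model class
```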

BIN
overview.png Normal file



8
special_tokens_map.json Normal file

@@ -0,0 +1,8 @@
{
"bos_token": "[BOS]",
"eos_token": "[EOS]",
"mask_token": "[MASK]",
"pad_token": "[PAD]",
"sep_token": "[SEP]",
"unk_token": "[UNK]"
}

2191
tokenizer.json Normal file

File diff suppressed because it is too large.

BIN
tokenizer.model (Stored with Git LFS) Normal file


64
tokenizer_config.json Normal file

@@ -0,0 +1,64 @@
{
"added_tokens_decoder": {
"0": {
"content": "[PAD]",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "[UNK]",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "[SEP]",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"3": {
"content": "[BOS]",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"4": {
"content": "[EOS]",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"5": {
"content": "[MASK]",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"bos_token": "[BOS]",
"clean_up_tokenization_spaces": false,
"cls_token": null,
"do_lower_case": null,
"eos_token": "[EOS]",
"extra_special_tokens": {},
"mask_token": "[MASK]",
"model_max_length": 512,
"pad_token": "[PAD]",
"sep_token": "[SEP]",
"tokenizer_class": "PreTrainedTokenizerFast",
"unk_token": "[UNK]"
}
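The special tokens declared above map onto the token IDs fixed in config.json (pad=0, unk=1, sep=2, bos=3, eos=4, mask=5); a quick sanity check with the loaded tokenizer:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("metagene-ai/METAGENE-1")
for name in ("pad_token", "unk_token", "sep_token", "bos_token", "eos_token", "mask_token"):
    token = getattr(tokenizer, name)
    print(f"{name}: {token} -> id {tokenizer.convert_tokens_to_ids(token)}")
```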