first commit

parent b52c640559 · commit 5d033971f9

README.md: 129 lines changed

@@ -1,3 +1,128 @@
# LlamaV-o1
---
license: apache-2.0
datasets:
- omkarthawakar/VRC-Bench
- Xkev/LLaVA-CoT-100k
language:
- en
base_model:
- meta-llama/Llama-3.2-11B-Vision-Instruct
pipeline_tag: question-answering
---


## LlamaV-o1

<center><img src="logo2.png" alt="LlamaV-o1 logo" width="150"/></center>

## Overview
**LlamaV-o1** is an advanced multimodal large language model (LLM) designed for complex visual reasoning tasks. Built on a foundation of cutting-edge curriculum learning and optimized with techniques like beam search, LlamaV-o1 demonstrates exceptional performance across diverse benchmarks. It is fine-tuned for step-by-step reasoning, enabling it to tackle tasks in domains such as visual perception, mathematical reasoning, social and cultural contexts, medical imaging, and document understanding.

The model is designed with a focus on interpretability and precision. By leveraging a structured reasoning approach, LlamaV-o1 provides coherent and accurate explanations for its decisions, making it an excellent tool for research and applications requiring high levels of reasoning. With over 4,000 manually verified reasoning steps in its benchmark evaluations, LlamaV-o1 sets a new standard for multimodal reasoning, delivering consistent and reliable results across challenging scenarios.

### Key Features:
- **Model Size:** 11 billion parameters.
- **Architecture:** Based on the Llama 3.2 Vision architecture (Llama-3.2-11B-Vision-Instruct).
- **Fine-Tuning:** Enhanced for instruction-following, chain-of-thought reasoning, and robust generalization across tasks.
- **Applications:** Ideal for use cases such as conversational agents, educational tools, and content creation.

---

## Model Details
- **Developed By:** MBZUAI
- **Model Version:** v0.1
- **Release Date:** 13 January 2025
- **Training Dataset:** A diverse corpus of high-quality instruction-tuning, chain-of-thought, and general-purpose data; the reasoning fine-tuning set (LLaVA-CoT-100k) is described under Training Data below.
- **Framework:** PyTorch

---

## Intended Use
**LlamaV-o1** is designed for a wide range of vision-language and NLP tasks, including but not limited to:
- Text Generation
- Sentiment Analysis
- Text Summarization
- Question Answering
- Chain-of-Thought Reasoning

### Out-of-Scope Use
The model should not be used in applications requiring high-stakes decision-making, such as healthcare diagnosis, financial predictions, or any scenario involving potential harm.

---

## Training Procedure
- **Fine-Tuning:** The model was fine-tuned on a dataset optimized for reasoning, coherence, and diversity, leveraging instruction-tuning techniques to enhance usability in downstream applications.
- **Optimizations:** Includes inference scaling optimizations to balance performance and computational efficiency (a rough illustration follows below).

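In its simplest form, inference-time scaling of this kind can be approximated with plain beam search. The sketch below only shows how such a setting could be expressed with the `transformers` `GenerationConfig`; the beam count and token budget are placeholder values and this is not necessarily the exact procedure used by the official evaluation code.

```python
# Illustrative sketch only: a plain beam-search generation configuration.
# The values below are placeholders, not the official evaluation settings.
from transformers import GenerationConfig

beam_search_config = GenerationConfig(
    num_beams=4,          # candidate continuations kept per step (placeholder)
    max_new_tokens=512,   # placeholder budget for the step-by-step answer
    do_sample=False,      # beam search is usually run deterministically
)

# Later, with `model` and `inputs` prepared as in the Usage section below:
# outputs = model.generate(**inputs, generation_config=beam_search_config)
```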

---

## Evaluation

### Benchmarks
LlamaV-o1 has been evaluated on a suite of benchmark tasks:
- **Reasoning:** [VRC-Bench](https://huggingface.co/datasets/omkarthawakar/VRC-Bench) (a loading sketch follows below)

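For convenience, the benchmark can be pulled directly from the Hugging Face Hub with the `datasets` library. This is a minimal sketch assuming the dataset loads with its default configuration; inspect the returned splits and columns before use.

```python
# Minimal sketch: download VRC-Bench from the Hugging Face Hub (pip install datasets).
from datasets import load_dataset

vrc_bench = load_dataset("omkarthawakar/VRC-Bench")  # DatasetDict with the published splits
print(vrc_bench)  # shows split names, column names, and number of rows
```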

### Limitations
While the model performs well on a broad range of tasks, it may struggle with:
- Highly technical, domain-specific knowledge outside the training corpus.
- Generating accurate outputs for ambiguous or adversarial prompts.

---

## Usage

```python
import torch
from transformers import MllamaForConditionalGeneration, AutoProcessor

model_id = "omkarthawakar/LlamaV-o1"

# Load the model in bfloat16 and spread it across the available devices.
model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)
```

Please refer to [llamav-o1.py](https://github.com/mbzuai-oryx/LlamaV-o1/blob/main/eval/llamav-o1.py) for the full inference pipeline; a minimal single-image sketch follows below.

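As a minimal single-image sketch (reusing `model` and `processor` from the snippet above), the example below follows the standard Llama-3.2-Vision chat flow in `transformers`; the image URL, question, and generation settings are placeholders, not the prompting setup used in the official evaluation script.

```python
# Minimal single-image inference sketch; reuses `model` and `processor` from above.
# The image URL, question, and generation settings are placeholders.
import requests
from PIL import Image

image = Image.open(requests.get("https://example.com/sample.jpg", stream=True).raw)

messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Reason step by step about the image, then state the final answer."},
    ]},
]

# Build the chat-formatted prompt and the multimodal model inputs.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, add_special_tokens=False, return_tensors="pt").to(model.device)

# Greedy decoding here; raise num_beams for a beam-search-style run (see the sketch above).
output = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(output[0], skip_special_tokens=True))
```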

### Results
**Table 1:** Comparison of models based on Final Answer accuracy and Reasoning Steps performance on the proposed VRC-Bench. The best results in each case (closed-source and open-source) are in bold. Our LlamaV-o1 achieves superior performance compared to its open-source counterpart (Llava-CoT) while also being competitive against the closed-source models.

| **Model** | **GPT-4o** | **Claude-3.5** | **Gemini-2.0** | **Gemini-1.5 Pro** | **Gemini-1.5 Flash** | **GPT-4o Mini** | **Llama-3.2 Vision** | **Mulberry** | **Llava-CoT** | **LlamaV-o1 (Ours)** |
|-----------|------------|----------------|----------------|--------------------|----------------------|-----------------|----------------------|--------------|---------------|----------------------|
| **Final Answer** | 59.28 | **61.35** | 61.16 | **61.35** | 54.99 | 56.39 | 48.40 | 51.90 | 54.09 | **56.49** |
| **Reasoning Steps** | **76.68** | 72.12 | 74.08 | 72.12 | 71.86 | 74.05 | 58.37 | 63.86 | 66.21 | **68.93** |

---

### Training Data

LlamaV-o1 is trained on the [LLaVA-CoT-100k dataset](https://huggingface.co/datasets/Xkev/LLaVA-CoT-100k).
We have formatted the training samples for multi-step reasoning; a hypothetical example of such a sample is sketched below.

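Purely as a hypothetical illustration (the field names below are invented for this sketch and do not necessarily match the released data format), a multi-step reasoning sample can be pictured as an image paired with a question, an ordered list of reasoning steps, and a final answer:

```python
# Hypothetical multi-step reasoning sample; the field names and values are
# illustrative only and are not the actual schema of the released data.
sample = {
    "image": "path/to/image.jpg",  # placeholder image path
    "question": "What is the total amount shown on the receipt?",
    "reasoning_steps": [           # ordered chain-of-thought steps
        "Step 1: Locate the line items and their prices on the receipt.",
        "Step 2: Add the individual prices together.",
        "Step 3: Compare the computed sum with the printed total.",
    ],
    "final_answer": "The total amount is 27.50.",
}
```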

### Training Procedure

The LlamaV-o1 model is fine-tuned using [llama-recipes](https://github.com/Meta-Llama/llama-recipes).
A detailed training procedure will be released soon.

### Citation
If you find this work useful, please consider starring 🌟 our [GitHub](https://github.com/mbzuai-oryx/LlamaV-o1) repo and citing 📑 our paper:
```
@misc{thawakar2025llamavo1,
      title={LlamaV-o1: Rethinking Step-by-step Visual Reasoning in LLMs},
      author={Omkar Thawakar and Dinura Dissanayake and Ketan More and Ritesh Thawkar and Ahmed Heakl and Noor Ahsan and Yuhao Li and Mohammed Zumri and Jean Lahoud and Rao Muhammad Anwer and Hisham Cholakkal and Ivan Laptev and Mubarak Shah and Fahad Shahbaz Khan and Salman Khan},
      year={2025},
      eprint={2501.06186},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2501.06186},
}
```

File diff suppressed because one or more lines are too long
|
@ -0,0 +1,224 @@
|
|||
{
|
||||
"_name_or_path": "meta-llama/Llama-3.2-11B-Vision-Instruct",
|
||||
"architectures": [
|
||||
"MllamaForConditionalGeneration"
|
||||
],
|
||||
"image_token_index": 128256,
|
||||
"model_type": "mllama",
|
||||
"text_config": {
|
||||
"_name_or_path": "",
|
||||
"add_cross_attention": false,
|
||||
"architectures": null,
|
||||
"bad_words_ids": null,
|
||||
"begin_suppress_tokens": null,
|
||||
"bos_token_id": 128000,
|
||||
"chunk_size_feed_forward": 0,
|
||||
"cross_attention_hidden_size": null,
|
||||
"cross_attention_layers": [
|
||||
3,
|
||||
8,
|
||||
13,
|
||||
18,
|
||||
23,
|
||||
28,
|
||||
33,
|
||||
38
|
||||
],
|
||||
"decoder_start_token_id": null,
|
||||
"diversity_penalty": 0.0,
|
||||
"do_sample": false,
|
||||
"dropout": 0,
|
||||
"early_stopping": false,
|
||||
"encoder_no_repeat_ngram_size": 0,
|
||||
"eos_token_id": [
|
||||
128001,
|
||||
128008,
|
||||
128009
|
||||
],
|
||||
"exponential_decay_length_penalty": null,
|
||||
"finetuning_task": null,
|
||||
"forced_bos_token_id": null,
|
||||
"forced_eos_token_id": null,
|
||||
"hidden_act": "silu",
|
||||
"hidden_size": 4096,
|
||||
"id2label": {
|
||||
"0": "LABEL_0",
|
||||
"1": "LABEL_1"
|
||||
},
|
||||
"initializer_range": 0.02,
|
||||
"intermediate_size": 14336,
|
||||
"is_decoder": false,
|
||||
"is_encoder_decoder": false,
|
||||
"label2id": {
|
||||
"LABEL_0": 0,
|
||||
"LABEL_1": 1
|
||||
},
|
||||
"length_penalty": 1.0,
|
||||
"max_length": 20,
|
||||
"max_position_embeddings": 131072,
|
||||
"min_length": 0,
|
||||
"model_type": "mllama_text_model",
|
||||
"no_repeat_ngram_size": 0,
|
||||
"num_attention_heads": 32,
|
||||
"num_beam_groups": 1,
|
||||
"num_beams": 1,
|
||||
"num_hidden_layers": 40,
|
||||
"num_key_value_heads": 8,
|
||||
"num_return_sequences": 1,
|
||||
"output_attentions": false,
|
||||
"output_hidden_states": false,
|
||||
"output_scores": false,
|
||||
"pad_token_id": 128004,
|
||||
"prefix": null,
|
||||
"problem_type": null,
|
||||
"pruned_heads": {},
|
||||
"remove_invalid_values": false,
|
||||
"repetition_penalty": 1.0,
|
||||
"return_dict": true,
|
||||
"return_dict_in_generate": false,
|
||||
"rms_norm_eps": 1e-05,
|
||||
"rope_scaling": {
|
||||
"factor": 8.0,
|
||||
"high_freq_factor": 4.0,
|
||||
"low_freq_factor": 1.0,
|
||||
"original_max_position_embeddings": 8192,
|
||||
"rope_type": "llama3"
|
||||
},
|
||||
"rope_theta": 500000.0,
|
||||
"sep_token_id": null,
|
||||
"suppress_tokens": null,
|
||||
"task_specific_params": null,
|
||||
"temperature": 1.0,
|
||||
"tf_legacy_loss": false,
|
||||
"tie_encoder_decoder": false,
|
||||
"tie_word_embeddings": false,
|
||||
"tokenizer_class": null,
|
||||
"top_k": 50,
|
||||
"top_p": 1.0,
|
||||
"torch_dtype": "bfloat16",
|
||||
"torchscript": false,
|
||||
"typical_p": 1.0,
|
||||
"use_bfloat16": false,
|
||||
"use_cache": true,
|
||||
"vocab_size": 128256
|
||||
},
|
||||
"torch_dtype": "bfloat16",
|
||||
"transformers_version": "4.45.1",
|
||||
"vision_config": {
|
||||
"_name_or_path": "",
|
||||
"add_cross_attention": false,
|
||||
"architectures": null,
|
||||
"attention_heads": 16,
|
||||
"bad_words_ids": null,
|
||||
"begin_suppress_tokens": null,
|
||||
"bos_token_id": null,
|
||||
"chunk_size_feed_forward": 0,
|
||||
"cross_attention_hidden_size": null,
|
||||
"decoder_start_token_id": null,
|
||||
"diversity_penalty": 0.0,
|
||||
"do_sample": false,
|
||||
"early_stopping": false,
|
||||
"encoder_no_repeat_ngram_size": 0,
|
||||
"eos_token_id": null,
|
||||
"exponential_decay_length_penalty": null,
|
||||
"finetuning_task": null,
|
||||
"forced_bos_token_id": null,
|
||||
"forced_eos_token_id": null,
|
||||
"hidden_act": "gelu",
|
||||
"hidden_size": 1280,
|
||||
"id2label": {
|
||||
"0": "LABEL_0",
|
||||
"1": "LABEL_1"
|
||||
},
|
||||
"image_size": 560,
|
||||
"initializer_range": 0.02,
|
||||
"intermediate_layers_indices": [
|
||||
3,
|
||||
7,
|
||||
15,
|
||||
23,
|
||||
30
|
||||
],
|
||||
"intermediate_size": 5120,
|
||||
"is_decoder": false,
|
||||
"is_encoder_decoder": false,
|
||||
"label2id": {
|
||||
"LABEL_0": 0,
|
||||
"LABEL_1": 1
|
||||
},
|
||||
"length_penalty": 1.0,
|
||||
"max_length": 20,
|
||||
"max_num_tiles": 4,
|
||||
"min_length": 0,
|
||||
"model_type": "mllama_vision_model",
|
||||
"no_repeat_ngram_size": 0,
|
||||
"norm_eps": 1e-05,
|
||||
"num_beam_groups": 1,
|
||||
"num_beams": 1,
|
||||
"num_channels": 3,
|
||||
"num_global_layers": 8,
|
||||
"num_hidden_layers": 32,
|
||||
"num_return_sequences": 1,
|
||||
"output_attentions": false,
|
||||
"output_hidden_states": false,
|
||||
"output_scores": false,
|
||||
"pad_token_id": null,
|
||||
"patch_size": 14,
|
||||
"prefix": null,
|
||||
"problem_type": null,
|
||||
"pruned_heads": {},
|
||||
"remove_invalid_values": false,
|
||||
"repetition_penalty": 1.0,
|
||||
"return_dict": true,
|
||||
"return_dict_in_generate": false,
|
||||
"sep_token_id": null,
|
||||
"supported_aspect_ratios": [
|
||||
[
|
||||
1,
|
||||
1
|
||||
],
|
||||
[
|
||||
1,
|
||||
2
|
||||
],
|
||||
[
|
||||
1,
|
||||
3
|
||||
],
|
||||
[
|
||||
1,
|
||||
4
|
||||
],
|
||||
[
|
||||
2,
|
||||
1
|
||||
],
|
||||
[
|
||||
2,
|
||||
2
|
||||
],
|
||||
[
|
||||
3,
|
||||
1
|
||||
],
|
||||
[
|
||||
4,
|
||||
1
|
||||
]
|
||||
],
|
||||
"suppress_tokens": null,
|
||||
"task_specific_params": null,
|
||||
"temperature": 1.0,
|
||||
"tf_legacy_loss": false,
|
||||
"tie_encoder_decoder": false,
|
||||
"tie_word_embeddings": true,
|
||||
"tokenizer_class": null,
|
||||
"top_k": 50,
|
||||
"top_p": 1.0,
|
||||
"torch_dtype": "bfloat16",
|
||||
"torchscript": false,
|
||||
"typical_p": 1.0,
|
||||
"use_bfloat16": false,
|
||||
"vision_output_dim": 7680
|
||||
}
|
||||
}
|
|
@@ -0,0 +1,13 @@
{
  "bos_token_id": 128000,
  "do_sample": true,
  "eos_token_id": [
    128001,
    128008,
    128009
  ],
  "pad_token_id": 128004,
  "temperature": 0.6,
  "top_p": 0.9,
  "transformers_version": "4.45.1"
}
Binary file not shown.
Binary file not shown.
|
@ -0,0 +1,913 @@
|
|||
{
|
||||
"metadata": {
|
||||
"total_size": 21340441670
|
||||
},
|
||||
"weight_map": {
|
||||
"language_model.lm_head.weight": "model-00005-of-00005.safetensors",
|
||||
"language_model.model.embed_tokens.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.0.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.0.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.0.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.0.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.0.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.0.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.0.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.0.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.0.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.1.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.1.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.1.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.1.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.1.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.1.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.1.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.1.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.1.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.10.input_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.10.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.10.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.10.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.10.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.10.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.10.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.10.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.10.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.11.input_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.11.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.11.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.11.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.11.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.11.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.11.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.11.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.11.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.12.input_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.12.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.12.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.12.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.12.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.12.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.12.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.12.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.12.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.13.cross_attn.k_norm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.13.cross_attn.k_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.13.cross_attn.o_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.13.cross_attn.q_norm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.13.cross_attn.q_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.13.cross_attn.v_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.13.cross_attn_attn_gate": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.13.cross_attn_mlp_gate": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.13.input_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.13.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.13.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.13.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.13.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.14.input_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.14.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.14.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.14.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.14.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.14.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.14.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.14.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.14.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.15.input_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.15.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.15.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.15.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.15.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.15.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.15.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.15.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.15.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.16.input_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.16.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.16.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.16.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.16.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.16.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.16.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.16.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.16.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.17.input_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.17.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.17.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.17.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.17.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.17.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.17.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.17.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.17.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.18.cross_attn.k_norm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.18.cross_attn.k_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.18.cross_attn.o_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.18.cross_attn.q_norm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.18.cross_attn.q_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.18.cross_attn.v_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.18.cross_attn_attn_gate": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.18.cross_attn_mlp_gate": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.18.input_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.18.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.18.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.18.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.18.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.19.input_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.19.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.19.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.19.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.19.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.19.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.19.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.19.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.19.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.2.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.2.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.2.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.2.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.2.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.2.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.2.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.2.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.2.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.20.input_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.20.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.20.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.20.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.20.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.20.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.20.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.20.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.20.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.21.input_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.21.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.21.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.21.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.21.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.21.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.21.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.21.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.21.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.22.input_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.22.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.22.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.22.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.22.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.22.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.22.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.22.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.22.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.23.cross_attn.k_norm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.23.cross_attn.k_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.23.cross_attn.o_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.23.cross_attn.q_norm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.23.cross_attn.q_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.23.cross_attn.v_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.23.cross_attn_attn_gate": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.23.cross_attn_mlp_gate": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.23.input_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.23.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.23.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.23.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.23.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.24.input_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.24.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.24.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.24.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.24.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.24.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.24.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.24.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.24.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.25.input_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.25.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.25.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.25.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.25.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.25.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.25.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.25.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.25.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.26.input_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.26.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.26.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.26.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.26.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.26.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.26.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.26.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.26.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.27.input_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.27.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.27.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.27.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.27.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.27.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.27.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.27.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.27.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
|
||||
"language_model.model.layers.28.cross_attn.k_norm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.28.cross_attn.k_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.28.cross_attn.o_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.28.cross_attn.q_norm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.28.cross_attn.q_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.28.cross_attn.v_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.28.cross_attn_attn_gate": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.28.cross_attn_mlp_gate": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.28.input_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.28.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.28.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.28.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.28.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.29.input_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.29.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.29.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.29.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.29.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.29.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.29.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.29.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.29.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.3.cross_attn.k_norm.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.3.cross_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.3.cross_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.3.cross_attn.q_norm.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.3.cross_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.3.cross_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.3.cross_attn_attn_gate": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.3.cross_attn_mlp_gate": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.3.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.3.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.3.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.3.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.3.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.30.input_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.30.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.30.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.30.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.30.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.30.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.30.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.30.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.30.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.31.input_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.31.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.31.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.31.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.31.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.31.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.31.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.31.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.31.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.32.input_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.32.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.32.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.32.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.32.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.32.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.32.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.32.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.32.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.33.cross_attn.k_norm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.33.cross_attn.k_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.33.cross_attn.o_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.33.cross_attn.q_norm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.33.cross_attn.q_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.33.cross_attn.v_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.33.cross_attn_attn_gate": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.33.cross_attn_mlp_gate": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.33.input_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.33.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.33.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.33.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.33.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.34.input_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.34.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.34.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.34.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.34.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.34.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.34.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.34.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.34.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.35.input_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.35.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.35.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.35.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.35.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.35.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.35.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.35.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.35.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.36.input_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.36.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.36.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.36.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.36.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.36.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.36.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.36.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.36.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.37.input_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.37.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.37.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.37.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.37.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.37.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.37.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.37.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.37.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.38.cross_attn.k_norm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.38.cross_attn.k_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.38.cross_attn.o_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.38.cross_attn.q_norm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.38.cross_attn.q_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.38.cross_attn.v_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.38.cross_attn_attn_gate": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.38.cross_attn_mlp_gate": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.38.input_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.38.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.38.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.38.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.38.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.39.input_layernorm.weight": "model-00005-of-00005.safetensors",
|
||||
"language_model.model.layers.39.mlp.down_proj.weight": "model-00005-of-00005.safetensors",
|
||||
"language_model.model.layers.39.mlp.gate_proj.weight": "model-00005-of-00005.safetensors",
|
||||
"language_model.model.layers.39.mlp.up_proj.weight": "model-00005-of-00005.safetensors",
|
||||
"language_model.model.layers.39.post_attention_layernorm.weight": "model-00005-of-00005.safetensors",
|
||||
"language_model.model.layers.39.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.39.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.39.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.39.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
|
||||
"language_model.model.layers.4.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.4.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.4.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.4.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.4.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.4.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.4.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.4.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.4.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.5.input_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.5.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.5.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.5.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.5.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.5.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.5.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.5.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"language_model.model.layers.5.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.6.input_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.6.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.6.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.6.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.6.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.6.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.6.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.6.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.6.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.7.input_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.7.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.7.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.7.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.7.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.7.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.7.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.7.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.7.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.8.cross_attn.k_norm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.8.cross_attn.k_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.8.cross_attn.o_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.8.cross_attn.q_norm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.8.cross_attn.q_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.8.cross_attn.v_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.8.cross_attn_attn_gate": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.8.cross_attn_mlp_gate": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.8.input_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.8.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.8.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.8.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.8.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.9.input_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.9.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.9.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.9.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.9.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.9.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.9.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.9.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.layers.9.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
|
||||
"language_model.model.norm.weight": "model-00005-of-00005.safetensors",
|
||||
"multi_modal_projector.bias": "model-00005-of-00005.safetensors",
|
||||
"multi_modal_projector.weight": "model-00005-of-00005.safetensors",
|
||||
"vision_model.class_embedding": "model-00001-of-00005.safetensors",
|
||||
"vision_model.gated_positional_embedding.embedding": "model-00001-of-00005.safetensors",
|
||||
"vision_model.gated_positional_embedding.gate": "model-00001-of-00005.safetensors",
|
||||
"vision_model.gated_positional_embedding.tile_embedding.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.0.gate_attn": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.0.gate_ffn": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.0.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.0.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.0.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.0.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.0.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.0.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.0.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.0.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.0.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.0.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.0.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.0.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.1.gate_attn": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.1.gate_ffn": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.1.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.1.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.1.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.1.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.1.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.1.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.1.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.1.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.1.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.1.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.1.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.1.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.2.gate_attn": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.2.gate_ffn": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.2.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.2.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.2.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.2.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.2.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.2.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.2.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.2.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.2.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.2.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.2.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.2.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.3.gate_attn": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.3.gate_ffn": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.3.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.3.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.3.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.3.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.3.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.3.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.3.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.3.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.3.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.3.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.3.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.3.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.4.gate_attn": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.4.gate_ffn": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.4.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.4.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.4.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.4.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.4.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.4.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.4.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.4.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.4.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.4.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.4.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.4.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.5.gate_attn": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.5.gate_ffn": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.5.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.5.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.5.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.5.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.5.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.5.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.5.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.5.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.5.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.5.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.5.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.5.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.6.gate_attn": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.6.gate_ffn": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.6.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.6.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.6.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.6.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.6.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.6.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.6.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.6.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.6.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.6.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.6.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.6.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.7.gate_attn": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.7.gate_ffn": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.7.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.7.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.7.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.7.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.7.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.7.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.7.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.7.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.7.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.7.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.7.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.global_transformer.layers.7.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.layernorm_post.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.layernorm_post.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.layernorm_pre.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.layernorm_pre.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.patch_embedding.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.post_tile_positional_embedding.embedding.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.post_tile_positional_embedding.gate": "model-00001-of-00005.safetensors",
|
||||
"vision_model.pre_tile_positional_embedding.embedding.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.pre_tile_positional_embedding.gate": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.0.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.0.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.0.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.0.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.0.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.0.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.0.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.0.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.0.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.0.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.0.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.0.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.1.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.1.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.1.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.1.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.1.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.1.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.1.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.1.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.1.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.1.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.1.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.1.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.10.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.10.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.10.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.10.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.10.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.10.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.10.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.10.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.10.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.10.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.10.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.10.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.11.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.11.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.11.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.11.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.11.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.11.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.11.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.11.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.11.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.11.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.11.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.11.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.12.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.12.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.12.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.12.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.12.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.12.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.12.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.12.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.12.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.12.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.12.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.12.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.13.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.13.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.13.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.13.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.13.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.13.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.13.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.13.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.13.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.13.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.13.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.13.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.14.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.14.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.14.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.14.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.14.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.14.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.14.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.14.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.14.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.14.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.14.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.14.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.15.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.15.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.15.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.15.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.15.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.15.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.15.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.15.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.15.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.15.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.15.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.15.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.16.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.16.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.16.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.16.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.16.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.16.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.16.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.16.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.16.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.16.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.16.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.16.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.17.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.17.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.17.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.17.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.17.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.17.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.17.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.17.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.17.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.17.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.17.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.17.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.18.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.18.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.18.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.18.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.18.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.18.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.18.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.18.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.18.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.18.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.18.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.18.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.19.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.19.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.19.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.19.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.19.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.19.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.19.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.19.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.19.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.19.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.19.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.19.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.2.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.2.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.2.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.2.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.2.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.2.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.2.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.2.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.2.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.2.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.2.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.2.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.20.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.20.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.20.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.20.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.20.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.20.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.20.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.20.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.20.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.20.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.20.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.20.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.21.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.21.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.21.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.21.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.21.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.21.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.21.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.21.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.21.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.21.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.21.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.21.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.22.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.22.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.22.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.22.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.22.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.22.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.22.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.22.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.22.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.22.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.22.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.22.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.23.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.23.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.23.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.23.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.23.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.23.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.23.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.23.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.23.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.23.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.23.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.23.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.24.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.24.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.24.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.24.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.24.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.24.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.24.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.24.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.24.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.24.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.24.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.24.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.25.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.25.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.25.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.25.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.25.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.25.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.25.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.25.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.25.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.25.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.25.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.25.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.26.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.26.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.26.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.26.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.26.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.26.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.26.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.26.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.26.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.26.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.26.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.26.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.27.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.27.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.27.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.27.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.27.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.27.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.27.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.27.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.27.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.27.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.27.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.27.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.28.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.28.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.28.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.28.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.28.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.28.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.28.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.28.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.28.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.28.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.28.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.28.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.29.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.29.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.29.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.29.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.29.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.29.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.29.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.29.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.29.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.29.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.29.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.29.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.3.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.3.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.3.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.3.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.3.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.3.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.3.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.3.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.3.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.3.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.3.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.3.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.30.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.30.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.30.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.30.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.30.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.30.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.30.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.30.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.30.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.30.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.30.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.30.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.31.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.31.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.31.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.31.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.31.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.31.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.31.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.31.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.31.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.31.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.31.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.31.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.4.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.4.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.4.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.4.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.4.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.4.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.4.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.4.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.4.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.4.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.4.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.4.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.5.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.5.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.5.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.5.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.5.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.5.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.5.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.5.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.5.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.5.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.5.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.5.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.6.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.6.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.6.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.6.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.6.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.6.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.6.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.6.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.6.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.6.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.6.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.6.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.7.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.7.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.7.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.7.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.7.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.7.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.7.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.7.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.7.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.7.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.7.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.7.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.8.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.8.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.8.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.8.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.8.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.8.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.8.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.8.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.8.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.8.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.8.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.8.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.9.input_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.9.input_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.9.mlp.fc1.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.9.mlp.fc1.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.9.mlp.fc2.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.9.mlp.fc2.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.9.post_attention_layernorm.bias": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.9.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.9.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.9.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.9.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
|
||||
"vision_model.transformer.layers.9.self_attn.v_proj.weight": "model-00001-of-00005.safetensors"
|
||||
}
|
||||
}
|
|
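The entries above are the tail of the sharded-checkpoint index (conventionally `model.safetensors.index.json`), whose `weight_map` records which `.safetensors` shard holds each tensor. The sketch below is only illustrative of how that index can be read by hand; the local directory `./LlamaV-o1` is a hypothetical download path, and in normal use `from_pretrained` in 🤗 Transformers resolves the shards automatically.

```python
# Illustrative sketch: resolve a tensor name to its shard via the index.
# Assumes the checkpoint files were downloaded to ./LlamaV-o1 (hypothetical path).
import json
from safetensors.torch import load_file

ckpt_dir = "./LlamaV-o1"
with open(f"{ckpt_dir}/model.safetensors.index.json") as f:
    index = json.load(f)

weight_map = index["weight_map"]
name = "vision_model.patch_embedding.weight"
shard = weight_map[name]  # e.g. "model-00001-of-00005.safetensors"

tensors = load_file(f"{ckpt_dir}/{shard}")  # load just that shard
print(name, tuple(tensors[name].shape))
```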
@@ -0,0 +1,26 @@
{
"do_convert_rgb": true,
"do_normalize": true,
"do_pad": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "MllamaImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"max_image_tiles": 4,
"processor_class": "MllamaProcessor",
"resample": 2,
"rescale_factor": 0.00392156862745098,
"size": {
"height": 560,
"width": 560
}
}
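The block above is the image-preprocessing configuration consumed by `MllamaImageProcessor`: inputs are resized to 560×560 tiles (up to `max_image_tiles` = 4), rescaled by 1/255, and normalized with the listed CLIP-style mean and std. The snippet below is a rough sketch of the rescale-and-normalize arithmetic implied by these settings, not the processor implementation itself.

```python
# Rough sketch of the do_rescale + do_normalize steps implied by the config above.
import numpy as np

image_mean = np.array([0.48145466, 0.4578275, 0.40821073])
image_std = np.array([0.26862954, 0.26130258, 0.27577711])
rescale_factor = 1 / 255  # 0.00392156862745098

def rescale_and_normalize(pixels_uint8: np.ndarray) -> np.ndarray:
    """pixels_uint8: H x W x 3 array with values in [0, 255]."""
    x = pixels_uint8.astype(np.float32) * rescale_factor  # do_rescale
    return (x - image_mean) / image_std                   # do_normalize

dummy = np.random.randint(0, 256, size=(560, 560, 3), dtype=np.uint8)
print(rescale_and_normalize(dummy).shape)  # (560, 560, 3)
```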
@@ -0,0 +1,23 @@
{
"bos_token": {
"content": "<|begin_of_text|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "<|eot_id|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<|finetune_right_pad_id|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}
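The special-tokens map above pins the beginning-of-text, end-of-turn, and padding tokens used at inference. A minimal sketch of how these surface once the tokenizer is loaded (the path `./LlamaV-o1` is a placeholder for wherever the checkpoint lives):

```python
# Minimal sketch: the special tokens above become tokenizer attributes.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./LlamaV-o1")  # placeholder path
print(tokenizer.bos_token)  # <|begin_of_text|>
print(tokenizer.eos_token)  # <|eot_id|>
print(tokenizer.pad_token)  # <|finetune_right_pad_id|>
```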
Binary file not shown.
File diff suppressed because one or more lines are too long