first commit

xxl 2024-12-26 14:15:45 +08:00
parent a9ea7ee6fe
commit a92ce365d7
18 changed files with 842665 additions and 2 deletions

NOTICE Normal file

@@ -0,0 +1,14 @@
Copyright (C) 2024 AIDC-AI
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
This model was trained based on the following models:
1. Gemma (https://huggingface.co/google/gemma-2-9b-it), license: (https://ai.google.dev/gemma/terms). Gemma is provided under and subject to the Gemma Terms of Use found at https://ai.google.dev/gemma/terms.
2. Siglip (https://huggingface.co/google/siglip-so400m-patch14-384), license: (https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md, SPDX-License-Identifier: Apache-2.0).

README.md

@@ -1,3 +1,157 @@
---
license: apache-2.0
datasets:
- AIDC-AI/Ovis-dataset
library_name: transformers
tags:
- MLLM
pipeline_tag: image-text-to-text
language:
- en
studios:
- AIDC-AI/Ovis1.6-Gemma2-9B
---
# Ovis1.6-Gemma2-9B
<div align="center">
<img src="https://modelscope.oss-cn-beijing.aliyuncs.com/resource/ovis_logo.png" width="30%"/>
</div>
## Introduction
[GitHub](https://github.com/AIDC-AI/Ovis) | [Demo](https://modelscope.cn/studios/AIDC-AI/Ovis1.6-Gemma2-9B) | [Paper](https://arxiv.org/abs/2405.20797)
We are excited to announce the open-sourcing of **Ovis1.6**, our latest multi-modal large language model. Ovis is a novel Multimodal Large Language Model (MLLM) architecture designed to structurally align visual and textual embeddings.
<div align="center">
<img src="https://modelscope.oss-cn-beijing.aliyuncs.com/resource/Ovisarchitecture.png" width="100%" />
</div>
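At a high level (per the Ovis paper), each visual patch is mapped to a probability distribution over a learnable visual vocabulary, and its embedding is taken as the expectation over a visual embedding table, mirroring how text token ids index the text embedding table. The sketch below is an illustration only, not the actual modeling code; names are assumptions, and the toy sizes are scaled down from this commit's `config.json` (visual `vocab_size` 65536, `tokenize_function` "softmax", LLM `hidden_size` 3584).

```python
import torch
import torch.nn.functional as F

# Toy sizes; the real model uses visual vocab 65536, LLM hidden 3584, SigLIP dim 1152.
visual_vocab, llm_hidden, vit_dim, num_patches = 1024, 256, 128, 9

head = torch.nn.Linear(vit_dim, visual_vocab, bias=False)          # patch feature -> visual-token logits
visual_embed_table = torch.nn.Embedding(visual_vocab, llm_hidden)  # learnable, like the text embedding table

patch_features = torch.randn(num_patches, vit_dim)  # stand-in for SigLIP outputs
probs = F.softmax(head(patch_features), dim=-1)     # soft, probabilistic "visual tokens"
visual_embeds = probs @ visual_embed_table.weight   # expectation over the embedding table
print(visual_embeds.shape)                          # torch.Size([9, 256]); these rows join the text embeddings
```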
## Model
Built upon Ovis1.5, **Ovis1.6** further enhances high-resolution image processing, is trained on a larger, more diverse, and higher-quality dataset, and refines the training process with DPO after instruction tuning.
| Ovis MLLMs | ViT | LLM | Model Weights | Demo |
|:------------------|:-----------:|:------------------:|:---------------------------------------------------------------:|:----------------------------------------------------------------:|
| Ovis1.6-Gemma2-9B | Siglip-400M | Gemma2-9B-It | [ModelScope](https://modelscope.cn/models/AIDC-AI/Ovis1.6-Gemma2-9B) | [Studio](https://modelscope.cn/studios/AIDC-AI/Ovis1.6-Gemma2-9B) |
## Performance
With just **10B** parameters, **Ovis1.6-Gemma2-9B** leads the [OpenCompass](https://github.com/open-compass/VLMEvalKit) benchmark among open-source MLLMs under **30B** parameters.
<div align="center">
<img src="https://modelscope.oss-cn-beijing.aliyuncs.com/resource/Ovis_benchmark.png" width="100%" />
</div>
## Usage
Below is a code snippet to run Ovis with multimodal inputs. For additional usage instructions, including the inference wrapper and Gradio UI, please refer to the [Ovis GitHub](https://github.com/AIDC-AI/Ovis?tab=readme-ov-file#inference) repository.
```bash
pip install torch==2.2.0 transformers==4.44.2 numpy==1.24.3 pillow==10.3.0
```
```python
import torch
from PIL import Image
from modelscope import AutoModelForCausalLM

# load model
model = AutoModelForCausalLM.from_pretrained("AIDC-AI/Ovis1.6-Gemma2-9B",
                                             torch_dtype=torch.bfloat16,
                                             multimodal_max_length=8192,
                                             trust_remote_code=True).cuda()
text_tokenizer = model.get_text_tokenizer()
visual_tokenizer = model.get_visual_tokenizer()

# enter image path and prompt
image_path = input("Enter image path: ")
image = Image.open(image_path)
text = input("Enter prompt: ")
query = f'<image>\n{text}'

# format conversation
prompt, input_ids, pixel_values = model.preprocess_inputs(query, [image])
attention_mask = torch.ne(input_ids, text_tokenizer.pad_token_id)
input_ids = input_ids.unsqueeze(0).to(device=model.device)
attention_mask = attention_mask.unsqueeze(0).to(device=model.device)
pixel_values = [pixel_values.to(dtype=visual_tokenizer.dtype, device=visual_tokenizer.device)]

# generate output
with torch.inference_mode():
    gen_kwargs = dict(
        max_new_tokens=1024,
        do_sample=False,
        top_p=None,
        top_k=None,
        temperature=None,
        repetition_penalty=None,
        eos_token_id=model.generation_config.eos_token_id,
        pad_token_id=text_tokenizer.pad_token_id,
        use_cache=True
    )
    output_ids = model.generate(input_ids, pixel_values=pixel_values, attention_mask=attention_mask, **gen_kwargs)[0]
    output = text_tokenizer.decode(output_ids, skip_special_tokens=True)
    print(f'Output:\n{output}')
```
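The generation settings above decode greedily: with `do_sample=False`, the explicit `None` values for `top_p`, `top_k`, `temperature`, and `repetition_penalty` clear any sampling defaults inherited from the generation config, so a given image-prompt pair always yields the same output.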
<details>
<summary>Batch inference</summary>

```python
batch_inputs = [
    ('example_image1.jpeg', 'Describe the content of this image.'),
    ('example_image2.jpeg', 'What is the equation in the image?')
]

batch_input_ids = []
batch_attention_mask = []
batch_pixel_values = []

for image_path, text in batch_inputs:
    image = Image.open(image_path)
    query = f'<image>\n{text}'
    prompt, input_ids, pixel_values = model.preprocess_inputs(query, [image])
    attention_mask = torch.ne(input_ids, text_tokenizer.pad_token_id)
    input_ids = input_ids.unsqueeze(0).to(device=model.device)
    attention_mask = attention_mask.unsqueeze(0).to(device=model.device)
    pixel_values = [pixel_values.to(dtype=visual_tokenizer.dtype, device=visual_tokenizer.device)]
    batch_input_ids.append(input_ids.squeeze())
    batch_attention_mask.append(attention_mask.squeeze())
    batch_pixel_values.append(pixel_values)

pad_batch_input_ids = torch.nn.utils.rnn.pad_sequence([i.flip(dims=[0]) for i in batch_input_ids],
                                                      batch_first=True, padding_value=0.0).flip(dims=[1])
pad_batch_input_ids = pad_batch_input_ids[:, -model.config.multimodal_max_length:]
pad_batch_attention_mask = torch.nn.utils.rnn.pad_sequence([i.flip(dims=[0]) for i in batch_attention_mask],
                                                           batch_first=True, padding_value=False).flip(dims=[1])
pad_batch_attention_mask = pad_batch_attention_mask[:, -model.config.multimodal_max_length:]
pad_batch_pixel_values = [item for sublist in batch_pixel_values for item in sublist]

# generate output
with torch.inference_mode():
    gen_kwargs = dict(
        max_new_tokens=1024,
        do_sample=False,
        top_p=None,
        top_k=None,
        temperature=None,
        repetition_penalty=None,
        eos_token_id=model.generation_config.eos_token_id,
        pad_token_id=text_tokenizer.pad_token_id,
        use_cache=True
    )
    output_ids = model.generate(pad_batch_input_ids, pixel_values=pad_batch_pixel_values,
                                attention_mask=pad_batch_attention_mask, **gen_kwargs)

for i in range(len(batch_input_ids)):
    output = text_tokenizer.decode(output_ids[i], skip_special_tokens=True)
    print(f'Output_{i}:\n{output}')
```
</details>
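The `flip`–`pad_sequence`–`flip` pattern above implements left padding, which decoder-only generation requires: new tokens are appended on the right, so padding must sit to the left of the shorter sequences. A minimal self-contained illustration (toy values, independent of Ovis):

```python
import torch
from torch.nn.utils.rnn import pad_sequence

seqs = [torch.tensor([1, 2, 3]), torch.tensor([4, 5])]
# pad_sequence only right-pads, so reverse each sequence, pad, then reverse back
left_padded = pad_sequence([s.flip(dims=[0]) for s in seqs],
                           batch_first=True, padding_value=0).flip(dims=[1])
print(left_padded)  # tensor([[1, 2, 3],
                    #         [0, 4, 5]])
```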
## Citation
If you find Ovis useful, please cite the paper:
```bibtex
@article{lu2024ovis,
  title={Ovis: Structural Embedding Alignment for Multimodal Large Language Model},
  author={Shiyin Lu and Yang Li and Qing-Guo Chen and Zhao Xu and Weihua Luo and Kaifu Zhang and Han-Jia Ye},
  year={2024},
  journal={arXiv:2405.20797}
}
```
## License
The project is licensed under the Apache License 2.0, and use is restricted to purposes that comply with the license agreements of Gemma2 and Siglip.

config.json Normal file

@@ -0,0 +1,248 @@
{
  "architectures": [
    "Ovis"
  ],
  "auto_map": {
    "AutoConfig": "configuration_ovis.OvisConfig",
    "AutoModelForCausalLM": "modeling_ovis.Ovis"
  },
  "conversation_formatter_class": "GemmaConversationFormatter",
  "disable_tie_weight": false,
  "hidden_size": 3584,
  "llm_attn_implementation": "eager",
  "llm_config": {
    "_name_or_path": "google/gemma-2-9b-it",
    "add_cross_attention": false,
    "architectures": [
      "Gemma2ForCausalLM"
    ],
    "attention_bias": false,
    "attention_dropout": 0.0,
    "attn_logit_softcapping": 50.0,
    "bad_words_ids": null,
    "begin_suppress_tokens": null,
    "bos_token_id": 2,
    "cache_implementation": "hybrid",
    "chunk_size_feed_forward": 0,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "early_stopping": false,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": 1,
    "exponential_decay_length_penalty": null,
    "final_logit_softcapping": 30.0,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "head_dim": 256,
    "hidden_act": "gelu_pytorch_tanh",
    "hidden_activation": "gelu_pytorch_tanh",
    "hidden_size": 3584,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "initializer_range": 0.02,
    "intermediate_size": 14336,
    "is_decoder": false,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "length_penalty": 1.0,
    "max_length": 20,
    "max_position_embeddings": 8192,
    "min_length": 0,
    "model_type": "gemma2",
    "no_repeat_ngram_size": 0,
    "num_attention_heads": 16,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_hidden_layers": 42,
    "num_key_value_heads": 8,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": 0,
    "prefix": null,
    "problem_type": null,
    "pruned_heads": {},
    "query_pre_attn_scalar": 256,
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": true,
    "return_dict_in_generate": false,
    "rms_norm_eps": 1e-06,
    "rope_theta": 10000.0,
    "sep_token_id": null,
    "sliding_window": 4096,
    "sliding_window_size": 4096,
    "suppress_tokens": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tf_legacy_loss": false,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": true,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "torch_dtype": "bfloat16",
    "torchscript": false,
    "typical_p": 1.0,
    "use_bfloat16": false,
    "use_cache": true,
    "vocab_size": 256000
  },
  "model_type": "ovis",
  "multimodal_max_length": 8192,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.44.2",
  "use_cache": true,
  "visual_tokenizer_config": {
    "_name_or_path": "",
    "add_cross_attention": false,
    "architectures": null,
    "backbone_config": {
      "_name_or_path": "google/siglip-so400m-patch14-384",
      "add_cross_attention": false,
      "architectures": null,
      "attention_dropout": 0.0,
      "bad_words_ids": null,
      "begin_suppress_tokens": null,
      "bos_token_id": null,
      "chunk_size_feed_forward": 0,
      "cross_attention_hidden_size": null,
      "decoder_start_token_id": null,
      "diversity_penalty": 0.0,
      "do_sample": false,
      "early_stopping": false,
      "encoder_no_repeat_ngram_size": 0,
      "eos_token_id": null,
      "exponential_decay_length_penalty": null,
      "finetuning_task": null,
      "forced_bos_token_id": null,
      "forced_eos_token_id": null,
      "hidden_act": "gelu_pytorch_tanh",
      "hidden_size": 1152,
      "id2label": {
        "0": "LABEL_0",
        "1": "LABEL_1"
      },
      "image_size": 384,
      "intermediate_size": 4304,
      "is_decoder": false,
      "is_encoder_decoder": false,
      "label2id": {
        "LABEL_0": 0,
        "LABEL_1": 1
      },
      "layer_norm_eps": 1e-06,
      "length_penalty": 1.0,
      "max_length": 20,
      "min_length": 0,
      "model_type": "siglip_vision_model",
      "no_repeat_ngram_size": 0,
      "num_attention_heads": 16,
      "num_beam_groups": 1,
      "num_beams": 1,
      "num_channels": 3,
      "num_hidden_layers": 27,
      "num_return_sequences": 1,
      "output_attentions": false,
      "output_hidden_states": false,
      "output_scores": false,
      "pad_token_id": null,
      "patch_size": 14,
      "prefix": null,
      "problem_type": null,
      "pruned_heads": {},
      "remove_invalid_values": false,
      "repetition_penalty": 1.0,
      "return_dict": true,
      "return_dict_in_generate": false,
      "sep_token_id": null,
      "suppress_tokens": null,
      "task_specific_params": null,
      "temperature": 1.0,
      "tf_legacy_loss": false,
      "tie_encoder_decoder": false,
      "tie_word_embeddings": true,
      "tokenizer_class": null,
      "top_k": 50,
      "top_p": 1.0,
      "torch_dtype": null,
      "torchscript": false,
      "typical_p": 1.0,
      "use_bfloat16": false
    },
    "backbone_kwargs": {},
    "bad_words_ids": null,
    "begin_suppress_tokens": null,
    "bos_token_id": null,
    "chunk_size_feed_forward": 0,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "depths": null,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "drop_cls_token": false,
    "early_stopping": false,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": null,
    "exponential_decay_length_penalty": null,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "hidden_stride": 2,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "is_decoder": false,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "length_penalty": 1.0,
    "max_length": 20,
    "min_length": 0,
    "model_type": "siglip_visual_tokenizer",
    "no_repeat_ngram_size": 0,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": null,
    "prefix": null,
    "problem_type": null,
    "pruned_heads": {},
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": true,
    "return_dict_in_generate": false,
    "sep_token_id": null,
    "suppress_tokens": null,
    "task_specific_params": null,
    "tau": 1.0,
    "temperature": 1.0,
    "tf_legacy_loss": false,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": true,
    "tokenize_function": "softmax",
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "torch_dtype": null,
    "torchscript": false,
    "typical_p": 1.0,
    "use_bfloat16": false,
    "vocab_size": 65536
  }
}
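As a quick sanity check, the `auto_map` entries above route `AutoConfig`/`AutoModelForCausalLM` to the custom classes shipped in this commit. A minimal sketch (assuming ModelScope's `AutoConfig` re-export and network access to the repo; the Hugging Face `transformers` equivalent works the same way):

```python
from modelscope import AutoConfig

cfg = AutoConfig.from_pretrained("AIDC-AI/Ovis1.6-Gemma2-9B", trust_remote_code=True)
print(type(cfg).__name__)                      # OvisConfig, from configuration_ovis.py
print(cfg.llm_config.model_type)               # gemma2
print(cfg.visual_tokenizer_config.model_type)  # siglip_visual_tokenizer
```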

configuration.json Normal file

@@ -0,0 +1 @@
{"framework":"Pytorch","task":"visual-question-answering", "pipeline":{"type":"ovis-vl"},"allow_remote":true}

configuration_ovis.py Normal file

@@ -0,0 +1,201 @@
from abc import ABC, abstractmethod
from typing import List, Dict, Union, Optional

from transformers import PretrainedConfig, AutoConfig

# Special placeholder ids; negative values cannot collide with real vocab ids.
IGNORE_ID = -100
IMAGE_TOKEN_ID = -200
IMAGE_TOKEN = "<image>"
IMAGE_ATOM_ID = -300
IMAGE_INDICATOR_IDS = [-301, -302, -303, -304, -305]


# ----------------------------------------------------------------------
# Visual Tokenizer Configuration
# ----------------------------------------------------------------------
class BaseVisualTokenizerConfig(PretrainedConfig):
    def __init__(
        self,
        vocab_size=16384,
        tokenize_function="softmax",
        tau=1.0,
        depths=None,
        drop_cls_token=False,
        backbone_config: Optional[Union[PretrainedConfig, dict]] = None,
        hidden_stride: int = 1,
        **kwargs
    ):
        super().__init__(**kwargs)
        self.vocab_size = vocab_size
        self.tokenize_function = tokenize_function
        self.tau = tau
        if isinstance(depths, str):
            depths = [int(x) for x in depths.split('|')]
        self.depths = depths
        self.backbone_kwargs = {}
        self.drop_cls_token = drop_cls_token
        if backbone_config is not None:
            assert isinstance(backbone_config, (PretrainedConfig, dict)), \
                f"expect `backbone_config` to be instance of PretrainedConfig or dict, but got {type(backbone_config)} type"
            if not isinstance(backbone_config, PretrainedConfig):
                model_type = backbone_config['model_type']
                backbone_config.pop('model_type')
                backbone_config = AutoConfig.for_model(model_type, **backbone_config)
        self.backbone_config = backbone_config
        self.hidden_stride = hidden_stride


class SiglipVisualTokenizerConfig(BaseVisualTokenizerConfig):
    model_type = "siglip_visual_tokenizer"

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        if self.drop_cls_token:
            self.drop_cls_token = False  # SigLIP has no [CLS] token to drop
        if self.depths:
            assert len(self.depths) == 1
            self.backbone_kwargs['num_hidden_layers'] = self.depths[0]


AutoConfig.register("siglip_visual_tokenizer", SiglipVisualTokenizerConfig)


# ----------------------------------------------------------------------
# Ovis Configuration
# ----------------------------------------------------------------------
class OvisConfig(PretrainedConfig):
    model_type = "ovis"

    def __init__(
        self,
        llm_config: Optional[Union[PretrainedConfig, dict]] = None,
        visual_tokenizer_config: Optional[Union[PretrainedConfig, dict]] = None,
        multimodal_max_length=8192,
        hidden_size=None,
        conversation_formatter_class=None,
        llm_attn_implementation=None,
        disable_tie_weight=False,
        **kwargs
    ):
        super().__init__(**kwargs)
        if llm_config is not None:
            assert isinstance(llm_config, (PretrainedConfig, dict)), \
                f"expect `llm_config` to be instance of PretrainedConfig or dict, but got {type(llm_config)} type"
            if not isinstance(llm_config, PretrainedConfig):
                model_type = llm_config['model_type']
                llm_config.pop('model_type')
                llm_config = AutoConfig.for_model(model_type, **llm_config)
        self.llm_config = llm_config
        if visual_tokenizer_config is not None:
            assert isinstance(visual_tokenizer_config, (PretrainedConfig, dict)), \
                f"expect `visual_tokenizer_config` to be instance of PretrainedConfig or dict, but got {type(visual_tokenizer_config)} type"
            if not isinstance(visual_tokenizer_config, PretrainedConfig):
                model_type = visual_tokenizer_config['model_type']
                visual_tokenizer_config.pop('model_type')
                visual_tokenizer_config = AutoConfig.for_model(model_type, **visual_tokenizer_config)
        self.visual_tokenizer_config = visual_tokenizer_config
        self.multimodal_max_length = multimodal_max_length
        self.hidden_size = hidden_size
        self.conversation_formatter_class = conversation_formatter_class
        self.llm_attn_implementation = llm_attn_implementation
        self.disable_tie_weight = disable_tie_weight


# ----------------------------------------------------------------------
# Conversation Formatter
# ----------------------------------------------------------------------
class ConversationFormatter(ABC):
    support_tokenizer_types = None

    def __init__(self, tokenizer):
        tokenizer_type = type(tokenizer).__name__
        assert tokenizer_type in self.support_tokenizer_types, \
            f'Invalid tokenizer type, expected one from `{self.support_tokenizer_types}`, but got `{tokenizer_type}`'
        self.tokenizer = tokenizer
        self.image_token = IMAGE_TOKEN
        self.image_token_id = IMAGE_TOKEN_ID
        self.ignore_id = IGNORE_ID

    def _tokenize_with_image_symbol(self, text):
        # Tokenize around `<image>` and splice the placeholder id in between chunks.
        text_chunks = [self.tokenizer(chunk, add_special_tokens=False).input_ids for chunk in
                       text.split(self.image_token)]
        token_ids = []
        num_chuck = len(text_chunks)
        for i, chunk in enumerate(text_chunks):
            token_ids.extend(chunk)
            if i < num_chuck - 1:
                token_ids.append(self.image_token_id)
        return token_ids

    @abstractmethod
    def format(self, conversations: List[Dict], generation_preface=None):
        pass

    @abstractmethod
    def format_query(self, query, generation_preface=""):
        pass


class GemmaConversationFormatter(ConversationFormatter):
    support_tokenizer_types = ['GemmaTokenizer', 'GemmaTokenizerFast']

    def __init__(self, tokenizer):
        super().__init__(tokenizer)
        # Gemma does not support system prompt
        self.from2role = {
            "human": "<start_of_turn>user\n",
            "gpt": "<start_of_turn>model\n",
        }
        self.gpt_token_num = None
        self.im_end = "<end_of_turn>\n"
        self.bos_token = "<bos>"
        self.bos_token_ids = None

    def format(self, conversations: List[Dict], generation_preface=None):
        if self.gpt_token_num is None:
            self.gpt_token_num = len(self.tokenizer(self.from2role["gpt"], add_special_tokens=False).input_ids)
        if self.bos_token_ids is None:
            self.bos_token_ids = self.tokenizer(self.bos_token, add_special_tokens=False).input_ids
        if conversations[0]["from"] == "system":
            raise ValueError("Gemma does not support system prompt")
        if generation_preface is not None:
            conversations.append({
                "from": "gpt",
                "value": generation_preface
            })
        prompt = "" + self.bos_token
        input_ids = [] + self.bos_token_ids
        labels = [] + [IGNORE_ID] * len(input_ids)
        num_conversation = len(conversations)
        for i, conversation in enumerate(conversations):
            frm = conversation["from"]
            role = self.from2role[frm]
            message = conversation["value"].strip()
            text = role + message
            if i < num_conversation - 1 or generation_preface is None:
                text += self.im_end
            prompt += text
            token_ids = self._tokenize_with_image_symbol(text)
            input_ids.extend(token_ids)
            label_ids = [self.ignore_id] * len(token_ids)
            if frm == "gpt":
                # learning `\n` following `im_end` is meaningless, so the last `\n` token is ignored in label
                label_ids[self.gpt_token_num:-1] = token_ids[self.gpt_token_num:-1]
            labels.extend(label_ids)
        assert self._tokenize_with_image_symbol(prompt) == input_ids
        assert len(input_ids) == len(labels)
        return prompt, input_ids, labels

    def format_query(self, query, generation_preface=""):
        prompt, input_ids, _ = self.format([{
            "from": "human",
            "value": query
        }], generation_preface=generation_preface)
        return prompt, input_ids
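
For reference, tracing `format_query('<image>\nDescribe the image.')` by hand (a worked example, not program output) shows the Gemma-style prompt the formatter assembles; in the returned `input_ids`, the `<image>` placeholder becomes `IMAGE_TOKEN_ID` (-200) rather than a real vocabulary id:

```
<bos><start_of_turn>user
<image>
Describe the image.<end_of_turn>
<start_of_turn>model
```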

generation_config.json Normal file

@@ -0,0 +1,11 @@
{
  "_from_model_config": true,
  "bos_token_id": 2,
  "cache_implementation": "hybrid",
  "eos_token_id": [
    1,
    107
  ],
  "pad_token_id": 0,
  "transformers_version": "4.44.2"
}
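The two `eos_token_id` values allow generation to stop on either Gemma's `<eos>` token (id 1) or its `<end_of_turn>` marker (id 107), the turn delimiter used by `GemmaConversationFormatter` above.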

model-00001-of-00005.safetensors (Stored with Git LFS) Normal file

Binary file not shown.

model-00002-of-00005.safetensors (Stored with Git LFS) Normal file

Binary file not shown.

model-00003-of-00005.safetensors (Stored with Git LFS) Normal file

Binary file not shown.

model-00004-of-00005.safetensors (Stored with Git LFS) Normal file

Binary file not shown.

model-00005-of-00005.safetensors (Stored with Git LFS) Normal file

Binary file not shown.

model.safetensors.index.json Normal file

@@ -0,0 +1,923 @@
{
"metadata": {
"total_size": 20413821036
},
"weight_map": {
"llm.model.embed_tokens.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.0.input_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.0.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.0.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.0.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.0.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.0.post_feedforward_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.0.pre_feedforward_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.0.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.0.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.0.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.0.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.1.input_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.1.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.1.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.1.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.1.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.1.post_feedforward_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.1.pre_feedforward_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.1.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.1.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.1.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.1.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.10.input_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.10.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.10.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.10.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.10.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.10.post_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.10.pre_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.10.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.10.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.10.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.10.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.11.input_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.11.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.11.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.11.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.11.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.11.post_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.11.pre_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.11.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.11.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.11.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.11.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.12.input_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.12.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.12.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.12.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.12.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.12.post_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.12.pre_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.12.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.12.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.12.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.12.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.13.input_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.13.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.13.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.13.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.13.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.13.post_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.13.pre_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.13.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.13.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.13.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.13.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.14.input_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.14.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.14.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.14.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.14.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.14.post_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.14.pre_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.14.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.14.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.14.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.14.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.15.input_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.15.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.15.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.15.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.15.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.15.post_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.15.pre_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.15.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.15.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.15.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.15.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.16.input_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.16.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.16.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.16.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.16.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.16.post_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.16.pre_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.16.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.16.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.16.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.16.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.17.input_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.17.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.17.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.17.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.17.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.17.post_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.17.pre_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.17.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.17.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.17.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.17.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.18.input_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.18.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.18.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.18.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.18.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.18.post_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.18.pre_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.18.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.18.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.18.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.18.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.19.input_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.19.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.19.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.19.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.19.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.19.post_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.19.pre_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.19.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.19.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.19.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.19.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.2.input_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.2.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.2.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.2.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.2.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.2.post_feedforward_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.2.pre_feedforward_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.2.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.2.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.2.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.2.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.20.input_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.20.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.20.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.20.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.20.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.20.post_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.20.pre_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.20.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.20.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.20.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.20.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.21.input_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.21.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.21.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.21.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.21.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.21.post_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.21.pre_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.21.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.21.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.21.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.21.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.22.input_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.22.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.22.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.22.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.22.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.22.post_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.22.pre_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.22.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.22.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.22.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.22.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.23.input_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.23.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.23.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.23.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.23.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.23.post_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.23.pre_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.23.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.23.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.23.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.23.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.24.input_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.24.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.24.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.24.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.24.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.24.post_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.24.pre_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.24.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.24.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.24.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.24.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.25.input_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.25.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.25.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.25.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.25.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.25.post_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.25.pre_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.25.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.25.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.25.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.25.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.26.input_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.26.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.26.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.26.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.26.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.26.post_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.26.pre_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.26.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.26.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.26.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.26.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.27.input_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.27.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.27.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.27.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.27.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.27.post_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.27.pre_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.27.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.27.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.27.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.27.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.28.input_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.28.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.28.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.28.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.28.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.28.post_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.28.pre_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.28.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.28.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.28.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.28.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.29.input_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.29.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.29.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.29.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.29.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.29.post_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.29.pre_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.29.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.29.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.29.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.29.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.3.input_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.3.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.3.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.3.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.3.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.3.post_feedforward_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.3.pre_feedforward_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.3.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.3.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.3.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.3.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.30.input_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.30.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.30.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.30.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.30.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.30.post_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.30.pre_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.30.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.30.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.30.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.30.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.31.input_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.31.mlp.down_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.31.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.31.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.31.post_attention_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.31.post_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.31.pre_feedforward_layernorm.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.31.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.31.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.31.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.31.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.32.input_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.32.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.32.mlp.gate_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.32.mlp.up_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.32.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.32.post_feedforward_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.32.pre_feedforward_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.32.self_attn.k_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.32.self_attn.o_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.32.self_attn.q_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.32.self_attn.v_proj.weight": "model-00003-of-00005.safetensors",
"llm.model.layers.33.input_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.33.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.33.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.33.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.33.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.33.post_feedforward_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.33.pre_feedforward_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.33.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.33.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.33.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.33.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.34.input_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.34.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.34.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.34.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.34.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.34.post_feedforward_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.34.pre_feedforward_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.34.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.34.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.34.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.34.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.35.input_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.35.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.35.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.35.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.35.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.35.post_feedforward_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.35.pre_feedforward_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.35.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.35.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.35.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.35.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.36.input_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.36.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.36.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.36.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.36.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.36.post_feedforward_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.36.pre_feedforward_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.36.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.36.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.36.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.36.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.37.input_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.37.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.37.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.37.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.37.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.37.post_feedforward_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.37.pre_feedforward_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.37.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.37.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.37.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.37.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.38.input_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.38.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.38.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.38.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.38.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.38.post_feedforward_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.38.pre_feedforward_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.38.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.38.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.38.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.38.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.39.input_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.39.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.39.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.39.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.39.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.39.post_feedforward_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.39.pre_feedforward_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.39.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.39.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.39.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.39.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.4.input_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.4.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.4.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.4.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.4.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.4.post_feedforward_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.4.pre_feedforward_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.4.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.4.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.4.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.4.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.40.input_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.40.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.40.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.40.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.40.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.40.post_feedforward_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.40.pre_feedforward_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.40.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.40.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.40.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.40.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.41.input_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.41.mlp.down_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.41.mlp.gate_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.41.mlp.up_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.41.post_attention_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.41.post_feedforward_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.41.pre_feedforward_layernorm.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.41.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.41.self_attn.o_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.41.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.41.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"llm.model.layers.5.input_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.5.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.5.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.5.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.5.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.5.post_feedforward_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.5.pre_feedforward_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.5.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.5.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.5.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.5.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.6.input_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.6.mlp.down_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.6.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.6.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.6.post_attention_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.6.post_feedforward_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.6.pre_feedforward_layernorm.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.6.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.6.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.6.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.6.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.7.input_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.7.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.7.mlp.gate_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.7.mlp.up_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.7.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.7.post_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.7.pre_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.7.self_attn.k_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.7.self_attn.o_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.7.self_attn.q_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.7.self_attn.v_proj.weight": "model-00001-of-00005.safetensors",
"llm.model.layers.8.input_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.8.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.8.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.8.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.8.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.8.post_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.8.pre_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.8.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.8.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.8.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.8.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.9.input_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.9.mlp.down_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.9.mlp.gate_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.9.mlp.up_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.9.post_attention_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.9.post_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.9.pre_feedforward_layernorm.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.9.self_attn.k_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.9.self_attn.o_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.9.self_attn.q_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.layers.9.self_attn.v_proj.weight": "model-00002-of-00005.safetensors",
"llm.model.norm.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.embeddings.patch_embedding.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.embeddings.patch_embedding.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.embeddings.position_embedding.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.0.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.0.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.0.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.0.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.0.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.0.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.0.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.0.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.0.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.0.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.0.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.0.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.0.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.0.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.0.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.0.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.1.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.1.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.1.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.1.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.1.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.1.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.1.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.1.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.1.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.1.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.1.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.1.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.1.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.1.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.1.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.1.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.10.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.10.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.10.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.10.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.10.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.10.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.10.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.10.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.10.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.10.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.10.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.10.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.10.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.10.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.10.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.10.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.11.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.11.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.11.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.11.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.11.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.11.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.11.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.11.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.11.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.11.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.11.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.11.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.11.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.11.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.11.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.11.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.12.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.12.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.12.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.12.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.12.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.12.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.12.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.12.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.12.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.12.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.12.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.12.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.12.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.12.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.12.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.12.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.13.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.13.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.13.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.13.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.13.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.13.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.13.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.13.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.13.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.13.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.13.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.13.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.13.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.13.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.13.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.13.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.14.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.14.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.14.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.14.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.14.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.14.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.14.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.14.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.14.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.14.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.14.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.14.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.14.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.14.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.14.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.14.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.15.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.15.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.15.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.15.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.15.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.15.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.15.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.15.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.15.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.15.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.15.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.15.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.15.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.15.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.15.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.15.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.16.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.16.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.16.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.16.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.16.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.16.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.16.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.16.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.16.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.16.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.16.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.16.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.16.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.16.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.16.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.16.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.17.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.17.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.17.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.17.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.17.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.17.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.17.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.17.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.17.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.17.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.17.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.17.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.17.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.17.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.17.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.17.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.18.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.18.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.18.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.18.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.18.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.18.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.18.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.18.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.18.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.18.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.18.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.18.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.18.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.18.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.18.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.18.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.19.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.19.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.19.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.19.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.19.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.19.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.19.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.19.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.19.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.19.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.19.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.19.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.19.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.19.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.19.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.19.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.2.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.2.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.2.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.2.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.2.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.2.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.2.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.2.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.2.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.2.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.2.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.2.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.2.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.2.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.2.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.2.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.20.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.20.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.20.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.20.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.20.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.20.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.20.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.20.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.20.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.20.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.20.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.20.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.20.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.20.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.20.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.20.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.21.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.21.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.21.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.21.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.21.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.21.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.21.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.21.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.21.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.21.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.21.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.21.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.21.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.21.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.21.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.21.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.22.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.22.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.22.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.22.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.22.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.22.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.22.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.22.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.22.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.22.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.22.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.22.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.22.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.22.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.22.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.22.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.23.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.23.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.23.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.23.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.23.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.23.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.23.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.23.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.23.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.23.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.23.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.23.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.23.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.23.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.23.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.23.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.24.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.24.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.24.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.24.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.24.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.24.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.24.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.24.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.24.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.24.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.24.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.24.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.24.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.24.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.24.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.24.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.25.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.25.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.25.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.25.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.25.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.25.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.25.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.25.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.25.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.25.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.25.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.25.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.25.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.25.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.25.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.25.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.26.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.26.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.26.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.26.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.26.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.26.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.26.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.26.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.26.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.26.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.26.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.26.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.26.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.26.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.26.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.26.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.3.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.3.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.3.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.3.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.3.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.3.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.3.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.3.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.3.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.3.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.3.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.3.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.3.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.3.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.3.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.3.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.4.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.4.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.4.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.4.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.4.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.4.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.4.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.4.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.4.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.4.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.4.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.4.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.4.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.4.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.4.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.4.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.5.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.5.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.5.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.5.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.5.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.5.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.5.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.5.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.5.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.5.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.5.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.5.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.5.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.5.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.5.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.5.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.6.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.6.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.6.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.6.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.6.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.6.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.6.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.6.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.6.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.6.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.6.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.6.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.6.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.6.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.6.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.6.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.7.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.7.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.7.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.7.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.7.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.7.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.7.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.7.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.7.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.7.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.7.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.7.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.7.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.7.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.7.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.7.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.8.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.8.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.8.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.8.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.8.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.8.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.8.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.8.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.8.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.8.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.8.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.8.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.8.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.8.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.8.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.8.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.9.layer_norm1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.9.layer_norm1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.9.layer_norm2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.9.layer_norm2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.9.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.9.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.9.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.9.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.9.self_attn.k_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.9.self_attn.k_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.9.self_attn.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.9.self_attn.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.9.self_attn.q_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.9.self_attn.q_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.9.self_attn.v_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.encoder.layers.9.self_attn.v_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.head.attention.in_proj_bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.head.attention.in_proj_weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.head.attention.out_proj.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.head.attention.out_proj.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.head.layernorm.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.head.layernorm.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.head.mlp.fc1.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.head.mlp.fc1.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.head.mlp.fc2.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.head.mlp.fc2.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.head.probe": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.post_layernorm.bias": "model-00004-of-00005.safetensors",
"visual_tokenizer.backbone.vision_model.post_layernorm.weight": "model-00004-of-00005.safetensors",
"visual_tokenizer.head.0.weight": "model-00005-of-00005.safetensors",
"visual_tokenizer.head.1.bias": "model-00005-of-00005.safetensors",
"visual_tokenizer.head.1.weight": "model-00005-of-00005.safetensors",
"vte.weight": "model-00005-of-00005.safetensors"
}
}
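Each entry in the `weight_map` above ties one parameter name to the shard file that stores it, so a loader only needs to open the shard a given tensor lives in. Below is a minimal sketch (not part of the repo) of resolving a single tensor from the index, assuming the standard `safetensors` package and the conventional `model.safetensors.index.json` filename; `checkpoint_dir` is a placeholder for wherever the shards were downloaded.

```python
import json
import os

from safetensors import safe_open

checkpoint_dir = "."  # directory holding the index and the shard files
with open(os.path.join(checkpoint_dir, "model.safetensors.index.json")) as f:
    index = json.load(f)

name = "vte.weight"                # any key from the weight_map above
shard = index["weight_map"][name]  # e.g. "model-00005-of-00005.safetensors"
# open only the shard that holds the requested tensor
with safe_open(os.path.join(checkpoint_dir, shard), framework="pt", device="cpu") as f:
    tensor = f.get_tensor(name)
print(name, tuple(tensor.shape))
```

In practice `from_pretrained` consumes this index automatically; the sketch only illustrates what the mapping encodes.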

620
modeling_ovis.py Normal file
View File

@ -0,0 +1,620 @@
import logging
import os
from packaging import version
from importlib import import_module
from typing import List, Callable, Union, Optional, Dict
import PIL.Image
import torch
import transformers
from torch import Tensor
from torch.nn import init
from torch.nn.functional import softmax, gumbel_softmax, pad
from transformers import PreTrainedModel, AutoModel, AutoTokenizer, AutoModelForCausalLM, AutoImageProcessor
from transformers import SiglipImageProcessor, SiglipVisionModel
from transformers.cache_utils import HybridCache
from transformers.generation.utils import GenerateOutput
from .configuration_ovis import BaseVisualTokenizerConfig, SiglipVisualTokenizerConfig
from .configuration_ovis import OvisConfig, ConversationFormatter
from .configuration_ovis import IGNORE_ID, IMAGE_ATOM_ID, IMAGE_INDICATOR_IDS, IMAGE_TOKEN_ID

# ----------------------------------------------------------------------
#                            Visual Tokenizer
# ----------------------------------------------------------------------
class BaseVisualTokenizer(PreTrainedModel):
    base_model_prefix = "backbone"
    main_input_name = None
    _image_processor_class = None
    _image_processor_kwargs = {}
    _backbone_class = None
    _backbone_name_or_path = None

    def __init__(self, config: BaseVisualTokenizerConfig, *inputs, **kwargs):
        super().__init__(config, *inputs, **kwargs)
        self.image_processor = AutoImageProcessor.from_pretrained(kwargs['image_processor_name_or_path'])
        self.backbone = AutoModel.from_config(self.config.backbone_config)
        head_dim = self.config.vocab_size - len(IMAGE_INDICATOR_IDS)  # reserved tokens for IMAGE_INDICATORS
        self.head = torch.nn.Sequential(
            torch.nn.Linear(
                self.backbone.config.hidden_size * self.config.hidden_stride * self.config.hidden_stride, head_dim,
                bias=False
            ),
            torch.nn.LayerNorm(head_dim)
        )
        assert all((self.image_processor.do_resize,
                    not getattr(self.image_processor, 'do_center_crop', False),
                    self.image_processor.do_rescale,
                    self.image_processor.do_normalize
                    )), f"image_processor `{self.image_processor}` is not supported currently"

    def get_backbone(self):
        return self.backbone

    def get_image_processor(self):
        return self.image_processor

    def mock_input(self):
        height, width = self.get_image_size()
        return torch.zeros(1, 3, height, width), self.construct_image_placeholders((1, 1))

    def get_head(self):
        return self.head

    def get_image_size(self):
        raise NotImplementedError

    @staticmethod
    def construct_image_placeholders(grid):
        image_placeholders = [IMAGE_INDICATOR_IDS[0], IMAGE_ATOM_ID, IMAGE_INDICATOR_IDS[1]]
        if grid[0] * grid[1] > 1:
            for r in range(grid[0]):
                for c in range(grid[1]):
                    image_placeholders.append(IMAGE_ATOM_ID)
                    if c < grid[1] - 1:
                        image_placeholders.append(IMAGE_INDICATOR_IDS[2])
                if r < grid[0] - 1:
                    image_placeholders.append(IMAGE_INDICATOR_IDS[3])
        image_placeholders.append(IMAGE_INDICATOR_IDS[4])
        return image_placeholders
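    # For example, grid=(2, 2) yields
    #   [IND[0], ATOM, IND[1], ATOM, IND[2], ATOM, IND[3], ATOM, IND[2], ATOM, IND[4]]
    # with IND = IMAGE_INDICATOR_IDS and ATOM = IMAGE_ATOM_ID: one leading ATOM for the
    # overview image, one ATOM per tile, IND[2] between columns, IND[3] between rows,
    # and IND[0]/IND[1]/IND[4] framing the whole image block.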

    def preprocess_image(self, image: PIL.Image.Image, max_partition=9, covering_threshold=0.9, convert_to_rgb=True):
        def _preprocess(img: PIL.Image.Image, side):
            # first resize and preprocess
            w, h = img.size
            if w == h:
                new_width = new_height = side
            elif w > h:
                new_width = side
                new_height = int(h / w * new_width)
            else:
                new_height = side
                new_width = int(w / h * new_height)
            new_size = dict(height=new_height, width=new_width)
            pixel_values = self.image_processor.preprocess(img, size=new_size, return_tensors='pt')['pixel_values']
            # then pad to square
            square_values = torch.zeros([1, 3, side, side], dtype=pixel_values.dtype, device=pixel_values.device)
            new_height, new_width = pixel_values.shape[2:]
            if new_height == new_width:
                square_values[:, :, :, :] = pixel_values
            elif new_height > new_width:
                from_index = (side - new_width) // 2
                square_values[:, :, :, from_index:from_index + new_width] = pixel_values
            else:
                from_index = (side - new_height) // 2
                square_values[:, :, from_index:from_index + new_height, :] = pixel_values
            return square_values

        def _partition(img, grid):
            w, h = img.size
            row_height = h // grid[0]
            col_width = w // grid[1]
            partition = []
            for row in range(grid[0]):
                for col in range(grid[1]):
                    left = col * col_width
                    upper = row * row_height
                    right = w if col == grid[1] - 1 else (col + 1) * col_width
                    lower = h if row == grid[0] - 1 else (row + 1) * row_height
                    partition.append((left, upper, right, lower))
            return partition

        def _covering_area(left, upper, right, lower, side):
            w = right - left
            h = lower - upper
            w, h = max(w, h), min(w, h)
            if w > side:
                h = h / w * side
                w = side
            return w * h

        def _get_best_grid(img, side):
            img_area = img.size[0] * img.size[1]

            candidate_grids = []
            for i in range(1, max_partition + 1):
                for j in range(1, max_partition + 1):
                    if i * j <= max_partition:
                        candidate_grids.append((i, j))

            all_grids = []
            good_grids = []
            for grid in candidate_grids:
                partition = _partition(img, grid)
                covering_ratio = sum([_covering_area(*p, side) for p in partition]) / img_area
                assert covering_ratio <= 1.0
                all_grids.append((grid, covering_ratio))
                if covering_ratio > covering_threshold:
                    good_grids.append((grid, covering_ratio))

            if len(good_grids) > 0:
                # pick the good partition with minimum #sub_images and break the tie using covering_ratio
                return sorted(good_grids, key=lambda x: (x[0][0] * x[0][1], -x[1]))[0][0]
            else:
                # pick the partition with maximum covering_ratio and break the tie using #sub_images
                return sorted(all_grids, key=lambda x: (-x[1], x[0][0] * x[0][1]))[0][0]

        if convert_to_rgb and image.mode != 'RGB':
            image = image.convert('RGB')

        sides = self.get_image_size()
        if sides[0] != sides[1]:
            raise ValueError('get_image_size() returns non-square size')
        side = sides[0]

        grid = _get_best_grid(image, side)
        partition = _partition(image, grid)
        crops = [image.crop(p) for p in partition]
        if len(crops) > 1:
            crops.insert(0, image)
        pixel_values = torch.cat([_preprocess(crop, side) for crop in crops], dim=0)
        image_placeholders = self.construct_image_placeholders(grid)
        return pixel_values, image_placeholders
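    # For instance, with the 384x384 Siglip processor and grid=(2, 2), `crops` holds the
    # full image plus four tiles, so `pixel_values` has shape [5, 3, 384, 384] and
    # `image_placeholders` contains five IMAGE_ATOM_ID entries, one per crop.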

    def tokenize(self, logits):
        def st_argmax(y_soft, dim):  # straight-through argmax
            index = y_soft.max(dim, keepdim=True)[1]
            y_hard = torch.zeros_like(y_soft, memory_format=torch.legacy_contiguous_format).scatter_(dim, index, 1.0)
            ret = y_hard - y_soft.detach() + y_soft
            return ret

        if self.config.tokenize_function == 'softmax':
            tokens = softmax(logits, dim=-1)
        elif self.config.tokenize_function == 'gumbel_argmax':
            tokens = gumbel_softmax(logits, tau=self.config.tau, hard=True)
        elif self.config.tokenize_function == 'st_argmax':
            tokens = st_argmax(logits, dim=-1)
        else:
            raise ValueError(
                f'Invalid `tokenize_function`, expected softmax, gumbel_argmax, or st_argmax, '
                f'but got {self.config.tokenize_function}')
        return tokens
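    # Note on 'st_argmax': the forward pass emits a hard one-hot over the visual vocabulary,
    # while `y_hard - y_soft.detach() + y_soft` routes gradients straight through to `logits`,
    # the standard straight-through estimator trick.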

    def encode(self, pixel_values):
        output = self.backbone(pixel_values, output_hidden_states=True, return_dict=True)
        features = output.hidden_states[-1]
        if self.config.drop_cls_token:
            features = features[:, 1:, :]

        # merge `hidden_stride * hidden_stride` neighboring hidden states to shorten the token
        # sequence, e.g., for hidden_stride=3, this reduces the token length 729 -> 81 for siglip
        if self.config.hidden_stride > 1:
            n, l, d = features.shape  # this `d` may differ from the head's input dimension
            sqrt_l = int(l ** 0.5)
            assert sqrt_l ** 2 == l, "The token sequence length should be a perfect square."
            features = features.reshape(n, sqrt_l, sqrt_l, d)
            pl = (self.config.hidden_stride - (sqrt_l % self.config.hidden_stride)) % self.config.hidden_stride
            features = pad(features, (0, 0, 0, pl, 0, pl), "constant", 0)
            sqrt_l += pl
            features = features.reshape(n, sqrt_l // self.config.hidden_stride, self.config.hidden_stride,
                                        sqrt_l // self.config.hidden_stride, self.config.hidden_stride, d)
            features = features.permute(0, 1, 3, 2, 4, 5)  # [n, sqrt_l/hs, sqrt_l/hs, hs, hs, d]
            features = features.flatten(3)  # [n, sqrt_l/hs, sqrt_l/hs, hs*hs*d]
            features = features.reshape(
                n, -1, self.config.hidden_stride * self.config.hidden_stride * d)

        return features

    def forward(self, pixel_values) -> torch.Tensor:  # [BatchSize, ImageShape] -> [BatchSize, #Token, VocabSize]
        features = self.encode(pixel_values)
        logits = self.head(features)
        tokens = self.tokenize(logits)
        # tokens' shape is [BatchSize, #Token, VocabSize-5]; pad with zeros of shape
        # [BatchSize, #Token, 5] so that tokens' shape becomes [BatchSize, #Token, VocabSize]
        batch_size, token_len, _ = tokens.shape
        padding_tensor = torch.zeros(size=(batch_size, token_len, len(IMAGE_INDICATOR_IDS)),
                                     dtype=tokens.dtype,
                                     device=tokens.device,
                                     layout=tokens.layout,
                                     requires_grad=False)
        tokens = torch.cat((tokens, padding_tensor), dim=2)
        return tokens

class SiglipVisualTokenizer(BaseVisualTokenizer):
    config_class = SiglipVisualTokenizerConfig
    supports_gradient_checkpointing = True
    _no_split_modules = ["SiglipVisionTransformer"]
    _image_processor_class = SiglipImageProcessor
    _image_processor_kwargs = {}
    _backbone_class = SiglipVisionModel
    _backbone_name_or_path = "google/siglip-so400m-patch14-384"

    def get_image_size(self):
        height = self.image_processor.size["height"]
        width = self.image_processor.size["width"]
        return height, width


AutoModel.register(SiglipVisualTokenizerConfig, SiglipVisualTokenizer)

# ----------------------------------------------------------------------
#                                  Ovis
# ----------------------------------------------------------------------
class VisualEmbedding(torch.nn.Embedding):
    def forward(self, visual_tokens: Tensor) -> Tensor:
        if visual_tokens.dtype in [torch.int8, torch.int16, torch.int32, torch.int64, torch.long]:
            return super().forward(visual_tokens)
        return torch.matmul(visual_tokens, self.weight)

    def reset_parameters(self, mean=0., std=1.) -> None:
        init.normal_(self.weight, mean=mean, std=std)
        self._fill_padding_idx_with_zero()


class OvisPreTrainedModel(PreTrainedModel):
    config_class = OvisConfig
    base_model_prefix = "ovis"

class Ovis(OvisPreTrainedModel):

    def __init__(self, config: OvisConfig, *inputs, **kwargs):
        super().__init__(config, *inputs, **kwargs)
        attn_kwargs = dict()
        if self.config.llm_attn_implementation:
            attn_kwargs['attn_implementation'] = self.config.llm_attn_implementation
        self.llm = AutoModelForCausalLM.from_config(self.config.llm_config, **attn_kwargs)
        assert self.config.hidden_size == self.llm.config.hidden_size, "hidden size mismatch"
        self.text_tokenizer = AutoTokenizer.from_pretrained(self.config.name_or_path)
        self.visual_tokenizer = AutoModel.from_config(self.config.visual_tokenizer_config,
                                                      image_processor_name_or_path=self.config.name_or_path)
        self.vte = VisualEmbedding(
            self.config.visual_tokenizer_config.vocab_size,
            self.config.hidden_size,
            device=self.visual_tokenizer.device,
            dtype=self.visual_tokenizer.dtype
        )

        def _merge_modules(modules_list: tuple):
            merged_modules = []
            for modules in modules_list:
                merged_modules.extend(modules if modules else [])
            return merged_modules

        self._no_split_modules = _merge_modules((self.llm._no_split_modules, self.visual_tokenizer._no_split_modules))
        self._skip_keys_device_placement = self.llm._skip_keys_device_placement
        self._keep_in_fp32_modules = _merge_modules(
            (self.llm._keep_in_fp32_modules, self.visual_tokenizer._keep_in_fp32_modules))
        self.is_parallelizable = all((self.llm.is_parallelizable, self.visual_tokenizer.is_parallelizable))
        self.supports_gradient_checkpointing = all(
            (self.llm.supports_gradient_checkpointing, self.visual_tokenizer.supports_gradient_checkpointing))
        self._supports_flash_attn_2 = all(
            (self.llm._supports_flash_attn_2, self.visual_tokenizer._supports_flash_attn_2))
        self._supports_sdpa = all((self.llm._supports_sdpa, self.visual_tokenizer._supports_sdpa))

    def get_text_tokenizer(self):
        return self.text_tokenizer

    def get_visual_tokenizer(self):
        return self.visual_tokenizer

    def tie_weights(self):
        if not self.config.disable_tie_weight:
            self.get_llm().tie_weights()

    def get_llm(self):
        return self.llm

    def get_vte(self):
        return self.vte

    def get_wte(self):
        return self.llm.get_input_embeddings()

    def get_conversation_formatter(self) -> ConversationFormatter:
        if getattr(self, 'conversation_formatter', None) is None:
            self.conversation_formatter = getattr(import_module(".configuration_ovis", __package__),
                                                  self.config.conversation_formatter_class)(self.text_tokenizer)
        return self.conversation_formatter

    def forward(
        self,
        input_ids: torch.Tensor,
        attention_mask: torch.Tensor,
        labels: Optional[torch.Tensor],
        pixel_values: List[Optional[torch.Tensor]],
        **kwargs
    ):
        assert self.training, "`forward` can only be used in training. For inference, use `generate`."
        _, inputs_embeds, labels, attention_mask = self.merge_multimodal(
            text_input_ids=input_ids,
            text_attention_masks=attention_mask,
            text_labels=labels,
            pixel_values=pixel_values
        )
        return self.llm(inputs_embeds=inputs_embeds, labels=labels, attention_mask=attention_mask, **kwargs)

    def merge_multimodal(
        self,
        text_input_ids: torch.Tensor,
        text_attention_masks: torch.Tensor,
        text_labels: Optional[torch.Tensor],
        pixel_values: List[Optional[torch.Tensor]],
        left_padding: bool = False
    ):
        input_device = text_input_ids.device
        visual_vocab_size = self.get_visual_tokenizer().config.vocab_size
        visual_indicator_embeds = self.get_vte()(
            torch.tensor(
                list(range(visual_vocab_size - 5, visual_vocab_size)),
                dtype=torch.long,
                device=self.get_visual_tokenizer().device
            )
        ).to(device=input_device)

        if self.training:
            # When training, to be compatible with deepspeed zero, each sample has to include a
            # pixel_value tensor. For a text-only sample, one can simply use an all-zero tensor
            # as pixel_value, which will be ignored (see below in this function), so the
            # gradient will not be affected.
            num_images = [x.shape[0] for x in pixel_values]
            visual_tokens = self.visual_tokenizer(torch.cat([x for x in pixel_values], dim=0))
            visual_embeds = torch.split(self.get_vte()(visual_tokens).to(dtype=self.dtype, device=input_device),
                                        split_size_or_sections=num_images, dim=0)
            visual_input_ids = torch.split(torch.argmax(visual_tokens, dim=-1).to(device=input_device),
                                           split_size_or_sections=num_images, dim=0)
            visual_labels = [torch.full(x.shape, IGNORE_ID, dtype=torch.long, device=input_device) for x in
                             visual_input_ids]
        else:
            # At inference, a sample can be text-only, with `None` as its pixel_value
            num_images = [x.shape[0] if x is not None else 0 for x in pixel_values]
            if sum(num_images) > 0:
                visual_tokens = self.visual_tokenizer(torch.cat([x for x in pixel_values if x is not None], dim=0))
                visual_embeds = torch.split(self.get_vte()(visual_tokens).to(dtype=self.dtype, device=input_device),
                                            split_size_or_sections=num_images, dim=0)
                visual_input_ids = torch.split(torch.argmax(visual_tokens, dim=-1).to(device=input_device),
                                               split_size_or_sections=num_images, dim=0)
                visual_labels = [torch.full(x.shape, IGNORE_ID, dtype=torch.long, device=input_device) for x in
                                 visual_input_ids]
            else:
                # just placeholders
                visual_embeds = [None] * len(num_images)
                visual_input_ids = [None] * len(num_images)
                visual_labels = [None] * len(num_images)
        if text_labels is None:
            text_labels = torch.full(text_input_ids.shape, IGNORE_ID, dtype=torch.long, device=input_device)

        input_embeds = []
        attention_masks = []
        labels = []
        for text_input_id, text_label, text_attention_mask, visual_embed, visual_input_id, visual_label in zip(
                text_input_ids, text_labels, text_attention_masks, visual_embeds, visual_input_ids, visual_labels
        ):
            placeholder_token_mask = torch.lt(text_input_id, 0)
            text_embed = self.get_wte()(torch.masked_fill(text_input_id, placeholder_token_mask, 0))
            for i, indicator_id in enumerate(IMAGE_INDICATOR_IDS):
                text_embed[text_input_id == indicator_id] = visual_indicator_embeds[i]
            image_atom_positions = torch.where(torch.eq(text_input_id, IMAGE_ATOM_ID))[0].tolist()
            if len(image_atom_positions) > 0:
                input_embed_parts = []
                attention_mask_parts = []
                label_parts = []
                prev_image_atom_position = -1
                for index, image_atom_position in enumerate(image_atom_positions):
                    input_embed_parts.append(
                        text_embed[prev_image_atom_position + 1:image_atom_position, :])
                    label_parts.append(
                        text_label[prev_image_atom_position + 1:image_atom_position])
                    attention_mask_parts.append(
                        text_attention_mask[prev_image_atom_position + 1:image_atom_position])
                    input_embed_parts.append(visual_embed[index])
                    attention_mask_parts.append(
                        torch.ones_like(visual_label[index], dtype=torch.bool))
                    label_parts.append(visual_label[index])
                    prev_image_atom_position = image_atom_position
                if prev_image_atom_position + 1 < text_input_id.shape[0]:
                    input_embed_parts.append(
                        text_embed[prev_image_atom_position + 1:, :])
                    attention_mask_parts.append(
                        text_attention_mask[prev_image_atom_position + 1:])
                    label_parts.append(
                        text_label[prev_image_atom_position + 1:])
                input_embed = torch.cat(input_embed_parts, dim=0)
                attention_mask = torch.cat(attention_mask_parts, dim=0)
                label = torch.cat(label_parts, dim=0)
            else:
                input_embed = text_embed
                attention_mask = text_attention_mask
                label = text_label
            if self.training:
                # Make visual_embed & visual_indicator_embeds involved in the backward graph,
                # to be compatible with deepspeed zero and ddp.
                input_embed += torch.sum(visual_embed * 0.0) + torch.sum(visual_indicator_embeds * 0.0)
            input_embeds.append(input_embed)
            attention_masks.append(attention_mask)
            labels.append(label)

        if self.training:  # pad to self.config.multimodal_max_length for increased training speed
            padding_size = max(0, self.config.multimodal_max_length - len(input_embeds[0]))
            input_embeds[0] = torch.nn.ConstantPad2d((0, 0, 0, padding_size), 0.0)(input_embeds[0])
            attention_masks[0] = torch.nn.ConstantPad1d((0, padding_size), False)(attention_masks[0])
            labels[0] = torch.nn.ConstantPad1d((0, padding_size), IGNORE_ID)(labels[0])

        batch_input_embeds = self.pad_truncate_sequence(input_embeds, batch_first=True, padding_value=0.0, left_padding=left_padding)
        batch_attention_mask = self.pad_truncate_sequence(attention_masks, batch_first=True, padding_value=False, left_padding=left_padding)
        batch_labels = self.pad_truncate_sequence(labels, batch_first=True, padding_value=IGNORE_ID, left_padding=left_padding)

        return visual_input_ids, batch_input_embeds, batch_labels, batch_attention_mask

    def pad_truncate_sequence(self, sequences: List[torch.Tensor], batch_first: bool = True,
                              padding_value: float = 0.0, left_padding: bool = False) -> torch.Tensor:
        if not left_padding:
            pad_sequence = torch.nn.utils.rnn.pad_sequence(sequences, batch_first=batch_first,
                                                           padding_value=padding_value)
            return pad_sequence[:, :self.config.multimodal_max_length]
        else:
            pad_sequence = torch.nn.utils.rnn.pad_sequence([i.flip(dims=[0]) for i in sequences],
                                                           batch_first=True,
                                                           padding_value=padding_value).flip(dims=[1])
            return pad_sequence[:, -self.config.multimodal_max_length:]
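    # Example: given sequences of lengths 3 and 5, `left_padding=True` flips each sequence,
    # right-pads to the batch maximum, and flips back, so the shorter one becomes
    # [pad, pad, x0, x1, x2]; truncation then keeps the last `multimodal_max_length` positions.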

    def preprocess_inputs(
        self,
        text_or_conversations: Union[List[Dict], str],
        images: Optional[List[PIL.Image.Image]],
        max_partition=9,
        generation_preface='',
        return_labels=False,
        propagate_exception=True
    ):
        # convert text to conversations
        if isinstance(text_or_conversations, str):
            conversations = [{
                "from": "human",
                "value": text_or_conversations
            }]
        elif isinstance(text_or_conversations, list):
            conversations = text_or_conversations
        else:
            raise ValueError(f'Invalid type of `text_or_conversations`, expected `List[Dict]` or `str`,'
                             f' but got {type(text_or_conversations)}')

        # format conversations
        prompt, raw_input_ids, raw_labels = self.get_conversation_formatter().format(
            conversations, generation_preface=generation_preface)

        # place image placeholders
        input_ids = []
        labels = []
        pixel_values = []
        invalidate_label = False
        image_token_indices = [i for i, v in enumerate(raw_input_ids) if v == IMAGE_TOKEN_ID]
        last_image_token_index = -1
        for i in range(len(image_token_indices)):
            head = 0 if i == 0 else image_token_indices[i - 1] + 1
            tail = image_token_indices[i]
            last_image_token_index = tail
            input_ids.extend(raw_input_ids[head:tail])
            labels.extend(raw_labels[head:tail])
            try:
                image = images[i]
                raw_pixel_values, image_placeholders = self.visual_tokenizer.preprocess_image(
                    image, max_partition=max_partition)
            except Exception as e:
                if propagate_exception:
                    raise e
                logging.exception(e)
                invalidate_label = True
                raw_pixel_values, image_placeholders = self.visual_tokenizer.mock_input()
            input_ids.extend(image_placeholders)
            labels.extend([IGNORE_ID] * len(image_placeholders))
            pixel_values.append(raw_pixel_values)
        input_ids.extend(raw_input_ids[last_image_token_index + 1:])
        labels.extend(raw_labels[last_image_token_index + 1:])

        # return tensors
        input_ids = torch.tensor(input_ids, dtype=torch.long)
        labels = torch.tensor([IGNORE_ID] * len(labels) if invalidate_label else labels, dtype=torch.long)
        pixel_values = torch.cat(pixel_values, dim=0) if len(pixel_values) > 0 else None

        if return_labels:
            return prompt, input_ids, pixel_values, labels
        else:
            return prompt, input_ids, pixel_values

    def save_pretrained(
        self,
        save_directory: Union[str, os.PathLike],
        is_main_process: bool = True,
        state_dict: Optional[dict] = None,
        save_function: Callable = torch.save,
        push_to_hub: bool = False,
        max_shard_size: Union[int, str] = "5GB",
        safe_serialization: bool = True,
        variant: Optional[str] = None,
        token: Optional[Union[str, bool]] = None,
        save_peft_format: bool = True,
        **kwargs
    ):
        super().save_pretrained(save_directory,
                                is_main_process=is_main_process,
                                state_dict=state_dict,
                                save_function=save_function,
                                safe_serialization=safe_serialization)
        self.get_text_tokenizer().save_pretrained(save_directory)
        self.get_visual_tokenizer().get_image_processor().save_pretrained(save_directory)

    def _get_hybrid_cache_for_llm(self, batch_size: int, max_cache_len: int):
        cache_cls = HybridCache
        llm = self.get_llm()

        # HybridCache's batch-size argument/attribute is named differently across
        # transformers versions, hence the version checks below.
        if version.parse(transformers.__version__) >= version.parse("4.46.0"):
            need_new_cache = (
                not hasattr(llm, "_cache")
                or (not isinstance(llm._cache, cache_cls))
                or llm._cache.batch_size != batch_size
                or llm._cache.max_cache_len < max_cache_len
            )
        else:
            need_new_cache = (
                not hasattr(llm, "_cache")
                or (not isinstance(llm._cache, cache_cls))
                or llm._cache.max_batch_size != batch_size
                or llm._cache.max_cache_len < max_cache_len
            )

        if need_new_cache:
            if hasattr(llm.config, "_pre_quantization_dtype"):
                cache_dtype = llm.config._pre_quantization_dtype
            else:
                cache_dtype = llm.dtype
            if version.parse(transformers.__version__) >= version.parse("4.46.0"):
                llm._cache = cache_cls(
                    config=llm.config,
                    batch_size=batch_size,
                    max_cache_len=max_cache_len,
                    device=llm.device,
                    dtype=cache_dtype,
                )
            else:
                llm._cache = cache_cls(
                    config=llm.config,
                    max_batch_size=batch_size,
                    max_cache_len=max_cache_len,
                    device=llm.device,
                    dtype=cache_dtype,
                )
        else:
            llm._cache.reset()
        return llm._cache

    # TODO: support batch generation
    def generate(
        self,
        inputs: Optional[torch.Tensor] = None,
        **kwargs
    ) -> Union[GenerateOutput, torch.LongTensor]:
        _, inputs_embeds, labels, attention_mask = self.merge_multimodal(
            text_input_ids=inputs,
            text_attention_masks=kwargs.pop('attention_mask'),
            text_labels=None,
            pixel_values=kwargs.pop('pixel_values'),
            left_padding=True
        )
        if getattr(self.generation_config, 'cache_implementation', None) == 'hybrid':  # mainly for Gemma2
            # `kwargs` is a dict, so optional entries must be read with `dict.get`, not `getattr`
            kwargs['past_key_values'] = self._get_hybrid_cache_for_llm(
                kwargs.get("num_beams", inputs_embeds.shape[0]), kwargs['max_new_tokens'] + inputs_embeds.shape[-2])
            self.get_llm()._supports_cache_class = True
            kwargs['cache_implementation'] = None

        return self.llm.generate(inputs=None, inputs_embeds=inputs_embeds, attention_mask=attention_mask, **kwargs)
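

# A minimal end-to-end sketch of how the pieces above fit together. The checkpoint id, image
# path, and generation settings are placeholders, and loading through AutoModelForCausalLM with
# trust_remote_code=True is an assumption mirroring the repository README, not an API guarantee
# of this module alone.
if __name__ == "__main__":
    model = AutoModelForCausalLM.from_pretrained(
        "AIDC-AI/Ovis1.6-Gemma2-9B",  # placeholder: any directory containing this modeling file
        torch_dtype=torch.bfloat16,
        multimodal_max_length=8192,
        trust_remote_code=True,
    ).cuda()
    image = PIL.Image.open("example.jpg")  # placeholder image
    prompt, input_ids, pixel_values = model.preprocess_inputs("<image>\nDescribe this image.", [image])
    attention_mask = torch.ne(input_ids, model.get_text_tokenizer().pad_token_id)
    with torch.inference_mode():
        output_ids = model.generate(
            input_ids.unsqueeze(0).cuda(),
            pixel_values=[pixel_values.to(dtype=model.visual_tokenizer.dtype, device="cuda")],
            attention_mask=attention_mask.unsqueeze(0).cuda(),
            max_new_tokens=512,
        )
    print(model.get_text_tokenizer().decode(output_ids[0], skip_special_tokens=True))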

24
preprocessor_config.json Normal file
View File

@ -0,0 +1,24 @@
{
"do_convert_rgb": null,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.5,
0.5,
0.5
],
"image_processor_type": "SiglipImageProcessor",
"image_std": [
0.5,
0.5,
0.5
],
"processor_class": "SiglipProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"height": 384,
"width": 384
}
}
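
This config makes `SiglipImageProcessor` resize to 384x384 (resample=3 is PIL's bicubic), rescale by 1/255, and normalize each channel with mean 0.5 and std 0.5, so raw pixels in [0, 255] land in [-1, 1]. A quick sketch of the equivalent arithmetic (numpy is used here only for illustration; the processor call is the real entry point):

```python
import numpy as np

# rescale + normalize as specified by preprocessor_config.json:
# value = (pixel * rescale_factor - image_mean) / image_std
pixel = np.array([0.0, 128.0, 255.0], dtype=np.float32)
value = (pixel * 0.00392156862745098 - 0.5) / 0.5
print(value)  # approximately [-1.0, 0.0039, 1.0]
```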

34
special_tokens_map.json Normal file
View File

@ -0,0 +1,34 @@
{
"additional_special_tokens": [
"<start_of_turn>",
"<end_of_turn>"
],
"bos_token": {
"content": "<bos>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "<eos>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<pad>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

838658
tokenizer.json Normal file

File diff suppressed because it is too large

BIN
tokenizer.model (Stored with Git LFS) Normal file

Binary file not shown.

1757
tokenizer_config.json Normal file

File diff suppressed because it is too large